Re: [PVE-User] Ceph Octopus

2020-07-02 Thread Alexandre DERUMIER
octopus test repo is available here 

http://download.proxmox.com/debian/ceph-octopus/dists/buster/test/ 


I'm already using it for a customer; the proxmox management && gui are already 
patched to handle it. 
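
For anyone wanting to try it, the matching apt entry should be something like this 
(inferred from the dists/ layout of the URL above, so treat it as a sketch): 

# /etc/apt/sources.list.d/ceph-octopus-test.list  (file name is only a suggestion)
deb http://download.proxmox.com/debian/ceph-octopus buster test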



    

Alexandre Derumier 
Ingénieur système et stockage 

Manager Infrastructure 


Fixe : +33 3 59 82 20 10 



125 Avenue de la république 
59110 La Madeleine 
[ https://twitter.com/OdisoHosting ] [ https://twitter.com/mindbaz ] [ 
https://www.linkedin.com/company/odiso ] [ 
https://www.viadeo.com/fr/company/odiso ] [ 
https://www.facebook.com/monsiteestlent ] 

[ https://www.monsiteestlent.com/ | MonSiteEstLent.com ] - Blog dédié à la 
webperformance et la gestion de pics de trafic 






De: "Lindsay Mathieson"  
À: "proxmoxve"  
Envoyé: Vendredi 3 Juillet 2020 07:38:55 
Objet: Re: [PVE-User] Ceph Octopus 

On 3/07/2020 3:29 pm, Devin A wrote: 
> Yes they are working on the new version. Will be released sometime soon I 
> suspect. 

Ta 

-- 
Lindsay 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] High I/O waits, not sure if it's a ceph issue.

2020-07-02 Thread Alexandre DERUMIER
Hi,

you should give ceph octopus a try. librbd has greatly improved for 
writes, and I can recommend enabling writeback by default now.


Here are some iops results with 1 vm - 1 disk - 4k block, iodepth=64, librbd, no 
iothread.


              nautilus-cache=none  nautilus-cache=writeback  octopus-cache=none  octopus-cache=writeback
randread 4k         62.1k                  25.2k                   61.1k                 60.8k
randwrite 4k        27.7k                  19.5k                   34.5k                 53.0k
seqwrite 4k         7850                   37.5k                   24.9k                 82.6k
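
For reference, writeback here is simply the cache option on the VM disk, e.g. 
(VM id, storage and disk names are placeholders):

# enable writeback on an existing rbd-backed disk of VM 100
qm set 100 --scsi0 ceph_pool:vm-100-disk-0,cache=writeback

or the equivalent cache=writeback flag on the disk line in /etc/pve/qemu-server/100.conf.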




- Mail original -
De: "Mark Schouten" 
À: "proxmoxve" 
Envoyé: Jeudi 2 Juillet 2020 15:15:20
Objet: Re: [PVE-User] High I/O waits, not sure if it's a ceph issue.

On Thu, Jul 02, 2020 at 09:06:54AM +1000, Lindsay Mathieson wrote: 
> I did some adhoc testing last night - definitely a difference, in KRBD's 
> favour. Both sequential and random IO was much better with it enabled. 

Interesting! I just did some testing too on our demo cluster. Ceph with 
6 osd's over three nodes, size 2. 

root@node04:~# pveversion 
pve-manager/6.2-6/ee1d7754 (running kernel: 5.4.41-1-pve) 
root@node04:~# ceph -v 
ceph version 14.2.9 (bed944f8c45b9c98485e99b70e11bbcec6f6659a) nautilus 
(stable) 

rbd create fio_test --size 10G -p Ceph 
rbd create map_test --size 10G -p Ceph 
rbd map Ceph/map_test 

When just using a write test (rw=randwrite) krbd wins, big. 
rbd: 
WRITE: bw=37.9MiB/s (39.8MB/s), 37.9MiB/s-37.9MiB/s (39.8MB/s-39.8MB/s), 
io=10.0GiB (10.7GB), run=269904-269904msec 
krbd: 
WRITE: bw=207MiB/s (217MB/s), 207MiB/s-207MiB/s (217MB/s-217MB/s), io=10.0GiB 
(10.7GB), run=49582-49582msec 

However, using rw=randrw (rwmixread=75), things change a lot: 
rbd: 
READ: bw=49.0MiB/s (52.4MB/s), 49.0MiB/s-49.0MiB/s (52.4MB/s-52.4MB/s), 
io=7678MiB (8051MB), run=153607-153607msec 
WRITE: bw=16.7MiB/s (17.5MB/s), 16.7MiB/s-16.7MiB/s (17.5MB/s-17.5MB/s), 
io=2562MiB (2687MB), run=153607-153607msec 

krbd: 
READ: bw=5511KiB/s (5643kB/s), 5511KiB/s-5511KiB/s (5643kB/s-5643kB/s), 
io=7680MiB (8053MB), run=1426930-1426930msec 
WRITE: bw=1837KiB/s (1881kB/s), 1837KiB/s-1837KiB/s (1881kB/s-1881kB/s), 
io=2560MiB (2685MB), run=1426930-1426930msec 



Maybe I'm interpreting or testing stuff wrong, but it looks like simply writing 
to krbd is much faster, but actually trying to use that data seems slower. Let 
me know what you guys think. 


Attachments are being stripped, IIRC, so here's the config and the full output 
of the tests: 
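
A job file along these lines would reproduce the four groups shown in the results 
(pool/image names taken from the rbd commands above; clientname and the mapped 
/dev/rbd0 path are assumptions): 

# fio job file (reconstruction, not the original attachment)
[global]
bs=4k
iodepth=32
size=10G
direct=1
# serialize the jobs so each one reports as its own group
stonewall

[rbd_write]
ioengine=rbd
# clientname is an assumption
clientname=admin
pool=Ceph
rbdname=fio_test
rw=randwrite

[rbd_readwrite]
ioengine=rbd
clientname=admin
pool=Ceph
rbdname=fio_test
rw=randrw
rwmixread=75

[krbd_write]
ioengine=libaio
# /dev/rbd0 assumed to be the device created by "rbd map Ceph/map_test"
filename=/dev/rbd0
rw=randwrite

[krbd_readwrite]
ioengine=libaio
filename=/dev/rbd0
rw=randrw
rwmixread=75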

RESULTS== 
rbd_write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=rbd, iodepth=32 
rbd_readwrite: (g=1): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=rbd, iodepth=32 
krbd_write: (g=2): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=32 
krbd_readwrite: (g=3): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=32 
fio-3.12 
Starting 4 processes 
Jobs: 1 (f=1): [_(3),f(1)][100.0%][eta 00m:00s] 
rbd_write: (groupid=0, jobs=1): err= 0: pid=1846441: Thu Jul 2 15:08:42 2020 
write: IOPS=9712, BW=37.9MiB/s (39.8MB/s)(10.0GiB/269904msec); 0 zone resets 
slat (nsec): min=943, max=1131.9k, avg=6367.94, stdev=10934.84 
clat (usec): min=1045, max=259066, avg=3286.70, stdev=4553.24 
lat (usec): min=1053, max=259069, avg=3293.06, stdev=4553.20 
clat percentiles (usec): 
| 1.00th=[ 1844], 5.00th=[ 2114], 10.00th=[ 2311], 20.00th=[ 2573], 
| 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 3064], 60.00th=[ 3228], 
| 70.00th=[ 3425], 80.00th=[ 3621], 90.00th=[ 3982], 95.00th=[ 4359], 
| 99.00th=[ 5538], 99.50th=[ 6718], 99.90th=[ 82314], 99.95th=[125305], 
| 99.99th=[187696] 
bw ( KiB/s): min=17413, max=40282, per=83.81%, avg=32561.17, stdev=3777.39, 
samples=539 
iops : min= 4353, max=10070, avg=8139.93, stdev=944.34, samples=539 
lat (msec) : 2=2.64%, 4=87.80%, 10=9.37%, 20=0.08%, 50=0.01% 
lat (msec) : 100=0.02%, 250=0.09%, 500=0.01% 
cpu : usr=8.73%, sys=5.27%, ctx=1254152, majf=0, minf=8484 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0% 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0% 
issued rwts: total=0,2621440,0,0 short=0,0,0,0 dropped=0,0,0,0 
latency : target=0, window=0, percentile=100.00%, depth=32 
rbd_readwrite: (groupid=1, jobs=1): err= 0: pid=1852029: Thu Jul 2 15:08:42 
2020 
read: IOPS=12.8k, BW=49.0MiB/s (52.4MB/s)(7678MiB/153607msec) 
slat (nsec): min=315, max=4467.8k, avg=3247.91, stdev=7360.28 
clat (usec): min=276, max=160495, avg=1412.53, stdev=656.11 
lat (usec): min=281, max=160497, avg=1415.78, stdev=656.02 
clat percentiles (usec): 
| 1.00th=[ 494], 5.00th=[ 6

Re: [PVE-User] CEPH performance

2020-06-10 Thread Alexandre DERUMIER
>>AFAIK, it will be a challenge to get more that 2000 IOPS from one VM
>>using Ceph...

with iodepth=1, single queue, you'll indeed be limited by latency, and you 
shouldn't be able to reach more than 4000-5000 iops.
(this depends mainly on the cpu frequency on the client + cpu frequency on the cluster + network 
latency)

but with more parallel reads/writes, you should be able to reach 70-80k iops 
without any problem per disk.
(if you need more, you can use multiple disks with iothreads; I was able to 
scale up to 500-600k iops with 5-6 disks).


depending on your workload, you can enable writeback; it'll improve performance of 
sequential writes of small coalesced blocks.
(it regroups them into bigger blocks before sending them to ceph.)
But currently (nautilus), enabling writeback slows down reads.

with octopus (currently in test: 
http://download.proxmox.com/debian/ceph-octopus/dists/buster/test/),
this is solved, and you can always enable writeback.

octopus also has other optimisations, and writeback is able to regroup 
random non-coalesced blocks as well.

See my last benchmarks:

"
Here are some iops results with 1 vm - 1 disk - 4k block, iodepth=64, librbd, no 
iothread.


              nautilus-cache=none  nautilus-cache=writeback  octopus-cache=none  octopus-cache=writeback
randread 4k         62.1k                  25.2k                   61.1k                 60.8k
randwrite 4k        27.7k                  19.5k                   34.5k                 53.0k
seqwrite 4k         7850                   37.5k                   24.9k                 82.6k
"



- Mail original -
De: "Eneko Lacunza" 
À: "proxmoxve" 
Envoyé: Mercredi 10 Juin 2020 08:30:08
Objet: Re: [PVE-User] CEPH performance

Hi Marco, 

El 9/6/20 a las 19:46, Marco Bellini escribió: 
> Dear All, 
> I'm trying to use proxmox on a 4 nodes cluster with ceph. 
> every node has a 500G NVME drive, with dedicated 10G ceph network with 
> 9000bytes MTU. 
> 
> despite the nvme warp speed I can reach when it is used as an lvm volume, as soon as I 
> convert it into a 4-osd ceph, performance is very very poor. 
> 
> is there any trick to have ceph in proxmox working fast? 
> 
What is "very very poor"? What specs have the Proxmox nodes (CPU, RAM)? 

AFAIK, it will be a challenge to get more than 2000 IOPS from one VM 
using Ceph... 

How are you performing the benchmark? 

Cheers 

-- 
Zuzendari Teknikoa / Director Técnico 
Binovo IT Human Project, S.L. 
Telf. 943569206 
Astigarragako bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa) 
www.binovo.es 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Invalid PVE Ticket (401)

2020-05-28 Thread Alexandre DERUMIER
Hi,

Are your host clocks correctly synced?
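
A quick way to check, assuming the standard PVE 6 setup with systemd-timesyncd 
(node names below are placeholders):

# on each node: is the system clock NTP-synchronized?
timedatectl status | grep -i synchronized

# or compare the clocks of several nodes at once from one of them
for h in node1 node2 node3; do echo -n "$h: "; ssh root@$h date +%s; done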

- Mail original -
De: "Sivakumar SARAVANAN" 
À: "proxmoxve" 
Envoyé: Mercredi 27 Mai 2020 12:21:00
Objet: Re: [PVE-User] Invalid PVE Ticket (401)

Hello, 

Thanks for the reply., 

Yes, we are using 20 Proxmox servers in the datacenter, and each host will be 
working as a single-node cluster. 


Mit freundlichen Grüßen / Best regards / Cordialement, 

Sivakumar SARAVANAN 

Externer Dienstleister für / External service provider for 
Valeo Siemens eAutomotive Germany GmbH 
Research & Development 
R & D SWENG TE 1 INFTE 
Frauenauracher Straße 85 
91056 Erlangen, Germany 
Tel.: +49 9131 9892  
Mobile: +49 176 7698 5441 
sivakumar.saravanan.jv@valeo-siemens.com 
valeo-siemens.com 

Valeo Siemens eAutomotive Germany GmbH: Managing Directors: Holger 
Schwab, Michael 
Axmann; Chairman of the Supervisory Board: Hartmut Klötzer; Registered 
office: Erlangen, Germany; Commercial registry: Fürth, HRB 15655 


On Wed, May 27, 2020 at 11:37 AM harrim4n  wrote: 

> Hi, 
> 
> are you running a cluster or a single host? 
> I had the issue a few times if the times on the hosts are out of sync 
> (either client to host or between cluster hosts). 
> 
> On May 27, 2020 11:35:21 AM GMT+02:00, Sivakumar SARAVANAN < 
> sivakumar.saravanan.jv@valeo-siemens.com> wrote: 
> >Hello, 
> > 
> >I am frequently getting this error message from Proxmox Web GUI. 
> > 
> >permission denied - invalid pve ticket (401) 
> > 
> >What could be the reason ? 
> > 
> >Thanks in advance. 
> > 
> >Mit freundlichen Grüßen / Best regards / Cordialement, 
> > 
> >Sivakumar SARAVANAN 
> > 
> >Externer Dienstleister für / External service provider for 
> >Valeo Siemens eAutomotive Germany GmbH 
> >Research & Development 
> >R & D SWENG TE 1 INFTE 
> >Frauenauracher Straße 85 
> >91056 Erlangen, Germany 
> >Tel.: +49 9131 9892  
> >Mobile: +49 176 7698 5441 
> >sivakumar.saravanan.jv@valeo-siemens.com 
> >valeo-siemens.com 
> > 
> >Valeo Siemens eAutomotive Germany GmbH: Managing Directors: Holger 
> >Schwab, Michael 
> >Axmann; Chairman of the Supervisory Board: Hartmut Klötzer; Registered 
> >office: Erlangen, Germany; Commercial registry: Fürth, HRB 15655 
> > 
> >-- 
> >*This e-mail message is intended for the internal use of the intended 
> >recipient(s) only. 
> >The information contained herein is 
> >confidential/privileged. Its disclosure or reproduction is strictly 
> >prohibited. 
> >If you are not the intended recipient, please inform the sender 
> >immediately, do not disclose it internally or to third parties and 
> >destroy 
> >it. 
> > 
> >In the course of our business relationship and for business purposes 
> >only, Valeo may need to process some of your personal data. 
> >For more 
> >information, please refer to the Valeo Data Protection Statement and 
> >Privacy notice available on Valeo.com 
> >* 
> >___ 
> >pve-user mailing list 
> >pve-user@pve.proxmox.com 
> >https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
> ___ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
> 

-- 
*This e-mail message is intended for the internal use of the intended 
recipient(s) only. 
The information contained herein is 
confidential/privileged. Its disclosure or reproduction is strictly 
prohibited. 
If you are not the intended recipient, please inform the sender 
immediately, do not disclose it internally or to third parties and destroy 
it. 

In the course of our business relationship and for business purposes 
only, Valeo may need to process some of your personal data. 
For more 
information, please refer to the Valeo Data Protection Statement and 
Privacy notice available on Valeo.com 
* 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Web interface issues with latest PVE

2020-04-05 Thread Alexandre DERUMIER
>>Unfortunately I can't remember the exact time this happened it was
>>something like 3 or 4 weeks ago, I've updated since then but as all
>>servers are in production right now, I can't test and verify if it's
>>been fixed or not.

Ok, no problem. I have been able to reproduce it. It was indeed when the config 
has an extra space after the ip address or the netmask in 
/etc/network/interfaces.
(Maybe you had written it by hand before?)

I have sent a patch to the pve-devel mailing list to fix the parsing.
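
Stray trailing blanks like that are easy to spot, for example (the address below is 
made up):

# report any line of /etc/network/interfaces that ends in whitespace,
# e.g. "    address 192.168.1.10/24 " with a trailing space
grep -nE '[[:space:]]+$' /etc/network/interfaces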

- Mail original -
De: "Amin Vakil" 
À: "proxmoxve" 
Cc: "aderumier" 
Envoyé: Samedi 4 Avril 2020 19:02:31
Objet: Re: [PVE-User] Web interface issues with latest PVE

> @Amin 
>>> I have installed the latest Proxmox VE 6.1-2 and when modifying network 
>>> information from web, the prefix (netmask) is slashed off from old 
>>> interface config irrespective of whether I specify CIDR or not. And 
>>> then, I cannot create cluster from web GUI with link0 address in 
>>> dropdown if IP address has prefix (/cidr). I had to create it from CLI. 
>> I faced this problem two weeks ago and ended up configuring it from ssh 
>> as well. 
>> 
>> 
> Do you have updated to last version ? They was a small bug when new cidr 
> format for address has been introduced, 
> and fixed some days later. 
> 

Unfortunately I can't remember the exact time this happened; it was 
something like 3 or 4 weeks ago. I've updated since then, but as all 
servers are in production right now, I can't test and verify whether it's 
been fixed or not. 

I know it should be fixed, but it was not a big problem, as it's supposed 
to be configured only once in most cases, and at the time it could be worked 
around from the console instead of the GUI. 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Web interface issues with latest PVE

2020-04-05 Thread Alexandre DERUMIER
@Sonam:
>>
>>Finally, I cannot apply pending network changes from web even after 
>>installing *ifupdown2* (without reboot). It complains about 
>>subscription. Do I need enterprise subscription for that? 


Have you changed the proxmox repository to no-subscription?

https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_no_subscription_repo
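
For reference, the no-subscription entry for PVE 6 on buster looks like this (see 
the wiki page above; the file name is only a convention):

# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve buster pve-no-subscription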
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Web interface issues with latest PVE

2020-04-04 Thread Alexandre DERUMIER
@Amin
>> I have installed the latest Proxmox VE 6.1-2 and when modifying network 
>> information from web, the prefix (netmask) is slashed off from old 
>> interface config irrespective of whether I specify CIDR or not. And 
>> then, I cannot create cluster from web GUI with link0 address in 
>> dropdown if IP address has prefix (/cidr). I had to create it from CLI. 
> I faced this problem two weeks ago and ended up configuring it from ssh 
> as well. 
> 
> 
Have you updated to the latest version? There was a small bug when the new cidr format 
for addresses was introduced,
which was fixed some days later.


@Roland,

can you send ouput of #pveversion -v ?

>>that netmask 255.255.255.0 had been converted into <ip> /24

Some changes have been done recently to convert the old "address + netmask" 
form into a single "address <cidr>" line,
but this whitespace is strange.

Do you have the initial /etc/network/interfaces from before the change, and also the 
file after the config change?

- Mail original -
De: "Roland" 
À: "proxmoxve" , "Amin Vakil" 
Envoyé: Samedi 4 Avril 2020 13:48:43
Objet: Re: [PVE-User] Web interface issues with latest PVE

that may explain some weirdness i observed yesterday: 

i added a bridge, rebooted the server and after that the management ip 
address was gone - i had a look into the configuration file for 
networking to see 

that netmask 255.255.255.0 had been converted into <ip> /24 

after removing the whitespace and reboot network was ok again. 


Am 04.04.20 um 13:02 schrieb Amin Vakil: 

>> I have installed the latest Proxmox VE 6.1-2 and when modifying network 
>> information from web, the prefix (netmask) is slashed off from old 
>> interface config irrespective of whether I specify CIDR or not. And 
>> then, I cannot create cluster from web GUI with link0 address in 
>> dropdown if IP address has prefix (/cidr). I had to create it from CLI. 
> I faced this problem two weeks ago and ended up configuring it from ssh 
> as well. 
> 
> 
> ___ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox with ceph storage VM performance strangeness

2020-03-17 Thread Alexandre DERUMIER
>>What rates do you find on your proxmox/ceph cluster for single VMs?

with replication x3 and 4k block random read/write with a big queue depth, I'm 
around 70k iops read && 40k iops write

(per vm disk if iothread is used; the limitation is the cpu usage of 1 thread/core 
per disk)


with queue depth=1, I'm around 4000-5000 iops. (because of network latency + 
cpu latency).

This is with client/server with 3ghz intel cpu.
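
As an illustration, scaling over several disks looks something like this in the VM 
config (storage and image names are placeholders; iothread needs the 
virtio-scsi-single controller):

# /etc/pve/qemu-server/<vmid>.conf (excerpt, hypothetical)
scsihw: virtio-scsi-single
scsi0: ceph_rbd:vm-100-disk-0,size=100G,iothread=1
scsi1: ceph_rbd:vm-100-disk-1,size=100G,iothread=1
scsi2: ceph_rbd:vm-100-disk-2,size=100G,iothread=1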



- Mail original -
De: "Rainer Krienke" 
À: "proxmoxve" 
Envoyé: Mardi 17 Mars 2020 14:04:22
Objet: [PVE-User] Proxmox with ceph storage VM performance strangeness

Hello, 

I run a pve 6.1-7 cluster with 5 nodes that is attached (via 10Gb 
Network) to a ceph nautilus cluster with 9 ceph nodes and 144 magnetic 
disks. The pool with rbd images for disk storage is erasure coded with a 
4+2 profile. 

I ran some performance tests since I noticed that there seems to be a 
strange limit to the disk read/write rate on a single VM even if the 
physical machine hosting the VM as well as cluster is in total capable 
of doing much more. 

So what I did was to run a bonnie++ as well as a dd read/write test 
first in parallel on 10 VMs, then on 5 VMs and at last on a single one. 

A value of "75" for "bo++rd" in the first line below means that each of 
the 10 bonnie++-processes running on 10 different proxmox VMs in 
parallel reported in average over all the results a value of 
75MBytes/sec for "block read". The ceph-values are the peaks measured by 
ceph itself during the test run (all rd/wr values in MBytes/sec): 

VM-count  bo++rd  bo++wr  ceph(rd/wr)  dd-rd  dd-wr  ceph(rd/wr) 
      10      75      42      540/485     55     58      698/711 
       5      90      62      310/338     47     80      248/421 
       1     108     114      111/120    130    145      337/165 


What I find a little strange is that running many VMs doing IO in 
parallel I reach a write rate of about 485-711 MBytes/sec. However when 
running a single VM the maximum is at 120-165 MBytes/sec. Since the 
whole networking is based on a 10GB infrastructure and an iperf test 
between a VM and a ceph node reported nearby 10Gb I would expect a 
higher rate for the single VM. Even if I run a test with 5 VMs on *one* 
physical host (values not shown above), the results are not far behind 
the values for 5 VMs on 5 hosts shown above. So the single host seems 
not to be the limiting factor, but the VM itself is limiting IO. 

What rates do you find on your proxmox/ceph cluster for single VMs? 
Does any one have any explanation for this rather big difference or 
perhaps an idea what to try in order to get higher IO-rates from a 
single VM? 

Thank you very much in advance 
Rainer 



- 
Here are the more detailed test results for anyone interested: 

Using bonnie++: 
10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, BTRFS, cd /root; 
bonnie++ -u root 
Average for each VM: 
block write: ~42MByte/sec, block read: ~75MByte/sec 
ceph: total peak: 485MByte/sec write, 540MByte/sec read 

5 VMs (one on each of the 5 hosts) 4GB RAM, BTRFS, cd /root; bonnie++ -u 
root 
Average for each VM: 
block write: ~62MByte/sec, block read: ~90MByte/sec 
ceph: total peak: 338MByte/sec write, 310MByte/sec read 

1 VM 4GB RAM, BTRFS, cd /root; bonnie++ -u root 
Average for VM: 
block write: ~114 MByte/sec, block read: ~108MByte/sec 
ceph: total peak: 120 MByte/sec write, 111MByte/sec read 


Using dd: 
10 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based 
vm-disk "sdb" (rbd) 
write: dd if=/dev/zero of=/dev/sdb bs=nnn count=kkk conv=fsync 
status=progress 
read: dd of=/dev/null if=/dev/sdb bs=nnn count=kkk status=progress 
Average for each VM: 
bs=1024k count=12000: dd write: ~58MByte/sec, dd read: ~48MByte/sec 
bs=4096k count=3000: dd write: ~59MByte/sec, dd read: ~55MByte/sec 
ceph: total peak: 711MByte/sec write, 698 MByte/sec read 

5 VMs (two on each of the 5 hosts) VMs: 4GB RAM, write on a ceph based 
vm-disk "sdb" (rbd) 
write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync 
status=progress 
read: dd of=/dev/null if=/dev/sdb bs=4096k count=3000 status=progress 
Average for each VM: 
bs=4096 count=3000: dd write: ~80 MByte/sec, dd read: ~47MByte/sec 
ceph: total peak: 421MByte/sec write, 248 MByte/sec read 

1 VM: 4GB RAM, write on a ceph based vm-disk "sdb" (rbd-device) 
write: dd if=/dev/zero of=/dev/sdb bs=4096k count=3000 conv=fsync 
status=progress 
read: dd of=/dev/null if=/dev/sdb bs=4096k count=3000 status=progress 
Average for each VM: 
bs=4096k count=3000: dd write: ~145 MByte/sec, dd read: ~130 MByte/sec 
ceph: total peak: 165 MByte/sec write, 337 MByte/sec read 
-- 
Rainer Krienke, Uni Koblenz, Rechenzentrum, A22, Universitaetsstrasse 1 
56070 Koblenz, Web: http://www.uni-koblenz.de/~krienke, Tel: +49261287 1312 
PGP: http://www.uni-koblenz.de/~krienke/mypgp.html, Fax: +49261287 
1001312 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

__

Re: [PVE-User] systemd networking.service fails after last dist-upgrade

2020-03-16 Thread Alexandre DERUMIER
fixed package uploaded today 

http://download.proxmox.com/debian/pve/dists/buster/pve-no-subscription/binary-amd64/ifupdown2_2.0.1-1%2Bpve8_all.deb
 

- Mail original -
De: "proxmoxve" 
À: "proxmoxve" 
Cc: "leesteken" 
Envoyé: Lundi 16 Mars 2020 11:22:28
Objet: Re: [PVE-User] systemd networking.service fails after last dist-upgrade

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] systemd networking.service fails after last dist-upgrade

2020-03-15 Thread Alexandre DERUMIER
Hi,
edit /lib/systemd/system/networking.service

and remove the exec line containing "ExecStart=/sbin/ifup --allow=ovs".

It was a fix for openvswitch in ifupdown2, but it was fixed another way.

I have sent a mail to pve-devel to remove the patch.

thanks for reporting this.
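
For a non-interactive variant (assuming the line appears verbatim in the unit file):

# drop the offending ExecStart line and reload systemd
sed -i '\|ExecStart=/sbin/ifup --allow=ovs|d' /lib/systemd/system/networking.service
systemctl daemon-reload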

- Mail original -
De: "proxmoxve" 
À: "proxmoxve" 
Cc: leeste...@pm.me
Envoyé: Dimanche 15 Mars 2020 09:27:45
Objet: [PVE-User] systemd networking.service fails after last dist-upgrade

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Creating VM with thin provisioning (ESXi fashion)

2020-03-10 Thread Alexandre DERUMIER
Hi,

almost all storages are thin provisioned by default in proxmox

(create a new disk of 400GB, it'll use 0 until you write data on it)
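
A quick way to see it on the default lvm-thin storage, for example (VM id and the 
resulting disk name will differ):

# add a 400G disk to VM 100 on the local-lvm (thin) storage
qm set 100 --scsi1 local-lvm:400

# the thin LV reports its virtual size, but Data% stays near 0 until the guest writes
lvs pve/vm-100-disk-1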


- Mail original -
De: "Leandro Roggerone" 
À: "proxmoxve" 
Envoyé: Lundi 9 Mars 2020 12:53:29
Objet: [PVE-User] Creating VM with thin provisioning (ESXi fashion)

Sorry to compare ... between systems , I just want to get to the same 
function: 
When you create a VM on vmware you assign a max storage , for example 400Gb. 
Then you can see from vm dashboard that this VM is using 194Gb. 

But ,if I look inside the VM , you will see: 

[root@ftp ~]# lsblk 
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT 
sda 8:0 0 400G 0 disk 
├─sda1 8:1 0 1G 0 part /boot 
└─sda2 8:2 0 398.9G 0 part 
├─centos-root 253:0 0 395G 0 lvm / 
└─centos-swap 253:1 0 3.9G 0 lvm [SWAP] 
sr0 11:0 1 918M 0 rom 

The VM "believes" it has a 400Gb drive, and can continue automatically 
growing up to those 400 GB but it is currently using only 194Gb. 
I would like to work same way on my PVE environment if possible .. or what 
is best approach. 
Leandro. 


 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve and ovs-dpdk

2020-03-10 Thread Alexandre DERUMIER
Hi,
currently there is no support for dpdk in proxmox.

I have recently sent ovs 2.12 for proxmox6 (dpdk can be built too),

but there are other things to implement before it can be used (vhost-user for the 
qemu nic, for example).
you also need transparent hugepages enabled in the vm


- Mail original -
De: "jax" <525308...@qq.com>
À: "proxmoxve" 
Envoyé: Mardi 10 Mars 2020 09:39:03
Objet: [PVE-User] pve and ovs-dpdk

Hello 


how can I use ovs with dpdk in pve 5.4? I have tried to compile and install 
ovs and dpdk, but that ovs cannot be recognized on the web management page of 
pve. I then used apt-get install openvswitch-switch-dpdk to install the 
prepackaged dpdk-enabled ovs; although it is recognized, many configurations are 
not available. Does pve have any related examples or documentation to share? 




thank! 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Best practice for quorum device between data centers?

2020-02-27 Thread Alexandre DERUMIER
Well, you could use a virtual quorum vm, but


dc1: 4 nodes + HA quorum vm   dc2: 4 nodes


if you lose dc1 -> you lose quorum on dc2, so you can't start vms on dc2.
so it's not helping.

you really don't want to use HA here. but you can still play with "pvecm 
expected" to get back the quorum
if 1 dc is down. (and maybe do dc1: 5 nodes - dc2: 4 nodes, to avoid losing 
quorum on both sides)
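
As a sketch of that manual recovery (assuming 4 nodes per dc with 1 vote each, and 
dc2 completely unreachable):

# on one surviving dc1 node: 4 of 8 votes is not a majority, so lower the expected votes
pvecm expected 4
# the 4 remaining nodes become quorate and vms can be started again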


(also, I don't know what your storage is?)


- Mail original -
De: "Thomas Naumann" 
À: "proxmoxve" 
Envoyé: Jeudi 27 Février 2020 20:29:45
Objet: Re: [PVE-User] Best practice for quorum device between data centers?

hi alex, 

thanks for your response... 
that was our first idea too, but i don't like it... 
what about the idea of a virtual quorum device (VM) inside the cluster? 
in the worst case scenario (all physical connections between data centers 
are broken) there will be a manual choice to start the virtual quorum on this 
or that side of the cluster. Has anyone experience with this? 

regards, 
thomas 
On 2/27/20 6:49 PM, Alex Chekholko via pve-user wrote: 
> ___ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
> 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Network interfaces renaming strangeness...

2020-02-13 Thread Alexandre DERUMIER
also found on systemd github
https://github.com/systemd/systemd/issues/13788

it could be a firmware bug in e1000, exposing the wrong slot

- Mail original -
De: "aderumier" 
À: "proxmoxve" 
Envoyé: Jeudi 13 Février 2020 13:06:09
Objet: Re: [PVE-User] Network interfaces renaming strangeness...

Also, what is the physical setup ? 

is it a server ? are the 2 cards ,dedicated pci card ? (or maybe 1 in embedded 
in the server ?) 

Maybe are they plugged to a riser/pci extension ? 


- Mail original - 
De: "aderumier"  
À: "proxmoxve"  
Envoyé: Jeudi 13 Février 2020 12:54:40 
Objet: Re: [PVE-User] Network interfaces renaming strangeness... 

I wonder if it could be because both are detected on same slot 

ID_NET_NAME_SLOT=ens1 


can you try to edit: 
/usr/lib/systemd/network/99-default.link 


"NamePolicy=keep kernel database onboard slot path" 

and reverse slot/path 

"NamePolicy=keep kernel database onboard path slot" 

and reboot 


I have see a user on the forum with same problem 




- Mail original - 
De: "Marco Gaiarin"  
À: "proxmoxve"  
Envoyé: Jeudi 13 Février 2020 12:32:49 
Objet: Re: [PVE-User] Network interfaces renaming strangeness... 

Mandi! Alexandre DERUMIER 
In chel di` si favelave... 

> is it a fresh proxmox6 install ? or upgraded from proxmox5? 

Fresh proxmox 6, from scratch. Upgraded daily via APT. 

> any /etc/udev/rules.d/70-persistent-net.rules file somewhere ? (should be 
> removed) 

No: 
root@ino:~# ls -la /etc/udev/rules.d/70-persistent-net.rules 
ls: cannot access '/etc/udev/rules.d/70-persistent-net.rules': No such file or 
directory 


> no special grub option ? (net.ifnames, ...) 

No: 

root@ino:~# cat /etc/default/grub /etc/default/grub.d/init-select.cfg | egrep 
-v '^[[:space:]]*#' 
GRUB_DEFAULT=0 
GRUB_TIMEOUT=5 
GRUB_DISTRIBUTOR="Proxmox Virtual Environment" 
GRUB_CMDLINE_LINUX_DEFAULT="quiet nmi_watchdog=0 intel_iommu=on" 
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs" 
GRUB_DISABLE_OS_PROBER=true 
GRUB_DISABLE_RECOVERY="true" 

-- 
dott. Marco Gaiarin GNUPG Key ID: 240A3D66 
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ 
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN) 
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! 
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Network interfaces renaming strangeness...

2020-02-13 Thread Alexandre DERUMIER
Also, what is the physical setup?

is it a server? are the 2 cards dedicated pci cards? (or is maybe 1 embedded 
in the server?)

Maybe they are plugged into a riser/pci extension?


- Mail original -
De: "aderumier" 
À: "proxmoxve" 
Envoyé: Jeudi 13 Février 2020 12:54:40
Objet: Re: [PVE-User] Network interfaces renaming strangeness...

I wonder if it could be because both are detected on same slot 

ID_NET_NAME_SLOT=ens1 


can you try to edit: 
/usr/lib/systemd/network/99-default.link 


"NamePolicy=keep kernel database onboard slot path" 

and reverse slot/path 

"NamePolicy=keep kernel database onboard path slot" 

and reboot 


I have see a user on the forum with same problem 




- Mail original - 
De: "Marco Gaiarin"  
À: "proxmoxve"  
Envoyé: Jeudi 13 Février 2020 12:32:49 
Objet: Re: [PVE-User] Network interfaces renaming strangeness... 

Mandi! Alexandre DERUMIER 
In chel di` si favelave... 

> is it a fresh proxmox6 install ? or upgraded from proxmox5? 

Fresh proxmox 6, from scratch. Upgraded daily via APT. 

> any /etc/udev/rules.d/70-persistent-net.rules file somewhere ? (should be 
> removed) 

No: 
root@ino:~# ls -la /etc/udev/rules.d/70-persistent-net.rules 
ls: cannot access '/etc/udev/rules.d/70-persistent-net.rules': No such file or 
directory 


> no special grub option ? (net.ifnames, ...) 

No: 

root@ino:~# cat /etc/default/grub /etc/default/grub.d/init-select.cfg | egrep 
-v '^[[:space:]]*#' 
GRUB_DEFAULT=0 
GRUB_TIMEOUT=5 
GRUB_DISTRIBUTOR="Proxmox Virtual Environment" 
GRUB_CMDLINE_LINUX_DEFAULT="quiet nmi_watchdog=0 intel_iommu=on" 
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs" 
GRUB_DISABLE_OS_PROBER=true 
GRUB_DISABLE_RECOVERY="true" 

-- 
dott. Marco Gaiarin GNUPG Key ID: 240A3D66 
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ 
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN) 
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! 
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Network interfaces renaming strangeness...

2020-02-13 Thread Alexandre DERUMIER
I wonder if it could be because both are detected on the same slot:

ID_NET_NAME_SLOT=ens1


can you try to edit:
/usr/lib/systemd/network/99-default.link


"NamePolicy=keep kernel database onboard slot path"

and reverse slot/path

"NamePolicy=keep kernel database onboard path slot"

and reboot


I have seen a user on the forum with the same problem
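
For reference, the whole file after the change would look roughly like this (layout 
as shipped by systemd on buster, quoted from memory, so verify against your copy):

# /usr/lib/systemd/network/99-default.link
[Match]
OriginalName=*

[Link]
NamePolicy=keep kernel database onboard path slot
MACAddressPolicy=persistent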




- Mail original -
De: "Marco Gaiarin" 
À: "proxmoxve" 
Envoyé: Jeudi 13 Février 2020 12:32:49
Objet: Re: [PVE-User] Network interfaces renaming strangeness...

Mandi! Alexandre DERUMIER 
In chel di` si favelave... 

> is it a fresh proxmox6 install ? or upgraded from proxmox5? 

Fresh proxmox 6, from scratch. Upgraded daily via APT. 

> any /etc/udev/rules.d/70-persistent-net.rules file somewhere ? (should be 
> removed) 

No: 
root@ino:~# ls -la /etc/udev/rules.d/70-persistent-net.rules 
ls: cannot access '/etc/udev/rules.d/70-persistent-net.rules': No such file or 
directory 


> no special grub option ? (net.ifnames, ...) 

No: 

root@ino:~# cat /etc/default/grub /etc/default/grub.d/init-select.cfg | egrep 
-v '^[[:space:]]*#' 
GRUB_DEFAULT=0 
GRUB_TIMEOUT=5 
GRUB_DISTRIBUTOR="Proxmox Virtual Environment" 
GRUB_CMDLINE_LINUX_DEFAULT="quiet nmi_watchdog=0 intel_iommu=on" 
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs" 
GRUB_DISABLE_OS_PROBER=true 
GRUB_DISABLE_RECOVERY="true" 

-- 
dott. Marco Gaiarin GNUPG Key ID: 240A3D66 
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ 
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN) 
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! 
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Network interfaces renaming strangeness...

2020-02-13 Thread Alexandre DERUMIER
hi,

is it a fresh proxmox6 install ?  or upgraded from proxmox5?

any /etc/udev/rules.d/70-persistent-net.rules file somewhere ?  (should be 
removed)

no special grub option ? (net.ifnames, ...)


- Mail original -
De: "Marco Gaiarin" 
À: "proxmoxve" 
Envoyé: Jeudi 13 Février 2020 11:47:47
Objet: [PVE-User] Network interfaces renaming strangeness...

I've setup a little 'home' PVE server: proxmox6 (debian buster, kernel 
5.3.18-1-pve). 

Bultin network card get detected and renamed: 

root@ino:~# grep tg3 /var/log/kern.log 
Feb 11 09:16:42 ino kernel: [ 3.449190] tg3.c:v3.137 (May 11, 2014) 
Feb 11 09:16:42 ino kernel: [ 3.468877] tg3 :1e:00.0 eth0: Tigon3 
[partno(BCM95723) rev 5784100] (PCI Express) MAC address 9c:8e:99:7b:86:d9 
Feb 11 09:16:42 ino kernel: [ 3.468879] tg3 :1e:00.0 eth0: attached PHY is 
5784 (10/100/1000Base-T Ethernet) (WireSpeed[1], EEE[0]) 
Feb 11 09:16:42 ino kernel: [ 3.468881] tg3 :1e:00.0 eth0: RXcsums[1] 
LinkChgREG[0] MIirq[0] ASF[1] TSOcap[1] 
Feb 11 09:16:42 ino kernel: [ 3.468883] tg3 :1e:00.0 eth0: 
dma_rwctrl[7618] dma_mask[64-bit] 
Feb 11 09:16:42 ino kernel: [ 3.485088] tg3 :1e:00.0 ens1: renamed from 
eth0 
Feb 11 09:16:44 ino kernel: [ 101.292206] tg3 :1e:00.0 ens1: Link is up at 
100 Mbps, full duplex 
Feb 11 09:16:44 ino kernel: [ 101.292208] tg3 :1e:00.0 ens1: Flow control 
is on for TX and on for RX 

After that, i've addedd a second nic, pcie slot, that get detected but 
'strangely' renamed: 

root@ino:~# grep e1000e /var/log/kern.log 
Feb 11 09:16:42 ino kernel: [ 3.451250] e1000e: Intel(R) PRO/1000 Network 
Driver - 3.2.6-k 
Feb 11 09:16:42 ino kernel: [ 3.451251] e1000e: Copyright(c) 1999 - 2015 Intel 
Corporation. 
Feb 11 09:16:42 ino kernel: [ 3.451430] e1000e :20:00.0: Interrupt 
Throttling Rate (ints/sec) set to dynamic conservative mode 
Feb 11 09:16:42 ino kernel: [ 3.502028] e1000e :20:00.0 :20:00.0 
(uninitialized): registered PHC clock 
Feb 11 09:16:42 ino kernel: [ 3.556137] e1000e :20:00.0 eth0: (PCI 
Express:2.5GT/s:Width x1) 2c:27:d7:14:9b:67 
Feb 11 09:16:42 ino kernel: [ 3.556139] e1000e :20:00.0 eth0: Intel(R) 
PRO/1000 Network Connection 
Feb 11 09:16:42 ino kernel: [ 3.556182] e1000e :20:00.0 eth0: MAC: 3, PHY: 
8, PBA No: G17305-003 
Feb 11 09:16:42 ino kernel: [ 3.557393] e1000e :20:00.0 rename3: renamed 
from eth0 

Looking at: 

https://wiki.debian.org/NetworkInterfaceNames 

Strangeness remain, look at: 

root@ino:~# udevadm test-builtin net_id /sys/class/net/ens1 
Load module index 
Parsed configuration file /usr/lib/systemd/network/99-default.link 
Created link configuration context. 
Using default interface naming scheme 'v240'. 
ID_NET_NAMING_SCHEME=v240 
ID_NET_NAME_MAC=enx9c8e997b86d9 
ID_OUI_FROM_DATABASE=Hewlett Packard 
ID_NET_NAME_PATH=enp30s0 
ID_NET_NAME_SLOT=ens1 
Unload module index 
Unloaded link configuration context. 

root@ino:~# udevadm test-builtin net_id /sys/class/net/rename3 
Load module index 
Parsed configuration file /usr/lib/systemd/network/99-default.link 
Created link configuration context. 
Using default interface naming scheme 'v240'. 
ID_NET_NAMING_SCHEME=v240 
ID_NET_NAME_MAC=enx2c27d7149b67 
ID_OUI_FROM_DATABASE=Hewlett Packard 
ID_NET_NAME_PATH=enp32s0 
ID_NET_NAME_SLOT=ens1 
Unload module index 
Unloaded link configuration context. 

As far as I've understood, the network names should have been 'enp30s0' and 
'enp32s0' respectively, not 'ens1' and 'rename3'... 


Why?! Thanks. 

-- 
dott. Marco Gaiarin GNUPG Key ID: 240A3D66 
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ 
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN) 
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! 
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] RBD Storage from 6.1 to 3.4 (or 4.4)

2020-01-30 Thread Alexandre DERUMIER
ceph client and server are generally compatible within 2 or 3 releases.

There is no way to make nautilus or luminous clients work with firefly.
I think the minimum is a jewel server for a nautilus client.


So the best way would be to upgrade your old proxmox cluster first (from 4->6; 
this can be done easily without downtime).

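If you prefer the "rbd export | ncat" route that Uwe mentions further down, a rough 
sketch (pool/image names, host and port are placeholders, and netcat flags vary 
between implementations):

# on the destination cluster: listen and import the stream into the new pool
nc -l -p 7777 | rbd import - rbd_new/vm-100-disk-0

# on the source cluster: export the image to stdout and send it over
rbd export rbd2/vm-100-disk-0 - | nc dest-node 7777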


- Mail original -
De: "Fabrizio Cuseo" 
À: "uwe sauter de" , "proxmoxve" 

Envoyé: Jeudi 30 Janvier 2020 12:59:13
Objet: Re: [PVE-User] RBD Storage from 6.1 to 3.4 (or 4.4)

I can't afford the long downtime. With my method, the downtime is only to stop 
the VM on the old cluster and start it on the new one; the disk image copy is done 
online. 

But my last migration was from 3.4 to 4.4 


- Il 30-gen-20, alle 12:51, Uwe Sauter uwe.sauter...@gmail.com ha scritto: 

> If you can afford the downtime of the VMS you might be able to migrate the 
> disk 
> images using "rbd export | ncat" and "ncat | rbd 
> import". 
> 
> I haven't tried this with such a great difference of versions but from 
> Proxmox 
> 5.4 to 6.1 this worked without a problem. 
> 
> Regards, 
> 
> Uwe 
> 
> 
> Am 30.01.20 um 12:46 schrieb Fabrizio Cuseo: 
>> 
>> I have installed a new cluster with the last release, with a local ceph 
>> storage. 
>> I also have 2 old and smaller clusters, and I need to migrate all the VMs to 
>> the 
>> new cluster. 
>> The best method i have used in past is to add on the NEW cluster the RBD 
>> storage 
>> of the old cluster, so I can stop the VM, move the .cfg file, start the vm 
>> (all 
>> those operations are really quick), and move the disk (online) from the old 
>> storage to the new storage. 
>> 
>> But now, if I add the RBD storage, copying the keyring file of the old 
>> cluster 
>> to the new cluster, naming as the storage ID, and using the old cluster 
>> monitors IP, i can see the storage summary (space total and used), but when 
>> I 
>> go to "content", i have this error: "rbd error: rbd: listing images failed: 
>> (95) Operation not supported (500)". 
>> 
>> If, from the new cluster CLI, i use the command: 
>> 
>> rbd -k /etc/pve/priv/ceph/CephOLD.keyring -m 172.16.20.31 ls rbd2 
>> 
>> I can see the list of disk images, but also the error: "librbd::api::Trash: 
>> list: error listing rbd trash entries: (95) Operation not supported" 
>> 
>> 
>> The new cluster ceph release is Nautilus, and the old one is firefly. 
>> 
>> Some idea ? 
>> 
>> Thanks in advance, Fabrizio 
>> 
>> ___ 
>> pve-user mailing list 
>> pve-user@pve.proxmox.com 
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
>> 
> 
> ___ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

-- 
--- 
Fabrizio Cuseo - mailto:f.cu...@panservice.it 
Direzione Generale - Panservice InterNetWorking 
Servizi Professionali per Internet ed il Networking 
Panservice e' associata AIIP - RIPE Local Registry 
Phone: +39 0773 410020 - Fax: +39 0773 470219 
http://www.panservice.it mailto:i...@panservice.it 
Numero verde nazionale: 800 901492 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] CPU freq scale in PVE 6.0

2020-01-26 Thread Alexandre DERUMIER
what is your cpu / motherboard / server model ?


- Mail original -
De: "José Manuel Giner" 
À: "proxmoxve" 
Envoyé: Dimanche 26 Janvier 2020 12:40:43
Objet: Re: [PVE-User] CPU freq scale in PVE 6.0

I edited /etc/default/grub 
Updated with update-grub command 
Checked that /boot/grub/grub.cfg has been updated correctly 
But after reboot nothing has changed, 
freq is still btw 1.20 and 3.60 GHz 

Any idea how to fix? 


On 23/01/2020 20:38, Alexandre DERUMIER wrote: 
> Hi, 
> I'm setup this now for my new intel processors: 
> 
> /etc/default/grub 
> 
> GRUB_CMDLINE_LINUX="intel_idle.max_cstate=0 intel_pstate=disable 
> processor.max_cstate=1" 
> 
> 
> 
> - Mail original - 
> De: "José Manuel Giner"  
> À: "proxmoxve"  
> Envoyé: Jeudi 23 Janvier 2020 15:34:47 
> Objet: [PVE-User] CPU freq scale in PVE 6.0 
> 
> Hello, 
> 
> Since PVE 6.0, we detect that the CPU frequency is dynamic even if the 
> governor "performance" is selected. 
> 
> How can we set the maximum frequency in all the cores? 
> 
> Thank you. 
> 
> 
> root@ns1031:~# cat /proc/cpuinfo | grep MHz 
> cpu MHz : 1908.704 
> cpu MHz : 1430.150 
> cpu MHz : 1436.616 
> cpu MHz : 1433.659 
> cpu MHz : 1547.050 
> cpu MHz : 2299.611 
> cpu MHz : 2931.782 
> cpu MHz : 3321.946 
> cpu MHz : 3397.896 
> cpu MHz : 3401.725 
> cpu MHz : 3401.701 
> cpu MHz : 3170.021 
> cpu MHz : 2502.847 
> cpu MHz : 1534.596 
> cpu MHz : 2969.438 
> cpu MHz : 3435.646 
> cpu MHz : 1992.717 
> cpu MHz : 2922.292 
> cpu MHz : 2917.282 
> cpu MHz : 2921.654 
> cpu MHz : 2484.807 
> cpu MHz : 1866.995 
> cpu MHz : 1629.151 
> cpu MHz : 2841.298 
> cpu MHz : 3404.503 
> cpu MHz : 3401.732 
> cpu MHz : 3406.120 
> cpu MHz : 2189.093 
> cpu MHz : 1545.454 
> cpu MHz : 3348.573 
> cpu MHz : 1564.159 
> cpu MHz : 3438.787 
> root@ns1031:~# 
> root@ns1031:~# 
> root@ns1031:~# 
> root@ns1031:~# cpupower frequency-info 
> analyzing CPU 0: 
> driver: intel_pstate 
> CPUs which run at the same hardware frequency: 0 
> CPUs which need to have their frequency coordinated by software: 0 
> maximum transition latency: Cannot determine or is not supported. 
> hardware limits: 1.20 GHz - 3.60 GHz 
> available cpufreq governors: performance powersave 
> current policy: frequency should be within 1.20 GHz and 3.60 GHz. 
> The governor "performance" may decide which speed to use 
> within this range. 
> current CPU frequency: Unable to call hardware 
> current CPU frequency: 1.90 GHz (asserted by call to kernel) 
> boost state support: 
> Supported: yes 
> Active: yes 
> root@ns1031:~# 
> root@ns1031:~# 
> 
> 
> 

-- 
José Manuel Giner 
https://ginernet.com 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VxLAN and tagged frames

2020-01-24 Thread Alexandre DERUMIER
>Arf. ifupdown2 seems to be needed for vxlan interfaces to be setup. 
yes, ifupdown2 is needed.

>>But it somehow breaks my ARP proxy setup on the WAN interface. 
>>Not sure why, everything seems to be correctly setup, but the host doesn't 
>>answer to ARP requests anymore. And everything is back to normal as soon as I 
>>revert to classic ifupdown. 
>>I'll try to look at this a bit later, when I more some spare time. 

I'm not sure, but maybe you can try to add

iface WAN
   ...
   arp-accept on



About vlan bridge->vxlan: I have done some tests again with the latest kernel, and it seems 
that 1 vlan-aware bridge + 1 vxlan tunnel (tunnel_mode) is still broken,
so the only possible way is 1 vlan-aware bridge + multiple vxlan tunnels.

This can be done easily with ifupdown2 like this:




%for v in range(1010,1021):
auto vxlan${v}
iface vxlan${v}
vxlan-id ${v}
bridge-access ${v}
vxlan_remoteip 192.168.0.2   
vxlan_remoteip 192.168.0.3   
%endfor


auto vmbr2
iface vmbr2 inet manual
bridge_ports glob vxlan1010-1020
bridge_stp off
bridge_fd 0
bridge-vlan-aware yes
bridge-vids 2-4094


This will map vlan 1010-1020 to vxlan 1010-1020;
the vxlan interfaces are created with a template in a loop.

I have tested it, and it's working fine.



- Mail original -
De: "Daniel Berteaud" 
À: "proxmoxve" 
Envoyé: Vendredi 24 Janvier 2020 10:15:34
Objet: Re: [PVE-User] VxLAN and tagged frames

- Le 24 Jan 20, à 8:20, Daniel Berteaud dan...@firewall-services.com a 
écrit : 

> - Le 23 Jan 20, à 20:53, Alexandre DERUMIER aderum...@odiso.com a écrit : 

>> 
>> I think if you want to do something like a simple vxlan tunnel, with 
>> multiple 
>> vlan, something like this should work (need to be tested): 
>> 
>> auto vxlan2 
>> iface vxlan2 inet manual 
>> vxlan-id 2 
>> vxlan_remoteip 192.168.0.2 
>> vxlan_remoteip 192.168.0.3 
>> 
>> auto vmbr2 
>> iface vmbr2 inet manual 
>> bridge_ports vxlan2 
>> bridge_stp off 
>> bridge_fd 0 
>> bridge-vlan-aware yes 
>> bridge-vids 2-4096 
> 
> I'll try something like that. 

Arf. ifupdown2 seems to be needed for vxlan interfaces to be setup. But it 
somehow breaks my ARP proxy setup on the WAN interface. 
Not sure why, everything seems to be correctly setup, but the host doesn't 
answer to ARP requests anymore. And everything is back to normal as soon as I 
revert to classic ifupdown. 
I'll try to look at this a bit later, when I have some more spare time. 

++ 

-- 
[ https://www.firewall-services.com/ ] 
Daniel Berteaud 
FIREWALL-SERVICES SAS, La sécurité des réseaux 
Société de Services en Logiciels Libres 
Tél : +33.5 56 64 15 32 
Matrix: @dani:fws.fr 
[ https://www.firewall-services.com/ | https://www.firewall-services.com ] 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VxLAN and tagged frames

2020-01-23 Thread Alexandre DERUMIER
Hi,

>>So, what's the recommended setup for this ? Create one (non vlan aware) 
>>bridge for each network zone, with 1 VxLAN tunnel per bridge between nodes ? 

yes, you need 1 non-vlan aware bridge + 1 vxlan tunnel. 

Technically there is vlan (from a vlan-aware bridge) to vxlan mapping in the kernel, but 
it's really new and unstable.
I don't know if it's possible to send vlan tagged frames inside a vxlan; I have never 
tested it.

>>This doesn't look very scalable compared with >>vlan aware bridges (or OVS 
>>bridges) with GRE tunnels, does it ? 

I have tested it with 2000 vxlans + 2000 bridges. Works fine. Is it enough for 
you?



>>Are the expirimental SDN plugins available somewhere as deb so I can play a 
>>bit with it ? (couldn't find it in pve-test or no-subscription)

#apt-get install libpve-network-perl  (try for pvetest repo if possible)


The gui is not finished yet, but you can try it at
http://odisoweb1.odiso.net/pve-manager_6.1-5_amd64.deb





I think if you want to do something like a simple vxlan tunnel, with multiple 
vlan, something like this should work (need to be tested):

auto vxlan2
iface vxlan2 inet manual
vxlan-id 2
vxlan_remoteip 192.168.0.2
vxlan_remoteip 192.168.0.3

auto vmbr2
iface vmbr2 inet manual
bridge_ports vxlan2
bridge_stp off
bridge_fd 0
bridge-vlan-aware yes
bridge-vids 2-4096


Note that it's possible to do gre tunnel with ifupdown2, I can send the config 
if you need it

- Mail original -
De: "Daniel Berteaud" 
À: "proxmoxve" 
Envoyé: Mercredi 22 Janvier 2020 08:33:33
Objet: [PVE-User] VxLAN and tagged frames

Hi there 

At a french hoster (Online.net), we have a private network available on 
dedicated server, but without QinQ support. So, we can't rely on native VLAN 
between nodes. Up to now, I created a single OVS bridge on every node, with GRE 
tunnels with each other. The GRE tunnel transport tagged frames and everything 
is working. 
But I see there is some work on SDN plugins and VxLAN support. I read 
https://git.proxmox.com/?p=pve-docs.git;a=blob_plain;f=vxlan-and-evpn.adoc;hb=HEAD 
but there is some stuff I'm not sure I understand. 
Especially with vlan aware bridges. 

I like to rely on VLAN aware bridges so I don't have to touch network conf of 
the hypervisors to create a new network zone. I just use a new, unused VLAN ID. 

But the doc about VxLAN support on vlan aware bridges has been removed (see 
https://git.proxmox.com/?p=pve-docs.git;a=commitdiff;h=5dde3d645834b204257e8d5b3ce8b65e6842abe8;hp=d4a9910fec45b1153b1cd954a006d267d42c707a 
) 

So, what's the recommended setup for this ? Create one (non vlan aware) bridge 
for each network zone, with 1 VxLAN tunnel per bridge between nodes ? This 
doesn't look very scalable compared with vlan aware bridges (or OVS bridges) 
with GRE tunnels, does it ? 

Are the experimental SDN plugins available somewhere as a deb so I can play a bit 
with it ? (couldn't find it in pve-test or no-subscription) 

Cheers, 
Daniel 

-- 


[ https://www.firewall-services.com/ ] 
Daniel Berteaud 
FIREWALL-SERVICES SAS, La sécurité des réseaux 
Société de Services en Logiciels Libres 
Tél : +33.5 56 64 15 32 
Matrix: @dani:fws.fr 
[ https://www.firewall-services.com/ | https://www.firewall-services.com ] 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] CPU freq scale in PVE 6.0

2020-01-23 Thread Alexandre DERUMIER
Hi,
I'm setup this now for my new intel processors:

/etc/default/grub

GRUB_CMDLINE_LINUX="intel_idle.max_cstate=0 intel_pstate=disable 
processor.max_cstate=1"



- Mail original -
De: "José Manuel Giner" 
À: "proxmoxve" 
Envoyé: Jeudi 23 Janvier 2020 15:34:47
Objet: [PVE-User] CPU freq scale in PVE 6.0

Hello, 

Since PVE 6.0, we detect that the CPU frequency is dynamic even if the 
governor "performance" is selected. 

How can we set the maximum frequency in all the cores? 

Thank you. 


root@ns1031:~# cat /proc/cpuinfo | grep MHz 
cpu MHz : 1908.704 
cpu MHz : 1430.150 
cpu MHz : 1436.616 
cpu MHz : 1433.659 
cpu MHz : 1547.050 
cpu MHz : 2299.611 
cpu MHz : 2931.782 
cpu MHz : 3321.946 
cpu MHz : 3397.896 
cpu MHz : 3401.725 
cpu MHz : 3401.701 
cpu MHz : 3170.021 
cpu MHz : 2502.847 
cpu MHz : 1534.596 
cpu MHz : 2969.438 
cpu MHz : 3435.646 
cpu MHz : 1992.717 
cpu MHz : 2922.292 
cpu MHz : 2917.282 
cpu MHz : 2921.654 
cpu MHz : 2484.807 
cpu MHz : 1866.995 
cpu MHz : 1629.151 
cpu MHz : 2841.298 
cpu MHz : 3404.503 
cpu MHz : 3401.732 
cpu MHz : 3406.120 
cpu MHz : 2189.093 
cpu MHz : 1545.454 
cpu MHz : 3348.573 
cpu MHz : 1564.159 
cpu MHz : 3438.787 
root@ns1031:~# 
root@ns1031:~# 
root@ns1031:~# 
root@ns1031:~# cpupower frequency-info 
analyzing CPU 0: 
driver: intel_pstate 
CPUs which run at the same hardware frequency: 0 
CPUs which need to have their frequency coordinated by software: 0 
maximum transition latency: Cannot determine or is not supported. 
hardware limits: 1.20 GHz - 3.60 GHz 
available cpufreq governors: performance powersave 
current policy: frequency should be within 1.20 GHz and 3.60 GHz. 
The governor "performance" may decide which speed to use 
within this range. 
current CPU frequency: Unable to call hardware 
current CPU frequency: 1.90 GHz (asserted by call to kernel) 
boost state support: 
Supported: yes 
Active: yes 
root@ns1031:~# 
root@ns1031:~# 



-- 
José Manuel Giner 
https://ginernet.com 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Discard transmission between VM / LVM / mdadm layers ?

2019-12-13 Thread Alexandre DERUMIER
>>When a VM sends a discard/trim command, is it sent to the SSD, LVM does 
>>not block the command? 

Hi, yes it's working with lvm (lvm thin only)

>>Or is it useless, because mdadm handles discard/trim in his own way? 

The trim command needs to be sent by the guest os.

on a linux guest: set the discard option on the mountpoint in /etc/fstab, or run the fstrim command 
from cron;
on windows: it's a periodic scheduled task (in the "optimize disk" tool)
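
Putting the pieces together, a minimal sketch (device names and storage are 
placeholders):

# PVE side: the disk should sit on a Virtio SCSI controller and have discard enabled, e.g.
#   scsi0: local-lvm:vm-100-disk-0,size=32G,discard=on

# linux guest side: either mount with the discard option in /etc/fstab ...
/dev/sda1  /  ext4  defaults,discard  0  1

# ... or trim periodically instead (cron job or the fstrim.timer systemd unit)
fstrim -av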



- Mail original -
De: "Frédéric Massot" 
À: "proxmoxve" 
Envoyé: Jeudi 12 Décembre 2019 17:00:47
Objet: [PVE-User] Discard transmission between VM / LVM / mdadm layers ?

Hi, 

I have a question about the discard/trim transmission between the VM / 
LVM / mdadm layers up to the SSD. 

I have a server with four disks mounted in a RAID 10 array with mdadm. 
On this RAID 10 array, there is a volume group with LVM, which contains 
multiple logical volumes for the hypervisor and VMs. fstrim is 
periodically launched on the hypervisor and VMs. 

I know that : 
- A VM can pass discard/trim commands to the lower layer if it uses a 
"Virtio SCSI" controller. 
- LVM (since 2.02.85 with issue_discards enabled) can pass discard/trim 
to the lower layer during remove or reduce operations of a logical volume. 
- mdadm supports discard/trim since kernel 3.7. 

When a VM sends a discard/trim command, is it sent to the SSD, LVM does 
not block the command? 
Or is it useless, because mdadm handles discard/trim in his own way? 


Regards. 
-- 
== 
| FRÉDÉRIC MASSOT | 
| http://www.juliana-multimedia.com | 
| mailto:frede...@juliana-multimedia.com | 
| +33.(0)2.97.54.77.94 +33.(0)6.67.19.95.69 | 
===Debian=GNU/Linux=== 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6

2019-11-21 Thread Alexandre DERUMIER
>>Anyway, just for my education, is there someone here who can explain
>>shortly what was the problem (unicast management ?) or who have a good
>>link regarding this "behavior" ? Thanks !

There were multiple bugs in corosync 3
(difficult to explain; it was very difficult to debug).

Here are more details:

https://github.com/kronosnet/kronosnet/commit/f1a5de2141a73716c09566f294e3873add5c3ff3

https://github.com/kronosnet/kronosnet/commit/1338058fa634b08eee7099c0614e8076267501ff


- Mail original -
De: "Hervé Ballans" 
À: "proxmoxve" 
Envoyé: Mercredi 20 Novembre 2019 16:12:53
Objet: Re: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6

Dear all, 

Since we upgraded to these versions (15 days ago), we don't encounter 
anymore the problem :) 

We are still waiting a few more days to be sure of stability but looks 
good ! 

Anyway, just for my education, is there someone here who can explain 
shortly what was the problem (unicast management ?) or who have a good 
link regarding this "behavior" ? Thanks ! 

Cheers, 
rv 

Le 08/11/2019 à 11:18, Alexandre DERUMIER a écrit : 
> Hi, 
> 
> do you have upgrade all your nodes to 
> 
> corosync 3.0.2-pve4 
> libknet1:amd64 1.13-pve1 
> 
> 
> ? 
> 
> (available in pve-no-subscription et pve-enteprise repos) 
> 
> - Mail original - 
> De: "Eneko Lacunza"  
> À: "proxmoxve"  
> Envoyé: Jeudi 7 Novembre 2019 15:35:38 
> Objet: Re: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 
> 
> Hi all, 
> 
> We updated our office cluster to get the patch, but got a node reboot on 
> 31th october. Node was fenced and rebooted, everything continued working OK. 
> 
> Is anyone experencing yet this problem? 
> 
> Cheers 
> Eneko 
> 
> El 2/10/19 a las 18:09, Hervé Ballans escribió: 
>> Hi Alexandre, 
>> 
>> We encouter exactly the same problem as Laurent Caron (after upgrade 
>> from 5 to 6). 
>> 
>> So I tried your patch 3 days ago, but unfortunately, the problem still 
>> occurs... 
>> 
>> This is a really annoying problem, since sometimes, all the PVE nodes 
>> of our cluster reboot quasi-simultaneously ! 
>> And in the same time, we don't encounter this problem with our other 
>> PVE cluster in version 5. 
>> (And obviously we are waiting for a solution and a stable situation 
>> before upgrade it !) 
>> 
>> It seems to be a unicast or corosync3 problem, but logs are not really 
>> verbose at the time of reboot... 
>> 
>> Is there anything else to test ? 
>> 
>> Regards, 
>> Hervé 
>> 
>> Le 20/09/2019 à 17:00, Alexandre DERUMIER a écrit : 
>>> Hi, 
>>> 
>>> a patch is available in pvetest 
>>> 
>>> http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/libknet1_1.11-pve2_amd64.deb
>>>  
>>> 
>>> 
>>> can you test it ? 
>>> 
>>> (you need to restart corosync after install of the deb) 
>>> 
>>> 
>>> - Mail original - 
>>> De: "Laurent CARON"  
>>> À: "proxmoxve"  
>>> Envoyé: Lundi 16 Septembre 2019 09:55:34 
>>> Objet: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 
>>> 
>>> Hi, 
>>> 
>>> 
>>> After upgrading our 4 node cluster from PVE 5 to 6, we experience 
>>> constant crashed (once every 2 days). 
>>> 
>>> Those crashes seem related to corosync. 
>>> 
>>> Since numerous users are reporting sych issues (broken cluster after 
>>> upgrade, unstabilities, ...) I wonder if it is possible to downgrade 
>>> corosync to version 2.4.4 without impacting functionnality ? 
>>> 
>>> Basic steps would be: 
>>> 
>>> On all nodes 
>>> 
>>> # systemctl stop pve-ha-lrm 
>>> 
>>> Once done, on all nodes: 
>>> 
>>> # systemctl stop pve-ha-crm 
>>> 
>>> Once done, on all nodes: 
>>> 
>>> # apt-get install corosync=2.4.4-pve1 libcorosync-common4=2.4.4-pve1 
>>> libcmap4=2.4.4-pve1 libcpg4=2.4.4-pve1 libqb0=1.0.3-1~bpo9 
>>> libquorum5=2.4.4-pve1 libvotequorum8=2.4.4-pve1 
>>> 
>>> Then, once corosync has been downgraded, on all nodes 
>>> 
>>> # systemctl start pve-ha-lrm 
>>> # systemctl start pve-ha-crm 
>>> 
>>> Would that work ? 
>>> 
>>> Thanks 
>>> 
>>> ___ 
>>> pve-user mailing list 
>>> pve-user@pve.proxmox.com 
>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
>>> 
>>> ___ 
>>> pve-user mailing list 
>>> pve-user@pve.proxmox.com 
>>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
>> 
>> ___ 
>> pve-user mailing list 
>> pve-user@pve.proxmox.com 
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
> 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VMID clarifying

2019-11-08 Thread Alexandre DERUMIER
>>Reading that the final suggestion is: using a random number, then 
>>perhaps pve could simply suggest a random PVID number between 1 and 4 
>>billion. 

use uuid in this case.

The problem is that VM tap interfaces, for example, use the vmid in the name
(tap<vmid>i<n>),
and it's limited to 16 characters by the Linux kernel
(so this would need some kind of table/conf mapping when taps are generated).





- Mail original -
De: "lists" 
À: "proxmoxve" 
Envoyé: Mardi 22 Octobre 2019 16:47:43
Objet: Re: [PVE-User] VMID clarifying

ok, have read it. 

Pity, the outcome. 

Reading that the final suggestion is: using a random number, then 
perhaps pve could simply suggest a random PVID number between 1 and 4 
billion. 

(and if already in use: choose another random number) 

No need to store anything anywhere, and chances of duplicating a PVID 
would be virtually zero. 

MJ 

On 22-10-2019 16:19, Dominik Csapak wrote: 
> there was already a lengthy discussion of this topic on the bugtracker 
> see https://bugzilla.proxmox.com/show_bug.cgi?id=1822 
> 
> ___ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6

2019-11-08 Thread Alexandre DERUMIER
Hi,

Have you upgraded all your nodes to

corosync          3.0.2-pve4
libknet1:amd64    1.13-pve1


?

(available in the pve-no-subscription and pve-enterprise repos)

- Mail original -
De: "Eneko Lacunza" 
À: "proxmoxve" 
Envoyé: Jeudi 7 Novembre 2019 15:35:38
Objet: Re: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6

Hi all, 

We updated our office cluster to get the patch, but got a node reboot on 
31th october. Node was fenced and rebooted, everything continued working OK. 

Is anyone experencing yet this problem? 

Cheers 
Eneko 

El 2/10/19 a las 18:09, Hervé Ballans escribió: 
> Hi Alexandre, 
> 
> We encouter exactly the same problem as Laurent Caron (after upgrade 
> from 5 to 6). 
> 
> So I tried your patch 3 days ago, but unfortunately, the problem still 
> occurs... 
> 
> This is a really annoying problem, since sometimes, all the PVE nodes 
> of our cluster reboot quasi-simultaneously ! 
> And in the same time, we don't encounter this problem with our other 
> PVE cluster in version 5. 
> (And obviously we are waiting for a solution and a stable situation 
> before upgrade it !) 
> 
> It seems to be a unicast or corosync3 problem, but logs are not really 
> verbose at the time of reboot... 
> 
> Is there anything else to test ? 
> 
> Regards, 
> Hervé 
> 
> Le 20/09/2019 à 17:00, Alexandre DERUMIER a écrit : 
>> Hi, 
>> 
>> a patch is available in pvetest 
>> 
>> http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/libknet1_1.11-pve2_amd64.deb
>>  
>> 
>> 
>> can you test it ? 
>> 
>> (you need to restart corosync after install of the deb) 
>> 
>> 
>> - Mail original - 
>> De: "Laurent CARON"  
>> À: "proxmoxve"  
>> Envoyé: Lundi 16 Septembre 2019 09:55:34 
>> Objet: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6 
>> 
>> Hi, 
>> 
>> 
>> After upgrading our 4 node cluster from PVE 5 to 6, we experience 
>> constant crashed (once every 2 days). 
>> 
>> Those crashes seem related to corosync. 
>> 
>> Since numerous users are reporting sych issues (broken cluster after 
>> upgrade, unstabilities, ...) I wonder if it is possible to downgrade 
>> corosync to version 2.4.4 without impacting functionnality ? 
>> 
>> Basic steps would be: 
>> 
>> On all nodes 
>> 
>> # systemctl stop pve-ha-lrm 
>> 
>> Once done, on all nodes: 
>> 
>> # systemctl stop pve-ha-crm 
>> 
>> Once done, on all nodes: 
>> 
>> # apt-get install corosync=2.4.4-pve1 libcorosync-common4=2.4.4-pve1 
>> libcmap4=2.4.4-pve1 libcpg4=2.4.4-pve1 libqb0=1.0.3-1~bpo9 
>> libquorum5=2.4.4-pve1 libvotequorum8=2.4.4-pve1 
>> 
>> Then, once corosync has been downgraded, on all nodes 
>> 
>> # systemctl start pve-ha-lrm 
>> # systemctl start pve-ha-crm 
>> 
>> Would that work ? 
>> 
>> Thanks 
>> 
>> ___ 
>> pve-user mailing list 
>> pve-user@pve.proxmox.com 
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
>> 
>> ___ 
>> pve-user mailing list 
>> pve-user@pve.proxmox.com 
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
> 
> 
> ___ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 


-- 
Zuzendari Teknikoa / Director Técnico 
Binovo IT Human Project, S.L. 
Telf. 943569206 
Astigarragako bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa) 
www.binovo.es 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] kernel panic after live migration

2019-10-30 Thread Alexandre DERUMIER
Is it between an AMD and an Intel host?

Because, in the past, that was never stable (I also had problems between different
AMD generations).


- Mail original -
De: "proxmoxve" 
À: "proxmoxve" 
Cc: "Humberto Jose De Sousa" 
Envoyé: Mardi 29 Octobre 2019 12:41:19
Objet: [PVE-User] kernel panic after live migration

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6

2019-09-20 Thread Alexandre DERUMIER
Hi,

a patch is available in pvetest

http://download.proxmox.com/debian/pve/dists/buster/pvetest/binary-amd64/libknet1_1.11-pve2_amd64.deb

can you test it ? 

(you need to restart corosync after install of the deb)


- Mail original -
De: "Laurent CARON" 
À: "proxmoxve" 
Envoyé: Lundi 16 Septembre 2019 09:55:34
Objet: [PVE-User] Recurring crashes after cluster upgrade from 5 to 6

Hi, 


After upgrading our 4 node cluster from PVE 5 to 6, we experience 
constant crashed (once every 2 days). 

Those crashes seem related to corosync. 

Since numerous users are reporting sych issues (broken cluster after 
upgrade, unstabilities, ...) I wonder if it is possible to downgrade 
corosync to version 2.4.4 without impacting functionnality ? 

Basic steps would be: 

On all nodes 

# systemctl stop pve-ha-lrm 

Once done, on all nodes: 

# systemctl stop pve-ha-crm 

Once done, on all nodes: 

# apt-get install corosync=2.4.4-pve1 libcorosync-common4=2.4.4-pve1 
libcmap4=2.4.4-pve1 libcpg4=2.4.4-pve1 libqb0=1.0.3-1~bpo9 
libquorum5=2.4.4-pve1 libvotequorum8=2.4.4-pve1 

Then, once corosync has been downgraded, on all nodes 

# systemctl start pve-ha-lrm 
# systemctl start pve-ha-crm 

Would that work ? 

Thanks 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Kernel Memory Leak on PVE6?

2019-09-20 Thread Alexandre DERUMIER
Can you send the detail of

cat /proc/slabinfo

?
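
(Or, to see just the biggest caches sorted by cache size, something like:

slabtop -o -s c | head -n 25
)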

- Mail original -
De: "Aaron Lauterer" 
À: "proxmoxve" 
Envoyé: Vendredi 20 Septembre 2019 15:12:04
Objet: Re: [PVE-User] Kernel Memory Leak on PVE6?

On 9/20/19 3:04 PM, Chris Hofstaedtler | Deduktiva wrote: 
> * Aaron Lauterer  [190920 14:58]: 
>> Curious, I do have a very similar case at the moment with a slab of ~155GB, 
>> out of ~190GB RAM installed. 
>> 
>> I am not sure yet what causes it but things I plan to investigate are: 
>> 
>> * hanging NFS mount 
> 
> Okay, to rule storage issues out, this setup has: 
> - root filesystem as ext4 on GPT 
> - efi system partition 
> - two LVM PVs and VGs, with all VM storage in the second LVM VG 
> - no NFS, no ZFS, no Ceph, no fancy userland filesystems 
> 
>> * possible (PVE) service starting too many threads -> restarting each and 
>> checking the memory / slab usage. 
> 
> Do you have a particular service in mind? 

Not at this point. I would restart all PVE services (systemctl| grep -e 
"pve.*service") one by one to see if any of it will result in memory 
being released by the kernel. 

If that is not the case at least they are ruled out. 
> 
> Chris 
> 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox Ceph to Hyper-V or VMWare

2019-06-04 Thread Alexandre DERUMIER
I have done tests with VMware; you need the iSCSI gateway from Ceph, but you really
need to be on
Mimic, or better Nautilus, for iSCSI gateway stability.

http://docs.ceph.com/docs/master/rbd/iscsi-initiator-esx/

- Mail original -
De: "Gilberto Nunes" 
À: "proxmoxve" 
Envoyé: Lundi 3 Juin 2019 20:42:26
Objet: [PVE-User] Proxmox Ceph to Hyper-V or VMWare

Hi there 

Simple question: Is there any way to connect a Proxmox Ceph cluster with a 
Hyper-V or VMWare server?? 

Thanks a lot. 
--- 
Gilberto Nunes Ferreira 

(47) 3025-5907 
(47) 99676-7530 - Whatsapp / Telegram 

Skype: gilberto.nunes36 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Support for Ceph Nautilus?

2019-05-17 Thread Alexandre DERUMIER
>>so as Mimic was no stable release

That's not true anymore.
Since Luminous, they have changed their release cycle (one release every 9 months),
and all releases are now stable:

https://ceph.com/releases/v13-2-0-mimic-released/
"This is the first stable release of Mimic, the next long term release series"


- Mail original -
De: "Thomas Lamprecht" 
À: "Christian Balzer" , "proxmoxve" 
Envoyé: Vendredi 17 Mai 2019 10:23:08
Objet: Re: [PVE-User] Support for Ceph Nautilus?

On 5/17/19 9:53 AM, Christian Balzer wrote: 
> On Fri, 17 May 2019 08:05:21 +0200 Thomas Lamprecht wrote: 
>> On 5/17/19 4:27 AM, Christian Balzer wrote: 
>>> is there anything that's stopping the current PVE to work with an 
>>> externally configured Ceph Nautilus cluster? 
>> 
>> 
>> Short: rather not, you need to try out to be sure though. 

Dominik reminded me that with the new ceph release scheme you always 
should be able to use clients with one stable version older or newer 
than the server version just fine, so as Mimic was no stable release 
you should be able to use Luminous clients with Nautilus server without 
issues. 

>> 
> Whenever Ceph Nautilus packages drop for either Stretch or Buster, I'm 
> quite aware of the issues you mention below. 

Ah, I did not understand your question this way, sorry, and as said 
they'll drop when PVE 6 drops and that'll highly probably be after 
buster :-) 

> 
>> You probably cannot use the kernel RBD as it's support may be to old 
>> for Nautilus images. 
>> 
>> The userspace libraries we use else would need to be updated to Nautilus 
>> to ensure working fine with everything changed in Nautilus. 
>> 
>> So why don't we, the Proxmox development community, don't just update to 
>> Nautilus? 
>> 
>> That ceph version started to employ programming language features available 
>> in only relative recent compiler versions, sadly the one which _every_ 
>> binary 
>> in Stretch is build with (gcc 6.3) does not supports those features - we 
>> thought about workarounds, but felt very uneasy of all of them - the 
>> compiler 
>> and it's used libc is such a fundamental base of a Linux Distro that we 
>> cannot 
>> change that for a single application without hurting stability and bringing 
>> up 
>> lots of problems in any way. 
>> 
>> So Nautilus will come first with Proxmox VE 6.0 based upon Debian Buster, 
>> compiled with gcc 8, which supports all those new shiny used features. 
>> 
> Any timeline (he asks innocently, knowing the usual Debian release delays)? 
> 

For that it can make sense to ask the Debian release team, as at the moment 
no public information is available, one can only speculate on the count and 
types of release blocking bugs, with that and past releases in mind one can 
probably roughly extrapolate the timeline and be correct for ± a month or so. 

>>> 
>>> No matter how many bugfixes and backports have been done for Luminous, it 
>>> still feels like a very weak first attempt with regards to making 
>>> Bluestore the default storage and I'd rather not deploy anything based on 
>>> it. 
>> 
>> We use Bluestore in our own Infrastructure without issues, and lot's of PVE 
>> user 
>> do also - if that make you feelings shift a bit to the better. 
>> 
> 
> The use case I have is one where Bluestore on the same HW would perform 
> noticeably worse than filestore and with cache tiers being in limbo 
> (neither recommended nor replaced with something else) 
> 
> At this point in time I would like to deploy Nautilus and then forget 
> about it, that installation will not be upgraded ever and retired before 
> the usual 5 years are up. 

sound like a security nightmare if not heavily contained, but I guess you know 
that having that on the internet is not the best idea and that Proxmox release 
cycles are normally supported for about ~3 years from initial .0 release, at 
least in the past, and that not only means security update but any issue or 
question which will come up in >3 years (you all prob. know that, and can deal 
with it yourself, I do no question that, just as a reminder for anybody reading 
this) 

cheers, 
Thomas 


___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph and firewalling

2019-05-10 Thread Alexandre DERUMIER
>>The more relevant flag, nf_conntrack_tcp_loose (If it is set to zero,
>>we disable picking up already established connections) is already on
>>(non-zero) by default.

I think this breaks already established connections of a VM when we do a
live migration, as we don't transfer the conntrack state.


- Mail original -
De: "Thomas Lamprecht" 
À: "proxmoxve" , "Mark Schouten" 
Envoyé: Jeudi 9 Mai 2019 11:10:46
Objet: Re: [PVE-User] Ceph and firewalling

On 5/9/19 10:09 AM, Mark Schouten wrote: 
> On Thu, May 09, 2019 at 07:53:50AM +0200, Alexandre DERUMIER wrote: 
>> But to really be sure to not have the problem anymore : 
>> 
>> add in /etc/sysctl.conf 
>> 
>> net.netfilter.nf_conntrack_tcp_be_liberal = 1 
> 
> This is very useful info. I'll create a bug for Proxmox, so they can 
> consider it to set this in pve-firewall, which seems a good default if 
> you ask me. 
> 

IMO this is not a sensible default, it makes conntrack almost void: 

> nf_conntrack_tcp_be_liberal - BOOLEAN 
> [...] 
> If it's non-zero, we mark *only out of window RST segments* as INVALID. 

The more relevant flag, nf_conntrack_tcp_loose (If it is set to zero, 
we disable picking up already established connections) is already on 
(non-zero) by default. 

The issue you ran into was the case where pve-cluster (pmxcfs) was 
upgraded and restarted and pve-firewall thought that the user deleted 
all rules and thus flushed them, is already fixed for most common cases 
(package upgrade and normal restart of pve-cluster), so this shouldn't 
be an issue with pve-firewall in version 3.0-20 

But, Stoiko offered to re-take a look at this and try doing additional 
error handling if the fw config read fails (as in pmxcfs not mounted) 
and keep the current rules un-touched in this case (i.e., no remove, 
no add) or maybe also moving the management rules above the conntrack, 
but we need to take a close look here to ensure this has no non-intended 
side effects. 

cheers, 
Thomas 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Ceph and firewalling

2019-05-08 Thread Alexandre DERUMIER
Hi,

I had this problem mainly with cephfs in the VM: when the firewall is stopped
(rules are flushed, but existing connections are still conntracked) and then the firewall
is started again,

conntrack marks packets as invalid because it doesn't have the tracked connection sequence
from when the firewall was stopped.

This could happen with a previous Proxmox release, when /etc/pve/cluster.cfg
couldn't be read during a restart of pve-cluster, I think (this has been fixed in
the latest pve-firewall).

But to really be sure not to have the problem anymore:

add in /etc/sysctl.conf

net.netfilter.nf_conntrack_tcp_be_liberal = 1


and to get it loaded at boot, add

/etc/modules-load.d/nf_conntrack.conf
nf_conntrack
nf_conntrack_ipv4
nf_conntrack_ipv6
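
To apply the sysctl immediately without a reboot (assuming the conntrack modules are already loaded):

sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=1
sysctl net.netfilter.nf_conntrack_tcp_be_liberal   # verify the new value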



- Mail original -
De: "Mark Schouten" 
À: "proxmoxve" 
Envoyé: Mercredi 8 Mai 2019 02:35:15
Objet: [PVE-User] Ceph and firewalling

Hi, 

While upgrading two clusters tonight, it seems that the Ceph-cluster gets 
confused by the updates of tonight. I think it has something to do with the 
firewall and connection tracking. A restart of ceph-mon on a node seems to 
work. 

I *think* the issue is that when pve-firewall is upgraded, the conntracktable 
is emptied, and existing connections are captured by the 'ctstate 
INVALID'-rule. But it is kinda hard to reproduce. 

If you ask me, the rules for the 'management' ipset should be applied before 
the conntrack-rules, or am I setting things up incorrectly? 


The following packages are updated in this run: 
root@proxmox01:~# grep upgrade /var/log/dpkg.log 
2019-05-08 02:09:46 upgrade base-files:amd64 9.9+deb9u8 9.9+deb9u9 
2019-05-08 02:09:46 upgrade ceph-mds:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:47 upgrade ceph-mgr:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:48 upgrade ceph-mon:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:49 upgrade ceph:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:49 upgrade ceph-osd:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:51 upgrade ceph-base:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:52 upgrade ceph-common:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:54 upgrade librbd1:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:54 upgrade python-rados:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:54 upgrade python-rbd:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:54 upgrade python-rgw:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:54 upgrade python-ceph:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:54 upgrade python-cephfs:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:54 upgrade libcephfs2:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:54 upgrade librgw2:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:55 upgrade libradosstriper1:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:55 upgrade librados2:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:55 upgrade ceph-fuse:amd64 12.2.11-pve1 12.2.12-pve1 
2019-05-08 02:09:56 upgrade libhttp-daemon-perl:all 6.01-1 6.01-2 
2019-05-08 02:09:56 upgrade libjs-jquery:all 3.1.1-2 3.1.1-2+deb9u1 
2019-05-08 02:09:56 upgrade libmariadbclient18:amd64 10.1.37-0+deb9u1 
10.1.38-0+deb9u1 
2019-05-08 02:09:56 upgrade libpng16-16:amd64 1.6.28-1 1.6.28-1+deb9u1 
2019-05-08 02:09:56 upgrade libpq5:amd64 9.6.11-0+deb9u1 9.6.12-0+deb9u1 
2019-05-08 02:09:56 upgrade rsync:amd64 3.1.2-1+deb9u1 3.1.2-1+deb9u2 
2019-05-08 02:09:56 upgrade pve-cluster:amd64 5.0-33 5.0-36 
2019-05-08 02:09:56 upgrade libpve-storage-perl:all 5.0-39 5.0-41 
2019-05-08 02:09:57 upgrade pve-firewall:amd64 3.0-18 3.0-20 
2019-05-08 02:09:57 upgrade pve-ha-manager:amd64 2.0-8 2.0-9 
2019-05-08 02:09:57 upgrade pve-qemu-kvm:amd64 2.12.1-2 2.12.1-3 
2019-05-08 02:09:59 upgrade pve-edk2-firmware:all 1.20181023-1 1.20190312-1 
2019-05-08 02:10:00 upgrade qemu-server:amd64 5.0-47 5.0-50 
2019-05-08 02:10:00 upgrade libpve-common-perl:all 5.0-47 5.0-51 
2019-05-08 02:10:00 upgrade libpve-access-control:amd64 5.1-3 5.1-8 
2019-05-08 02:10:00 upgrade libpve-http-server-perl:all 2.0-12 2.0-13 
2019-05-08 02:10:00 upgrade libssh2-1:amd64 1.7.0-1 1.7.0-1+deb9u1 
2019-05-08 02:10:00 upgrade linux-libc-dev:amd64 4.9.144-3.1 4.9.168-1 
2019-05-08 02:10:08 upgrade pve-kernel-4.15:all 5.3-3 5.4-1 
2019-05-08 02:10:08 upgrade postfix-sqlite:amd64 3.1.9-0+deb9u2 3.1.12-0+deb9u1 
2019-05-08 02:10:08 upgrade postfix:amd64 3.1.9-0+deb9u2 3.1.12-0+deb9u1 
2019-05-08 02:10:10 upgrade proxmox-widget-toolkit:all 1.0-23 1.0-26 
2019-05-08 02:10:10 upgrade pve-container:all 2.0-35 2.0-37 
2019-05-08 02:10:10 upgrade pve-docs:all 5.3-3 5.4-2 
2019-05-08 02:10:11 upgrade pve-i18n:all 1.0-9 1.1-4 
2019-05-08 02:10:11 upgrade pve-xtermjs:amd64 3.10.1-2 3.12.0-1 
2019-05-08 02:10:11 upgrade pve-manager:amd64 5.3-11 5.4-5 
2019-05-08 02:10:11 upgrade proxmox-ve:all 5.3-1 5.4-1 
2019-05-08 02:10:11 upgrade pve-kernel-4.15.18-12-pve:amd64 4.15.18-35 
4.15.18-36 
2019-05-08 02:10:19 upgrade python-cryptography:amd64 1.7.1-3 1.7.1-3+deb9u1 
2019-05-08 02:10:19 upgrade unzip:amd64 6.0-21 6.0-21+deb9u1 
201

Re: [PVE-User] RBD move disk : what about "sparse" blocks ?

2019-05-06 Thread Alexandre DERUMIER
>>Is there any way to copy *exactly* disk from one pool to another, using 
>>PVE live disk moving ? 

It's currently not possible with ceph/librbd. (it's missing some pieces in qemu 
librbd driver)


>>Of course when I run "fstrim -va" on the guest, I can't reclaim space 
>>because kernel thinks it was already trimmed (while on source pool).

It should work without any problem. I have done it a thousand times.
fstrim trims again and again at each run, for the whole disk
(vs. the /etc/fstab discard option, which only does it live).


As a workaround, we have added an option for the qemu agent to run fstrim for you
just after a move disk.
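
(If I remember correctly, that's the fstrim_cloned_disks flag of the agent option; a minimal sketch, assuming VM id 206 and the guest agent installed:

qm set 206 --agent enabled=1,fstrim_cloned_disks=1
)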

- Mail original -
De: "Florent Bautista" 
À: "proxmoxve" 
Envoyé: Mardi 7 Mai 2019 00:47:48
Objet: [PVE-User] RBD move disk : what about "sparse" blocks ?

Hi, 

I have a strange situation while moving disks stored on RBD pools. 

I move from one pool to another (replicated pool to EC pool). 

For example, I have a 16GB disk, using 3131.12 MB on the replicated 
source pool (command : rbd diff pve01-rbd02/vm-206-disk-1 | awk '{ SUM 
+= $2 } END { print SUM/1024/1024 " MB" }' ) 

When PVE has moved the disk on the destination EC pool, disk is now 
using 16384 MB. 

Of course when I run "fstrim -va" on the guest, I can't reclaim space 
because kernel thinks it was already trimmed (while on source pool). 

Is there any way to copy *exactly* disk from one pool to another, using 
PVE live disk moving ? 

Thank you. 

Florent 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Shared storage recommendations

2019-02-28 Thread Alexandre DERUMIER
>>Not sure what documentation exist. 40gbps links are internally 4x 10gbps 
>>waves so a single io will flow on one of the 10gbps links at 10gbps 
>>latency.  a 25 Gbps link is a faster single wave, and 100Gbps is 
>>4x25Gbps waves.  since IOPS is usually much more important then 
>>bandwidth (for VM's) you may get more iops with 25Gbps then with 
>>40Gbps   basically it depends on the workload, if the load is mostly 
>>bandwidth you might need the thru put of 40Gbps more.

Yes, that's true. 

But you also have the latency from the CPU (as Ceph is CPU intensive),
so try to get high-frequency CPUs first (both on the client side and on the cluster
nodes side).


- Mail original -
De: "Ronny Aasen" 
À: "proxmoxve" 
Envoyé: Mercredi 27 Février 2019 19:03:58
Objet: Re: [PVE-User] Shared storage recommendations

On 27.02.2019 16:49, Frederic Van Espen wrote: 
> Hi, 
> 
>> tips: you want 5 monitors, so you can have dual failures or one fail 
>> while another is under maintenance/reboot; Good networking is 
>> important, 25gbps is better then 40gbps due to lower latency; 
> Thanks for these tips! Do you have any reference documentation for the 
> 25gbps vs 40gbps statement? I find no immediate results for that on 
> google. In any case, we were thinking of starting out with a 10gbps 
> network. Are there any solid reasons not to do that for a light 
> workload? 

Not sure what documentation exist. 40gbps links are internally 4x 10gbps 
waves so a single io will flow on one of the 10gbps links at 10gbps 
latency. a 25 Gbps link is a faster single wave, and 100Gbps is 
4x25Gbps waves. since IOPS is usually much more important then 
bandwidth (for VM's) you may get more iops with 25Gbps then with 
40Gbps basically it depends on the workload, if the load is mostly 
bandwidth you might need the thru put of 40Gbps more. 

quote from 
https://ceph.com/planet/making-ceph-faster-lessons-from-performance-testing/ 
"20 HDDs or 2-3 SSDs can exceed the bandwidth of single 10GbE link for 
read throughput. " 
so if your nodes are on this size, a single 10gbps will be the 
bottleneck. but if the load is below the bottleneck it is not a problem. 

even if you design network with 2x10gbps links for HA you may need need 
to run the cluster on a single 10gbps in case of a failure on part of 
the network. workload and SLA's you provide to your clients determine if 
that is acceptable or not. monitoring is important 


good luck! 

Ronny Aasen 




___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox Ceph Cluster and osd_target_memory

2019-02-18 Thread Alexandre DERUMIER
How many disks do you have in your node?

Maybe using filestore could help if you are low on memory.
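
(If you still want to cap the OSD memory instead, note that the option is spelled osd_memory_target in recent Luminous releases and is given in bytes; a minimal sketch for the [osd] section of /etc/pve/ceph.conf, assuming roughly 2 GB per bluestore OSD:

[osd]
	osd_memory_target = 2147483648
)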


- Mail original -
De: "Gilberto Nunes" 
À: "proxmoxve" 
Envoyé: Lundi 18 Février 2019 03:37:34
Objet: [PVE-User] Proxmox Ceph Cluster and osd_target_memory

Hi there 

I have a system w/ 6 PVE Ceph Cluster and all of then are low memory - 16G 
to be more exactly. 
Is this reasonable lower osd_target_memory to 2GB, in order to make system 
run more smoothly? I cannot buy more memory for now... 
Thanks for any advice 

--- 
Gilberto Nunes Ferreira 

(47) 3025-5907 
(47) 99676-7530 - Whatsapp / Telegram 

Skype: gilberto.nunes36 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Could a pve cluster has 50 nodes, or more?

2019-02-15 Thread Alexandre DERUMIER
I'm running a 20-node cluster here, unicast.

I think if you have switches with a big ASIC and low latency,
you can do better.


I'm currently testing corosync 3 with the new knet protocol;
it should also help for bigger clusters (it should be the default for Proxmox 6).


I think there is currently a hardcoded value in corosync (maybe 64? not sure).

- Mail original -
De: "Eneko Lacunza" 
À: "proxmoxve" 
Envoyé: Jeudi 14 Février 2019 12:07:12
Objet: Re: [PVE-User] Could a pve cluster has 50 nodes, or more?

Hi Denis, 

El 13/2/19 a las 23:28, Denis Morejon escribió: 
> I note that sharing the db file, even using multicast protocol, could 
> put a limit to the maximum number of members. Any thinking about a 
> centralized db paradigm? How many members have you put together? 

Docs talk about 32 nodes: 
https://pve.proxmox.com/wiki/Cluster_Manager 

I think that the current aproach for cluster info is an advantage of 
Proxmox, because it's resilient and very easy to setup. The same applies 
to the ability to manage the cluster from any cluster node too. 

Cheers 
Eneko 

-- 
Zuzendari Teknikoa / Director Técnico 
Binovo IT Human Project, S.L. 
Telf. 943569206 
Astigarraga bidea 2, 2º izq. oficina 11; 20180 Oiartzun (Gipuzkoa) 
www.binovo.es 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox Suricata Setup

2018-12-18 Thread Alexandre DERUMIER
As you have configured NFQUEUE=2, have you also set up

ips_queues: 2

?
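
If I remember the firewall docs correctly, that goes into the VM's firewall config, e.g. /etc/pve/firewall/<VMID>.fw (VM id and queue number assumed):

[OPTIONS]
enable: 1
ips: 1
ips_queues: 2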


- Mail original -
De: "Mark Kaye" 
À: "proxmoxve" 
Envoyé: Mardi 18 Décembre 2018 10:54:23
Objet: [PVE-User] Proxmox Suricata Setup

Hi, 
I've followed the instructions for setting up Suricata on my Proxmox server as 
detailed at:https://pve.proxmox.com/wiki/Firewall 

However, no traffic is currently being filtered by Suricata to or from the VMs 
or containers (dns, http, fast logs show nothing). 
My /etc/default/suricata is:

RUN=yes
SURCONF=/etc/suricata/suricata.yaml
LISTENMODE=nfqueue
IFACE=eth0
NFQUEUE=2
TCMALLOC="YES"
PIDFILE=/var/run/suricata.pid

I have an OVS bond (bond0) setup with an OVS bridge (vmbr0) for the public
interface.
I've had to override the default Suricata systemd configuration in order to run
the Suricata init script, and thus the /etc/default/suricata configuration, using
the following:

[Service]
ExecStart=
ExecStart=/etc/init.d/suricata start
ExecStop=
ExecStop=/etc/init.d/suricata stop

IPTABLES-- pkts bytes target prot opt in out source destination 0 0 
PVEFW-reject tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:4394086 13M 
PVEFW-DropBroadcast all -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT icmp -- * * 
0.0.0.0/0 0.0.0.0/0 icmptype 3 code 4 0 0 ACCEPT icmp -- * * 0.0.0.0/0 
0.0.0.0/0 icmptype 11 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate INVALID 0 
0 DROP udp -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 135,445 3 234 DROP udp 
-- * * 0.0.0.0/0 0.0.0.0/0 udp dpts:137:139 0 0 DROP udp -- * * 0.0.0.0/0 
0.0.0.0/0 udp spt:137 dpts:1024:65535 0 0 DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 
multiport dports 135,139,445 0 0 DROP udp -- * * 0.0.0.0/0 0.0.0.0/0 udp 
dpt:1900 44 4694 DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:!0x17/0x02 0 0 
DROP udp -- * * 0.0.0.0/0 0.0.0.0/0 udp spt:53 1617 90572 all -- * * 0.0.0.0/0 
0.0.0.0/0 /* PVESIG:WDy2wbFe7jNYEyoO3QhUELZ4mIQ */ 
Chain PVEFW-DropBroadcast (2 references) pkts bytes target prot opt in out 
source destination 2034 202K DROP all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match 
dst-type BROADCAST90388 13M DROP all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match 
dst-type MULTICAST 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match 
dst-type ANYCAST 0 0 DROP all -- * * 0.0.0.0/0 224.0.0.0/4 1664 95500 all -- * 
* 0.0.0.0/0 0.0.0.0/0 /* PVESIG:NyjHNAtFbkH7WGLamPpdVnxHy4w */ 
Chain PVEFW-FORWARD (1 references) pkts bytes target prot opt in out source 
destination 197 91271 PVEFW-IPS all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate 
RELATED,ESTABLISHED 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate INVALID 76 
5238 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED14773 
1962K PVEFW-FWBR-IN all -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match --physdev-in 
fwln+ --physdev-is-bridged 75 4358 PVEFW-FWBR-OUT all -- * * 0.0.0.0/0 
0.0.0.0/0 PHYSDEV match --physdev-out fwln+ --physdev-is-bridged 75 4358 all -- 
* * 0.0.0.0/0 0.0.0.0/0 /* PVESIG:SObnoWmKDADnyocGfaX95tEgIRE */ 
Chain PVEFW-FWBR-IN (1 references) pkts bytes target prot opt in out source 
destination94086 13M PVEFW-smurfs all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate 
INVALID,NEW94086 13M veth100i0-IN all -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match 
--physdev-out veth100i0 --physdev-is-bridged 0 0 all -- * * 0.0.0.0/0 0.0.0.0/0 
/* PVESIG:4OE5DGCM8EKNLmQbkV3LIx5w1QM */ 
Chain PVEFW-FWBR-OUT (1 references) pkts bytes target prot opt in out source 
destination 170 9994 veth100i0-OUT all -- * * 0.0.0.0/0 0.0.0.0/0 PHYSDEV match 
--physdev-in veth100i0 --physdev-is-bridged 170 9994 all -- * * 0.0.0.0/0 
0.0.0.0/0 /* PVESIG:lXaefvpIDNAYTJwiaBF5f1+faEw */ 
Chain PVEFW-HOST-IN (1 references) pkts bytes target prot opt in out source 
destination3551K 7022M ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 1289 51560 DROP 
all -- * * 0.0.0.0/0 0.0.0.0/0 ctstate INVALID5599K 7971M ACCEPT all -- * * 
0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED3494K 606M PVEFW-smurfs all -- * 
* 0.0.0.0/0 0.0.0.0/0 ctstate INVALID,NEW 0 0 RETURN 2 -- * * 0.0.0.0/0 
0.0.0.0/0 8672 451K RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 match-set 
PVEFW-0-management-v4 src tcp dpt:8006 0 0 RETURN tcp -- * * 0.0.0.0/0 
0.0.0.0/0 match-set PVEFW-0-management-v4 src tcp dpts:5900:5999 0 0 RETURN tcp 
-- * * 0.0.0.0/0 0.0.0.0/0 match-set PVEFW-0-management-v4 src tcp dpt:3128 21 
4420 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 match-set PVEFW-0-management-v4 src 
tcp dpt:22 0 0 RETURN udp -- * * 192.168.1.0/24 192.168.1.0/24 udp 
dpts:5404:5405 0 0 RETURN udp -- * * 192.168.1.0/24 0.0.0.0/0 ADDRTYPE match 
dst-type MULTICAST udp dpts:5404:54053486K 606M RETURN all -- * * 0.0.0.0/0 
0.0.0.0/0 0 0 all -- * * 0.0.0.0/0 0.0.0.0/0 /* 
PVESIG:ALOh8WDdOmO5Ptu5R2nVTkfCuQE */ 
Chain PVEFW-HOST-OUT (1 references) pkts bytes target prot opt in out source 
destination3551K 7022M ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 7 280 DROP all -- 
* * 0.0.0.0/0 0.0.0.0/0 ctstate INVALID5686K 5074M ACCEPT all -- * * 0.0.0.0/0 
0.0.0.0/0 ctstate RELATED,ESTABLISHED 20 800 RETURN 2 -- * * 0.0.0.0/0 
0.0.0.0/

Re: [PVE-User] Export / import VM

2018-12-17 Thread Alexandre DERUMIER
For import, you can do it from the command line only.

For an OVF:

qm importovf <vmid> <manifest> <storage> [OPTIONS]


For a single disk only:

qm importdisk <vmid> <source> <storage> [OPTIONS]



For export, there is no command line currently
(but you can use "qemu-img convert ..." to convert a disk to vmdk or another
format).
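
A quick example of the import commands above (VM id, file paths and storage name are just placeholders):

qm importovf 200 /tmp/appliance.ovf local-lvm
qm importdisk 200 /tmp/disk0.vmdk local-lvm
qemu-img convert -f raw -O vmdk vm-200-disk-0.raw vm-200-disk-0.vmdk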


- Mail original -
De: "David Martin" 
À: "proxmoxve" 
Envoyé: Lundi 17 Décembre 2018 10:16:19
Objet: [PVE-User] Export / import VM

Hi, 

How to import or export VM ? I receiving différent files format (ova,ovf, vmdk) 
but i don t find function in web interface. 

David 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] HA scalability and predictability

2018-12-09 Thread Alexandre DERUMIER
>>I presume a "node1:2,node8:1" and "restricted 1" should do the trick here.

restricted only means that if both node1 && node8 are down, the VM doesn't go
to another node.

But the weights, indeed, should do the trick.

For example: n compute nodes with :2, and the spare node(s) with :1.
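
A minimal sketch of the matching group in /etc/pve/ha/groups.cfg (group name assumed, adjust restricted/nofailback to taste):

group: prefer-node1
	nodes node1:2,node8:1
	restricted 1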

- Mail original -
De: "Christian Balzer" 
À: "proxmoxve" 
Envoyé: Lundi 10 Décembre 2018 04:45:27
Objet: [PVE-User] HA scalability and predictability

Hello, 

still investigating PVE as a large ganeti cluster replacement. 

Some years ago we did our owh HA VM cluster based on Pacemaker, libvirt 
(KVM) and DRBD. 
While this worked well it also showed the limitations in Pacemaker and LRM 
in particular. Things got pretty sluggish with 60VMs and a total of 120 
resources. 
This cluster will have about 800VMs, has anybody done this number of HA 
VMs with PVE and what's their experience? 

Secondly, it is an absolute requirement that a node failure will result in 
a predictable and restricted failover. 
I.e. the cluster will have a n+1 (or n+2) redundancy with at least one 
node being essentially a hot spare. 
Failovers should only go to the spare(s), never another compute node. 

I presume a "node1:2,node8:1" and "restricted 1" should do the trick here. 

Regards, 

Christian 
-- 
Christian Balzer Network/Systems Engineer 
ch...@gol.com Rakuten Communications 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Risks for using writeback on Ceph RBD

2018-11-12 Thread Alexandre DERUMIER
Like all writeback, you'll lose the data that was still in memory before the filesystem's
fsync, but not get corruption of the filesystem (writeback = rbd_cache=true).

Note that rbd writeback only helps for sequential small writes (they are aggregated into
one big transaction to send to Ceph).

Also, read latency is higher when writeback is enabled.

- Mail original -
De: "Mark Schouten" 
À: "proxmoxve" 
Envoyé: Lundi 12 Novembre 2018 14:45:41
Objet: [PVE-User] Risks for using writeback on Ceph RBD

Hi, 

We've noticed some performance wins on using writeback for Ceph RBD 
devices, but I'm wondering how we should project risks on using 
writeback. Writeback isn't very unsafe, but what are the risks in case 
of powerloss of a host? 

Thanks, 

-- 
Mark Schouten | Tuxis Internet Engineering 
KvK: 61527076 | http://www.tuxis.nl/ 
T: 0318 200208 | i...@tuxis.nl 


___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] SAN and correctly propagate 'discard'...

2018-10-09 Thread Alexandre DERUMIER
>>Clearly i cannot use 'LVM Thin', because is not shared. 
>[ https://pve.proxmox.com/wiki/Storage:_LVM_Thin | 
>https://pve.proxmox.com/wiki/Storage:_LVM_Thin ] 
>>Someone can give me some clue? Thanks.

Indeed, LVM thin can't work with shared SAN, only on local disk.
So you can't have thin provisioning, and can't use discard.

It's just not possible, sorry.


- Mail original -
De: "Marco Gaiarin" 
À: "proxmoxve" 
Envoyé: Mardi 9 Octobre 2018 15:12:18
Objet: [PVE-User] SAN and correctly propagate 'discard'...

My SAN (HP MSA 1040) have 'virtual volume' enabled, eg discard. 

I've had to use directly some LUN in some VM, via iSCSI, and 'fstrim' 
works as expected. 


Now i'm using for some disks in my VMs, as storage of type 'LVM'. 

But if i look at disk space on SAN, it is coherent with the real disk 
space occupied. 
While, in Proxmox, the storage volume are full or near full. 

Example, VM: 
root@vdmsv1:~# df -h | grep sdc 
/dev/sdc1 800G 519G 282G 65% /home 
/dev/sdc2 1000G 152G 848G 16% /srv 

Proxmox node: 
root@tessier:~# pvesm status | grep DATA0 
DATA0 lvm 1 1953120256 1887436800 65683456 97.14% 
root@tessier:~# pvesm list DATA0 
DATA0:vm-120-disk-1 raw 1932735283200 120 

and SAN report 730GB allocated. 


Clearly disks of VM have 'discard' enabled, but seems that discard/trim 
does not ''traverse'' all the storage chains (partition in the VM, 
LV/PV/VG in proxmox node, iSCSI in the SAN). 

Clearly i cannot use 'LVM Thin', because is not shared. 

https://pve.proxmox.com/wiki/Storage:_LVM_Thin 


Someone can give me some clue? Thanks. 

-- 
dott. Marco Gaiarin GNUPG Key ID: 240A3D66 
Associazione ``La Nostra Famiglia'' http://www.lanostrafamiglia.it/ 
Polo FVG - Via della Bontà, 7 - 33078 - San Vito al Tagliamento (PN) 
marco.gaiarin(at)lanostrafamiglia.it t +39-0434-842711 f +39-0434-842797 

Dona il 5 PER MILLE a LA NOSTRA FAMIGLIA! 
http://www.lanostrafamiglia.it/index.php/it/sostienici/5x1000 
(cf 00307430132, categoria ONLUS oppure RICERCA SANITARIA) 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM encryption and high availability

2018-10-07 Thread Alexandre DERUMIER
Hi,

It's also possible to manage LUKS encryption at the qemu level.

I have an open bugzilla entry about this, but haven't had time to work on it yet:
https://bugzilla.proxmox.com/show_bug.cgi?id=1894

The advantage is that it could work with any storage.

- Mail original -
De: "Daniel Berteaud" 
À: "proxmoxve" 
Envoyé: Lundi 8 Octobre 2018 08:30:17
Objet: Re: [PVE-User] VM encryption and high availability

Le 05/10/2018 à 16:55, Martin LEUSCH a écrit : 
> Hi, 
> 
> I have a Proxmox cluster and use LVM over iSCSI as storage. As I 
> didn't own the iSCSI server, I plane to encrypt some disk image to 
> increase confidentiality. 
> 
> Firstly, I didn't found a way to encrypt iSCSI target or LVM logical 
> volume and use them in Proxmox, is there a way to achieve that? 


You can, this is what I use. Just declare your iSCSI volume, but don't 
use it yet. Create a LUKS volume on it (just on one node): 


cryptsetup luksFormat /dev/sdc 

[...] 


Then open your new LUKS device: 


cryptsetup open --type=luks /dev/sdc clear 


Now you can use /dev/mapper/clear as LVM (pvcreate && vgcreate on one 
node before using it). 


Now, when you reboot one of your node, you just have to unlock the 
device with 


cryptsetup open --type=luks /dev/sdc clear 


Before you can access the data 

-- 

Logo FWS 

*Daniel Berteaud* 

FIREWALL-SERVICES SAS. 
Société de Services en Logiciels Libres 
Tel : 05 56 64 15 32 
Matrix: @dani:fws.fr 
/www.firewall-services.com/ 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox CEPH 6 servers failures!

2018-10-05 Thread Alexandre DERUMIER
>>Nice.. Perhaps if I create a VM in Proxmox01 and Proxmox02, and join this
>>VM into Cluster Ceph, can I solve to quorum problem?

You could have 1 VM (only one), but you need to replicate it (synchronously)
between the 2 Proxmox servers,
and you'll have a cluster hang until it has failed over.
Maybe with DRBD, for example.
Here is an old blog post about this kind of setup:
https://www.sebastien-han.fr/blog/2013/01/28/ceph-geo-replication-sort-of/




The only clean way to have a multi-site Ceph cluster
is to have 3 sites, with 1 monitor on each site (and multiple network links
between the sites).

BTW, if your second site has a power failure, your Ceph cluster will hang
on the 1st site too (you'll lose quorum too).

And you also need to configure the crushmap to get replication working
correctly between the sites.







- Mail original -
De: "Gilberto Nunes" 
À: "proxmoxve" 
Envoyé: Vendredi 5 Octobre 2018 14:31:20
Objet: Re: [PVE-User] Proxmox CEPH 6 servers failures!

Nice.. Perhaps if I create a VM in Proxmox01 and Proxmox02, and join this 
VM into Cluster Ceph, can I solve to quorum problem? 
--- 
Gilberto Nunes Ferreira 

(47) 3025-5907 
(47) 99676-7530 - Whatsapp / Telegram 

Skype: gilberto.nunes36 





Em sex, 5 de out de 2018 às 09:23, dorsy  escreveu: 

> Your question has already been answered. You need majority to have quorum. 
> 
> On 2018. 10. 05. 14:10, Gilberto Nunes wrote: 
> > Hi 
> > Perhaps this can help: 
> > 
> > https://imageshack.com/a/img921/6208/X7ha8R.png 
> > 
> > I was thing about it, and perhaps if I deploy a VM in both side, with 
> > Proxmox and add this VM to the CEPH cluster, maybe this can help! 
> > 
> > thanks 
> > --- 
> > Gilberto Nunes Ferreira 
> > 
> > (47) 3025-5907 
> > (47) 99676-7530 - Whatsapp / Telegram 
> > 
> > Skype: gilberto.nunes36 
> > 
> > 
> > 
> > 
> > 
> > Em sex, 5 de out de 2018 às 03:55, Alexandre DERUMIER < 
> aderum...@odiso.com> 
> > escreveu: 
> > 
> >> Hi, 
> >> 
> >> Can you resend your schema, because it's impossible to read. 
> >> 
> >> 
> >> but you need to have to quorum on monitor to have the cluster working. 
> >> 
> >> 
> >> - Mail original - 
> >> De: "Gilberto Nunes"  
> >> À: "proxmoxve"  
> >> Envoyé: Jeudi 4 Octobre 2018 22:05:16 
> >> Objet: [PVE-User] Proxmox CEPH 6 servers failures! 
> >> 
> >> Hi there 
> >> 
> >> I have something like this: 
> >> 
> >> CEPH01 | 
> >> |- CEPH04 
> >> | 
> >> | 
> >> CEPH02 |-| 
> >> CEPH05 
> >> | Optic Fiber 
> >> | 
> >> CEPH03 | 
> >> |--- CEPH06 
> >> 
> >> Sometime, when Optic Fiber not work, and just CEPH01, CEPH02 and CEPH03 
> >> remains, the entire cluster fail! 
> >> I find out the cause! 
> >> 
> >> ceph.conf 
> >> 
> >> [global] auth client required = cephx auth cluster required = cephx auth 
> >> service required = cephx cluster network = 10.10.10.0/24 fsid = 
> >> e67534b4-0a66-48db-ad6f-aa0868e962d8 keyring = 
> >> /etc/pve/priv/$cluster.$name.keyring mon allow pool delete = true osd 
> >> journal size = 5120 osd pool default min size = 2 osd pool default size 
> = 
> >> 3 
> >> public network = 10.10.10.0/24 [osd] keyring = 
> >> /var/lib/ceph/osd/ceph-$id/keyring [mon.pve-ceph01] host = pve-ceph01 
> mon 
> >> addr = 10.10.10.100:6789 mon osd allow primary affinity = true 
> >> [mon.pve-ceph02] host = pve-ceph02 mon addr = 10.10.10.110:6789 mon osd 
> >> allow primary affinity = true [mon.pve-ceph03] host = pve-ceph03 mon 
> addr 
> >> = 
> >> 10.10.10.120:6789 mon osd allow primary affinity = true 
> [mon.pve-ceph04] 
> >> host = pve-ceph04 mon addr = 10.10.10.130:6789 mon osd allow primary 
> >> affinity = true [mon.pve-ceph05] host = pve-ceph05 mon addr = 
> >> 10.10.10.140:6789 mon osd allow primary affinity = true 
> [mon.pve-ceph06] 
> >> host = pve-ceph06 mon addr = 10.10.10.150:6789 mon osd allow primary 
> >> affinity = true 
> >> 
> >> Any help will be welcome! 
> >> 
> >> --- 
> >> Gilberto Nunes Ferreira 
> >> 
> >> (47) 3025-5907 
> >> (47) 99676-7530 - Whatsapp / Telegram 
> >> 
> >> Skype: gilberto.nunes36 
> >> ___

Re: [PVE-User] Proxmox CEPH 6 servers failures!

2018-10-04 Thread Alexandre DERUMIER
Hi,

Can you resend your schema, because it's impossible to read.


But you need to have quorum among the monitors to have the cluster working.


- Mail original -
De: "Gilberto Nunes" 
À: "proxmoxve" 
Envoyé: Jeudi 4 Octobre 2018 22:05:16
Objet: [PVE-User] Proxmox CEPH 6 servers failures!

Hi there 

I have something like this: 

CEPH01 | 
|- CEPH04 
| 
| 
CEPH02 |-| 
CEPH05 
| Optic Fiber 
| 
CEPH03 | 
|--- CEPH06 

Sometime, when Optic Fiber not work, and just CEPH01, CEPH02 and CEPH03 
remains, the entire cluster fail! 
I find out the cause! 

ceph.conf

[global]
	auth client required = cephx
	auth cluster required = cephx
	auth service required = cephx
	cluster network = 10.10.10.0/24
	fsid = e67534b4-0a66-48db-ad6f-aa0868e962d8
	keyring = /etc/pve/priv/$cluster.$name.keyring
	mon allow pool delete = true
	osd journal size = 5120
	osd pool default min size = 2
	osd pool default size = 3
	public network = 10.10.10.0/24

[osd]
	keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.pve-ceph01]
	host = pve-ceph01
	mon addr = 10.10.10.100:6789
	mon osd allow primary affinity = true

[mon.pve-ceph02]
	host = pve-ceph02
	mon addr = 10.10.10.110:6789
	mon osd allow primary affinity = true

[mon.pve-ceph03]
	host = pve-ceph03
	mon addr = 10.10.10.120:6789
	mon osd allow primary affinity = true

[mon.pve-ceph04]
	host = pve-ceph04
	mon addr = 10.10.10.130:6789
	mon osd allow primary affinity = true

[mon.pve-ceph05]
	host = pve-ceph05
	mon addr = 10.10.10.140:6789
	mon osd allow primary affinity = true

[mon.pve-ceph06]
	host = pve-ceph06
	mon addr = 10.10.10.150:6789
	mon osd allow primary affinity = true

Any help will be welcome! 

--- 
Gilberto Nunes Ferreira 

(47) 3025-5907 
(47) 99676-7530 - Whatsapp / Telegram 

Skype: gilberto.nunes36 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox per VM memory limit

2018-10-03 Thread Alexandre DERUMIER
The qemu limit is 4TB.
Proxmox doesn't have a max limit in the VM config.


But I'm not sure that SeaBIOS supports more than 1TB (I never tried it, but in
2013 it was not supported).
Maybe enabling UEFI could help.
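
(A minimal sketch for switching a VM to UEFI, assuming VM id 100 and a storage named local-lvm for the EFI vars disk:

qm set 100 --bios ovmf --efidisk0 local-lvm:1
)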


- Mail original -
De: "Gilberto Nunes" 
À: "proxmoxve" 
Envoyé: Mardi 2 Octobre 2018 20:34:48
Objet: [PVE-User] Proxmox per VM memory limit

Hi there! 

How many memory per VM I get in PVE? 
Is there some limit? 1 TB? 2 TB? 
Just curious 

Thanks 
--- 
Gilberto Nunes Ferreira 

(47) 3025-5907 
(47) 99676-7530 - Whatsapp / Telegram 

Skype: gilberto.nunes36 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VM tooks a lot time to restart...

2018-09-20 Thread Alexandre DERUMIER
Hi,

Maybe the network link from NODE1 to the network storage is saturated?


- Mail original -
De: "Gilberto Nunes" 
À: "proxmoxve" 
Envoyé: Jeudi 20 Septembre 2018 15:31:17
Objet: [PVE-User] VM tooks a lot time to restart...

Hi there 

I have a bunch of VM running in two cluster HA... 
This VM's are lying in 6 Servers CEPH Proxmox storage. 

Sometimes, some VM's with Windows 2012R2, took lot a time to start. 
So, I need migrate it to the other node, let's say NODE2, in order to VM 
start properly... After VM are UP and RUNNING, I back the VM to NODE1. 
I don't have a glue why this happen 
Somebody here has the same behavior? 

Cheers 

--- 
Gilberto Nunes Ferreira 

(47) 3025-5907 
(47) 99676-7530 - Whatsapp / Telegram 

Skype: gilberto.nunes36 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE is using CEPH 12.2.7 but 10.2.5 is only avaible in Debian repos

2018-09-12 Thread Alexandre DERUMIER
>>Hi, 
>>I am just mount cephfs via fstab - I have not installed anything 
>>ceph-related but I am relying on the kernel driver. 
>>As far as I know, this cannot really be updated unless I upgrade the 
>>kernel? 
>>[ http://docs.ceph.com/docs/master/cephfs/kernel/ | 
>>http://docs.ceph.com/docs/master/cephfs/kernel/ ]

Yes, indeed. If you use the kernel client, it's tied to the kernel version
(the kernel client is faster than the fuse version).

An old kernel client is compatible with a new Ceph, but sometimes misses new features
(quotas, for example).



- Mail original -
De: "Jonathan Sélea" 
À: "proxmoxve" 
Envoyé: Mercredi 12 Septembre 2018 10:58:57
Objet: Re: [PVE-User] PVE is using CEPH 12.2.7 but 10.2.5 is only avaible in 
Debian repos

Hi, 
I am just mount cephfs via fstab - I have not installed anything 
ceph-related but I am relying on the kernel driver. 
As far as I know, this cannot really be updated unless I upgrade the 
kernel? 
http://docs.ceph.com/docs/master/cephfs/kernel/ 

The machine does not run ceph itself. 

Please correct me if I am wrong :) 

/ Jonathan 

> 
> You sure about that? I can see 
> https://download.ceph.com/debian-luminous/dists/stretch/ there, and the 
> instructions from their docs worked for me: 
> 
> :~# apt-cache policy ceph 
> ceph: 
> Installed: (none) 
> Candidate: 12.2.8-1~bpo90+1 
> Version table: 
> 12.2.8-1~bpo90+1 500 
> 500 https://download.ceph.com/debian-luminous stretch/main 
> amd64 
> Packages 
> 10.2.5-7.2 500 
> 500 http://ftp.at.debian.org/debian stretch/main amd64 Packages 
> 
> Can you tell us what you tried and didn't work for you? This is the 
> sources.list entry that I added for it: 
> 
> deb https://download.ceph.com/debian-luminous/ stretch main 
> 
> Hope that helps, 
> Rhonda 
> 
> 
> ___ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 


___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE is using CEPH 12.2.7 but 10.2.5 is only avaible in Debian repos

2018-09-11 Thread Alexandre DERUMIER
>>I am trying to diagnose a performance issue we have in our cluster. 
>>What we found is that Proxmox 5 is using Ceph Luminous 12.2.7, but our 
>>clients that is mounting Cephfs is running Jewel 10.2.5 – Is this an issue? 

It's better to use matching package versions for the client too (latest kernel or 
latest ceph-fuse).

ceph.com provides packages for Debian for the different releases:

http://docs.ceph.com/docs/master/install/get-packages/?highlight=debian
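
For example, a minimal way to pull a matching client from the upstream repository (the release name and Debian codename here are assumptions; adjust them to your cluster version):

# /etc/apt/sources.list.d/ceph.list
deb https://download.ceph.com/debian-luminous/ stretch main

apt update
apt install ceph-common ceph-fuse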

- Mail original -
De: "Jonathan Sélea" 
À: "proxmoxve" 
Envoyé: Mardi 11 Septembre 2018 08:56:24
Objet: [PVE-User] PVE is using CEPH 12.2.7 but 10.2.5 is only avaible in Debian 
repos

Hi, 



I am trying to diagnose a performance issue we have in our cluster. 
What we found is that Proxmox 5 is using Ceph Luminous 12.2.7, but our 
clients that is mounting Cephfs is running Jewel 10.2.5 – Is this an issue? 
Could this degrade the performance between the clients and the cluster? 




___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Cloning a running VM - is it safe?

2018-09-04 Thread Alexandre DERUMIER
Note that it's also possible to clone a snapshot.

(Take a snapshot, then clone the VM and choose the snapshot instead of 'current'.)

In this case, the pending writes should be flushed into the snapshot if you have the qemu 
agent installed.
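
On the CLI that flow looks roughly like this (the VM IDs, snapshot name and clone name are placeholders):

qm snapshot 100 preclone
qm clone 100 200 --snapname preclone --name vm-copy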



- Mail original -
De: "Klaus Darilion" 
À: "proxmoxve" 
Envoyé: Mardi 4 Septembre 2018 17:14:26
Objet: Re: [PVE-User] Cloning a running VM - is it safe?

Am 04.09.2018 um 16:39 schrieb Alexandre DERUMIER: 
> live cloning don't use snapshot, but use qemu drive mirror (like move disk), 
> but at the end, don't do the switch from current disk to new disk. 

Thanks, that was the info I was looking for - how it is done technically. 

Klaus 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Cloning a running VM - is it safe?

2018-09-04 Thread Alexandre DERUMIER
Live cloning doesn't use a snapshot, but uses qemu drive mirror (like move disk);
at the end, it simply doesn't switch from the current disk to the new disk.

It should work in almost all cases, but be careful: I don't think that
pending writes still in memory are flushed to the new disk.

(It's as if you clone the VM and, at the end, power it off.)

So some writes made just before the end may be lost.


- Mail original -
De: "Klaus Darilion" 
À: "proxmoxve" 
Envoyé: Mardi 4 Septembre 2018 15:55:37
Objet: [PVE-User] Cloning a running VM - is it safe?

Hi! 

When I clone the running VM, is it in the brackground doing a temp. 
snapshot which gets cloned? (if relevant, the source VM is using a ZFS 
volume) 

I have not found this info the docs. 

Thanks 
Klaus 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] VZDump and Ceph Snapshots

2018-08-28 Thread Alexandre DERUMIER
Hi,

vma backup only works on a running VM (attached disks),
so no, it's not possible currently.

Currently, I'm doing Ceph backups to another remote Ceph backup cluster
with a custom script:


* Start 
* guest-fs-freeze 
* rbd snap $image@vzdump_$timestamp 
* guest-fs-thaw 
* rbd export $image@vzdump_$timestamp | rbd import 


This works very well (with incremental backups).
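
A minimal sketch of that flow for a single disk, assuming the qemu guest agent is reachable through qm agent; the pool, image, VM ID and host names are placeholders (incremental runs use rbd export-diff / import-diff in the same way):

qm agent 100 fsfreeze-freeze
rbd snap create rbd/vm-100-disk-1@vzdump_20180828
qm agent 100 fsfreeze-thaw
rbd export rbd/vm-100-disk-1@vzdump_20180828 - | ssh backup-host "rbd import - backup/vm-100-disk-1"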


I'll try to put my code on github soon.

(I have a nice GUI/CLI too, to restore a full VM or individual files from inside the 
backup.)





- Mail original -
De: "Mark Schouten" 
À: "proxmoxve" 
Envoyé: Mardi 28 Août 2018 16:27:42
Objet: [PVE-User] VZDump and Ceph Snapshots

Hi, 

I'm currently using vzdump with the snapshot method to periodically 
generate vma-files for disasterrecovery. vzdump in snapshot mode 
instructs Qemu to start backing up the disk to a specific location, and 
while doing so, VM users can suffer poor performance. 

We run practically all VMs on Ceph storage, which has snapshot 
functionality. 

Would it be feasable to alter VZDump to use the following flow: 

* Start 
* guest-fs-freeze 
* rbd snap $image@vzdump_$timestamp 
* guest-fs-thaw 
* qemu-img convert -O raw rbd:$image@vzdump_$timestamp $tmpdir 
* vma create 
* rbd snap rm $image@vzdump_$timestamp 
* Done 

Regards, 

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/ 
Mark Schouten | Tuxis Internet Engineering 
KvK: 61527076 | http://www.tuxis.nl/ 
T: 0318 200208 | i...@tuxis.nl 


___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Linux improvements for ceph and general running vm's

2018-08-26 Thread Alexandre DERUMIER
Hi,

Do you want to improve latency or throughput?


There are a lot of things to tune in ceph.conf before tuning the network.
Can you send your ceph.conf too? (For the Ceph cluster && the Proxmox nodes, if Ceph is 
outside Proxmox.)

SSD? HDD?


- Mail original -
De: "Gilberto Nunes" 
À: "proxmoxve" 
Envoyé: Lundi 27 Août 2018 05:00:52
Objet: [PVE-User] Linux improvements for ceph and general running vm's

Hi there 

Can anyone tell me if the options below can improve anything network-related: 

net.ipv4.tcp_low_latency = 1 
net.core.rmem_default = 1000 
net.core.wmem_default = 1000 
net.core.rmem_max = 16777216 
net.core.wmem_max = 16777216 

net.ipv4.tcp_rfc1337=1 

net.ipv4.tcp_window_scaling=1 

net.ipv4.tcp_workaround_signed_windows=1 

net.ipv4.tcp_sack=1 

net.ipv4.tcp_fack=1 

net.ipv4.tcp_low_latency=1 

net.ipv4.ip_no_pmtu_disc=0 

net.ipv4.tcp_mtu_probing=1 

net.ipv4.tcp_frto=2 

net.ipv4.tcp_frto_response=2 

net.ipv4.tcp_congestion_control=illinois 

Thanks a lot 

--- 
Gilberto Nunes Ferreira 

(47) 3025-5907 
(47) 99676-7530 - Whatsapp / Telegram 

Skype: gilberto.nunes36 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] About the FRR integrated into the PVE?

2018-07-12 Thread Alexandre DERUMIER
> What do you want to do with frr ? (vxlan - bgp evpn ?) 

>>yes, this is more urgent! 

Great. Counting me, that's already 3 different users who need it :)
I'll send documentation soon, but it's already working in my test lab.

I'm currently trying to polish the Proxmox integration, make network config 
reloading possible, ...

If you have time to test next month, it could be great :)



>>Add ospf/ospf6 ?
>>
>>In small and medium networks, the OSPF will be more advantageous! 
>>By the way, BGP also needs to establish neighbors first, while establishing 
>>neighbors is usually OSPF.
>>
>>It's just a suggestion. I really don't have the experience of large-scale 
>>network environment.

Well, you can do BGP and OSPF with FRR.
I don't know yet whether we need to manage frr.conf from the Proxmox GUI or not, as 
there are so many possible setups, depending on the infrastructure.


The advantage of BGP EVPN is that VM live migration works, because the 
control plane is done through BGP, learning MAC addresses from the VMs 
and dynamically creating routes.


BGP/OSPF is more for routing the underlay of the Proxmox hosts (if you have a 
layer-3 network).
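
As a rough illustration, this is the kind of frr.conf fragment a BGP EVPN setup uses (the AS number and neighbor address are placeholders, not a recommended configuration):

router bgp 65000
 neighbor 192.168.0.1 remote-as 65000
 address-family l2vpn evpn
  neighbor 192.168.0.1 activate
  advertise-all-vni
 exit-address-family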



- Mail original -
De: "lyt_yudi" 
À: "proxmoxve" 
Envoyé: Jeudi 12 Juillet 2018 03:20:47
Objet: Re: [PVE-User] About the FRR integrated into the PVE?

> On 11 July 2018, at 21:17, Alexandre DERUMIER wrote: 
> 
> What do you want to do with frr ? (vxlan - bgp evpn ?) 

yes, this is more urgent! 


___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] About the FRR integrated into the PVE?

2018-07-11 Thread Alexandre DERUMIER
Just curious,

What do you want to do with frr ? (vxlan - bgp evpn ?)




- Mail original -
De: "lyt_yudi" 
À: "proxmoxve" 
Envoyé: Mercredi 11 Juillet 2018 09:58:26
Objet: Re: [PVE-User] About the FRR integrated into the PVE?

> On 10 July 2018, at 15:58, Alexandre DERUMIER wrote: 
> 
> 
> I have sent a package for proxmox last month, 
> 
> I need to rebase it on frr 5.0.1. 
> 
> (I need it for vxlan bgp evpn) 

It's too fast you are. Great! 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] About the FRR integrated into the PVE?

2018-07-10 Thread Alexandre DERUMIER
Hi,

I have sent a package for proxmox last month,

I need to rebase it on frr 5.0.1.

(I need it for vxlan bgp evpn)


- Mail original -
De: "lyt_yudi" 
À: "proxmoxve" 
Envoyé: Mardi 10 Juillet 2018 03:41:38
Objet: [PVE-User]  About the FRR integrated into the PVE?

Hi, 

will it be better integrated into the PVE network architecture? 

About https://frrouting.org/  

https://github.com/FRRouting/frr  

___ 
pve-user mailing list 
pve-user@pve.proxmox.com  
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] networking adjustment | hope to get some feedback

2018-06-21 Thread Alexandre DERUMIER
>>For example: with the dual 10G LACP connection to each server, we can 
>>only use mtu size 1500. Are we loosing much there..? Or would there be a 
>>way around this, somehow? 

You can set MTU 9000 on your bridge and bond.
If your VMs use MTU 1500 (inside the VM), their packets will still use MTU 1500.


>>I tried also assigning the ceph cluster ip 10.10.89.10/11/12 to the 
>>bond0, instead of assigning it to vmbr0 as an alias. But in that setup I 
>>could never ping the other machine, so that somehow doesn't work. :-( 

You can't set an IP on an interface that is enslaved in a bridge.

But you can use a VLAN interface on the bond instead, for example:


- management proxmox without vlan && ceph on dedicated vlan

auto bond0 
iface bond0 inet manual 
slaves eth1 eth2 
bond_miimon 100 
bond_mode 802.3ad 
bond_xmit_hash_policy layer3+4 

 
auto vmbr0 
iface vmbr0 inet static 
address a.b.c.10 
netmask 255.255.255.0 
gateway a.b.c.1 
bridge_ports bond0 
bridge_stp off 
bridge_fd 0 


#dedicated ceph vlan
auto bond0.100 
iface bond0.100 inet static
address ...
netmask 


or with dedicated vlan for proxmox management && ceph
--



auto bond0 
iface bond0 inet manual 
slaves eth1 eth2 
bond_miimon 100 
bond_mode 802.3ad 
bond_xmit_hash_policy layer3+4 


auto vmbr0 
iface vmbr0 inet static 
bridge_ports bond0 
bridge_stp off 
bridge_fd 0 

#dedicated proxmox vlan
auto bond0.99
iface bond0.99 inet static
address a.b.c.10 
netmask 255.255.255.0 
gateway a.b.c.1 

#dedicated ceph vlan
auto bond0.100 
iface bond0.100 inet static
address ...
netmask 





- Mail original -
De: "mj" 
À: "proxmoxve" 
Envoyé: Jeudi 21 Juin 2018 14:31:28
Objet: Re: [PVE-User] networking adjustment | hope to get some feedback

Hi, 

So, I setup a test rig, with (only) two proxmox test-servers, with two 
NICs per server to test. 

This /etc/network/interfaces seems to work well: 

> iface eth1 inet manual 
> 
> iface eth2 inet manual 
> 
> auto bond0 
> iface bond0 inet manual 
> slaves eth1 eth2 
> bond_miimon 100 
> bond_mode 802.3ad 
> bond_xmit_hash_policy layer3+4 
> 
> auto vmbr0 
> iface vmbr0 inet static 
> address a.b.c.10 
> netmask 255.255.255.0 
> gateway a.b.c.1 
> bridge_ports bond0 
> bridge_stp off 
> bridge_fd 0 
> up ip addr add 10.10.89.10/24 dev vmbr0 || true 
> down ip addr del 10.10.89.10/24 dev vmbr0 || true 

On the (procurve 5400) chassis, I configured the LACP like this: 
> trunk D1-D2 Trk1 LACP 
and 
> trunk D3-D4 Trk2 LACP 
resulting in this: 
> Procurve chassis(config)# show trunk 
> 
> Load Balancing Method: L3-based (default) 
> 
> Port | Name Type | Group Type 
>  +  - + --  
> D1 | Link to pve001 - 1 10GbE-T | Trk1 LACP 
> D2 | Link to pve001 - 2 10GbE-T | Trk1 LACP 
> D3 | Link to pve002 - 1 10GbE-T | Trk2 LACP 
> D4 | Link to pve002 - 2 10GbE-T | Trk2 LACP 

The above config allows me to assign VLANs to lacp trunks ("Trk1", 
"Trk2") in the chassis webinterface like you would do with ports. 

Then I did some reading on load balancing between the trunked ports, and 
figured that load balancing based on L4 would perhaps work better for 
us, so I changed it with 
> trunk-load-balance L4 

Since we are running the public and cluster network over the same wires, 
I don't think we can enable jumbo frames. Or would there be a way to 
make ceph traffic use a specific vlan, so we can enable jumbo frames on 
that vlan? 

I realise that this is perhaps all very specific to our environment, but 
again: if there is anyone here with insights, tips, trics, please, 
feedback is welcome. 

For example: with the dual 10G LACP connection to each server, we can 
only use mtu size 1500. Are we loosing much there..? Or would there be a 
way around this, somehow? 

I tried also assigning the ceph cluster ip 10.10.89.10/11/12 to the 
bond0, instead of assigning it to vmbr0 as an alias. But in that setup I 
could never ping the other machine, so that somehow doesn't work. :-( 

(thinking that with the ceph ip on the bond0, my VMs would not be able 
to see that traffic..?) 

Again: all feedback welcome. 

MJ 



On 06/18/2018 11:56 AM, mj wrote: 
> Hi all, 
> 
> After having bought some new networking equipment, and gaining more 
> insight over the last two years, I am planning to make some adjustments 
> to our proxmox/ceph setup, and I would *greatly* appreciate some 
> feedback :-) 
> 
> We are running a three-identical-server proxmox/ceph setup, with on each 
> server: 
> 
> NIC1 Ceph cluster and monitors on 10.10.89.10/11/12 (10G ethernet) 
> NIC2 clients and public ip a.b.c.10/11/12 (1G ethernet) 
> 
> Since we bought new hardware, I can connect each server to our HP 
> chassis, over a dual 10G bonded LACP connection. 
> 
> I obviously need to keep the (NIC1) public IP, but since the ceph 
> monitors ip is difficult to change, I'd like to keep the (NIC2) 
> 10.10.89.x

Re: [PVE-User] pve-csync version of pve-zsync?

2018-05-17 Thread Alexandre DERUMIER
>>Could you please elaborate more on how you have implemented ceph 
>>replication using proxmox?

As I said, I didn't implement it with Proxmox code currently,

only custom scripts calling the Ceph API directly.

I need to dig a little bit more into the current Proxmox ZFS replication, 
because it's mainly focused on local storage replication,
so it's a little bit different with shared storage (VM live migration, HA 
failover, ...).


- Mail original -
De: "Mark Adams" 
À: "proxmoxve" 
Envoyé: Jeudi 17 Mai 2018 17:36:43
Objet: Re: [PVE-User] pve-csync version of pve-zsync?

Hi Alexander, 

Could you please elaborate more on how you have implemented ceph 
replication using proxmox? 

Thanks, 
Mark 


On Thu, 17 May 2018, 15:26 Alexandre DERUMIER,  wrote: 

> Hi, 
> 
> I'm currently a lot busy working on network code. 
> 
> for now, we have implemented ceph replication out of proxmox code, I'll 
> try to work on it this summer. 
> 
> 
> - Mail original - 
> De: "Mark Adams"  
> À: "proxmoxve"  
> Envoyé: Mardi 15 Mai 2018 00:13:03 
> Objet: Re: [PVE-User] pve-csync version of pve-zsync? 
> 
> Hi Alexandre, 
> 
> Did you ever get a chance to take a look at this? 
> 
> Regards, 
> Mark 
> 
> On 13 March 2018 at 18:32, Alexandre DERUMIER  
> wrote: 
> 
> > Hi, 
> > 
> > I have plans to implement storage replication for rbd in proxmox, 
> > like for zfs export|import. (with rbd export-diff |rbd import-diff ) 
> > 
> > I'll try to work on it next month. 
> > 
> > I'm not sure that currently a plugin infrastructe in done in code, 
> > and that it's able to manage storages with differents name. 
> > 
> > Can't tell if it'll be hard to implement, but the workflow is almost the 
> > same. 
> > 
> > I'll try to look also at rbd mirror, but it's only work with librbd in 
> > qemu, not with krbd, 
> > so it can't be implemented for container. 
> > 
> > 
> > - Mail original - 
> > De: "Mark Adams"  
> > À: "proxmoxve"  
> > Envoyé: Mardi 13 Mars 2018 18:52:21 
> > Objet: Re: [PVE-User] pve-csync version of pve-zsync? 
> > 
> > Hi Alwin, 
> > 
> > I might have to take another look at it, but have you actually done this 
> > with 2 proxmox clusters? I can't remember the exact part I got stuck on 
> as 
> > it was quite a while ago, but it wasn't as straight forward as you 
> > suggest. 
> > I think you couldn't use the same cluster name, which in turn created 
> > issues trying to use the "remote" (backup/dr/whatever you wanna call it) 
> > cluster with proxmox because it needed to be called ceph. 
> > 
> > The docs I was referring to were the ceph ones yes. Some of the options 
> > listed in that doc do not work in the current proxmox version (I think 
> the 
> > doc hasn't been updated for newer versions...) 
> > 
> > Regards, 
> > Mark 
> > 
> > On 13 March 2018 at 17:19, Alwin Antreich  
> wrote: 
> > 
> > > On Mon, Mar 12, 2018 at 04:51:32PM +, Mark Adams wrote: 
> > > > Hi Alwin, 
> > > > 
> > > > The last I looked at it, rbd mirror only worked if you had different 
> > > > cluster names. Tried to get it working with proxmox but to no avail, 
> > > > without really messing with how proxmox uses ceph I'm not sure it's 
> > > > feasible, as proxmox assumes the default cluster name for 
> > everything... 
> > > That isn't mentioned anywhere in the ceph docs, they use for ease of 
> > > explaining two different cluster names. 
> > > 
> > > If you have a config file named after the cluster, then you can 
> specifiy 
> > > it on the command line. 
> > > http://docs.ceph.com/docs/master/rados/configuration/ 
> > > ceph-conf/#running-multiple-clusters 
> > > 
> > > > 
> > > > Also the documentation was a bit poor for it IMO. 
> > > Which documentation do you mean? 
> > > ? -> http://docs.ceph.com/docs/master/rbd/rbd-mirroring/ 
> > > 
> > > > 
> > > > Would also be nice to choose specifically which VM's you want to be 
> > > > mirroring, rather than the whole cluster. 
> > > It is done either per pool or image separately. See the link above. 
> > > 
> > > > 
> > > > I've manually done rbd export-diff and rbd import-diff between 2 
> > separate 
> > > > proxmox clusters ove

Re: [PVE-User] pve-csync version of pve-zsync?

2018-05-17 Thread Alexandre DERUMIER
Hi,

I'm currently very busy working on network code.

For now, we have implemented Ceph replication outside of the Proxmox code; I'll try to 
work on it this summer.


- Mail original -
De: "Mark Adams" 
À: "proxmoxve" 
Envoyé: Mardi 15 Mai 2018 00:13:03
Objet: Re: [PVE-User] pve-csync version of pve-zsync?

Hi Alexandre, 

Did you ever get a chance to take a look at this? 

Regards, 
Mark 

On 13 March 2018 at 18:32, Alexandre DERUMIER  wrote: 

> Hi, 
> 
> I have plans to implement storage replication for rbd in proxmox, 
> like for zfs export|import. (with rbd export-diff |rbd import-diff ) 
> 
> I'll try to work on it next month. 
> 
> I'm not sure that currently a plugin infrastructe in done in code, 
> and that it's able to manage storages with differents name. 
> 
> Can't tell if it'll be hard to implement, but the workflow is almost the 
> same. 
> 
> I'll try to look also at rbd mirror, but it's only work with librbd in 
> qemu, not with krbd, 
> so it can't be implemented for container. 
> 
> 
> - Mail original - 
> De: "Mark Adams"  
> À: "proxmoxve"  
> Envoyé: Mardi 13 Mars 2018 18:52:21 
> Objet: Re: [PVE-User] pve-csync version of pve-zsync? 
> 
> Hi Alwin, 
> 
> I might have to take another look at it, but have you actually done this 
> with 2 proxmox clusters? I can't remember the exact part I got stuck on as 
> it was quite a while ago, but it wasn't as straight forward as you 
> suggest. 
> I think you couldn't use the same cluster name, which in turn created 
> issues trying to use the "remote" (backup/dr/whatever you wanna call it) 
> cluster with proxmox because it needed to be called ceph. 
> 
> The docs I was referring to were the ceph ones yes. Some of the options 
> listed in that doc do not work in the current proxmox version (I think the 
> doc hasn't been updated for newer versions...) 
> 
> Regards, 
> Mark 
> 
> On 13 March 2018 at 17:19, Alwin Antreich  wrote: 
> 
> > On Mon, Mar 12, 2018 at 04:51:32PM +, Mark Adams wrote: 
> > > Hi Alwin, 
> > > 
> > > The last I looked at it, rbd mirror only worked if you had different 
> > > cluster names. Tried to get it working with proxmox but to no avail, 
> > > without really messing with how proxmox uses ceph I'm not sure it's 
> > > feasible, as proxmox assumes the default cluster name for 
> everything... 
> > That isn't mentioned anywhere in the ceph docs, they use for ease of 
> > explaining two different cluster names. 
> > 
> > If you have a config file named after the cluster, then you can specifiy 
> > it on the command line. 
> > http://docs.ceph.com/docs/master/rados/configuration/ 
> > ceph-conf/#running-multiple-clusters 
> > 
> > > 
> > > Also the documentation was a bit poor for it IMO. 
> > Which documentation do you mean? 
> > ? -> http://docs.ceph.com/docs/master/rbd/rbd-mirroring/ 
> > 
> > > 
> > > Would also be nice to choose specifically which VM's you want to be 
> > > mirroring, rather than the whole cluster. 
> > It is done either per pool or image separately. See the link above. 
> > 
> > > 
> > > I've manually done rbd export-diff and rbd import-diff between 2 
> separate 
> > > proxmox clusters over ssh, and it seems to work really well... It 
> would 
> > > just be nice to have a tool like pve-zsync so I don't have to write 
> some 
> > > script myself. Seems to me like something that would be desirable as 
> part 
> > > of proxmox as well? 
> > That would basically implement the ceph rbd mirror feature. 
> > 
> > > 
> > > Cheers, 
> > > Mark 
> > > 
> > > On 12 March 2018 at 16:37, Alwin Antreich  
> > wrote: 
> > > 
> > > > Hi Mark, 
> > > > 
> > > > On Mon, Mar 12, 2018 at 03:49:42PM +, Mark Adams wrote: 
> > > > > Hi All, 
> > > > > 
> > > > > Has anyone looked at or thought of making a version of pve-zsync 
> for 
> > > > ceph? 
> > > > > 
> > > > > This would be great for DR scenarios... 
> > > > > 
> > > > > How easy do you think this would be to do? I imagine it wouId it 
> be 
> > quite 
> > > > > similar to pve-zsync, but using rbd export-diff and rbd 
> import-diff 
> > > > instead 
> > > > > of zfs send and zfs receive? so could the existing script be 
> > relat

Re: [PVE-User] Meltdown/Spectre mitigation options / Intel microcode

2018-05-08 Thread Alexandre DERUMIER
Note that you only need SPEC-CTRL and the latest microcode if your VMs are Windows, 
or Linux with a kernel without retpoline mitigation.


PCID is only there to improve performance (and you need a recent kernel (>4.13, I 
think) in your VM, because it was not used before).
Setting a vCPU model other than kvm64 improves performance too, because of INVPCID 
support (>= Haswell).

Personally, I have upgraded all my Debian guests to a 4.15 kernel with retpoline + PCID 
+ the vCPU model set to the lowest Intel model of my cluster (Xeon v3),

and my Windows VMs use the SPEC-CTRL option + microcode on the Proxmox host.

you can use this script
https://github.com/speed47/spectre-meltdown-checker

to check whether your VM is protected or not (use -v for verbose output to see PCID/INVPCID 
support).
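
For example, a minimal run inside the guest (the script file name is an assumption based on the repository above):

git clone https://github.com/speed47/spectre-meltdown-checker
cd spectre-meltdown-checker
sh spectre-meltdown-checker.sh -v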





- Mail original -
De: "uwe sauter de" 
À: "proxmoxve" 
Envoyé: Mardi 8 Mai 2018 15:31:52
Objet: [PVE-User] Meltdown/Spectre mitigation options / Intel microcode

Hi all, 

I recently discovered that one of the updates since turn of the year introduced 
options to let the VM know about Meltdown/Spectre 
mitigation on the host (VM configuration -> processors -> advanced -> PCID & 
SPEC-CTRL). 

I'm not sure if I understand the documentation correctly so please correct me 
if I'm wrong with the following: 

I have two different CPU types in my cluster, Intel Xeon E5606 and Intel Xeon 
E5-2670. Both do not have the latest microcode 
because I don't have stretch-backports enabled (which provides microcode from 
20180312 in contrast to stretch's version from 
20170707). 

Both have the "pcid" CPU flag, as well as "pti" and "retpoline" (whiche are not 
mentioned in the docs and probably show kernel 
features and not CPU features). Both *do not* have "spec_ctrl". 

All my VMs are configured to use "default (kvm64)" CPUs. 

This means that I should manually enable the PCID flag as the kvm64 CPU doesn't 
set this automatically. But I mustn't enable 
SPEC-CTRL because my host hardware doesn't support the feature. Is this 
correct? 



Regards, 

Uwe 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Custom storage in ProxMox 5

2018-03-30 Thread Alexandre DERUMIER
Hi,

>>Ceph has rather larger overheads

Agreed, there is overhead, but performance increases with each release.
I think the biggest problem is that you can't reach more than 70-90k iops with 1 
VM disk currently,
and maybe latency could be improved too.

>>much bigger PITA to admin
Don't agree. I'm running 5 Ceph clusters (around 200TB of SSD) with almost zero 
maintenance.

>>does not perform as well on whitebox hardware 
Define whitebox hardware?
The only thing is to not use consumer SSDs (because they suck with direct I/O).


>>Can Ceph run multiple replication and ec levels for different files on the 
>>same volume?
You can manage it per pool (as block storage).
>> Near instantaneous spanshots 
yes

>>and restores 

A little bit slower to rollback.

>>at any level of the filesystem you choose? 

Why are you talking about a filesystem? Are you mounting Lizard inside your VMs, 
or using it for hosting VM disks?


>>Run on commodity hardware with trivial adding of disks as required? 
yes
- Mail original -
De: "Lindsay Mathieson" 
À: "proxmoxve" 
Envoyé: Vendredi 30 Mars 2018 05:05:11
Objet: Re: [PVE-User] Custom storage in ProxMox 5

Ceph has rather larger overheads, much bigger PITA to admin, does not perform 
as well on whitebox hardware – in fact the Ceph crowd std reply to issues is to 
spend big on enterprise hardware and is far less flexible. 

Can Ceph run multiple replication and ec levels for different files on the same 
volume? Can you change goal settings on the fly? Near instantaneous spanshots 
and restores at any level of the filesystem you choose? Run on commodity 
hardware with trivial adding of disks as required? 



ZFS is not a distributed filesystem, so don’t know why you bring it up. Though 
I am using ZFS as the underlying filesystem. 



-- 
Lindsay Mathieson 



 
From: pve-user  on behalf of 
ad...@extremeshok.com  
Sent: Friday, March 30, 2018 11:53:10 AM 
To: pve-user@pve.proxmox.com 
Subject: Re: [PVE-User] Custom storage in ProxMox 5 

Don't waste your time with lizardfs. 

Proxmox 5+ has proper Ceph and ZFS support. 

Ceph does everything and more, and ZFS is about the de-facto container 
storage medium. 


On 03/30/2018 03:20 AM, Lindsay Mathieson wrote: 
> I was working on a custom storage plugin (lizardfs) for VE 4.x, looking to 
> revisit it. Has the API changed much (or at all) for PX 5? Is there any 
> documentation for it? 
> 
> Thanks, 
> 
> -- 
> Lindsay Mathieson 
> 
> ___ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Notify VM after migration

2018-03-28 Thread Alexandre DERUMIER
Hi,

I don't think it's possible currently.

BTW, could you use chrony instead of ntpd? It's much faster at keeping the clock 
in sync than ntpd or openntpd.

- Mail original -
De: "Andreas Herrmann" 
À: "proxmoxve" 
Envoyé: Lundi 26 Mars 2018 10:22:06
Objet: [PVE-User] Notify VM after migration

Hi there, 

is there any way to notify a VM after live migration has finished? I need to 
restart ntpd because of the time shift. 

Thanks, 
Andreas 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] how many nodes can there be in a proxmox cluster ?

2018-03-25 Thread Alexandre DERUMIER
>>When you say 20 nodes with unicast - you're saying they're across subnets? 

No, currently it's the same subnet, on the same pair of switches (MLAG).
It's a fairly new Mellanox s2100 switch with fast ASICs (avg latency 0.015ms).


I have too many problems with multicast in my infrastructure.

No special tuning.


I had an older cluster where, in the past, I had to set

  token: 4000

in corosync config.
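
For reference, that setting goes in the totem section of corosync.conf; a minimal sketch (the value is just the one mentioned above, not a general recommendation):

totem {
  ...
  token: 4000
}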


I'm planning to test it across different datacenters soon (1ms latency), but I 
don't think it'll be possible
with these latencies.

- Mail original -
De: "Josh Knight" 
À: "proxmoxve" 
Envoyé: Mercredi 21 Mars 2018 13:28:12
Objet: Re: [PVE-User] how many nodes can there be in a proxmox cluster ?

Hey Alexandre, 

When you say 20 nodes with unicast - you're saying they're across subnets? 

I ask because we've been looking at doing this, more for a vCenter-like 
single console to manage all VMs and hosts instead of HA, but I thought I 
read on the proxmox wiki that it wasn't recommended for larger clusters? 
Does it work well for you at 20 nodes or are there any config tweaks or 
gotchas that you've run into? 

Thanks, 

Josh 


Josh Knight 


On Wed, Mar 21, 2018 at 2:03 AM, Alexandre DERUMIER  
wrote: 

> I'm running 20 nodes clusters here, with unicast. (I need to increase 
> corosync timeout a little bit with more than 12 nodes). 
> 
> I think the hard limit is around 100nodes in last corosync, but 
> recommendations is around 32. with more than that, I think the corosync 
> timeout need to be increase high and you'll have latency. 
> 
> 
> 
> De: "Ronny Aasen"  
> À: "proxmoxve"  
> Envoyé: Mardi 20 Mars 2018 21:30:13 
> Objet: [PVE-User] how many nodes can there be in a proxmox cluster ? 
> 
> once around proxmox 2 i read something about a hard limit about 32 
> nodes, due to corosync limits. but that there should probably not be 
> that many. 
> 
> i have not been able to find any more recent information. 
> 
> is there a hard or soft limit to how large a proxmox cluster you can 
> have ? what is the largest installations people are running. and does 
> this affect stabillity of the cluster? 
> 
> 
> kind regards 
> 
> Ronny Aasen 
> 
> ___ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
> ___ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
> 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pmbalance

2018-03-23 Thread Alexandre DERUMIER
Thanks for your script, I was looking for something like that.

I'll try it next week and try to debug.


I see two possible improvements:

- check that the shared storage is available on the target node
- take the KSM value into account in the memory count (with KSM enabled, all nodes can be at 
80% memory usage but with very different KSM usage)



- Mail original -
De: "Mark Schouten" 
À: "proxmoxve" 
Envoyé: Jeudi 22 Mars 2018 10:08:06
Objet: [PVE-User] pmbalance

Hi, 

I have a script that is able to automatically move around VM's so the 
load on the clusternodes is somewhat equal. You can find the script 
here: https://tuxis.bestandonline.nl/index.php/s/JzH3cz2wGdY8QXs 

It's called pmbalance. 

Now, since 4.4 I have the issue that I can no longer get info or 
commands from other nodes than the node I'm running this script on. I 
get "500 proxy not allowed" as soon as I get to 'get_vm_description()'. 

What am I doing wrong? Thanks! 


root@proxmox2-4:~# ./pmbalance --verbose --dry-run 
V: Looking for nodes on this cluster.. 
V: Found proxmox2-4 
V: Found proxmox2-5 
V: Found proxmox2-2 
V: Found proxmox2-1 
V: Found proxmox2-3 
V: Calculating the average memory free on the cluster 
V: proxmox2-2 is at 26% free 
V: proxmox2-4 is at 39% free 
V: proxmox2-1 is at 69% free 
V: proxmox2-3 is at 19% free 
V: proxmox2-5 is at 20% free 
V: Which is: 34% 
V: We should lower usage of proxmox2-3 by: 15% to reach 34% 
V: 15 of 67524956160 = 10128743424 bytes 
V: proxmox2-1 will receive the VM 
V: Looking for migratable VMs on proxmox2-3 
500 proxy not allowed 



-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/ 
Mark Schouten | Tuxis Internet Engineering 
KvK: 61527076 | http://www.tuxis.nl/ 
T: 0318 200208 | i...@tuxis.nl 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] how many nodes can there be in a proxmox cluster ?

2018-03-20 Thread Alexandre DERUMIER
I'm running 20-node clusters here, with unicast. (I need to increase the corosync 
timeout a little with more than 12 nodes.) 

I think the hard limit is around 100 nodes with the latest corosync, but the recommendation 
is around 32. With more than that, I think the corosync timeout needs to be 
increased a lot and you'll get latency. 



De: "Ronny Aasen"  
À: "proxmoxve"  
Envoyé: Mardi 20 Mars 2018 21:30:13 
Objet: [PVE-User] how many nodes can there be in a proxmox cluster ? 

once around proxmox 2 i read something about a hard limit about 32 
nodes, due to corosync limits. but that there should probably not be 
that many. 

i have not been able to find any more recent information. 

is there a hard or soft limit to how large a proxmox cluster you can 
have ? what is the largest installations people are running. and does 
this affect stabillity of the cluster? 


kind regards 

Ronny Aasen 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] pve-csync version of pve-zsync?

2018-03-13 Thread Alexandre DERUMIER
Hi,

I have plans to implement storage replication for rbd in proxmox,
like for zfs export|import.  (with rbd export-diff |rbd import-diff )

I'll try to work on it next month.

I'm not sure that a plugin infrastructure is currently in place in the code,
or that it's able to manage storages with different names.

Can't tell if it'll be hard to implement, but the workflow is almost the same.

I'll also try to look at rbd mirror, but it only works with librbd in qemu, 
not with krbd,
so it can't be implemented for containers.
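
The underlying incremental flow, as a minimal sketch over ssh (pool, image, snapshot and host names are placeholders; it assumes an initial full copy was already made with rbd export | rbd import and that the sync1 snapshot exists on both sides):

rbd snap create rbd/vm-100-disk-1@sync2
rbd export-diff --from-snap sync1 rbd/vm-100-disk-1@sync2 - | ssh backup-host "rbd import-diff - rbd/vm-100-disk-1"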


- Mail original -
De: "Mark Adams" 
À: "proxmoxve" 
Envoyé: Mardi 13 Mars 2018 18:52:21
Objet: Re: [PVE-User] pve-csync version of pve-zsync?

Hi Alwin, 

I might have to take another look at it, but have you actually done this 
with 2 proxmox clusters? I can't remember the exact part I got stuck on as 
it was quite a while ago, but it wasn't as straight forward as you suggest. 
I think you couldn't use the same cluster name, which in turn created 
issues trying to use the "remote" (backup/dr/whatever you wanna call it) 
cluster with proxmox because it needed to be called ceph. 

The docs I was referring to were the ceph ones yes. Some of the options 
listed in that doc do not work in the current proxmox version (I think the 
doc hasn't been updated for newer versions...) 

Regards, 
Mark 

On 13 March 2018 at 17:19, Alwin Antreich  wrote: 

> On Mon, Mar 12, 2018 at 04:51:32PM +, Mark Adams wrote: 
> > Hi Alwin, 
> > 
> > The last I looked at it, rbd mirror only worked if you had different 
> > cluster names. Tried to get it working with proxmox but to no avail, 
> > without really messing with how proxmox uses ceph I'm not sure it's 
> > feasible, as proxmox assumes the default cluster name for everything... 
> That isn't mentioned anywhere in the ceph docs, they use for ease of 
> explaining two different cluster names. 
> 
> If you have a config file named after the cluster, then you can specifiy 
> it on the command line. 
> http://docs.ceph.com/docs/master/rados/configuration/ 
> ceph-conf/#running-multiple-clusters 
> 
> > 
> > Also the documentation was a bit poor for it IMO. 
> Which documentation do you mean? 
> ? -> http://docs.ceph.com/docs/master/rbd/rbd-mirroring/ 
> 
> > 
> > Would also be nice to choose specifically which VM's you want to be 
> > mirroring, rather than the whole cluster. 
> It is done either per pool or image separately. See the link above. 
> 
> > 
> > I've manually done rbd export-diff and rbd import-diff between 2 separate 
> > proxmox clusters over ssh, and it seems to work really well... It would 
> > just be nice to have a tool like pve-zsync so I don't have to write some 
> > script myself. Seems to me like something that would be desirable as part 
> > of proxmox as well? 
> That would basically implement the ceph rbd mirror feature. 
> 
> > 
> > Cheers, 
> > Mark 
> > 
> > On 12 March 2018 at 16:37, Alwin Antreich  
> wrote: 
> > 
> > > Hi Mark, 
> > > 
> > > On Mon, Mar 12, 2018 at 03:49:42PM +, Mark Adams wrote: 
> > > > Hi All, 
> > > > 
> > > > Has anyone looked at or thought of making a version of pve-zsync for 
> > > ceph? 
> > > > 
> > > > This would be great for DR scenarios... 
> > > > 
> > > > How easy do you think this would be to do? I imagine it wouId it be 
> quite 
> > > > similar to pve-zsync, but using rbd export-diff and rbd import-diff 
> > > instead 
> > > > of zfs send and zfs receive? so could the existing script be 
> relatively 
> > > > easily modified? (I know nothing about perl) 
> > > > 
> > > > Cheers, 
> > > > Mark 
> > > > ___ 
> > > > pve-user mailing list 
> > > > pve-user@pve.proxmox.com 
> > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
> > > Isn't ceph mirror already what you want? It can mirror a image or a 
> > > whole pool. It keeps track of changes and serves remote image deletes 
> > > (adjustable delay). 
> > > 
> 
> ___ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
> 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] 4.15 based test kernel for PVE 5.x available

2018-03-13 Thread Alexandre DERUMIER
>>yes, it has KPTI for v3/Meltdown, full RETPOLINE for v2, and masking of 
>>pointers passed from user space via array_index_mask_nospec for v1. 

>>it does not include the originally embargoed IBRS/IBPB patch set used by 
>>RH/Suse/Canonical in the first waves of mitigation. some parts of that 
>>might still get included if/when they get applied upstream. passing 
>>SPEC_CTRL/IBRS/IBPB through to VM guests should work as before (if 
>>supported by the CPU/µcode). 


Great! Congrats to the whole Proxmox team!


- Mail original -
De: "Fabian Grünbichler" 
À: "proxmoxve" 
Envoyé: Lundi 12 Mars 2018 20:08:57
Objet: Re: [PVE-User] 4.15 based test kernel for PVE 5.x available

On Mon, Mar 12, 2018 at 07:43:09PM +0100, Alexandre DERUMIER wrote: 
> Hi, 
> 
> Is retpoline support enabled like ubuntu build ? (builded with recent gcc ?) 

yes, it has KPTI for v3/Meltdown, full RETPOLINE for v2, and masking of 
pointers passed from user space via array_index_mask_nospec for v1. 

it does not include the originally embargoed IBRS/IBPB patch set used by 
RH/Suse/Canonical in the first waves of mitigation. some parts of that 
might still get included if/when they get applied upstream. passing 
SPEC_CTRL/IBRS/IBPB through to VM guests should work as before (if 
supported by the CPU/µcode). 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] 4.15 based test kernel for PVE 5.x available

2018-03-12 Thread Alexandre DERUMIER
Hi,

Is retpoline support enabled like in the Ubuntu build? (Built with a recent gcc?)


- Mail original -
De: "Fabian Grünbichler" 
À: "proxmoxve" 
Envoyé: Lundi 12 Mars 2018 14:14:29
Objet: [PVE-User] 4.15 based test kernel for PVE 5.x available

a pve-kernel-4.15 meta package depending on a preview build based on 
Ubuntu Bionic's 4.15 kernel is available on pvetest. it is provided as 
opt-in package in order to catch potential regressions and hardware 
incompatibilities early on, and allow testing on a wide range of systems 
before the default kernel series gets switched over to 4.15 and support 
for our 4.13 based kernel is phased out at some point in the future. 

in order to try it out on your test systems, configure the pvetest 
repository and run 

apt update 
apt install pve-kernel-4.15 

the pve-kernel-4.15 meta package will keep the preview kernel updated, 
just like the pve-kernel-4.13 meta package (recently pulled out of the 
proxmox-ve package) does for the stable 4.13 based kernel. 

also on pvetest, you will find a pve-headers-4.15 header meta package in 
case you need them for third-party module building, a linux-tools-4.15 
package with compatible perf, as well as an updated pve-firmware package 
with the latest blobs. 

notable changes besides the major kernel update include the removal of 
out-of-tree Intel NIC modules, which are not (yet) compatible with 4.15 
kernels. this removal be re-evaluated at some later point before the 
final switch to 4.15 as new stable kernel in PVE 5. 

there are no plans to support both 4.13 and 4.15 in the long term - once 
the testing phase of 4.15 is over 4.15 will become the new default 
kernel and 4.13 will receive no updates anymore. 

happy testing and looking forward to feedback! 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Network Interface Card Passthroug

2018-03-07 Thread Alexandre DERUMIER
Please read the doc here :

https://pve.proxmox.com/wiki/Pci_passthrough
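
That "Cannot open iommu_group" error usually means IOMMU isn't actually active for the device. A quick way to check on the host (generic commands, given as a hint rather than taken from the wiki page above):

dmesg | grep -e DMAR -e IOMMU
find /sys/kernel/iommu_groups/ -type l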

- Mail original -
De: "Gilberto Nunes" 
À: "proxmoxve" 
Envoyé: Vendredi 2 Mars 2018 18:06:37
Objet: [PVE-User] Network Interface Card Passthroug

pve02:~# cat /proc/cmdline 
BOOT_IMAGE=/boot/vmlinuz-4.13.13-6-pve root=/dev/mapper/pve-root ro quiet 
intel_iommu=on,iommu=pt,igfx_off,pass-through pci-stub.ids=dd01:0003 
pve02:~# cat /etc/pve/qemu-server/100.conf 
boot: dcn 
bootdisk: scsi0 
cores: 1 
machine: q35 
hostpci0: 00:04.0 
ide2: local:iso/mini-bionic-net-install.iso,media=cdrom 
kvm: 0 
memory: 512 
name: VMTESTE 
net0: virtio=4E:DA:E3:90:AA:AC,bridge=vmbr0 
numa: 0 
ostype: l26 
scsi0: local-lvm:vm-100-disk-1,size=8G 
scsihw: virtio-scsi-pci 
smbios1: uuid=67dc9730-f0b5-4bc1-a99f-52253ef18903 
sockets: 1 
pve02:~# qm start 100 
Cannot open iommu_group: No such file or directory 

Where am I wrong??? 

--- 
Gilberto Nunes Ferreira 

(47) 3025-5907 
(47) 99676-7530 - Whatsapp / Telegram 

Skype: gilberto.nunes36 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] CEPH Luminous source packages

2018-02-15 Thread Alexandre DERUMIER
git clone git://git.proxmox.com/git/ceph.git


- Mail original -
De: "Mike O'Connor" 
À: "proxmoxve" , "aderumier" 
Envoyé: Jeudi 15 Février 2018 21:46:59
Objet: Re: [PVE-User] CEPH Luminous source packages

On 13/2/18 10:58 pm, Alexandre DERUMIER wrote: 
> https://git.proxmox.com/?p=ceph.git;a=summary 
> 
Any idea how I clone the packages from that git system ? 

Mike 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] CEPH Luminous source packages

2018-02-13 Thread Alexandre DERUMIER
https://git.proxmox.com/?p=ceph.git;a=summary


- Mail original -
De: "Mike O'Connor" 
À: "proxmoxve" 
Envoyé: Mardi 13 Février 2018 01:56:47
Objet: [PVE-User] CEPH Luminous source packages

Hi All 

Where can I find the source packages that the Proxmox Ceph Luminous was 
built from ? 


Mike 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Loosing network connectivity in a VM

2018-01-03 Thread Alexandre DERUMIER
>>Do you have a link about these virtio issues in 4.10 ? 

It was about a memory leak in vhost-net: 
https://lkml.org/lkml/2017/12/1/446

(But maybe it doesn't impact 4.10; it was in 4.12 and early 4.13.)



- Mail original -
De: "Mark Schouten" 
À: "proxmoxve" 
Envoyé: Mardi 2 Janvier 2018 19:28:23
Objet: Re: [PVE-User] Loosing network connectivity in a VM

I just had it again. Toggling the 'disconnect' checkbox in Proxmox 'fixes' the 
issue. 


I do not use bonding. It is as simple as it gets. A bridge on a 10Gbit 
interface. 


Do you have a link about these virtio issues in 4.10 ? 



Met vriendelijke groeten, 

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/ 
Mark Schouten | Tuxis Internet Engineering 
KvK: 61527076 | http://www.tuxis.nl/ 
T: 0318 200208 | i...@tuxis.nl 



Van: Alexandre DERUMIER  
Aan: proxmoxve  
Verzonden: 2-1-2018 17:49 
Onderwerp: Re: [PVE-User] Loosing network connectivity in a VM 

Try to upgrade to kernel 4.13, they are known virtio bug in 4.10. 
(no sure it's related, but it could help) 

do you use bonding on your host ? if yes, which mode ? 

- Mail original - 
De: "Mark Schouten"  
À: "proxmoxve"  
Envoyé: Mardi 2 Janvier 2018 13:41:43 
Objet: [PVE-User] Loosing network connectivity in a VM 

Hi, 

I have a VM running Debian Stretch, with three interfaces. Two VirtIO 
interfaces and one E1000. 

In the last few days one of the Virtio interfaces stopped responding. The 
other interfaces are working flawlessly. 

The Virtio-interface in question is the busiest interface, but at the moment 
the the interface stops responding, it's not busy at all. 

tcpdump shows me ARP-traffic going out, but nothing coming back. 

I experienced this with other VM's on this (physical) machine as, making me 
believe it is a new bug in KVM/Virtio. 

The config of the VM is quite default, as is the config for other VM's on which 
I experienced this same issue. 

Has anybody seen this behaviour before, does anyone have an idea what to do 
about it? 

root@host02:~# pveversion 
pve-manager/5.0-30/5ab26bc (running kernel: 4.10.17-2-pve) 

# info version 
2.9.0pve-qemu-kvm_2.9.0-4 

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/ 
Mark Schouten | Tuxis Internet Engineering 
KvK: 61527076 | http://www.tuxis.nl/ 
T: 0318 200208 | i...@tuxis.nl 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Loosing network connectivity in a VM

2018-01-02 Thread Alexandre DERUMIER
Try to upgrade to kernel 4.13; there are known virtio bugs in 4.10.
(Not sure it's related, but it could help.)

do you use bonding on your host ? if yes, which mode ?

- Mail original -
De: "Mark Schouten" 
À: "proxmoxve" 
Envoyé: Mardi 2 Janvier 2018 13:41:43
Objet: [PVE-User] Loosing network connectivity in a VM

Hi, 

I have a VM running Debian Stretch, with three interfaces. Two VirtIO 
interfaces and one E1000. 

In the last few days one of the Virtio interfaces stopped responding. The 
other interfaces are working flawlessly. 

The Virtio-interface in question is the busiest interface, but at the moment 
the the interface stops responding, it's not busy at all. 

tcpdump shows me ARP-traffic going out, but nothing coming back. 

I experienced this with other VM's on this (physical) machine as, making me 
believe it is a new bug in KVM/Virtio. 

The config of the VM is quite default, as is the config for other VM's on which 
I experienced this same issue. 

Has anybody seen this behaviour before, does anyone have an idea what to do 
about it? 

root@host02:~# pveversion 
pve-manager/5.0-30/5ab26bc (running kernel: 4.10.17-2-pve) 

# info version 
2.9.0pve-qemu-kvm_2.9.0-4 

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/ 
Mark Schouten | Tuxis Internet Engineering 
KvK: 61527076 | http://www.tuxis.nl/ 
T: 0318 200208 | i...@tuxis.nl 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] PVE 5.1 Clone problem

2017-11-14 Thread Alexandre DERUMIER
yes, this is normal.

The mirror block job is cancelled at the end, to prevent the source VM from switching to 
the new disk.


- Mail original -
De: "Fabrizio Cuseo" 
À: "proxmoxve" 
Envoyé: Mardi 31 Octobre 2017 18:14:28
Objet: [PVE-User] PVE 5.1 Clone problem

Hello. 
I have just installed a test cluster with PVE 5.1 and Ceph (bluestore). 

3 nodes, 4 HD per node, 3 OSD, single gigabit ethernet, no ceph dedicated 
network (is only a test cluster). 

Cloning a VM (both Powered ON and OFF), returns "trying to acquire 
lock...drive-scsi0: Cancelling block job", but the clone seems ok. 

See the screenshot: 
http://cloud.fabry.it/index.php/s/9vrEvMQE5zToCrE 

Regards, Fabrizio Cuseo 


-- 
--- 
Fabrizio Cuseo - mailto:f.cu...@panservice.it 
Direzione Generale - Panservice InterNetWorking 
Servizi Professionali per Internet ed il Networking 
Panservice e' associata AIIP - RIPE Local Registry 
Phone: +39 0773 410020 - Fax: +39 0773 470219 
http://www.panservice.it mailto:i...@panservice.it 
Numero verde nazionale: 800 901492 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] OVA, OVF support

2017-11-08 Thread Alexandre DERUMIER
already available in proxmox 5.1 :)

# qm importovf <vmid> <ovf-manifest> <storage>
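
For example (the VM ID, path and storage name are placeholders):

qm importovf 200 /mnt/backup/exported-vm.ovf local-lvm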

- Mail original -
De: "Francois Deslauriers" 
À: "proxmoxve" 
Envoyé: Mercredi 8 Novembre 2017 23:20:49
Objet: [PVE-User] OVA, OVF support

Is there any plan to support OVA/OVF import into Proxmox? 
And if so, when could this be expected? 



It would greatly help in promoting the platform 
and ease the process of migration. 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox 4.4, Ceph Hammer and Ceph Luminous

2017-10-20 Thread Alexandre DERUMIER
>>Has anybody installed a Luminous client on Proxmox 4.4 ?

yes, it's working.

But I have only tested the Luminous client against Jewel and Luminous clusters.

I don't know if it's still compatible with Hammer.

- Mail original -
De: "Mark Schouten" 
À: "proxmoxve" 
Envoyé: Mercredi 18 Octobre 2017 10:47:09
Objet: [PVE-User] Proxmox 4.4, Ceph Hammer and Ceph Luminous

Hi, 


I have a Proxmox 4.4 cluster, which is connected to two Ceph clusters. One of 
them is running Hammer, the other is running Luminous (since this morning). 


All running VM's seem to be fine and working, but the Proxmox webinterface 
froze. It looks like the default Rados-client in Debian ( 
root@proxmox2-1:~# rados -v 
ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3) 
) 
is incompatible with Ceph Luminous. 



2017-10-18 10:43:54.285544 7fa394755700 -1 failed to decode message of type 59 
v1: buffer::malformed_input: __PRETTY_FUNCTION__ unknown encoding version > 9 


That is a bit surprising, imho. But OK, there are a few versions in between. 


The question is, can I install a Ceph Luminous client on the Proxmox machines 
(add Ceph Luminous to the sources.list, dist-upgrade) and be happy? Or am I 
going to have trouble connecting to my older Hammer cluster using a Luminous 
client? 


For now, I disabled the whole storage-entry in storage.cfg for the Luminous 
cluster to prevent Proxmox from hanging on /usr/bin/rados. Has anybody 
installed a Luminous client on Proxmox 4.4 ? 

Met vriendelijke groeten, 

-- 
Kerio Operator in de Cloud? https://www.kerioindecloud.nl/ 
Mark Schouten | Tuxis Internet Engineering 
KvK: 61527076 | http://www.tuxis.nl/ 
T: 0318 200208 | i...@tuxis.nl 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Request: Update keepalived version in proxmox 4 (jessie)

2017-10-01 Thread Alexandre DERUMIER
Maybe ask the Debian devs to add it to the jessie-backports repo?


Or have you tried installing the deb package from stretch on jessie?


- Mail original -
De: "Lindsay Mathieson" 
À: "proxmoxve" 
Envoyé: Dimanche 1 Octobre 2017 06:59:21
Objet: [PVE-User] Request: Update keepalived version in proxmox 4 (jessie)

Is it possible to update the keepalived version to the latest in Proxmox 4? 


The current version is 1.2.13, which has a bug where keepalived grabs 
master on startup, even if the state on all nodes is set to BACKUP and 
the priorities are the same. 


1.3.x resolves the issue. I can build it from src on proxmox 4.x, but 
packages are better :) 


Thanks 

-- 
Lindsay Mathieson 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] OpenBSD problems (freezing console, lots of time drift) on Proxmox 5.0

2017-09-22 Thread Alexandre DERUMIER
> interesting to note: after deactivating the system console on the 
> VGA/NoVNC and switching to a serial console the above phenomens are 
> fixed and the OpenBSD guest is running smoothly like with Proxmox VE 
> 4.4.

Could it be related to the change of the default display adapter from cirrus to vga?


- Mail original -
De: "Wolfgang Link" 
À: "proxmoxve" , "Alex Bihlmaier" 

Envoyé: Vendredi 22 Septembre 2017 14:11:35
Objet: Re: [PVE-User] OpenBSD problems (freezing console, lots of time drift) 
on Proxmox 5.0

It looks like a kernel problem. 
Not sure what exactly the problem is, but every kernel (mainline and Ubuntu) 
newer than 4.10 will trigger this behavior on some HW. 
Debugging takes some time. 
I will report the result in this forum thread. 

https://forum.proxmox.com/threads/openbsd-6-1-guest-h%C3%A4ngt.36903/#post-181872
 

> Alex Bihlmaier  hat am 22. September 2017 um 13:29 
> geschrieben: 
> 
> 
> Hi - after upgrading several Proxmox setups from 
> version 4.4 to 5.0 (currently 5.0-31) i have problems 
> with VMs running OpenBSD (version 6.1). 
> 
> phenomens i encounter are: 
> * reboot / shutdown on the console is not working properly. 
> it results in a console freeze and 100% load on a single cpu core 
> * time sleep 1 should result in a total runtime of 1 second. 
> Actually on one host it gives me this: 
> "sleep 1 0.00s user 0.00s system 0% cpu 17.486 total" 
> * ntpd is unable to keep up with a local time drift (only on one 
> Intel i7 Host, another Xeon Proxmox host does not suffer from this 
> issue) and the guest clock is drifting more and more away from the 
> real clock 
> 
> interesting to note: after deactivating the system console on the 
> VGA/NoVNC and switching to a serial console the above phenomens are 
> fixed and the OpenBSD guest is running smoothly like with Proxmox VE 
> 4.4. 
> 
> switching from VGA system console to serial system console: 
> 
> /etc/boot.conf 
> set tty com0 
> 
> 
> Anyone with similar issue? Maybe time for a bug submission to the 
> developers. 
> 
> 
> cheers 
> Alex 
> ___ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox 5 (LACP/Bridge/VLAN120)

2017-08-22 Thread Alexandre DERUMIER
Just a note, 

if you create a bond0.120 and later also create a VM on this VLAN, Proxmox will
enslave bond0.120 into a bridge, and you can't have an IP on an interface while
it is enslaved in a bridge (so you'll lose the connection).

Another way to prevent this is to create a bridge vmbr0v120 (like Proxmox does),
enslave bond0.120 in this bridge, and set up the IP address on the vmbr0v120
bridge.


(Personally, I'm using VLAN aware bridges; it's working fine.)
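
A minimal /etc/network/interfaces sketch of that vmbr0v120 variant, assuming 
bond0 is defined as elsewhere in this thread and using the 10.100.100.8/24 
address requested below: 

auto vmbr0v120 
iface vmbr0v120 inet static 
address 10.100.100.8 
netmask 255.255.255.0 
bridge_ports bond0.120 
bridge_stp off 
bridge_fd 0 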


- Mail original -
De: "Uwe Sauter" 
À: "proxmoxve" 
Envoyé: Mercredi 23 Août 2017 07:30:25
Objet: Re: [PVE-User] Proxmox 5 (LACP/Bridge/VLAN120)

Sorry, I mixed up the IP addresses. Set the MTU according to what your switch supports 
(in this case jumbo frames with 9000 bytes). 

/etc/network/interfaces 
- 
# management interface 
auto eth0 
iface eth0 inet static 
address 192.168.0.2 
netmask 255.255.255.0 
gateway 192.168.0.254 

# 1st interface in bond 
auto eth1 
iface eth1 inet manual 
mtu 9000 

# 2nd interface in bond 
auto eth2 
iface eth2 inet manual 
mtu 9000 

# bond 
auto bond0 
iface bond0 inet manual 
slaves eth1 eth2 
bond-mode 802.3ad 
bond-miimon 100 
mtu 9000 

# interface for vlan 120 on bond with IP 
auto bond0.120 
iface bond0.120 inet static 
address 10.100.100.8 
netmask 255.255.255.0 
mtu 9000 

# interface for vlan 130 on bond without IP (just for VMs) 
auto bond0.130 
iface bond0.130 inet manual 

- 


Am 23.08.2017 um 07:27 schrieb Uwe Sauter: 
> An example out of my head: 
> 
> 
> /etc/network/interfaces 
> - 
> # management interface 
> auto eth0 
> iface eth0 inet static 
> address 10.100.100.8 
> netmask 255.255.255.0 
> gateway 10.100.100.254 
> 
> # 1st interface in bond 
> auto eth1 
> iface eth1 inet manual 
> mtu 9000 
> 
> # 2nd interface in bond 
> auto eth2 
> iface eth2 inet manual 
> mtu 9000 
> 
> # bond 
> auto bond0 
> iface bond0 inet manual 
> slaves eth1 eth2 
> bond-mode 802.3ad 
> bond-miimon 100 
> mtu 9000 
> 
> # interface for vlan 120 on bond with IP 
> auto bond0.120 
> iface bond0.120 inet static 
> address 192.168.123.2 
> netmask 255.255.255.0 
> mtu 9000 
> 
> # interface for vlan 130 on bond without IP (just for VMs) 
> auto bond0.130 
> iface bond0.130 inet manual 
> 
> - 
> 
> 
> Am 23.08.2017 um 06:44 schrieb Alexandre DERUMIER: 
>> Hi, 
>> 
>> you can create a bond0.120 interface and setup ip on it. 
>> 
>> (and keep bond0 in bridge). 
>> 
>> 
>> another way, enable "vlan aware" option in bridge. (see gui), 
>> 
>> then you can create a 
>> 
>> vmbr0.120, and setup ip on it. 
>> 
>> 
>> - Mail original - 
>> De: "Devin Acosta"  
>> À: "proxmoxve"  
>> Envoyé: Mercredi 23 Août 2017 03:40:07 
>> Objet: [PVE-User] Proxmox 5 (LACP/Bridge/VLAN120) 
>> 
>> I need to configure Proxmox 5.x with the following. 
>> 
>> Bond of 2 interfaces using LACP 
>> Bridge off Bond0, with IP on VLAN 120 (10.100.100.8/24) 
>> The network switch doesn’t have native VLAN, so the IP has to be on the 
>> TAGGED VLAN 120. 
>> 
>> Please help. ;) 
>> 
>> Devin 
>> ___ 
>> pve-user mailing list 
>> pve-user@pve.proxmox.com 
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
>> 
>> ___ 
>> pve-user mailing list 
>> pve-user@pve.proxmox.com 
>> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
>> 
> 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox 5 (LACP/Bridge/VLAN120)

2017-08-22 Thread Alexandre DERUMIER
Hi,

you can create a bond0.120 interface and set up the IP on it
(and keep bond0 in the bridge).


Another way: enable the "vlan aware" option on the bridge (see GUI);
then you can create a vmbr0.120 and set up the IP on it.
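
A minimal /etc/network/interfaces sketch of the VLAN aware variant (the 
10.100.100.8/24 address comes from the request below; adjust bond0 and the 
VLAN id to your setup): 

auto vmbr0 
iface vmbr0 inet manual 
bridge_ports bond0 
bridge_stp off 
bridge_fd 0 
bridge_vlan_aware yes 

auto vmbr0.120 
iface vmbr0.120 inet static 
address 10.100.100.8 
netmask 255.255.255.0 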


- Mail original -
De: "Devin Acosta" 
À: "proxmoxve" 
Envoyé: Mercredi 23 Août 2017 03:40:07
Objet: [PVE-User] Proxmox 5 (LACP/Bridge/VLAN120)

I need to configure Proxmox 5.x with the following. 

Bond of 2 interfaces using LACP 
Bridge off Bond0, with IP on VLAN 120 (10.100.100.8/24) 
The network switch doesn’t have native VLAN, so the IP has to be on the 
TAGGED VLAN 120. 

Please help. ;) 

Devin 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Random kernel panics of my KVM VMs

2017-08-15 Thread Alexandre DERUMIER
can you post your vmid.conf ?

- Mail original -
De: "Bill Arlofski" 
À: "proxmoxve" 
Envoyé: Mardi 15 Août 2017 07:02:12
Objet: [PVE-User] Random kernel panics of my KVM VMs

Hello everyone. 

I am not sure this is the right place to ask, but I am also not sure where to 
start, so this list seemed like a good place. I am happy for any direction as 
to the best place to turn to for a solution. :) 

For quite some time now I have been having random kernel panics on random VMs. 

I have a two-node cluster, currently running a pretty current PVE version: 

PVE Manager Version pve-manager/5.0-23/af4267bf 

Now, these kernel panics have continued through several VM kernel upgrades, 
and even continue after the 4.x to 5.x Proxmox upgrade several weeks ago. In 
addition, I have moved VMs from one Proxmox node to the other to no avail, 
ruling out hardware on one node or the other. 

Also, it does not matter if the VMs have their (QCOW2) disks on the Proxmox 
node's local hardware RAID storage or the Synology NFS-connected storage 

I am trying to verify this by moving a few VMs that seem to panic more often 
than others back to some local hardware RAID storage on one node as I write 
this email... 

Typically the kernel panics occur during the nightly backups of the VMs, but I 
cannot say that this is always when they occur. I _can_ say that the kernel 
panic always reports the sym53c8xx_2 module as the culprit though... 

I have set up remote kernel logging on one VM and here is the kernel panic 
reported: 

8< 
[138539.201838] Kernel panic - not syncing: assertion "i && 
sym_get_cam_status(cp->cmd) == DID_SOFT_ERROR" failed: file 
"drivers/scsi/sym53c8xx_2/sym_hipd.c", line 3399 
[138539.201838] 
[138539.201838] CPU: 2 PID: 0 Comm: swapper/2 Not tainted 4.9.34-gentoo #5 
[138539.201838] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
rel-1.10.2-0-g5f4c7b1-prebuilt.qemu-project.org 04/01/2014 
[138539.201838] 88023fd03d90 813a2408 8800bb842700 
81c51450 
[138539.201838] 88023fd03e10 8111ff3f 88020020 
88023fd03e20 
[138539.201838] 88023fd03db8 813c70f3 81c517b0 
81c51400 
[138539.201838] Call Trace: 
[138539.201838]  [138539.201838] [] dump_stack+0x4d/0x65 
[138539.201838] [] panic+0xca/0x203 
[138539.201838] [] ? swiotlb_unmap_sg_attrs+0x43/0x60 
[138539.201838] [] sym_interrupt+0x1bff/0x1dd0 
[138539.201838] [] ? e1000_clean+0x358/0x880 
[138539.201838] [] sym53c8xx_intr+0x37/0x80 
[138539.201838] [] __handle_irq_event_percpu+0x38/0x1a0 
[138539.201838] [] handle_irq_event_percpu+0x1e/0x50 
[138539.201838] [] handle_irq_event+0x27/0x50 
[138539.201838] [] handle_fasteoi_irq+0x89/0x160 
[138539.201838] [] handle_irq+0x6e/0x120 
[138539.201838] [] ? atomic_notifier_call_chain+0x15/0x20 
[138539.201838] [] do_IRQ+0x46/0xd0 
[138539.201838] [] common_interrupt+0x7f/0x7f 
[138539.201838]  [138539.201838] [] ? 
default_idle+0x1b/0xd0 
[138539.201838] [] arch_cpu_idle+0xa/0x10 
[138539.201838] [] default_idle_call+0x1e/0x30 
[138539.201838] [] cpu_startup_entry+0xd5/0x1c0 
[138539.201838] [] start_secondary+0xe8/0xf0 
[138539.201838] Shutting down cpus with NMI 
[138539.201838] Kernel Offset: disabled 
[138539.201838] ---[ end Kernel panic - not syncing: assertion "i && 
sym_get_cam_status(cp->cmd) == DID_SOFT_ERROR" failed: file 
"drivers/scsi/sym53c8xx_2/sym_hipd.c", line 3399 
8< 

The dmesg output on the Proxmox nodes' does not show any issues during the 
times of these VM kernel panics. 

I appreciate any comments, questions, or some direction on this. 

Thank you, 

Bill 


-- 
Bill Arlofski 
Reverse Polarity, LLC 
http://www.revpol.com/blogs/waa 
--- 
He picks up scraps of information 
He's adept at adaptation 

--[ Not responsible for anything below this line ]-- 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] USB Devices hotplug

2017-08-11 Thread Alexandre DERUMIER
You need to define the USB port before starting the VM (with the physical port
number or the USB device id).


After that, you can hotplug/unplug the USB device.
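
A quick sketch with the VM id and device id that appear further down in this 
thread (the bus-port form is just an example port): 

qm set 107 -usb0 host=152d:2329   # by vendor:product id 
qm set 107 -usb0 host=1-1.2       # or by physical bus-port number 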

- Mail original -
De: "Gilberto Nunes" 
À: "proxmoxve" 
Envoyé: Vendredi 11 Août 2017 15:31:19
Objet: Re: [PVE-User] USB Devices hotplug

Ok! 
I performed a fresh installation and just realized that with VirtIO SCSI 
Single, the qm set command doesn't work. 
I changed to VirtIO SCSI and qm set works properly. 
But USB device passthrough doesn't work unless I turn off the VM. Isn't it 
supposed to work online, i.e. hotplug? 

Thanks anyway 

2017-08-11 9:23 GMT-03:00 Gilberto Nunes : 

> 
> 
> 
> And, when I try it with Linux machine, the VM just freeze! 
> What a mess! 
> 
> 2017-08-11 9:09 GMT-03:00 Gilberto Nunes : 
> 
>> No! Nothing work here. 
>> I try different USB port with different devices. 
>> I try with and without hotplug, with and without SPICE. 
>> I try with Linux, with Windows 7 Ultimate and with Windows 2012. 
>> Nothing work. 
>> What now?? Any suggestions??? 
>> 
>> 
>> 
>> 2017-08-11 8:47 GMT-03:00 Gilberto Nunes : 
>> 
>>> I just update the server, but doesn't work... 
>>> I will try with and without hotplug, and with and without SPICE. 
>>> We'll see. 
>>> 
>>> 
>>> 
>>> 
>>> 2017-08-11 8:33 GMT-03:00 Gilberto Nunes : 
>>> 
 I will try it... 
 Just now I update the server... Perhaps this can solve the problem... 
 Wait a moment please. 
 
 
 
 
 
 2017-08-11 8:28 GMT-03:00 Alwin Antreich : 
 
> Hallo Gilberto, 
> 
> > Gilberto Nunes  hat am 11. August 2017 
> um 13:21 geschrieben: 
> > 
> > 
> > I just turn on the server.. 
> > No VM running. I start a Windows 7 VM. 
> > Put the pendriver in physical server. 
> > Try to add usb device to Win 7 VM and still see pendig device 
> > I try with qm set command as well, and same result 
> > Come'on This feature already work in PVE 4.x. Why this recede??? 
> 
> Did it work with a different device? Does it work without hotplug? 
> 
> > 
> > 
> > 
> > 
> > 2017-08-11 5:21 GMT-03:00 Alwin Antreich : 
> > 
> > > Hi Gilberto, 
> > > 
> > > > Gilberto Nunes  hat am 10. August 
> 2017 um 
> > > 18:02 geschrieben: 
> > > > 
> > > > 
> > > > Hi friends 
> > > > 
> > > > I am using PVE 5 here, and when I try to add a external USB 
> Device into 
> > > > Windows 2012 VM, I see it in red, which means in append mode. 
> > > > I just have to shutdown the VM and start it again in order to 
> see the 
> > > > device in Windows disk manager 
> > > > I already enable hotplug into vm config, like this: 
> > > > 
> > > > boot: cdn 
> > > > bootdisk: scsi0 
> > > > cores: 2 
> > > > cpu: host 
> > > > hotplug: 1 
> > > > ide2: local:iso/virtio-win-0.1.140.iso,media=cdrom,size=166688K 
> > > > memory: 4096 
> > > > name: Win2012-a1 
> > > > net0: e1000=22:65:CE:9B:B7:0F,bridge=vmbr0 
> > > > numa: 1 
> > > > ostype: win8 
> > > > scsi0: 
> > > > ZFS-LOCAL:vm-107-disk-1,cache=writeback,discard=on,iothread= 
> 1,size=40G 
> > > > scsihw: virtio-scsi-single 
> > > > smbios1: uuid=4c0295ab-725c-4f17-837a-0dfefb95ed99 
> > > > sockets: 2 
> > > > 
> > > > [PENDING] 
> > > > usb0: host=152d:2329 
> > > 
> > > Check if the usb device is already in use. Does hotplug for a 
> different 
> > > usb device work on that VM? 
> > > 
> > > > 
> > > > What I do wrong?? 
> > > > 
> > > > Thanks a lot 
> > > > ___ 
> > > > pve-user mailing list 
> > > > pve-user@pve.proxmox.com 
> > > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
> > > 
> > > -- 
> > > Cheers, 
> > > Alwin 
> > > 
> > > ___ 
> > > pve-user mailing list 
> > > pve-user@pve.proxmox.com 
> > > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
> > > 
> > ___ 
> > pve-user mailing list 
> > pve-user@pve.proxmox.com 
> > https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
> -- 
> Cheers, 
> Alwin 
> 
> ___ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
> 
 
 
>>> 
>> 
> 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Cluster won't reform so I can't restart VMs

2017-08-11 Thread Alexandre DERUMIER
Seems to be a multicast problem.

Does it work with omping?
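
A quick way to check, running the same command on all nodes at the same time 
(node names taken from the cluster below): 

omping -c 10000 -i 0.001 -F -q ar1406 ar1407 ar1600 ar1601   # short burst test 
omping -c 600 -i 1 -q ar1406 ar1407 ar1600 ar1601            # ~10 minute run, catches IGMP snooping timeouts 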


- Mail original -
De: "Chris Tomkins" 
À: "proxmoxve" 
Envoyé: Vendredi 11 Août 2017 12:02:00
Objet: [PVE-User] Cluster won't reform so I can't restart VMs

Hi Proxmox users, 

I have a 4 node cluster. It has been in production for a few months with 
few/no issues. 

This morning one of my admins reported that each node appeared isolated 
("Total votes: 1"). All VMs were up and unaffected. Unfortunately I 
made the mistake of stopping VMs on 3 of the nodes to apply updates and 
reboot as I assumed this would clear the issue. Now the scenario remains 
the same but the VMs are down on 3 of the nodes and it won't allow me to 
start them as I have no quorum. 

No config changes were made and this cluster was fine and had quorum last 
time I looked (last week). 

I don't want to take the wrong action and make this worse - any advice 
would be greatly appreciated! 

hypervisors ar1406/ar1600/ar1601 are up to date and have been rebooted this 
morning. ar1407 has not been rebooted or updated (yet) as the VMs on it are 
critical. 

Thanks, 

Chris 

[LIVE]root@ar1406:~# for i in ar1406 ar1407 ar1600 ar1601; do ssh $i 'cat 
/etc/pve/.members'; done 
{ 
"nodename": "ar1406", 
"version": 3, 
"cluster": { "name": "netteamcluster", "version": 4, "nodes": 4, "quorate": 
0 }, 
"nodelist": { 
"ar1407": { "id": 2, "online": 0}, 
"ar1601": { "id": 3, "online": 0}, 
"ar1600": { "id": 4, "online": 0}, 
"ar1406": { "id": 1, "online": 1, "ip": "10.0.6.201"} 
} 
} 
{ 
"nodename": "ar1407", 
"version": 3, 
"cluster": { "name": "netteamcluster", "version": 4, "nodes": 4, "quorate": 
0 }, 
"nodelist": { 
"ar1407": { "id": 2, "online": 1, "ip": "10.0.6.202"}, 
"ar1601": { "id": 3, "online": 0}, 
"ar1600": { "id": 4, "online": 0}, 
"ar1406": { "id": 1, "online": 0} 
} 
} 
{ 
"nodename": "ar1600", 
"version": 3, 
"cluster": { "name": "netteamcluster", "version": 4, "nodes": 4, "quorate": 
0 }, 
"nodelist": { 
"ar1407": { "id": 2, "online": 0}, 
"ar1601": { "id": 3, "online": 0}, 
"ar1600": { "id": 4, "online": 1, "ip": "10.0.6.203"}, 
"ar1406": { "id": 1, "online": 0} 
} 
} 
{ 
"nodename": "ar1601", 
"version": 3, 
"cluster": { "name": "netteamcluster", "version": 4, "nodes": 4, "quorate": 
0 }, 
"nodelist": { 
"ar1407": { "id": 2, "online": 0}, 
"ar1601": { "id": 3, "online": 1, "ip": "10.0.6.204"}, 
"ar1600": { "id": 4, "online": 0}, 
"ar1406": { "id": 1, "online": 0} 
} 
} 

-- 

Chris Tomkins 

Brandwatch | Senior Network Engineer (Linux/Network) 

chr...@brandwatch.com | (+44) 01273 448 949 

@Brandwatch 

New York | San Francisco | Brighton | Singapore | Berlin | 
Stuttgart 


Discover how organizations are using Brandwatch to create their own success 
 


Email disclaimer  


[image: bw-signature logo.png] 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Issues with CPU and Memory HotPlug

2017-07-27 Thread Alexandre DERUMIER
CPU hotplug/unplug works fine. 

Memory hotplug works fine. 

Memory unplug is mostly broken in Linux and not implemented in Windows. 


see notes here : 

https://pve.proxmox.com/wiki/Hotplug_(qemu_disk,nic,cpu,memory) 
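
A minimal CLI sketch for the VM below (id 101), matching the notes above; NUMA 
has to be enabled for memory hotplug: 

qm set 101 -hotplug disk,network,usb,memory,cpu   # enable the hotplug categories 
qm set 101 -numa 1                                # required for memory hotplug 
qm set 101 -vcpus 4                               # change active vcpus online (up to cores*sockets) 
qm set 101 -memory 4096                           # hot-add memory; unplug is the part that breaks 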





Alexandre Derumier 
Ingénieur système et stockage 

Manager Infrastructure 


Fixe : +33 3 59 82 20 10 



125 Avenue de la république 
59110 La Madeleine 
[ https://twitter.com/OdisoHosting ] [ https://twitter.com/mindbaz ] [ 
https://www.linkedin.com/company/odiso ] [ 
https://www.viadeo.com/fr/company/odiso ] [ 
https://www.facebook.com/monsiteestlent ] 

[ https://www.monsiteestlent.com/ | MonSiteEstLent.com ] - Blog dédié à la 
webperformance et la gestion de pics de trafic 






De: "Gilberto Nunes"  
À: "proxmoxve"  
Envoyé: Mercredi 26 Juillet 2017 23:26:45 
Objet: [PVE-User] Issues with CPU and Memory HotPlug 

Hi list 

I am using PVE 5.0, and try to hotplug Memory and CPU. 
I set the file 
/lib/udev/rules.d/80-hotplug-cpu-mem.rules, with 

SUBSYSTEM=="cpu", ACTION=="add", TEST=="online", ATTR{online}=="0", 
ATTR{online}="1" 
SUBSYSTEM=="memory", ACTION=="add", TEST=="state", 
ATTR{state}=="offline", ATTR{state}="online" 

I have install a VM with Ubuntu 17.04, with ZFS as storage, with 4 GB of 
memory. 

When I try decrease the memory, I receive a kernelops: 

kernel BUG at /build/linux-Fsa3B1/linux-4.10.0/mm/memory_hotplug.c:2172 

And in the web interface I get: 

Parameter verification failed. (400) 

memory: hotplug problem - 400 parameter verification failed. dimm5: 
error unplug memory 

This is the conf of the vm: 

agent: 1 
bootdisk: scsi0 
cores: 2 
cpu: Conroe 
hotplug: disk,network,usb,memory,cpu 
ide2: STG-01:iso/mini-ubuntu-64.iso,media=cdrom 
memory: 2048 
name: VM-Ubuntu 
net0: virtio=FE:26:B8:AC:47:0B,bridge=vmbr0 
numa: 1 
ostype: l26 
protection: 1 
scsi0: STG-ZFS-01:vm-101-disk-1,size=32G 
scsihw: virtio-scsi-pci 
smbios1: uuid=5794fcd3-c227-46fe-bea9-84d41f75a0d7 
sockets: 2 
usb0: spice 
usb1: spice 
usb2: spice 
vga: qxl 


What is wrong??? 

Thanks 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Matching VM-id and routed IP

2017-07-25 Thread Alexandre DERUMIER
Hi,

there is no hook script for VM stop/start,

but maybe you can try to hack
/var/lib/qemu-server/pve-bridge

This is the script that qemu executes when the VM starts/stops, or when a NIC is 
hotplugged.
(I'm not sure about live migration, as you need to announce the IP only when the VM 
is resumed on the target host, and I think the network script is launched before that.)



also,
I'm still working on cloud-init support, where we'll be able to put the IP address 
for a qemu machine in the VM config.
But for now, you can write the config in /etc/pve/ if you want.


I'm interested to see if it works well, as in the future I would like to 
test this kind of setup to eliminate layer 2 on my network.

(BTW, you can contact me directly if you want, in french ;)
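
A rough shell sketch of the kind of logic such a hook could run when a tap 
interface comes up; the per-VM routes file path follows the proposal in the 
quoted mail below and is hypothetical: 

VMID=101 
TAP="tap${VMID}i0" 
ROUTES="/etc/pve/nodes/$(hostname)/priv/${VMID}-routes.conf" 
# add one kernel route per line onto the VM's tap interface, 
# so bird/quagga can pick them up with 'redistribute kernel' 
while read -r route; do 
ip route add $route dev "$TAP" 
done < "$ROUTES" 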




- Mail original -
De: "Alarig Le Lay" 
À: "proxmoxve" 
Envoyé: Lundi 24 Juillet 2017 00:52:31
Objet: [PVE-User] Matching VM-id and routed IP

Hi, 

I’m wondering if there is a way to do some matching between the id of 
a VM and its routed IP. 

The plan is to announce the IPv4’s /32 and IPv6’s /48 of each VM running 
on an hypervisor to edge routers with iBGP or OSPF. 
Of course, a cluster will be set up, so ID will be unique in the 
cluster’s scope. 

My basic idea is to have something like 
89.234.186.18/32 dev tap101i0 
2a00:5884:8218::/48 dev tap101i0 
on the host where VM 101 is running, and doing 'redistribute kernel' 
with quagga or bird. 

If this feature is not included in promox, is it possible to use the 
pmxcfs to achieve this (by example by writing the routed IP in 
/etc/pve/nodes/${node}/priv/101-routes.conf) and having some hooks at 
the startup/shutdown that will read that file? 

Thanks, 
-- 
alarig 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Increase memory QXL graphic adapter

2017-07-25 Thread Alexandre DERUMIER
It seems that streaming-video=off by default;

maybe you can test by editing

/usr/share/perl5/PVE/QemuServer.pm

push @$devices, '-spice', 
"tls-port=${spice_port},addr=$localhost,tls-ciphers=HIGH,seamless-migration=on";

and add 

  push @$devices, '-spice', 
"tls-port=${spice_port},addr=$localhost,tls-ciphers=HIGH,seamless-migration=on,streaming-video=filter";


then restart 

systemctl restart pvedaemon


and start your vm.


(test with "filter" and "all" value to compare)



- Mail original -
De: "aderumier" 
À: "proxmoxve" 
Envoyé: Mardi 25 Juillet 2017 10:06:26
Objet: Re: [PVE-User] Increase memory QXL graphic adapter

also discussion about youtube performance 

https://www.spinics.net/lists/spice-devel/msg27403.html 

"Since you are on el7 system you can test our nightly builds: 
https://copr.fedorainfracloud.org/coprs/g/spice/nightly/ 
which provides ability to switch the video encoder in spicy (package 
spice-gtk-tools) under Options menu. 

Your vm needs to have the video streaming enabled (set to 'filter' or 
'all'). 

(virsh edit VM ; and add  to graphics node) 

Also check if the image compression is turned on (ideally set to glz) 

" 

this can be tuned in -spice command line. (this need change in proxmox 
QemuServer.pm) 

"-spice port=5903,tls-port=5904,addr=127.0.0.1,\ 
x509-dir=/etc/pki/libvirt-spice,\ 
image-compression=auto_glz,jpeg-wan-compression=auto,\ 
zlib-glz-wan-compression=auto,\ 
playback-compression=on,streaming-video=all" 


not sure about new spicy video encoder. 



- Mail original - 
De: "aderumier"  
À: "proxmoxve"  
Envoyé: Mardi 25 Juillet 2017 09:23:14 
Objet: Re: [PVE-User] Increase memory QXL graphic adapter 

another interesting article in deutsh 

http://linux-blog.anracom.com/2017/07/06/kvmqemu-mit-qxl-hohe-aufloesungen-und-virtuelle-monitore-im-gastsystem-definieren-und-nutzen-i/
 


- Mail original - 
De: "aderumier"  
À: "proxmoxve"  
Envoyé: Mardi 25 Juillet 2017 09:21:01 
Objet: Re: [PVE-User] Increase memory QXL graphic adapter 

ovirt have a good draft for auto tune value, I wonder if we could we use this 
for proxmox ? 
http://www.ovirt.org/documentation/draft/video-ram/ 

default value are ram='65536', vram='65536', vgamem='16384', 'heads=1'. 

also, seem that a new vram64 value is available 

https://www.redhat.com/archives/libvir-list/2016-February/msg01082.html 


I think you can do tests with 

args: -global qxl-vga.ram_size=134217728 -global qxl-vga.vram_size=134217728 
-global qxl-vga.vgamem_mb=32  



- Mail original - 
De: "Eric Abreu"  
À: "proxmoxve"  
Envoyé: Mardi 25 Juillet 2017 04:36:04 
Objet: [PVE-User] Increase memory QXL graphic adapter 

Hi. I wonder if there are ways of improving graphic performance of a kvm 
VM. Is there a way of increasing the memory of the virtual graphic adapter 
(QXL) from the terminal or tweaking the VM in a way that a user could 
stream YouTube videos without interruption. Thank you in advance. 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Increase memory QXL graphic adapter

2017-07-25 Thread Alexandre DERUMIER
also discussion about youtube performance

https://www.spinics.net/lists/spice-devel/msg27403.html

"Since you are on el7 system you can test our nightly builds:
https://copr.fedorainfracloud.org/coprs/g/spice/nightly/
which provides ability to switch the video encoder in spicy (package
spice-gtk-tools) under Options menu.

Your vm needs to have the video streaming enabled (set to 'filter' or
'all').

(virsh edit VM ; and add  to graphics node)

Also check if the image compression is turned on (ideally set to glz)

"

this can be tuned in -spice command line. (this need change in proxmox 
QemuServer.pm)

"-spice port=5903,tls-port=5904,addr=127.0.0.1,\
x509-dir=/etc/pki/libvirt-spice,\
image-compression=auto_glz,jpeg-wan-compression=auto,\
zlib-glz-wan-compression=auto,\
playback-compression=on,streaming-video=all"


not sure about new spicy video encoder.

 

- Mail original -
De: "aderumier" 
À: "proxmoxve" 
Envoyé: Mardi 25 Juillet 2017 09:23:14
Objet: Re: [PVE-User] Increase memory QXL graphic adapter

another interesting article in deutsh 

http://linux-blog.anracom.com/2017/07/06/kvmqemu-mit-qxl-hohe-aufloesungen-und-virtuelle-monitore-im-gastsystem-definieren-und-nutzen-i/
 


- Mail original - 
De: "aderumier"  
À: "proxmoxve"  
Envoyé: Mardi 25 Juillet 2017 09:21:01 
Objet: Re: [PVE-User] Increase memory QXL graphic adapter 

ovirt have a good draft for auto tune value, I wonder if we could we use this 
for proxmox ? 
http://www.ovirt.org/documentation/draft/video-ram/ 

default value are ram='65536', vram='65536', vgamem='16384', 'heads=1'. 

also, seem that a new vram64 value is available 

https://www.redhat.com/archives/libvir-list/2016-February/msg01082.html 


I think you can do tests with 

args: -global qxl-vga.ram_size=134217728 -global qxl-vga.vram_size=134217728 
-global qxl-vga.vgamem_mb=32  



- Mail original - 
De: "Eric Abreu"  
À: "proxmoxve"  
Envoyé: Mardi 25 Juillet 2017 04:36:04 
Objet: [PVE-User] Increase memory QXL graphic adapter 

Hi. I wonder if there are ways of improving graphic performance of a kvm 
VM. Is there a way of increasing the memory of the virtual graphic adapter 
(QXL) from the terminal or tweaking the VM in a way that a user could 
stream YouTube videos without interruption. Thank you in advance. 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Increase memory QXL graphic adapter

2017-07-25 Thread Alexandre DERUMIER
Another interesting article, in German: 

http://linux-blog.anracom.com/2017/07/06/kvmqemu-mit-qxl-hohe-aufloesungen-und-virtuelle-monitore-im-gastsystem-definieren-und-nutzen-i/


- Mail original -
De: "aderumier" 
À: "proxmoxve" 
Envoyé: Mardi 25 Juillet 2017 09:21:01
Objet: Re: [PVE-User] Increase memory QXL graphic adapter

ovirt have a good draft for auto tune value, I wonder if we could we use this 
for proxmox ? 
http://www.ovirt.org/documentation/draft/video-ram/ 

default value are ram='65536', vram='65536', vgamem='16384', 'heads=1'. 

also, seem that a new vram64 value is available 

https://www.redhat.com/archives/libvir-list/2016-February/msg01082.html 


I think you can do tests with 

args: -global qxl-vga.ram_size=134217728 -global qxl-vga.vram_size=134217728 
-global qxl-vga.vgamem_mb=32  



- Mail original - 
De: "Eric Abreu"  
À: "proxmoxve"  
Envoyé: Mardi 25 Juillet 2017 04:36:04 
Objet: [PVE-User] Increase memory QXL graphic adapter 

Hi. I wonder if there are ways of improving graphic performance of a kvm 
VM. Is there a way of increasing the memory of the virtual graphic adapter 
(QXL) from the terminal or tweaking the VM in a way that a user could 
stream YouTube videos without interruption. Thank you in advance. 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Increase memory QXL graphic adapter

2017-07-25 Thread Alexandre DERUMIER
oVirt has a good draft for auto-tuning these values; I wonder if we could use this 
for Proxmox?
http://www.ovirt.org/documentation/draft/video-ram/

The default values are ram='65536', vram='65536', vgamem='16384', 'heads=1'.

Also, it seems that a new vram64 value is available:

https://www.redhat.com/archives/libvir-list/2016-February/msg01082.html


I think you can do tests with

args: -global qxl-vga.ram_size=134217728 -global qxl-vga.vram_size=134217728 
-global qxl-vga.vgamem_mb=32   



- Mail original -
De: "Eric Abreu" 
À: "proxmoxve" 
Envoyé: Mardi 25 Juillet 2017 04:36:04
Objet: [PVE-User] Increase memory QXL graphic adapter

Hi. I wonder if there are ways of improving graphic performance of a kvm 
VM. Is there a way of increasing the memory of the virtual graphic adapter 
(QXL) from the terminal or tweaking the VM in a way that a user could 
stream YouTube videos without interruption. Thank you in advance. 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] qemu-img convert async

2017-07-21 Thread Alexandre DERUMIER
hi,

I'm seeing that qemu 2.9 has new flags to make qemu-img convert asynchronous:

http://git.qemu.org/?p=qemu.git;a=commit;h=2d9187bc65727d9dd63e2c410b5500add3db0b0d


"This patches introduces 2 new cmdline parameters. The -m parameter to specify
the number of coroutines running in parallel (defaults to 8). And the -W 
parameter to
allow qemu-img to write to the target out of order rather than sequential. This 
improves
performance as the writes do not have to wait for each other to complete."


Has somebody already tested it? (-W flag)
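
For reference, a quick sketch of how the new flags would be used (file names 
are placeholders): 

# -m: number of parallel coroutines, -W: allow out-of-order writes on the target 
qemu-img convert -p -m 16 -W -O raw source.qcow2 destination.raw 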


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] corosync unicast : does somebody use it in production with 10-16 nodes ?

2017-07-13 Thread Alexandre DERUMIER
>>And what switches do you employ there? You said something about modern 
>>low latency ASIC switches. 

new mellanox switches, with spectrum asic 

http://www.mellanox.com/page/products_dyn?product_family=251&mtag=sn2000


- Mail original -
De: "Thomas Lamprecht" 
À: "pve-devel" , "aderumier" , 
"dietmar" 
Cc: "proxmoxve" 
Envoyé: Jeudi 13 Juillet 2017 07:56:49
Objet: Re: [pve-devel] corosync unicast : does somebody use it in production 
with 10-16 nodes ?

On 07/12/2017 06:05 PM, Alexandre DERUMIER wrote: 
> forgot to said, it's with proxmox 4, corosync 2.4.2-2~pve4+1 
> 
> cpu are CPU E5-2687W v3 @ 3.10GHz 

And what switches do you employ there? You said something about modern 
low latency ASIC switches. 

But yes, quite interesting that this can be done. 
Maybe we should adapt pvecm so that unicast mode is a bit easier to setup. 


> 
> - Mail original - 
> De: "dietmar"  
> À: "aderumier"  
> Cc: "proxmoxve" , "pve-devel" 
>  
> Envoyé: Mercredi 12 Juillet 2017 17:42:18 
> Objet: Re: [pve-devel] corosync unicast : does somebody use it in production 
> with 10-16 nodes ? 
> 
>>>> how many running VMs/Containers? 
>> on 20 cluster nodes, around 1000 vm 
>> 
>> on a 10 cluster nodes, 800vm + 800ct 
>> 
>> on a 9 cluster nodes, 400vm 
> Interesting. So far I did not know anybody using that with more than 6 nodes 
> ... 
> 
> ___ 
> pve-devel mailing list 
> pve-de...@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] corosync unicast : does somebody use it in production with 10-16 nodes ?

2017-07-12 Thread Alexandre DERUMIER
Forgot to say: it's with Proxmox 4, corosync 2.4.2-2~pve4+1.

The CPUs are E5-2687W v3 @ 3.10GHz.


- Mail original -
De: "dietmar" 
À: "aderumier" 
Cc: "proxmoxve" , "pve-devel" 

Envoyé: Mercredi 12 Juillet 2017 17:42:18
Objet: Re: [pve-devel] corosync unicast : does somebody use it in production 
with 10-16 nodes ?

> >>how many running VMs/Containers? 
> 
> on 20 cluster nodes, around 1000 vm 
> 
> on a 10 cluster nodes, 800vm + 800ct 
> 
> on a 9 cluster nodes, 400vm 

Interesting. So far I did not know anybody using that with more than 6 nodes 
... 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] corosync unicast : does somebody use it in production with 10-16 nodes ?

2017-07-12 Thread Alexandre DERUMIER
>>how many running VMs/Containers? 

on a 20-node cluster, around 1000 VMs

on a 10-node cluster, 800 VMs + 800 CTs

on a 9-node cluster, 400 VMs





- Mail original -
De: "dietmar" 
À: "aderumier" , "pve-devel" 
Cc: "proxmoxve" 
Envoyé: Mercredi 12 Juillet 2017 12:03:31
Objet: Re: [pve-devel] corosync unicast : does somebody use it in production 
with 10-16 nodes ?

> just for the record, 
> 
> I have migrate all my clusters with unicast, also big clusters with 16-20 
> nodes, and It's working fine. 
> 
> 
> "pvedaemon: ipcc_send_rec failed: Transport endpoint is not connected " seem 
> to be gone. 
> 
> don't see any error on the cluster. 
> 
> traffic is around 3-4mbit/s on each node. 

how many running VMs/Containers? 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] corosync unicast : does somebody use it in production with 10-16 nodes ?

2017-07-12 Thread Alexandre DERUMIER
Hi,

just for the record,

I have migrated all my clusters to unicast, including big clusters with 16-20 
nodes, and it's working fine.


The "pvedaemon: ipcc_send_rec failed: Transport endpoint is not connected" errors seem to 
be gone.

I don't see any errors on the cluster.

Traffic is around 3-4 Mbit/s on each node.
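
For reference, a minimal sketch of what the change looks like in 
/etc/pve/corosync.conf (only the totem section is shown; cluster_name, 
config_version and bindnetaddr are placeholders to take from the existing 
file, and config_version must be incremented when editing): 

totem { 
  version: 2 
  cluster_name: mycluster 
  config_version: 4 
  secauth: on 
  # this line switches corosync from multicast to unicast (UDPU) 
  transport: udpu 
  interface { 
    ringnumber: 0 
    bindnetaddr: 10.0.6.0 
  } 
} 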



- Mail original -
De: "aderumier" 
À: "pve-devel" 
Cc: "proxmoxve" 
Envoyé: Vendredi 7 Juillet 2017 11:51:42
Objet: Re: [pve-devel] corosync unicast : does somebody use it in production 
with 10-16 nodes ?

note that I'm just seeing , time to time (around once by hour), 

pvedaemon: ipcc_send_rec failed: Transport endpoint is not connected 

But I don't have any corosync error / retransmit. 


- Mail original - 
De: "aderumier"  
À: "pve-devel" , "proxmoxve" 
 
Envoyé: Vendredi 7 Juillet 2017 10:46:33 
Objet: [pve-devel] corosync unicast : does somebody use it in production with 
10-16 nodes ? 

Hi, 

I'm looking to remove multicast from my network (Don't have too much time to 
explain, but we have multicast storm problem,because of igmp snooping bug) 

Does somebody running it with "big" clusters ? (10-16 nodes) 


I'm currently testing it with 9 nodes (1200vm+containers), I'm seeing around 
3mbit/s of traffic on each node, 

and I don't have any cluster break for now. (Switch have recents asics with 
around 0,015ms latency). 


Any return of experience is welcome :) 

Thanks ! 

Alexandre 
___ 
pve-devel mailing list 
pve-de...@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

___ 
pve-devel mailing list 
pve-de...@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] corosync unicast : does somebody use it in production with 10-16 nodes ?

2017-07-07 Thread Alexandre DERUMIER
Note that I'm just seeing, from time to time (around once per hour):

pvedaemon: ipcc_send_rec failed: Transport endpoint is not connected

But I don't have any corosync errors / retransmits. 


- Mail original -
De: "aderumier" 
À: "pve-devel" , "proxmoxve" 

Envoyé: Vendredi 7 Juillet 2017 10:46:33
Objet: [pve-devel] corosync unicast : does somebody use it in production with 
10-16 nodes ?

Hi, 

I'm looking to remove multicast from my network (Don't have too much time to 
explain, but we have multicast storm problem,because of igmp snooping bug) 

Does somebody running it with "big" clusters ? (10-16 nodes) 


I'm currently testing it with 9 nodes (1200vm+containers), I'm seeing around 
3mbit/s of traffic on each node, 

and I don't have any cluster break for now. (Switch have recents asics with 
around 0,015ms latency). 


Any return of experience is welcome :) 

Thanks ! 

Alexandre 
___ 
pve-devel mailing list 
pve-de...@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


[PVE-User] corosync unicast : does somebody use it in production with 10-16 nodes ?

2017-07-07 Thread Alexandre DERUMIER
Hi,

I'm looking to remove multicast from my network (I don't have much time to 
explain, but we have a multicast storm problem because of an IGMP snooping bug).

Is somebody running unicast with "big" clusters? (10-16 nodes)


I'm currently testing it with 9 nodes (1200 VMs+containers); I'm seeing around 
3 Mbit/s of traffic on each node,

and I don't have any cluster breakage for now. (The switches have recent ASICs with 
around 0.015 ms latency.)


Any feedback from experience is welcome :)

Thanks !

Alexandre
___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox VE 5.0 released!

2017-07-05 Thread Alexandre DERUMIER
>>What's the general opinion regarding incremental backups with a dirty map 
>>like feature? http://wiki.qemu.org/Features/IncrementalBackup
>>
>>I see that as a very important and missing feature.

It needs to be implemented in the Proxmox vma format and backup code (as Proxmox 
doesn't use the qemu drive_backup QMP command).



Note that currently the dirty map can't be saved to disk (on VM shutdown for 
example), so if the VM is shut down, you need to do a full backup again.


- Mail original -
De: "Martin Overgaard Hansen" 
À: "proxmoxve" 
Envoyé: Mercredi 5 Juillet 2017 15:54:59
Objet: Re: [PVE-User] Proxmox VE 5.0 released!

Sounds great, really looking forward to the new ceph release becoming stable. 

What's the general opinion regarding incremental backups with a dirty map like 
feature? http://wiki.qemu.org/Features/IncrementalBackup 

I see that as a very important and missing feature. 

Best Regards, 
Martin Overgaard Hansen 
MultiHouse IT Partner A/S 

> -Original Message- 
> From: pve-user [mailto:pve-user-boun...@pve.proxmox.com] On Behalf Of 
> Martin Maurer 
> Sent: Tuesday, July 4, 2017 5:25 PM 
> To: pve-de...@pve.proxmox.com; PVE User List  u...@pve.proxmox.com> 
> Subject: [PVE-User] Proxmox VE 5.0 released! 
> 
> Hi all, 
> 
> We are very happy to announce the final release of our Proxmox VE 5.0 - 
> based on the great Debian 9 codename "Stretch" and a Linux Kernel 4.10. 
> 
> New Proxmox VE Storage Replication 
> 
> Replicas provide asynchronous data replication between two or multiple 
> nodes in a cluster, thus minimizing data loss in case of failure. For all 
> organizations using local storage the Proxmox replication feature is a great 
> option to increase data redundancy for high I/Os avoiding the need of 
> complex shared or distributed storage configurations. 
> => https://pve.proxmox.com/wiki/Storage_Replication 
> 
> With Proxmox VE 5.0 Ceph RBD becomes the de-facto standard for 
> distributed storage. Packaging is now done by the Proxmox team. The Ceph 
> Luminous is not yet production ready but already available for testing. 
> If you use Ceph, follow the recommendations below. 
> 
> We also have a simplified procedure for disk import from different 
> hypervisors. You can now easily import disks from VMware, Hyper-V, or 
> other hypervisors via a new command line tool called ‘qm importdisk’. 
> 
> Other new features are the live migration with local storage via QEMU, 
> added USB und Host PCI address visibility in the GUI, bulk actions and 
> filtering 
> options in the GUI and an optimized NoVNC console. 
> 
> And as always we have included countless bugfixes and improvements on a 
> lot of places. 
> 
> Video 
> Watch our short introduction video - What's new in Proxmox VE 5.0? 
> https://www.proxmox.com/en/training/video-tutorials 
> 
> Release notes 
> https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_5.0 
> 
> Download 
> https://www.proxmox.com/en/downloads 
> Alternate ISO download: 
> http://download.proxmox.com/iso/ 
> 
> Source Code 
> https://git.proxmox.com 
> 
> Bugtracker 
> https://bugzilla.proxmox.com 
> 
> FAQ 
> Q: Can I upgrade a 5.x beta installation to the stable 5.0 release via apt? 
> A: Yes, upgrading from beta to stable can be done via apt. 
> https://pve.proxmox.com/wiki/Downloads#Update_a_running_Proxmox_V 
> irtual_Environment_5.x_to_latest_5.0 
> 
> Q: Can I install Proxmox VE 5.0 on top of Debian Stretch? 
> A: Yes, see 
> https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch 
> 
> Q: Can I upgrade Proxmox VE 4.4 to 5.0 with apt dist-upgrade? 
> A: Yes, see https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0 
> 
> Q: I am running Ceph Server on V4.4 in a production setup - should I upgrade 
> now? 
> A: Not yet. Ceph packages in Proxmox VE 5.0 are based on the latest Ceph 
> Luminous release (release candidate status). Therefore not yet 
> recommended for production. But you should start testing in a testlab 
> environment, here is one important wiki article - 
> https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous 
> 
> Many thank you's to our active community for all feedback, testing, bug 
> reporting and patch submissions! 
> 
> -- 
> Best Regards, 
> 
> Martin Maurer 
> Proxmox VE project leader 
> 
> ___ 
> pve-user mailing list 
> pve-user@pve.proxmox.com 
> https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 
___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] Proxmox VE 5.0 released!

2017-07-04 Thread Alexandre DERUMIER
Congrats to proxmox team !


BTW, small typo on 

https://www.proxmox.com/en/training/video-tutorials 

"New open-source storage replikation stack"

/replikation/replication


- Mail original -
De: "Martin Maurer" 
À: "pve-devel" , "proxmoxve" 

Envoyé: Mardi 4 Juillet 2017 17:25:10
Objet: [PVE-User] Proxmox VE 5.0 released!

Hi all, 

We are very happy to announce the final release of our Proxmox VE 5.0 - 
based on the great Debian 9 codename "Stretch" and a Linux Kernel 4.10. 

New Proxmox VE Storage Replication 

Replicas provide asynchronous data replication between two or multiple 
nodes in a cluster, thus minimizing data loss in case of failure. For 
all organizations using local storage the Proxmox replication feature is 
a great option to increase data redundancy for high I/Os avoiding the 
need of complex shared or distributed storage configurations. 
=> https://pve.proxmox.com/wiki/Storage_Replication 

With Proxmox VE 5.0 Ceph RBD becomes the de-facto standard for 
distributed storage. Packaging is now done by the Proxmox team. The Ceph 
Luminous is not yet production ready but already available for testing. 
If you use Ceph, follow the recommendations below. 

We also have a simplified procedure for disk import from different 
hypervisors. You can now easily import disks from VMware, Hyper-V, or 
other hypervisors via a new command line tool called ‘qm importdisk’. 

Other new features are the live migration with local storage via QEMU, 
added USB und Host PCI address visibility in the GUI, bulk actions and 
filtering options in the GUI and an optimized NoVNC console. 

And as always we have included countless bugfixes and improvements on a 
lot of places. 

Video 
Watch our short introduction video - What's new in Proxmox VE 5.0? 
https://www.proxmox.com/en/training/video-tutorials 

Release notes 
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_5.0 

Download 
https://www.proxmox.com/en/downloads 
Alternate ISO download: 
http://download.proxmox.com/iso/ 

Source Code 
https://git.proxmox.com 

Bugtracker 
https://bugzilla.proxmox.com 

FAQ 
Q: Can I upgrade a 5.x beta installation to the stable 5.0 release via apt? 
A: Yes, upgrading from beta to stable can be done via apt. 
https://pve.proxmox.com/wiki/Downloads#Update_a_running_Proxmox_Virtual_Environment_5.x_to_latest_5.0
 

Q: Can I install Proxmox VE 5.0 on top of Debian Stretch? 
A: Yes, see 
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch 

Q: Can I upgrade Proxmox VE 4.4 to 5.0 with apt dist-upgrade? 
A: Yes, see https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0 

Q: I am running Ceph Server on V4.4 in a production setup - should I 
upgrade now? 
A: Not yet. Ceph packages in Proxmox VE 5.0 are based on the latest Ceph 
Luminous release (release candidate status). Therefore not yet 
recommended for production. But you should start testing in a testlab 
environment, here is one important wiki article - 
https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous 

Many thank you's to our active community for all feedback, testing, bug 
reporting and patch submissions! 

-- 
Best Regards, 

Martin Maurer 
Proxmox VE project leader 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] Proxmox VE 5.0 beta2 released!

2017-07-01 Thread Alexandre DERUMIER
It will not be included in Proxmox 5.0 (the Proxmox devs are working on other things).

I'll try to push it for Proxmox 5.1.

- Mail original -
De: lemonni...@ulrar.net
À: "proxmoxve" 
Envoyé: Vendredi 30 Juin 2017 19:48:25
Objet: Re: [PVE-User] [pve-devel] Proxmox VE 5.0 beta2 released!

On Fri, Jun 30, 2017 at 07:33:55PM +0200, Mehmet wrote: 
> I guess "when it is done" :) 
> 
> Do you miss something or just to know when it will become done? 

Just know when it's done. As you can imagine, we've all been waiting for 
cloud-init for ever. 
If it's done in two weeks we'll wait, but if it's done in two months we'll 
have to install a new cluster on pve 4 with the old not so great deployment 
system, hence the question. 

___ 
pve-user mailing list 
pve-user@pve.proxmox.com 
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user 

___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


Re: [PVE-User] [pve-devel] Proxmox VE 5.0 beta2 released!

2017-06-18 Thread Alexandre DERUMIER
>>with this package error

Sorry, I have built the package against the latest git,

so maybe the other packages in pvetest are not updated yet.


- Mail original -
De: "lyt_yudi" 
À: "proxmoxve" 
Cc: "aderumier" 
Envoyé: Lundi 19 Juin 2017 06:38:22
Objet: Re: [PVE-User] [pve-devel]Proxmox VE 5.0 beta2 released!





On 19 June 2017, at 11:45, lyt_yudi < [ mailto:lyt_y...@icloud.com | 
lyt_y...@icloud.com ] > wrote: 


On 18 June 2017, at 23:50, Alexandre Derumier < [ mailto:aderum...@odiso.com | 
aderum...@odiso.com ] > wrote: 

here a new cloudinit version 

[ http://odisoweb1.odiso.net/qemu-server_5.0-6_amd64.deb | 
http://odisoweb1.odiso.net/qemu-server_5.0-6_amd64.deb ] < [ 
http://odisoweb1.odiso.net/qemu-server_5.0-6_amd64.deb | 
http://odisoweb1.odiso.net/qemu-server_5.0-6_amd64.deb ] > 

for last pvetest 


with this package error 

... 
1 not fully installed or removed. 
After this operation, 0 B of additional disk space will be used. 
Setting up pve-manager (5.0-10) ... 
Job for pvedaemon.service failed because the control process exited with error 
code. 
See "systemctl status pvedaemon.service" and "journalctl -xe" for details. 
dpkg: error processing package pve-manager (--configure): 
subprocess installed post-installation script returned error exit status 1 
Errors were encountered while processing: 
pve-manager 
E: Sub-process /usr/bin/dpkg returned an error code (1) 



Got it. 

pvedaemon[3237036]: Can't locate PVE/ReplicationConfig.pm in @INC (you may need 
to install the PVE::ReplicationConfig module) (@INC contains: /etc/perl 
/usr/local/lib/x86_64-linux-gnu/perl/5.24.1 /usr/local/share/perl/5.24.1 
/usr/lib/x86_64-linux-gnu/perl5/5.24 /usr/share/perl5 
/usr/lib/x86_64-linux-gnu/perl/5.24 /usr/share/perl/5.24 
/usr/local/lib/site_perl /usr/lib/x86_64-linux-gnu/perl-base) at 
/usr/share/perl5/PVE/API2/Qemu.pm line 18. 


___
pve-user mailing list
pve-user@pve.proxmox.com
https://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user

