Hi,
I had this problem mainly with cephfs in the VM: when the firewall is
stopped (rules are flushed, but existing connections are still tracked by
conntrack) and the firewall is then started again, conntrack marks the
packets as INVALID because it has no record of the connection sequence
from while the firewall was stopped.
This could hap
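For reference, a minimal sketch of the usual conntrack-aware rule shape
with plain iptables; the PVE firewall builds its own chains, so the rules
below are only illustrative:

  # Accept packets that belong to connections conntrack already knows,
  # before any rule that drops INVALID traffic.
  iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
  # Connections whose handshake conntrack never saw (e.g. opened while the
  # firewall was stopped) show up as INVALID mid-stream and get dropped here.
  iptables -A INPUT -m conntrack --ctstate INVALID -j DROP
  # Listing current conntrack entries helps confirm what state survived.
  conntrack -L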
Doing some more research this evening, it turns out the big divergence
between the POOLS %USED and the GLOBAL %RAW USED I've seen is because the
pool numbers are based on the amount of space the most-full OSD has left.
So if you have one OSD that is disproportionately full, the %USED for POOLS
will
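A quick way to confirm the skew with stock Ceph commands (the threshold
value 110 below is only an example):

  # Per-OSD utilization; one very full OSD shrinks MAX AVAIL for every pool.
  ceph osd df tree
  # Reweight OSDs sitting above 110% of the average utilization.
  ceph osd reweight-by-utilization 110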
Only volumes whose names follow the format 'vm-X-disk-Y' are shown. Other
volumes are not shown in the web GUI; this is expected behavior.
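If you want an existing volume to show up, one option is renaming it to
match that pattern; a sketch with a hypothetical VG 'vg0', LV 'mydata' and
VM ID 100 (the VM config still has to reference the disk afterwards):

  # Rename the LV so it matches the 'vm-<vmid>-disk-<n>' scheme PVE expects.
  lvrename vg0 mydata vm-100-disk-0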
On 08.05.2019 18:17, Roland @web.de wrote:
Hello,
I have added an existing LVM Volume Group as a Datastore/Storage in
Proxmox.
Curiously, already existing logic
Hello,
I have added an existing LVM Volume Group as a Datastore/Storage in Proxmox.
Curiously, already existing logical volumes on that datastore are not
shown in the Proxmox web GUI.
Furthermore, I cannot even see any properties of that storage, i.e. the
"Show Configuration" button is greyed out, so
On Wed, 8 May 2019 at 11:34, Alwin Antreich wrote:
> On Wed, May 08, 2019 at 09:34:44AM +0100, Mark Adams wrote:
> > Thanks for getting back to me Alwin. See my response below.
> >
> >
> > I have the same size and count in each node, but I have had a disk
> > failure (it has been replaced) and als
On Wed, May 08, 2019 at 09:34:44AM +0100, Mark Adams wrote:
> Thanks for getting back to me Alwin. See my response below.
>
>
> I have the same size and count in each node, but I have had a disk failure
> (it has been replaced) and also had issues with OSDs dropping when that
> memory allocation bug
On 5/8/19 10:15 AM, Igor Podlesny wrote:
> On Wed, 8 May 2019 at 15:02, Thomas Lamprecht wrote:
> [...]
>>> -- I didn't open a ticket, nor did I __complain__. I just let
>>> others know there's a pitfall, while thoroughly describing what it
>>> was. That's it.
>>
>> In a mail where a user
Thanks for getting back to me Alwin. See my response below.
On Wed, 8 May 2019 at 08:10, Alwin Antreich wrote:
> Hello Mark,
>
> On Tue, May 07, 2019 at 11:26:17PM +0100, Mark Adams wrote:
> > Hi All,
> >
> > I would appreciate a little pointer or clarification on this.
> >
> > My "ceph" vm pool
On Wed, 8 May 2019 at 15:02, Thomas Lamprecht wrote:
[...]
> > -- I didn't open a ticket, nor did I __complain__. I just let
> > others know there's a pitfall, while thoroughly describing what it
> > was. That's it.
>
> In a mail where a user asks where to open a request for this (i.e., pro
On 5/8/19 9:37 AM, Igor Podlesny wrote:
> On Wed, 8 May 2019 at 14:14, Thomas Lamprecht wrote:
>> On 5/8/19 8:57 AM, Igor Podlesny wrote:
>>> On Wed, 8 May 2019 at 13:11, Thomas Lamprecht
>>> wrote:
> [...]
>>> In short: pain, suffering and all That.
>>>
>>
>> Yes, things are not always perfect.
On Wed, 8 May 2019 at 14:14, Thomas Lamprecht wrote:
> On 5/8/19 8:57 AM, Igor Podlesny wrote:
> > On Wed, 8 May 2019 at 13:11, Thomas Lamprecht
> > wrote:
[...]
> > In short: pain, suffering and all That.
> >
>
> Yes, things are not always perfect. But instead of complaining, in a bit
> dramati
On 08/05/2019 08:57, Muhammad Monowar Hossain wrote:
>>> 1. Live migration is not working from GUI
>>
>> What kind of problem/error are you experiencing?
>
> 2019-05-08 06:54:18 starting migration of VM 115 to node 'HV-02' (x.x.x.x)
> 2019-05-08 06:54:18 found local disk 'vmdata:vm-115-disk-0
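A local disk is what usually blocks live migration from the GUI; from the
CLI the disk can be moved along, sketched here with the VM ID and target
node taken from the log above:

  # Live-migrate VM 115 and copy its local disks to the target node.
  qm migrate 115 HV-02 --online --with-local-disks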
On 5/8/19 8:57 AM, Igor Podlesny wrote:
> On Wed, 8 May 2019 at 13:11, Thomas Lamprecht wrote:
> [...]
So what happens when one of the configured servers fails: does Proxmox
recognize the failure and mount the secondary? If so, the running
>>>
>>> Proxmox tells you to go suffer, that's what
Hello Mark,
On Tue, May 07, 2019 at 11:26:17PM +0100, Mark Adams wrote:
> Hi All,
>
> I would appreciate a little pointer or clarification on this.
>
> My "ceph" vm pool is showing 84.80% used. But the %RAW usage is only 71.88%
> used. is this normal? there is nothing else on this ceph cluster a
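To see both numbers side by side and spot the outlier, roughly:

  # GLOBAL RAW USED versus per-pool USED/MAX AVAIL in one view.
  ceph df detail
  # Per-OSD %USE; a single near-full OSD caps MAX AVAIL for all pools.
  ceph osd df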