On 4/26/19 at 3:00 PM, Mira Limbeck wrote:
Adds a fallback to 'Plugin::path' in the default implementation of
'map_volume' to make the additional call to 'path' unnecessary if
'map_volume' is not implemented in the plugin used. 'Plugin::path' is now
always returned if the plugin in question does not override 'map_volume'.
Signed-off-by:
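The fallback described above is a template-method pattern: the base class's 'map_volume' simply delegates to 'path', so callers can always call 'map_volume' and only plugins with a real mapping step override it. A minimal Python sketch of the dispatch (class names and paths are illustrative, not the actual Perl PVE::Storage::Plugin code):

```python
# Illustrative sketch only -- the real code is Perl; names/paths are made up.
class Plugin:
    def path(self, volname):
        # default: derive the filesystem path of a volume (hypothetical layout)
        return f"/var/lib/storage/{volname}"

    def map_volume(self, volname):
        # fallback: plugins that need no special mapping inherit this, so the
        # extra call to path() at the call site becomes unnecessary
        return self.path(volname)

class PlainPlugin(Plugin):
    pass  # does not override map_volume -> the fallback returns path()

class BlockPlugin(Plugin):
    def map_volume(self, volname):
        # a plugin with a real mapping step overrides the default
        return f"/dev/mapper/{volname}"

print(PlainPlugin().map_volume("vm-100-disk-0"))  # /var/lib/storage/vm-100-disk-0
print(BlockPlugin().map_volume("vm-100-disk-0"))  # /dev/mapper/vm-100-disk-0
```

With this in place the caller no longer needs a separate 'path' call for plugins that have no mapping step.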
On 4/26/19 at 12:44 PM, Thomas Lamprecht wrote:
> Signed-off-by: Thomas Lamprecht
> ---
>
> as of now, available through our appliance download infrastructure:
> # pveam update
> # pveam download ubuntu-19.04-standard_19.04-1_amd64.tar.gz
>
Forgot the "already applied" tag..
Signed-off-by: Thomas Lamprecht
---
as of now, available through our appliance download infrastructure:
# pveam update
# pveam download ubuntu-19.04-standard_19.04-1_amd64.tar.gz
ubuntu-disco-standard-64/Makefile | 19 +++
ubuntu-disco-standard-64/dab.conf | 13 +
On 4/26/19 at 11:10 AM, Francesco Ongaro wrote:
Dear PVE devs,
I'd like to suggest a change that is useful in case umask is 077.
On the wiki page:
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Stretch
After:
wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg
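The quoted message breaks off before the suggested addition. Presumably the issue is that under umask 077 the downloaded key file is created mode 0600, so the unprivileged apt user cannot read it; a chmod after the wget would fix that. A scratch-file demonstration of the effect (hypothetical, not the wiki's exact wording):

```shell
# With umask 077, newly created files get mode 0600 -- unreadable for apt.
umask 077
tmpdir=$(mktemp -d)
touch "$tmpdir/proxmox-ve-release-5.x.gpg"
stat -c '%a' "$tmpdir/proxmox-ve-release-5.x.gpg"   # prints 600
# The presumed remedy after the wget: make the key world-readable again.
chmod 644 "$tmpdir/proxmox-ve-release-5.x.gpg"
stat -c '%a' "$tmpdir/proxmox-ve-release-5.x.gpg"   # prints 644
rm -r "$tmpdir"
```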
On 4/25/19 at 7:53 PM, Stoiko Ivanov wrote:
> Following the discussion in a forum thread [0], I wanted to see if we
> could make the ArchLinux templates yield fewer (not really material)
> error messages during bootup and operation.
>
> I considered masking the 'sys-kernel-config.mount' and
On 4/25/19 at 5:14 PM, Christian Ebner wrote:
> If a vdisk_create_base fails because the storage backend does not support
> the base image creation, it leaves behind the original disk image; this is
> correct. This should not create further problems. For such templates, the
> user gets a
so that we have a list of all existing ceph services in the cluster
Signed-off-by: Dominik Csapak
---
PVE/Service/pvestatd.pm | 14 ++
1 file changed, 14 insertions(+)
diff --git a/PVE/Service/pvestatd.pm b/PVE/Service/pvestatd.pm
index ce2adbbd..b8abc8f2 100755
---
similar to how we handle the cluster-wide task list and rrd data,
have an interface that can sync data across the cluster

this data is only transient and will not be written to disk

we can use this for a number of things, e.g. getting the locks of the
guests cluster-wide, or listing ceph services

add two new api calls in /cluster/ceph

status:
  the same as /nodes/NODE/ceph/status, but accessible without a
  nodename, which we don't need, as in the hyperconverged case all
  nodes have the ceph.conf that contains the info on how to connect
  to the monitors

metadata:
  combines data from the cluster
this returns a hash of existing service links for
mds/mgr/mons so that we know which services exist

this is necessary since ceph itself does not record whether a service
is defined somewhere, only whether it is running

Signed-off-by: Dominik Csapak
---
 PVE/Ceph/Services.pm | 18 ++
 1 file
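The patch body itself is truncated above. A rough Python sketch of the idea it describes, scanning systemd unit links to learn which ceph services are *defined* on a node even when they are not running (the directory layout, unit names, and function name are assumptions for illustration, not the actual PVE::Ceph::Services code):

```python
# Sketch only: build a hash of defined ceph services per type from systemd
# enablement links, since ceph itself only knows about *running* services.
import os
import re

def get_service_links(systemd_dir="/etc/systemd/system"):
    # hypothetical helper; returns e.g. {"mon": {"node1": 1}, "mgr": {}, ...}
    services = {"mon": {}, "mgr": {}, "mds": {}}
    pattern = re.compile(r"^ceph-(mon|mgr|mds)@(.+)\.service$")
    wants = os.path.join(systemd_dir, "multi-user.target.wants")
    if not os.path.isdir(wants):
        return services
    for entry in os.listdir(wants):
        m = pattern.match(entry)
        if m:
            svc_type, svc_id = m.groups()
            services[svc_type][svc_id] = 1  # service is defined (link exists)
    return services
```

Such a per-node hash could then be broadcast by pvestatd so that every cluster node knows which ceph services exist where.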
in order to have a better ceph dashboard that is available cluster-wide,
we have to add a few things:

* cluster-wide status api call
  not that hard, since in a hyperconverged setup we always have the
  info about the monitor and how to connect there

* a list of existing services
  ceph