> I am the maintainer of StorPool’s external storage plugin for PVE[0]
> which integrates our storage solution as a backend for VM disks. Our
> software has the ability to create atomic (crash-consistent) snapshots
> of a group of storage volumes.
We already make sure that snapshots of a group
> This approach would also use more storage as you now have the overhead
> of FS metadata for every single ID you have marked as used.
>
> Dietmar, what do you think is the best option here? I'm personally
> leaning towards using the list with your run-length encoding suggestion,
> but I'm open to
The format of the used_vmids.list is simple, but it can lead to
a very large file over time (we want to avoid large files on /etc/pve/).
>PVE::Cluster::cfs_write_file('used_vmids.list', join("\n", @$vmid_list));
A future version could compress that list, by using integer ranges,
for example:
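Such a range compression could look roughly like the following; a minimal Rust sketch (the function name and the exact output format are assumptions, not a committed PVE design):

```rust
/// Compress a sorted list of VMIDs into a compact range notation,
/// e.g. [100, 101, 102, 105] -> "100-102,105".
fn compress_vmid_list(ids: &[u32]) -> String {
    let mut parts = Vec::new();
    let mut i = 0;
    while i < ids.len() {
        let start = ids[i];
        let mut end = start;
        // extend the run while the IDs are consecutive
        while i + 1 < ids.len() && ids[i + 1] == end + 1 {
            end = ids[i + 1];
            i += 1;
        }
        if start == end {
            parts.push(start.to_string());
        } else {
            parts.push(format!("{}-{}", start, end));
        }
        i += 1;
    }
    parts.join(",")
}

fn main() {
    // prints "100-102,105,200-201"
    println!("{}", compress_vmid_list(&[100, 101, 102, 105, 200, 201]));
}
```

Long consecutive runs of used IDs then collapse to a single `start-end` entry, which keeps the file small regardless of how many IDs were ever allocated.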
--
Please can you add a column showing write amplification using dd instead of
file_set_contents, so that we can also see the minimal write amplification from sqlite.
> The table below illustrates the drastic reduction in write
> amplification when writing files of different sizes to `/etc/pve/` using
>
> > [...]
> > so a factor of 32 less calls to cfs_fuse_write (including memdb_pwrite)
>
> That can be huge or not so big at all, i.e. as mentioned above, it would be
> good to measure the impact through some other metrics.
>
> And FWIW, I used bpftrace to count [0] with an unpatched pmxcfs, t
> Hello, I want to make a patch for Proxmox to implement DRBD in a
> different way than LINSTOR does. I want to discuss its usefulness
> and implementation with the community.
I think you should discuss that with the DRBD people (LINSTOR).
But I am not sure they are on this list.
__
> In hyper-converged deployments, the node performing the backup is sourcing
> ((nodes-1)/nodes)*bytes of backup data (i.e., ingress traffic) and then
> sending 1*bytes to PBS (i.e., egress traffic). If PBS were to pull the data
> from the nodes directly, the maximum load on any one host woul
> The biggest issue we see reported related to QEMU bitmaps is
> persistence. The lack of durability results in unpredictable backup
> behavior at scale. If a host, rack, or data center loses power, you're
> in for a full backup cycle. Even if several VMs are powered off for
> some reason, it can
> Today, I believe the client is reading the data and pushing it to
> PBS. In the case of CEPH, wouldn't this involve sourcing data from
> multiple nodes and then sending it to PBS? Wouldn't it be more
> efficient for PBS to read it directly from storage? In the case of
> centralized storage, we'd
> Would adding support for offloading incremental difference detection
> to the underlying storage be feasible with the API updates? The QEMU
> bitmap strategy works for all storage devices but is far from
> optimal.
Sorry, but why do you think this is far from optimal?
_
Remove ureq, because it does not support unix sockets.
Signed-off-by: Dietmar Maurer
---
Changes since v2:
split out the command line help text change into patch:
[PATCH pve-xtermjs] termproxy: fix the command line help text
Changes since v1:
- use extra --authsocket cli option
- use
This needs to be the first argument.
Signed-off-by: Dietmar Maurer
---
termproxy/src/cli.rs | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/termproxy/src/cli.rs b/termproxy/src/cli.rs
index cc44655..adfd830 100644
--- a/termproxy/src/cli.rs
+++ b/termproxy/src/cli.rs
@@ -4,7
Remove ureq, because it does not support unix sockets.
Signed-off-by: Dietmar Maurer
---
Changes since v1:
- use extra --authsocket cli option
- use single format!() instead of multiple push_str()
- cleanup variable names
termproxy/Cargo.toml | 2 +-
termproxy/src/cli.rs | 26
Remove ureq, because it does not support unix sockets.
Signed-off-by: Dietmar Maurer
---
termproxy/Cargo.toml | 2 +-
termproxy/src/cli.rs | 29 +
termproxy/src/main.rs | 59 +--
3 files changed, 71 insertions(+), 19 deletions
Signed-off-by: Dietmar Maurer
---
proxmox-acme/src/account.rs | 2 +-
proxmox-acme/src/eab.rs | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/proxmox-acme/src/account.rs b/proxmox-acme/src/account.rs
index e244c09..7f00143 100644
--- a/proxmox-acme/src/account.rs
+++ b
Because AccountData is exposed via our API (currently as type Object).
Signed-off-by: Dietmar Maurer
---
proxmox-acme/Cargo.toml | 3 +++
proxmox-acme/src/account.rs | 7 +++
proxmox-acme/src/eab.rs | 5 +
3 files changed, 15 insertions(+)
diff --git a/proxmox-acme/Cargo.toml b
> On 29.2.2024 16:09 CET Roland Kammerer via pve-devel
> wrote:
> All in all, yes, this is specific for our use case, otherwise
> parse_volname would already have that additional parameter as all the
> other plugin functions, but I don't see where this would hurt existing
> code, and it certain
> while I don't mind (at all!) that that part of the UI/API is labelled syslog
> (I don't think it's hard to understand that it gives you the system logs of
> that node, and "syslog" is a bit like "Kleenex" in that regard ;)) - I do
> have to disagree here ;) journald does a lot more than just c
I can also imagine using "Events" instead of "Syslog".
- title: 'Syslog',
+ title: gettext('Events'),
IMHO this is easier to translate.
> With your change:
>
> - title: 'Syslog',
> + title: gettext('System Log'),
>
> we now need to translate
> The information gathered by the API call comes from the systemd
> journal. While 'Syslog' could be interpreted as a shorthand for
> "System Log", it's better to be explicit to avoid any confusion.
> - title: 'Syslog',
> + title: gettext('System Log'),
From Wikipedia: htt
> >>Stupid question: Wouldn't it be much easier to add a simple IO-buffer
> >>with limited capacity, implemented inside the Rust backup code?
>
> At work, we are running a backup cluster at a remote location with HDDs,
> and a production cluster with super-fast NVMe,
> and sometimes I have really b
Stupid question: Wouldn't it be much easier to add a simple IO-buffer
with limited capacity, implemented inside the Rust backup code?
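The idea of an IO-buffer with limited capacity can be sketched with a std bounded channel; this is only an illustration of the concept, not actual proxmox-backup code:

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

/// Move `chunks` chunks of `chunk_size` bytes through a bounded buffer
/// of `capacity` chunks; returns the total bytes moved. The producer
/// blocks whenever the buffer is full, so at most `capacity` chunks are
/// ever held in memory, no matter how far the consumer lags behind.
fn buffered_copy(chunks: usize, chunk_size: usize, capacity: usize) -> usize {
    let (tx, rx) = sync_channel::<Vec<u8>>(capacity);
    let producer = thread::spawn(move || {
        for _ in 0..chunks {
            tx.send(vec![0u8; chunk_size]).unwrap(); // fast reader side
        }
        // tx dropped here, which ends the consumer's iteration
    });
    let mut total = 0;
    for chunk in rx {
        total += chunk.len(); // slow writer side (e.g. a spinning disk)
    }
    producer.join().unwrap();
    total
}

fn main() {
    // fast NVMe reader, slow HDD writer, 4 chunks of headroom
    println!("{}", buffered_copy(16, 1024, 4));
}
```

This is exactly the trade-off discussed in the thread: a small fixed buffer smooths short bursts, but it cannot hide a sustained speed gap between a fast source and a slow target.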
> +WARNING: Theoretically, the fleecing image can grow to the same size as the
> +original image, e.g. if the guest re-writes a whole disk while the backup is
> +b
> Do note the following examples:
>
> #: pve-manager/www/manager6/qemu/IPConfigEdit.js:97
> #: pve-manager/www/manager6/qemu/IPConfigEdit.js:155
> msgid "DHCP"
> msgstr "بروتوكول التهيئة الآلية للمضيفين (DHCP)"
>
> #: pve-manager/www/manager6/qemu/IPConfigEdit.js:163
> ms
; Am 28/11/2023 um 15:46 schrieb Dietmar Maurer:
> > To be more clear, I would use:
> >
> > proxmox.Utils.defaultText + ' (' + gettext('Free') + ')'
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
> I'm taking on a lot of contributions to translations and the common complaint
> I hear is that not all can be translated correctly due to such tricks (or just
> missing gettext), most translators care much more about a correct translation
> than some over-optimized ones than then break depending
To be more clear, I would use:
proxmox.Utils.defaultText + ' (' + gettext('Free') + ')'
> On 28.11.2023 15:39 CET Dietmar Maurer wrote:
>
>
> > The string `proxmox.Utils.defaultText + ' (free)'` was inlined as
> > `Default (
> The string `proxmox.Utils.defaultText + ' (free)'` was inlined as
> `Default (Free)`. Cutting translatable strings makes them harder or even
> impossible to translate in certain languages.
Well, this also doubles the number of things to translate!
So what languages are the problem exactly? Pl
Do you also plan to fix those typos in the translation files?
Else we need to re-translate them for all languages!
> On 24.11.2023 15:58 CET Maximiliano Sandoval wrote:
>
>
> It would be preferable to use "won't" but I would rather err on the safe
> side when it comes to escapes in gettext.
>
> > As a compromise, maybe we could just add a note to the docs
> > that discusses the reliability aspects of 'sendmail' vs 'smtp'
> > endpoints?
> >
>
Sure, for now adding a general hint to the documentation that they are
sent one-shot only would be good.
Ok for me.
__
> On 11/8/23 16:52, Dietmar Maurer wrote:
> >> This patch series adds support for a new notification endpoint type,
> >> smtp. As the name suggests, this new endpoint allows PVE to talk
> >> to SMTP server directly, without using the system's MTA (postfix).
>
> This patch series adds support for a new notification endpoint type,
> smtp. As the name suggests, this new endpoint allows PVE to talk
> to SMTP server directly, without using the system's MTA (postfix).
Isn't this totally unreliable? What if the server responds with a
temporary error code? (A
> One of our users ran into issues with running Ceph on older CPU
> architectures [1]. This is apparently due to a bug in gcc-12 that
> leads to SSE 4.1 instructions always being executed rather than
> dynamically dispatching functions using those instructions.
Can't we fix the GCC bug instead?
_
> One could argue that the case for not existent should return undef,
> while an empty file should return an empty string, but for that we
> might want to check all use-sites first.
AFAIR I use this function many times assuming that it does not throw errors in
case of empty files. That is quite c
> Can I use a small local fast PBS instance without need to keep the full
> datastore chunks ?
>
> I have 300TB nvme in production, I don't want to buy 300TB nvme for backup.
no, sorry.
Do you want to use Ceph as temp backup storage, or simply an additional (node
local) nvme?
___
> This is really a blocker for me, I can't use PBS because I'm using NVMe
> in production, and a 7200rpm HDD backup in a remote 200km site with 5ms
> latency.
Why don't you use a local(fast) PBS instance, then sync to the slow remote?
___
pve-devel mail
> addendum:
>
> 'it doesn't do anything here' is not completely correct
> for 'regular' vm displays it just does not set the ticket which
> breaks the connection
I think this ("break the connection") is important, because otherwise it would
allow unencrypted VNC traffic over the network. I guess
in qemu-server, I wonder why we set $ENV{LC_PVE_TICKET} conditionally? It does not
make any sense to me, because it makes all other connections fail...
diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 99b426e..c6a3ac1 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -2102,7 +2102,7 @@
Live migration works?
> +Limitations:
> +
> +* Memory usage on host is always wrong and around 82% Usage
> +* Snapshots do not work
> +* edk2-OVMF required
> +* Recommendable: VirtIO RNG for more entropy (VMs sometimes will not
___
> let response = if let Method::POST = request.method {
> -req.send(&*request.body)
> +let bytes = request.body.as_slice();
> +req.send_bytes(bytes)
Does this have the side effect of changing the transfer encoding? If so, it is
worth adding an inline comment.
> Let’s say my cluster gets hacked!
> Is there a way it compromises the backups?
First, you can use the PBS access control system to limit access...
A reasonable setup would also:
1.) sync the backups to another physical location
2.) make tape backups and store the tapes offsite
___
> On 02/24/2022 3:49 PM Stefan Sterz wrote:
>
>
> To be consistent with PBS's implementation of multi-line comments
> remove "\s*" here too. Since the regex isn't lazy .* matches
> everything \s* would anyway.
But the old regex trimmed spaces from the end, so this is quite different!
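The behavioral difference can be illustrated in Rust; a sketch of the two behaviors, not the actual Perl regexes:

```rust
/// First line of a multi-line comment, with trailing whitespace trimmed
/// (what a `^(.*?)\s*$`-style match produces).
fn first_line_trimmed(comment: &str) -> &str {
    comment.lines().next().unwrap_or("").trim_end()
}

/// First line without trimming (what a plain `.*` capture produces).
fn first_line_raw(comment: &str) -> &str {
    comment.lines().next().unwrap_or("")
}

fn main() {
    let c = "note   \nsecond line";
    println!("{:?}", first_line_trimmed(c)); // "note"
    println!("{:?}", first_line_raw(c));     // "note   "
}
```

So dropping the `\s*` is not a no-op: any comment whose first line ends in whitespace now keeps that whitespace.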
___
>Would it be possible for PVE to create dirty-bitmaps per backup
storage/PBS storage? That would make this kind of setups more efficient
We decided against that because this can be a big memory leak. Please notice
that we can never free those bitmaps, so they can accumulate when the user
chang
> Currently both are at 80%,
> that means that ballooning is reducing VM memory fast, and KSM doesn't
> have time to run.
>
> as ballooning is a lot more intrusive than ksm, I wonder if it couldn't
> be set to something like 90%.
That sounds reasonable to me, but can you see that theoretical effect
> This endpoint here would be Google Workspace (i.e. Google's OIDC provider).
>
> Currently, in the Proxmox LDAP sync - it translates Google Groups (in the
> Google Workspace domain) into LDAP groups, which is what we want.
>
> I'm not too familiar with the OIDC - I do know that Google Workspace
> However, is there any support for groups in OpenID Connect, or a similar
> concept?
In OpenID, it is possible to request "scopes" from the server, which can then
send additional data (claims).
But I am unsure if and how people use those systems to manage groups. So what
kind of OpenID server
Signed-off-by: Dietmar Maurer
---
Cargo.toml | 15 +++
src/backup.rs | 11 +++
src/commands.rs | 15 +--
src/lib.rs | 11 ++-
src/restore.rs | 13 ++---
src/shared_cache.rs | 2 +-
src/upload_queue.rs | 6
> No one has encountered a similar problem?
I guess I also saw this when running sync jobs and backups in parallel.
Do you run any sync jobs during backup?
___
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/l
> Given that the support to write on a tape is already there, is there any
> plan to have something similar to write on a removable disk?
Yes. First patches already on the list...
___
> I don't know where to find the PVE::RESTHandler module.
Any (working) PVE installation should have that installed...
___
> > Backup to a Proxmox Backup Server PBS fails though
> root@hbase01 ~ # vzdump 100 --mode snapshot --node hbase01 --compress
> zstd --all 0 --storage PVE-FE-3 --mailnotification always
> INFO: starting new backup job: vzdump 100 --all 0 --node hbase01
> --mailnotification always --storage PVE
Isn't it easy enough to edit a po file?
I am not particularly keen to set up and maintain another tool/service.
> On 08/18/2021 12:46 PM Claudio Ferreira wrote:
>
>
> Hi for all
>
> My name is Claudio and I did some translations for many open source
> projects, and I want to help in ProxMox f
---
pveum.adoc | 88 +-
1 file changed, 87 insertions(+), 1 deletion(-)
diff --git a/pveum.adoc b/pveum.adoc
index a1adbaa..9329583 100644
--- a/pveum.adoc
+++ b/pveum.adoc
@@ -29,7 +29,7 @@ endif::manvolnum[]
Proxmox VE supports multiple aut
This moves compute_api_permission() into RPCEnvironment.pm.
---
src/PVE/API2/AccessControl.pm | 60 ++
src/PVE/API2/Makefile | 3 +-
src/PVE/API2/OpenId.pm| 211 ++
src/PVE/RPCEnvironment.pm | 49
4 files changed, 270 inserti
---
src/PVE/API2/OpenId.pm | 35 +++
1 file changed, 31 insertions(+), 4 deletions(-)
diff --git a/src/PVE/API2/OpenId.pm b/src/PVE/API2/OpenId.pm
index d0b29fc..8384729 100644
--- a/src/PVE/API2/OpenId.pm
+++ b/src/PVE/API2/OpenId.pm
@@ -9,9 +9,10 @@ use PVE::RS::
---
src/PVE/AccessControl.pm | 2 ++
src/PVE/Auth/Makefile| 3 +-
src/PVE/Auth/OpenId.pm | 68
3 files changed, 72 insertions(+), 1 deletion(-)
create mode 100755 src/PVE/Auth/OpenId.pm
diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl
Changes in v2:
- also check if user is expired (in check_user_enabled)
- always die with newline
- rename "user-attr" to "username-claim"
Dietmar Maurer (5):
check_user_enabled: also check if user is expired
add OpenId configuration
depend on libpve-rs-perl
api:
---
debian/control | 2 ++
1 file changed, 2 insertions(+)
diff --git a/debian/control b/debian/control
index 81a32bd..3ef748b 100644
--- a/debian/control
+++ b/debian/control
@@ -10,6 +10,7 @@ Build-Depends: debhelper (>= 12~),
lintian,
perl,
libpv
---
src/PVE/AccessControl.pm | 16 +++-
1 file changed, 7 insertions(+), 9 deletions(-)
diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl.pm
index 2569a35..8628678 100644
--- a/src/PVE/AccessControl.pm
+++ b/src/PVE/AccessControl.pm
@@ -428,12 +428,10 @@ sub verify_token {
This moves compute_api_permission() into RPCEnvironment.pm.
---
src/PVE/API2/AccessControl.pm | 60 ++
src/PVE/API2/Makefile | 3 +-
src/PVE/API2/OpenId.pm| 214 ++
src/PVE/RPCEnvironment.pm | 49
4 files changed, 273 inserti
---
src/PVE/AccessControl.pm | 2 ++
src/PVE/Auth/Makefile| 3 +-
src/PVE/Auth/OpenId.pm | 67
3 files changed, 71 insertions(+), 1 deletion(-)
create mode 100755 src/PVE/Auth/OpenId.pm
diff --git a/src/PVE/AccessControl.pm b/src/PVE/AccessControl
---
debian/control | 2 ++
1 file changed, 2 insertions(+)
diff --git a/debian/control b/debian/control
index 81a32bd..3ef748b 100644
--- a/debian/control
+++ b/debian/control
@@ -10,6 +10,7 @@ Build-Depends: debhelper (>= 12~),
lintian,
perl,
libpv
---
PVE/HTTPServer.pm | 4 +-
www/manager6/Utils.js | 8 +++
www/manager6/window/LoginWindow.js | 105 -
3 files changed, 114 insertions(+), 3 deletions(-)
diff --git a/PVE/HTTPServer.pm b/PVE/HTTPServer.pm
index 636b562b..dabdf7f3 100
---
src/PVE/API2/OpenId.pm | 33 ++---
1 file changed, 30 insertions(+), 3 deletions(-)
diff --git a/src/PVE/API2/OpenId.pm b/src/PVE/API2/OpenId.pm
index db9f9eb..3814895 100644
--- a/src/PVE/API2/OpenId.pm
+++ b/src/PVE/API2/OpenId.pm
@@ -9,9 +9,10 @@ use PVE::RS::Op
> On 06/02/2021 12:16 PM wb wrote:
>
>
> > I also wonder why SAML? Would it be an option to use OpenId connect instead?
> As I was able to use SAML, I know the functional part and therefore, if I
> used SAML, it is only by ease.
>
> Switch to OpenID, why not. The time I set up a functional P
> > I wonder why you want to store temporary data in /etc/pve/tmp/saml.
> > Wouldn't it be good enough
> > to store that on the local file system?
> On the one hand, I enjoyed reusing your work.
> On the other hand, I think it is more secure to put this kind of data in
> /etc/pve/tmp/saml than in
Unfortunately, your code depends on code not packaged for Debian. Any idea
how to replace that (cpanm Net::SAML2)?
Or better, is there a 'rust' implementation for SAML2? If so, we could make perl
bindings
for that and reuse the code with Proxmox Backup Server.
Other ideas?
> diff --git a/src/PV
I wonder why you want to store temporary data in /etc/pve/tmp/saml. Wouldn't it
be good enough
to store that on the local file system?
> On 05/27/2021 11:55 PM Julien BLAIS wrote:
>
>
> Added a new endpoint usable by api2/html/access/saml?realm=$DOM
> which allows to initiate a redirection
I am trying to test your code, so I need a SAML Identity provider. What is
the best OSS implementation for that?
I tried lemonldap-ng, but their example configuration is a nightmare and
I was unable to get it running. Is there anything else I can use to test?
- Dietmar
_
Hi Julien,
> Hello to all.
>
> I have the plan to implement the SSO authentication feature with the SAML
> protocol.
> However, I have an error that prevents me from validating the authentication
> process.
> It is about the locks.
> The first step is to store the request_saml_id. If I try to
> maybe I missed something, but I haven't found a way to list groups and
> backups in a datastore via CLI yet. Is there a way to do that?
proxmox-backup-client is the API client tool (not proxmox-backup-manager).
___
:api::api`
> --> src/api/api_type_macros.rs:6:5
> |
> 4 | use proxmox::api::api;
> | ^ no `api` in `api`
Sorry, this was a bug in the proxmox crate. Please update and test again:
commit fa3b5374ed61da3c40a1fc58070d6a16c877c3af (HEAD -> master, origin/master,
origin/
> Since a removable disk is generally a simple and cheap solution for
> off-site backups, is there any possibility to extend this feature to
> save data on an ordinary file?
Sync to a removable disk is unrelated to tape backup.
But we have plans to support that also in the future...
_
FYI, I do it without any regex in rust:
https://git.proxmox.com/?p=proxmox-backup.git;a=blob;f=src/config/acl.rs;h=61e507ec42bf5a30f64f56564a1fb107d148fb7b;hb=HEAD#l272
I guess this is faster (at least in rust).
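A regex-free check along those lines might look like this; a simplified sketch with looser rules than the linked acl.rs:

```rust
/// Split an ACL path like "/vms/100" into components without a regex,
/// rejecting paths that do not start with '/' and components containing
/// characters outside [A-Za-z0-9_.-].
/// (Simplified sketch; the real acl.rs rules differ in detail.)
fn split_acl_path(path: &str) -> Option<Vec<&str>> {
    if !path.starts_with('/') {
        return None;
    }
    let mut components = Vec::new();
    for comp in path[1..].split('/') {
        if comp.is_empty() {
            continue; // tolerate "//" and a trailing '/'
        }
        let ok = comp
            .chars()
            .all(|c| c.is_ascii_alphanumeric() || matches!(c, '_' | '.' | '-'));
        if !ok {
            return None;
        }
        components.push(comp);
    }
    Some(components)
}

fn main() {
    println!("{:?}", split_acl_path("/vms/100")); // Some(["vms", "100"])
}
```

A linear `chars().all(...)` scan per component is trivially O(n) with no regex-engine overhead, which is presumably why it beats a regex here.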
> On 04/19/2021 9:16 AM Lorenz Stechauner wrote:
>
>
> Syntax for permission pa
> This is a good article on the subject:
> https://klarasystems.com/articles/openzfs1-understanding-transparent-compression/
Can't find where they explain it. ZFS magically detects if data is compressible?
Please can someone give me a hint how they do that?
___
> I may be wrong, but AFAIK ZFS detects compressed data and thus it is not
> doing double-compression in such cases,
AFAIK the only way to detect compressed data is to actually compress it, then
test the size. So this is double-compression ...
___
> Is there a reason why we assume that users without subscription do not want
> such notifications?
>
> As far as I see it, if we change it to
> > $dccfg->{notify_updates} // 1
> Then (until they change something)
> - users with active subscription should _continue_ to get notifications
> - enterp
What about using a memory-mapped file as cache? That way, you do not
need to care about available memory.
> >> Maybe we could get the available memory and use that as hint, I mean as
> >> memory
> >> usage can be highly dynamic it will never be perfect, but better than just
> >> ignoring
> >> i
This code is quite strange. Please can you use a
normal if .. then .. else ..?
> +push @$cmd, '-H' if $healthonly;
> +push @$cmd, '-a', '-A', '-f', 'brief' if !$healthonly;
___
> >>> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> >>> index f401baf..e579cdf 100644
> >>> --- a/PVE/QemuServer.pm
> >>> +++ b/PVE/QemuServer.pm
> >>> @@ -6991,7 +6991,15 @@ sub clone_disk {
> >>> # that is given by the OVMF_VARS.fd
> >>> my $src_path = PVE::Storage::pa
> BUT we do NOT have experience with CEPH
> so please,
> can somebody send us example of HW configuration suitable for our idea
> ???
>
https://www.proxmox.com/de/downloads/item/proxmox-ve-ceph-benchmark-2020-09
___
> On 02/25/2021 4:09 PM mj wrote:
>
>
> On 2/25/21 3:22 PM, Dietmar Maurer wrote:
> > We are working on tape support ...
> >
>
> That is GREAT news. We are looking to replace our storix LTO tape backup
> infra, and we would love replacing it with PBS.
>
We are working on tape support ...
> On 02/25/2021 3:02 PM Simone Piccardi via pve-user
> wrote:
>
>
> Hi,
>
> I'm looking to P
> I hit the issue 3 times, and tried creating a new VM and a new Debian
> installation each time.
>
> Any ideas? I have the "broken" PBS backups saved.
Already tried to verify the datastore? (Datastore Verify is OK?)
___
This is true for anything. X11 forwarding simply works that way. So I am quite
unsure if we should add xauth here...
Or is this a common practice that I am unaware of?
>
> When installing the ha-simulator on a PVE node to start it via ssh with
> x11 forwarding, the xauth package helps to avoid `Un
> I would like to add new data storage. This storage would resemble ZFS
> over iSCSI but will use different API to access storage.
I am curious, what API exactly?
___
> On 12/06/2020 8:41 PM Kamil Trzciński wrote:
>
>
> I'm slightly progressing, but I stumbled across some `debcargo` problem. It
> appears that
> Proxmox uses their own fork of `debcargo`, which is needed in order to
> build crates
> without the usage of crates.io.
I guess you can simply ado
> On 12/01/2020 10:41 AM Dietmar Maurer wrote:
>
>
> > for (my $i = 100; $i < 1; $i++) {
> > - return $i if !defined($idlist->{$i});
> > + return int($i) if !defined($idlist->{$i});
>
> IMO, this does not solve the problem, becau
> for (my $i = 100; $i < 1; $i++) {
> - return $i if !defined($idlist->{$i});
> + return int($i) if !defined($idlist->{$i});
IMO, this does not solve the problem, because $i is already an int.
___
> It has been 5 months since my patch has been applied, however the version
> for pve-zsync has not been incremented and this patch is not in the version
> presented by the repo. What needs to be done?
Ok, just bumped the version and created a new package. So this will be part
of the next release
> What text do you mean exactly? The interface name?
> Arbitrary null-terminated byte string...
Ok
> (Yes I can name an interface "---" or 💩 (poop-emoji)...,
> neither of which our iface schema in JSONSchema.pm would allow...)
great.
___
Thanks for the info. But what encoding does that text use? I cannot find that
in RFC4007 (they only
talk about strings and text).
> > Answering myself, it is defined in RFC4007.
> >
> > But "man resolv.conf" say address must be RFC2373 ?
>
> It'll still work. It's a very common notation for lin
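The RFC 4007 notation simply appends a zone identifier after '%', so it can be split off without special parsing; a hypothetical helper, not PVE code:

```rust
/// Split an RFC 4007 scoped-address literal like "fe80::1%eth0"
/// into (address, optional zone identifier).
/// (Hypothetical helper for illustration only.)
fn split_zone_id(literal: &str) -> (&str, Option<&str>) {
    match literal.split_once('%') {
        // everything after the first '%' is the zone (e.g. an interface name)
        Some((addr, zone)) => (addr, Some(zone)),
        None => (literal, None),
    }
}

fn main() {
    println!("{:?}", split_zone_id("fe80::1%eth0")); // ("fe80::1", Some("eth0"))
}
```

The address part can then be validated against the plain RFC 2373/4291 syntax, which is why the notation still works with tools that only know the unscoped form.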
Answering myself, it is defined in RFC4007.
But "man resolv.conf" say address must be RFC2373 ?
> On 11/25/2020 12:08 PM Dietmar Maurer wrote:
>
>
> What kind of format is that? RFC2373 does not mention it. Please ca
What kind of format is that? RFC2373 does not mention it. Please can
you give me a hint?
> On 11/25/2020 11:36 AM Wolfgang Bumiller wrote:
>
>
> Signed-off-by: Wolfgang Bumiller
> ---
> changes to v2:
> * use `for of` loop in verify_ip64_address_list
>
> www/manager6/Toolkit.js | 17 ++
applied (and built a new proxmox-backup-qemu package)
___
> proxmox-backup-qemu is missing a not-pushed version bump commit, otherwise
> I'd have applied this.
sorry, just pushed the missing commit.
___
applied
___
> Container backup is very slow compared to VM backup. I have a 500 GB
> container (sftp server) with minimal changing files, but even the incremental
> backups take 2 hours with heavy disk activity. Almost nothing is transferred
> to the backup server. It seems that it reads the whole conta
> Whenever I run commands from within one of the nodes, they appear to be
> targeted to the local system only.
Yes, this is how it works.
___
applied
___
Answering myself, this works as expected.
I now simply use Arc::new(()) to count references.
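Using Arc::new(()) as a pure reference counter works because every clone shares one strong count; a minimal illustration, not the code from this thread:

```rust
use std::sync::Arc;

/// Returns the strong count after taking `n` extra clones of a
/// zero-sized Arc<()> used purely as a reference counter.
fn count_after_clones(n: usize) -> usize {
    let counter = Arc::new(());
    // each clone bumps the shared strong count; dropping one decrements it
    let clones: Vec<Arc<()>> = (0..n).map(|_| Arc::clone(&counter)).collect();
    let count = Arc::strong_count(&counter);
    drop(clones); // count returns to 1 once the clones are gone
    count
}

fn main() {
    println!("{}", count_after_clones(2)); // 3: the original plus 2 clones
}
```

Since `()` is zero-sized, the Arc carries nothing but its atomic counters, making it a cheap way to know how many holders of a resource are still alive.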
> On 11/03/2020 6:45 PM Dietmar Maurer wrote:
>
>
> > > +Ok((sock, _addr)) => {
> > > +sock.set_nodelay(true).unwrap();