[Puppet Users] Apache with HTTP2 and PHP setup using puppet

2024-05-20 Thread jochen....@gmail.com
Hi,

I would like to update my Apache/PHP servers to HTTP/2 but am facing 
several difficulties. It seems that Apache must run a multithreaded MPM for 
HTTP/2, and a multithreaded Apache does not work with mod_php. 
I have the feeling I will need something like php-fpm from now on, and it 
seems this cannot be configured using puppet::apache alone.

Looks like there are quite too many changes to cope with. I am looking for 
some examples of a sane Debian setup with Apache, HTTP/2 and PHP, managed 
by Puppet.

Can anyone share a link or two please?

Thanks, best
Jochen



[jira] [Commented] (IO-783) Fetching file extension using FilenameUtils.getExtension method throws error in windows machine

2024-05-19 Thread Jochen Wiedmann (Jira)


[ https://issues.apache.org/jira/browse/IO-783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847634#comment-17847634 ]

Jochen Wiedmann commented on IO-783:


I would like to point out that the check, as it is, is broken anyway. For 
example, c:\\file.txt is certainly a valid file name on Windows.

> Fetching file extension using FilenameUtils.getExtension method throws error 
> in windows machine
> ---
>
> Key: IO-783
> URL: https://issues.apache.org/jira/browse/IO-783
> Project: Commons IO
>  Issue Type: Bug
>  Components: Utilities
>Affects Versions: 2.7, 2.8.0, 2.9.0, 2.10.0, 2.11.0
> Environment: Run the below line of code in a Windows environment:
> String fileName = FilenameUtils.getExtension("Top of Information 
> Store\\Archive\\Informational-severity alert: eDiscovery search started or 
> exported.msg");
>         System.out.println(fileName);
>  
> We are getting the error,
> Exception in thread "main" java.lang.IllegalArgumentException: NTFS ADS 
> separator (':') in file name is forbidden.
>     at 
> org.apache.commons.io.FilenameUtils.indexOfExtension(FilenameUtils.java:737)
>     at 
> org.apache.commons.io.FilenameUtils.getExtension(FilenameUtils.java:1057)
>Reporter: Samraj
>Priority: Major
>
> Hi Team,
> I am using the FilenameUtils.getExtension method to get the file extension 
> from the file path (available as a string). One of the bug fixes that 
> happened after 2.7 breaks the code.
> Run the below line of code in a Windows environment:
> String fileName = FilenameUtils.getExtension("Top of Information 
> Store\\Archive\\Informational-severity alert: eDiscovery search started or 
> exported.msg");
>         System.out.println(fileName);
>  
> We are getting the error,
> Exception in thread "main" java.lang.IllegalArgumentException: NTFS ADS 
> separator (':') in file name is forbidden.
>     at 
> org.apache.commons.io.FilenameUtils.indexOfExtension(FilenameUtils.java:737)
>     at 
> org.apache.commons.io.FilenameUtils.getExtension(FilenameUtils.java:1057)





[jira] [Commented] (IO-830) Rethink AbstractOrigin

2024-05-17 Thread Jochen Wiedmann (Jira)


[ https://issues.apache.org/jira/browse/IO-830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847387#comment-17847387 ]

Jochen Wiedmann commented on IO-830:


[~elharo] While I agree with you in general (for example, I'd like to see a 
distinction between an origin based on a byte stream and an origin based on 
a character stream), I don't see any real pain. The mere fact that an 
UnsupportedOperationException is being used is (in my opinion) not enough 
reason to implement changes.

 

In particular, as far as I can tell, the UOE isn't actually thrown, but just 
declared in the Javadocs.

 

> Rethink AbstractOrigin
> --
>
> Key: IO-830
> URL: https://issues.apache.org/jira/browse/IO-830
> Project: Commons IO
>  Issue Type: Bug
>Reporter: Elliotte Rusty Harold
>Priority: Critical
>
> UnsupportedOperationException is a code smell that indicates the class 
> hierarchy doesn't really fit the problem and violates the Liskov Substitution 
> Principle.
> See 
> https://softwareengineering.stackexchange.com/questions/337850/is-expecting-the-api-user-to-implement-an-unsupportedoperationexception-okay
> It doesn't work to treat all origins the same. E.g. CharSequences really, 
> really need a character set before they can be converted to byte arrays or 
> input streams, but byte arrays and files don't. In reverse files need a 
> character set to be converted to a reader but char sequences don't.
> Different classes need different arguments, whether you use a builder or a 
> constructor. There's no common type here. 





[jira] [Commented] (IO-833) Every origin needs a charset

2024-05-17 Thread Jochen Wiedmann (Jira)


[ https://issues.apache.org/jira/browse/IO-833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17847382#comment-17847382 ]

Jochen Wiedmann commented on IO-833:


[~elharo] As far as I can tell, the implementation does use a Charset:

 

    private Charset charset = Charset.defaultCharset();

 

I admit that the choice of Charset is questionable; I'd have recommended 
UTF_8, but that can't be changed without losing compatibility.

 

> Every origin needs a charset
> 
>
> Key: IO-833
> URL: https://issues.apache.org/jira/browse/IO-833
> Project: Commons IO
>  Issue Type: Bug
>Reporter: Elliotte Rusty Harold
>Priority: Critical
>
> Every origin except possibly URIOrigin needs a charset. There is no reliable, 
> acceptable way to convert bytes to chars (ByteOrigin, PathOrigin) or chars to 
> bytes (CharSequenceOrigin) without it.
> The only possible exception is URIOrigin, which can have enough metadata to 
> usefully deduce the charset.
> Methods like getBytes and getReader should throw an IllegalStateException if 
> a charset is needed and not supplied.





Re: [Openvpn-users] TLS key negotiation failed to occur ISP screws up the VPN

2024-05-17 Thread Jochen Bern

On 17.05.24 15:49, shadowbladeee via Openvpn-users wrote:

Time is correct on the machines, certs expire in 2049.


Any *CRLs* that might have expired?

I note that the tcpdump shows only quite *small* packets. MTU issues 
that could lead to (persistent) loss of large ones from the other end?


Kind regards,
--
Jochen Bern
Systemingenieur

Binect GmbH




(commons-lang) branch master updated: Minor documentation fixes.

2024-05-16 Thread jochen
This is an automated email from the ASF dual-hosted git repository.

jochen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/commons-lang.git


The following commit(s) were added to refs/heads/master by this push:
 new 797f9a4f5 Minor documentation fixes.
797f9a4f5 is described below

commit 797f9a4f5d8746a8c2c5dc28c422176ead897516
Author: Jochen Wiedmann 
AuthorDate: Fri May 17 01:02:06 2024 +0200

Minor documentation fixes.
---
 src/main/java/org/apache/commons/lang3/annotations/Safe.java | 2 +-
 src/main/java/org/apache/commons/lang3/annotations/package-info.java | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/main/java/org/apache/commons/lang3/annotations/Safe.java 
b/src/main/java/org/apache/commons/lang3/annotations/Safe.java
index 4b5212c71..c3a710cf2 100644
--- a/src/main/java/org/apache/commons/lang3/annotations/Safe.java
+++ b/src/main/java/org/apache/commons/lang3/annotations/Safe.java
@@ -26,7 +26,7 @@ import java.lang.annotation.Target;
  * This annotation is used to indicate, that a variable, field, or parameter
  * contains a safe value. If so, the annotated element may be used in an
  * invocation of a constructor, or method, which is annotated with
- * {@code @Trusted}.
+ * {@code @Insecure}.
  *
  * For example, suggest the following method declaration:
  * 
diff --git 
a/src/main/java/org/apache/commons/lang3/annotations/package-info.java 
b/src/main/java/org/apache/commons/lang3/annotations/package-info.java
index 43d54d606..720d61069 100644
--- a/src/main/java/org/apache/commons/lang3/annotations/package-info.java
+++ b/src/main/java/org/apache/commons/lang3/annotations/package-info.java
@@ -30,7 +30,7 @@
  *   By annotating a variable with {@code @Safe}, the API user
  * declares, that the variable contains trusted input, that can be
  * used as a parameter in an invocation of a constructor, or method,
- * that is annotated with {@code @Trusted}.
+ * that is annotated with {@code @Insecure}.
  * 
  * @since 3.15
  */



(commons-lang) branch master updated: Adding the @Insecure, and @Safe annotations.

2024-05-16 Thread jochen
This is an automated email from the ASF dual-hosted git repository.

jochen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/commons-lang.git


The following commit(s) were added to refs/heads/master by this push:
 new 3322d9748 Adding the @Insecure, and @Safe annotations.
3322d9748 is described below

commit 3322d974876b8d4f934d3544967103ebbcaef726
Author: Jochen Wiedmann 
AuthorDate: Fri May 17 00:28:39 2024 +0200

Adding the @Insecure, and @Safe annotations.
---
 src/changes/changes.xml|   1 +
 .../apache/commons/lang3/annotations/Insecure.java |  48 
 .../org/apache/commons/lang3/annotations/Safe.java |  61 +++
 .../commons/lang3/annotations/package-info.java|  37 +++
 .../commons/lang3/annotations/AnnotationsTest.java | 122 +
 5 files changed, 269 insertions(+)

diff --git a/src/changes/changes.xml b/src/changes/changes.xml
index 34841687a..b69e1f8a2 100644
--- a/src/changes/changes.xml
+++ b/src/changes/changes.xml
@@ -140,6 +140,7 @@ The  type attribute can be add,update,fix,remove.
 Bump org.apache.commons:commons-text from 1.11.0 to 1.12.0 
#1200. 
 
 Drop obsolete JDK 13 Maven profile #1142.
+Added the annotations 
package, including the Insecure, and Safe annotations.
   
   
 
diff --git a/src/main/java/org/apache/commons/lang3/annotations/Insecure.java 
b/src/main/java/org/apache/commons/lang3/annotations/Insecure.java
new file mode 100644
index 0..2802f1189
--- /dev/null
+++ b/src/main/java/org/apache/commons/lang3/annotations/Insecure.java
@@ -0,0 +1,48 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.commons.lang3.annotations;
+
+import java.lang.annotation.Documented;
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+/**
+ * This annotation is used to indicate, that a constructor, or method
+ * is insecure to use, unless the input parameters contain safe ("trusted")
+ * values.
+ *
+ * For example, consider a method like 
+ *   {@literal @Insecure}
+ *   public void runCommand(String pCmdLine) {
+ *   }
+ * 
+ *
+ * The example method would invoke {@code /bin/sh} (Linux, Unix, or MacOS), or
+ * {@code cmd} (Windows) to run an external command, as given by the parameter
+ * {@code pCmdLine}. Obviously, depending on the value of the parameter,
+ * this can be dangerous, unless the API user (downstream developer)
+ * knows, that the parameter value is safe (for example, because it
+ * is hard coded, or because it has been compared to a white list of
+ * permissible values).
+ */
+@Retention(RetentionPolicy.RUNTIME)
+@Target({ElementType.CONSTRUCTOR, ElementType.METHOD})
+@Documented
+public @interface Insecure {
+}
diff --git a/src/main/java/org/apache/commons/lang3/annotations/Safe.java 
b/src/main/java/org/apache/commons/lang3/annotations/Safe.java
new file mode 100644
index 0..4b5212c71
--- /dev/null
+++ b/src/main/java/org/apache/commons/lang3/annotations/Safe.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.commons.lang3.annotations;
+
+import java.lang.annotation.Documented;
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.Retention;

[go-nuts] replacement for filepath.HasPrefix?

2024-05-16 Thread Jochen Voss
Dear all,

filepath.HasPrefix is deprecated because it doesn't always work. What 
would be a replacement for this function that at least respects path 
boundaries, and maybe also ignores case when needed?
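
For illustration, here is a minimal sketch of a boundary-respecting check
built on filepath.Rel; the helper name hasPathPrefix is mine, not from the
standard library, and case-insensitive matching (e.g. strings.EqualFold per
path segment) would still have to be added where needed:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// hasPathPrefix reports whether path equals prefix or lies inside the
// directory tree rooted at prefix. Unlike a raw string comparison it
// respects path boundaries, so "/foo" is not a prefix of "/foobar".
func hasPathPrefix(path, prefix string) bool {
	rel, err := filepath.Rel(prefix, path)
	if err != nil {
		// e.g. one path absolute, the other relative
		return false
	}
	// filepath.Rel yields "." for equal paths, and leading ".."
	// components when path escapes the prefix.
	if rel == "." {
		return true
	}
	return rel != ".." && !strings.HasPrefix(rel, ".."+string(filepath.Separator))
}

func main() {
	fmt.Println(hasPathPrefix("/foo/bar/baz", "/foo/bar")) // true
	fmt.Println(hasPathPrefix("/foo/barbaz", "/foo/bar"))  // false
}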

All the best,
Jochen



Bug#1071190: golang-github-shirou-gopsutil fails to build with no physical disks present

2024-05-15 Thread Jochen Sprickerhof
Source: golang-github-shirou-gopsutil
Version: 3.24.1-1
Severity: normal
X-Debbugs-Cc: d...@debian.org, wb-t...@buildd.debian.org
Control: affects -1 buildd.debian.org

Hi,

golang-github-shirou-gopsutil fails to build when there are no physical
drives mounted:

=== RUN   TestDisk_partitions
disk_test.go:38: error 
disk_test.go:40: []
disk_test.go:43: ret is empty
--- FAIL: TestDisk_partitions (0.00s)

This happens for example in the sbuild unshare backend.

Cheers Jochen



[Bug 2037302] Re: ros-robot and ros-simulators-dev missing related packages

2024-05-15 Thread Jochen Sprickerhof
apt show ros-robot gives:

Description: Python Robot OS robot metapackage
 This package is part of Robot OS (ROS). It is a metapackage which
 provides all the ROS robot system (including ROS base).
 .
 Different to upstream, this package does not provide:
 control_msgs, executive_smach, filters,
 xacro.
 Please install them from source, if you need them.

So it clearly describes the situation. What do you mean by packaging
errors?


Re: Groovy Poster for Community Over Code EU

2024-05-15 Thread Jochen Theodorou

I like it

On 14.05.24 13:47, Paul King wrote:

Hi folks,

We have a poster that will be displayed at Community Over Code EU in
Bratislava in a few weeks.

Here is my current draft:

https://github.com/apache/apachecon-eu/blob/main/static/posters/CoCEU_WhyGroovyToday.pdf

There is a small window to make changes before they send the posters
off to the printers. It will be printed I think on A1 size paper,
about 594mm W x 841mm H (23.4 x 33.1 inches).

At the moment, it is rich in technical content - perhaps a little
light in marketing the benefits. If I were to make changes I'd prefer
to reduce the former slightly and increase the latter. Let me
know if you have any feedback.

Thanks, Paul.




Re: weird error report

2024-05-15 Thread Jochen Theodorou

On 13.05.24 16:16, o...@ocs.cz wrote:
[...]

2074 ocs /tmp>  /usr/local/groovy-4.0.18/bin/groovy q
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
/private/tmp/q.groovy: 1: Unexpected input: '{' @ line 1, column 32.
def all=['hi','there'].findAll { it.startsWith('h')) }
   ^
1 error
2076 ocs /tmp>
===


I assume the problem is the )) after startsWith: the parser backtracks
because there is no opening ( for the closing one, and then finds that {
can't be valid input here.

Could you please make an issue for this? We should look into how we can
improve this.

bye  Jochen


[go-nuts] tls.VerifyClientCertIfGiven

2024-05-15 Thread Jochen Voss
Hello,

In a server I use tls.Config.ClientAuth = tls.VerifyClientCertIfGiven.
If a client then manages to connect and I can see a certificate in
http.Request.TLS.PeerCertificates, does this just mean that the client
has the certificate, or does it also prove that the client holds the 
associated private key?
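
As far as I understand, a TLS client that presents a certificate must also
prove possession of the matching private key by signing the handshake (the
CertificateVerify message), so a verified certificate in PeerCertificates
does imply the client holds the key. For reference, a minimal sketch of the
server side, with illustrative file names (server.crt, server.key, and
clients.pem holding the CA that issues client certificates):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// CA pool used to verify client certificates, when one is offered.
	caPEM, err := os.ReadFile("clients.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			ClientAuth: tls.VerifyClientCertIfGiven,
			ClientCAs:  pool,
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			if r.TLS != nil && len(r.TLS.PeerCertificates) > 0 {
				// Only reached if the chain verified against ClientCAs AND
				// the client signed the handshake with the matching key.
				fmt.Fprintf(w, "hello, %s\n", r.TLS.PeerCertificates[0].Subject.CommonName)
				return
			}
			fmt.Fprintln(w, "hello, anonymous")
		}),
	}
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}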

Many thanks,
Jochen



[Bug 2046047] Re: No aptX support

2024-05-15 Thread Jochen Sprickerhof
Thanks for the reply. The actual problem is that it must be
--with-libopenaptx, not --enable-libopenaptx as currently in the package.


[FRIAM] The Rise of the Maya Civilization

2024-05-14 Thread Jochen Fromm
Takeshi Inomata from the University of Arizona does interesting work on the 
rise of the Maya civilization: "Monumental architecture at Aguada Fenix and 
the rise of Maya civilization", Nature 582 (2020) 530-533.
https://pasolibre.grecu.mx/wp-content/uploads/2020/07/41586_2020_2343_opt.pdf

-J.


[ClusterLabs] Mixing globally-unique with non-globally-unique resources

2024-05-14 Thread Jochen
I have the following use case: there are several cluster IP addresses in the 
cluster. Each address is different, and multiple addresses can be scheduled on 
the same node; as far as I understand, this makes the address clone a 
globally-unique clone. Then I have one service per node which manages traffic 
for all addresses on a node where an address is active, which makes the 
service clone not globally unique. The service should only run if at least one 
address is active on the node, and there cannot be more than one instance of 
the service on each node.

How would I create this pattern in Pacemaker?


[Bug 2046047] Re: No aptX support

2024-05-13 Thread Jochen Sprickerhof
From my understanding --enable-aptx and --enable-aptx-hd depend on
https://github.com/Arkq/openaptx which states in the readme:

This project is for research purposes only. Without a proper license
private and commercial usage might be a case of a patent infringement.

So we can't distribute it in Debian or depend on it.


Re: [DISCUSS] RAT tickets

2024-05-13 Thread Jochen Wiedmann
On Sat, May 11, 2024 at 9:45 AM Claude Warren  wrote:
>
> I think that we should do the following:

...

> create a licenses section in the configuration.
>
> --licenses : a list of files to be read as license files.
> --licenses-approved : a list of license IDs to approve.
> --licenses-approved-file : A file containing license IDs to approve.
> --licenses-no-default : An enumeration of DEF (do not load license
> definitions), APPROVAL (do not load default license approvals)

Not exactly sure what you refer to as "the configuration". However,
I'd like to expand on your idea by proposing that we ought to have not
only "license files" in the source tree (or external to the source
tree in another shared location), but also a configuration file,
which controls Rat in the absence of command line options / Maven
properties / Ant arguments. Basically, by simply invoking Rat, the
configuration would be specified by the configuration file only. The
purpose of command line options / Maven properties / Ant arguments
would be to overrule the configuration file.

Jochen


Re: Replacing GrapeIvy

2024-05-12 Thread Jochen Theodorou

On 03.05.24 00:28, Paul King wrote:

Hi folks,

One of the things we know is that Apache Ivy (used by Grab/grapes) is
being maintained less these days. I am going to start a spike to
better understand what a replacement using Apache Maven Resolver might
look like. If anyone has strong opinions or free cycles and wants to
help, let me know and you can join in the fun. Otherwise, I'll create
future issues(s)/PR(s) for folks to look at in due course assuming all
goes well.


I'd love to help, but I currently have no spare cycles, sorry.

I was wondering... how does @GrabConfig(systemClassLoader=true) work
these days? From my knowledge this cannot work anymore in later JDK
versions (9+) because of the changes to the system class loader. Or did
we somehow bypass the limitation?

I personally use these in script files only. There is a potential class
loader suitable to do such work, but it is still not the system class
loader. If something really needed that, it would have no chance of
working.

I hear really, really little about such a nice feature having a problem.
So maybe it does not matter in reality?

bye Jochen



Bug#1070952: ros-vcstools: FTBFS in bullseye

2024-05-12 Thread Jochen Sprickerhof

Hi Santiago,

thanks for the report. This seems to be due to git 1:2.30.2-1+deb11u1, as 
it works with the version before (1:2.30.2-1). Given that it is a 
security fix and a testing-only problem that can be worked around easily, 
I would leave this as is.


Cheers Jochen

* Santiago Vila  [2024-05-11 21:53]:

Package: src:ros-vcstools
Version: 0.1.42-3
Severity: serious
Control: close -1 0.1.42-7
Tags: ftbfs bullseye

Dear maintainer:

During a rebuild of all packages in bullseye, your package failed to build:


[...]
debian/rules binary
dh binary --with python3 --buildsystem=pybuild
  dh_update_autotools_config -O--buildsystem=pybuild
  dh_autoreconf -O--buildsystem=pybuild
  dh_auto_configure -O--buildsystem=pybuild
I: pybuild base:232: python3.9 setup.py config
/<>/setup.py:3: DeprecationWarning: the imp module is deprecated 
in favour of importlib; see the module's documentation for alternative uses
 import imp
running config
  dh_auto_build -O--buildsystem=pybuild
I: pybuild base:232: /usr/bin/python3 setup.py build
/<>/setup.py:3: DeprecationWarning: the imp module is deprecated 
in favour of importlib; see the module's documentation for alternative uses
 import imp

[... snipped ...]

6 files updated, 0 files merged, 0 files removed, 0 files unresolved
ok
test_checkout_into_subdir_without_existing_parent (test.test_hg.HGClientTest) 
... updating to branch default
6 files updated, 0 files merged, 0 files removed, 0 files unresolved
ok
test_checkout_specific_version_and_update (test.test_hg.HGClientTest) ... 
updating to branch default
6 files updated, 0 files merged, 0 files removed, 0 files unresolved
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
pulling from /tmp/tmp18ac112f/remote
searching for changes
no changes found
0 files updated, 0 files merged, 2 files removed, 0 files unresolved
pulling from /tmp/tmp18ac112f/remote
searching for changes
no changes found
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
pulling from /tmp/tmp18ac112f/remote
searching for changes
no changes found
0 files updated, 0 files merged, 2 files removed, 0 files unresolved
pulling from /tmp/tmp18ac112f/remote
searching for changes
no changes found
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
ok
test_get_current_version_label (test.test_hg.HGClientTest) ... updating to 
branch default
6 files updated, 0 files merged, 0 files removed, 0 files unresolved
0 files updated, 0 files merged, 5 files removed, 0 files unresolved
pulling from /tmp/tmp18ac112f/remote
searching for changes
no changes found
5 files updated, 0 files merged, 0 files removed, 0 files unresolved
pulling from /tmp/tmp18ac112f/remote
searching for changes
no changes found
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
ok
test_get_environment_metadata (test.test_hg.HGClientTest) ... ok
test_get_remote_version (test.test_hg.HGClientTest) ... updating to branch 
default
6 files updated, 0 files merged, 0 files removed, 0 files unresolved
abort: destination '/tmp/tmp18ac112f/local' is not empty
pulling from /tmp/tmp18ac112f/remote
searching for changes
no changes found
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
pulling from /tmp/tmp18ac112f/remote
searching for changes
no changes found
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
ok
test_get_type_name (test.test_hg.HGClientTest) ... ok
test_get_url_by_reading (test.test_hg.HGClientTest) ... updating to branch 
default
6 files updated, 0 files merged, 0 files removed, 0 files unresolved
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
ok
test_get_url_nonexistant (test.test_hg.HGClientTest) ... ok
marked working directory as branch test_branch
(branches are permanent and global, did you want a bookmark?)
updating to branch default
6 files updated, 0 files merged, 0 files removed, 0 files unresolved
testStatusUntracked (test.test_hg.HGDiffStatClientTest) ... ok
test_diff (test.test_hg.HGDiffStatClientTest) ... ok
test_diff_relpath (test.test_hg.HGDiffStatClientTest) ... ok
test_get_version_modified (test.test_hg.HGDiffStatClientTest) ... ok
test_hg_diff_path_change_None (test.test_hg.HGDiffStatClientTest) ... ok
test_status (test.test_hg.HGDiffStatClientTest) ... ok
test_status_relpath (test.test_hg.HGDiffStatClientTest) ... ok
marked working directory as branch test_branch
(branches are permanent and global, did you want a bookmark?)
updating to branch default
6 files updated, 0 files merged, 0 files removed, 0 files unresolved
test_export_repository (test.test_hg.HGExportRepositoryClientTest) ... ok
marked working directory as branch test_branch
(branches are permanent and global, did you want a bookmark?)
updating to branch default
6 files updated, 0 files merged, 0 files removed, 0 files unresolved
test_get_branches (test.test_hg.HGGetBranchesClientTest) ... 


Bug#1070973: Please add a include_optional to /etc/mpd.conf

2024-05-12 Thread Jochen Sprickerhof
Package: mpd
Version: 0.23.14-2+b2
Severity: wishlist
Tags: patch

Hi,

can you please add an include_optional to simplify local modifications?

Something like this should do:

echo 'include_optional "mpd_local.conf"' >> debian/mpd.conf

Thanks!

Jochen



[Qt-creator] Unclear requirement in setting "Skip clean whitespace for file types"

2024-05-09 Thread Jochen Becher via Qt-creator
Hi,

I wondered why "Skip clean whitespace for file types" does not work: if
I edit a makefile named "Makefile", the tabs are still translated to
spaces on save.

There seem to be some unclear requirements here: the label of the
checkbox is "Skip clean whitespace for file types", but the
implementation names this checkbox "skipTrailingWhitespace", and that is
the functionality actually implemented.

I would like to fix that. Renaming the label to "Skip removing trailing
whitespaces for file types" doesn't make much sense to me. I would
prefer renaming the checkbox to "skipCleanupWhitespaceForFileTypes" and
implementing it that way.

I could also add another checkbox "skipTrailingWhitespace" with that
label but no text edit box for specific file types.

In the long term, most of the TextEditor settings should be mappable to
user-selectable file types.

What do you think?



Bug#1070332: Wont fix

2024-05-06 Thread Jochen Sprickerhof

Hi Thomas,

* Thomas Goirand  [2024-05-06 08:21]:
I already explained this: I am *NOT* interested in addressing this 
type of failure. Designate is "OpenStack DNS as a Service", therefore, 
it is expected that it's going to check/use /etc/resolv.conf. If you 
carefully look at what's going on, you'll see that it's not even doing 
DNS queries to the outside, it's simply testing itself.


Removing the test would mean less Q/A, which is not desirable.
"Fixing" the test would mean more work, which isn't needed in this 
case (the package works perfectly).


Feel free to bug upstream and resolve it there if you think that's 
valuable, though I am of the opinion it's a loss of time.


Also, note that the package builds perfectly fine in the current 
buildd environment (and on my laptop's sbuild setup). If that was 
going to change, of course, I'd review my opinion. In the mean time, I 
see no point in this bug. Fix your build env...


Note that the buildds started switching to the unshare backend, so the 
package will FTBFS soon.


Cheers Jochen




Bug#1070436: autopkgtest-virt-schroot: error when using 'unshare --net' even though schroot allows this

2024-05-05 Thread Jochen Sprickerhof

Hi Richard,

* Richard Lewis  [2024-05-05 11:32]:

If i try and run tests that use 'unshare --net' with a
schroot backend they fail inside autopkgtest even though
this works in the schroot being used.

This works fine in a 'plain schroot' (I expect I allowed
the calling user to run the schroot as root in the schroot
in /etc/schroot):

$ schroot --chroot chroot:unstable-amd64-sbuild --directory / --user root -- 
unshare --net --map-root-user ls
bin  boot  build  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  
sbin  srv  sys  tmp  usr  var


I can't reproduce this. Testing in a fresh debvm:

$ debvm-create --size=2G --release=stable -- \
--include=sbuild,schroot,debootstrap,autopkgtest \
--hook-dir=/usr/share/mmdebstrap/hooks/useradd
$ debvm-run
# echo "inside debvm"
# sbuild-createchroot unstable /srv/chroot/unstable-amd64-sbuild \
http://deb.debian.org/debian
# sbuild-adduser user
# su - user
$ schroot --chroot chroot:unstable-amd64-sbuild --directory / --user root -- 
unshare --net --map-root-user ls
unshare: unshare failed: Operation not permitted

Do you have any idea why it works for you?


But if i have an autopkgtest with eg a debian/tests/control with

Test-Command: unshare --map-root-user --net ./debian/tests/foo
Depends: @
Features: test-name=foo
Restrictions: needs-root


This looks odd. If you only want to unshare the network, as stated in 
the bug title, you need neither --map-root-user nor needs-root. Indeed, 
dropping both makes it work for me. Can you give some background on what 
you actually want to do here?



then even adding '--user root' doesn't work:

$ /usr/bin/autopkgtest package.changes --user root -- schroot 
unstable-amd64-sbuild


I guess this is because autopkgtest-virt-schroot starts an schroot 
session, but I can't verify this without reproducing your example 
without a session.



i get errors like

unshare: unshare failed: Operation not permitted


This maps to unshare(2) returning EPERM. From the manpage:

| CLONE_NEWUSER was specified in flags and the caller is in a chroot 
| environment (i.e., the caller's root directory does not match the root 
| directory of the mount namespace in which it resides).


I think this is what happens here.

Overall, I think using unshare --map-root-user in 
autopkgtest-virt-schroot is not supported, and I don't think there is a 
way around that except using a different autopkgtest backend.


Cheers Jochen




Bug#1070415: runc fails to build as a normal user due to cgroups access

2024-05-05 Thread Jochen Sprickerhof
Source: runc
Version: 1.1.12+ds1-2
Severity: important
X-Debbugs-Cc: debian-wb-team@lists.debian.org
Usertags: unshare

Hi,

runc tries to write cgroups files during the build which fails as a
normal user:

=== RUN   TestDevicesSetAllow
--- FAIL: TestDevicesSetAllow (0.00s)
panic: runtime error: index out of range [0] with length 0 [recovered]
panic: runtime error: index out of range [0] with length 0

goroutine 63 [running]:
testing.tRunner.func1.2({0x5e12c0, 0xc0001ed2c0})
/usr/lib/go-1.22/src/testing/testing.go:1631 +0x24a
testing.tRunner.func1()
/usr/lib/go-1.22/src/testing/testing.go:1634 +0x377
panic({0x5e12c0?, 0xc0001ed2c0?})
/usr/lib/go-1.22/src/runtime/panic.go:770 +0x132
github.com/opencontainers/runc/libcontainer/cgroups/fs.TestDevicesSetAllow(0xc0001fcd00)

/<>/_build/src/github.com/opencontainers/runc/libcontainer/cgroups/fs/devices_test.go:42
 +0x45e
testing.tRunner(0xc0001fcd00, 0x607748)
/usr/lib/go-1.22/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
/usr/lib/go-1.22/src/testing/testing.go:1742 +0x390
FAILgithub.com/opencontainers/runc/libcontainer/cgroups/fs  0.044s


https://salsa.debian.org/go-team/packages/runc/-/blob/debian/1.1.5+ds1-1/libcontainer/cgroups/fs/devices_test.go?ref_type=tags#L42

This also fails with the sbuild unshare backend.

Cheers Jochen



Bug#1070414: fails to build when not build inside schroot

2024-05-05 Thread Jochen Sprickerhof
Source: kel-agent
Version: 0.4.6-2
Severity: important
X-Debbugs-Cc: debian-wb-team@lists.debian.org
Usertags: unshare

Hi,

kel-agent hard-codes skipping a test when built inside schroot:

https://sources.debian.org/src/kel-agent/0.4.6-2/integration/suite_test.go/#L27

But the test also fails in other environments for me, for example as a
local user or in the sbuild unshare backend. Please either fix or
disable the test.

Cheers Jochen



Bug#1070413: sogo fails to build when test succeeds

2024-05-05 Thread Jochen Sprickerhof
Source: sogo
Version: 5.10.0-2
Severity: important

Hi,

the sogo package contains a patch that hard-codes the number of expected
test failures at two:

https://sources.debian.org/src/sogo/5.10.0-2/debian/patches/0006-Update-unit-test-expected-failures.patch/

This makes the package FTBFS when more tests succeed, for example in a
local build or in sbuild with the unshare backend. Please drop this
patch and fix or disable the failing tests instead.

Cheers Jochen



Bug#1070412: Fails to build due to hard coded OS platform

2024-05-05 Thread Jochen Sprickerhof
Source: golang-github-kardianos-service
Version: 1.2.0-2
Severity: important
X-Debbugs-Cc: debian-wb-team@lists.debian.org
Usertags: unshare

Hi,

golang-github-kardianos-service fails to build when it can't detect the
OS platform:

=== RUN   TestPlatformName
name_test.go:15: Platform is unix-systemv
name_test.go:18: Platform() want: /^linux-.*$/, got: unix-systemv
--- FAIL: TestPlatformName (0.00s)


This happens for example in the sbuild unshare backend.

The problem is that in the test:

https://sources.debian.org/src/golang-github-kardianos-service/1.2.1-1/name_test.go/?hl=13#L13

runtime.GOOS is hard coded to linux.

Cheers Jochen



Bug#1070411: containerd fails to build as a normal user due to sysctl

2024-05-05 Thread Jochen Sprickerhof
Source: containerd
Version: 1.6.20~ds1-1
Severity: important
X-Debbugs-Cc: debian-wb-team@lists.debian.org
Usertags: unshare

Hi,

containerd uses sysctl during the build which fails as a normal user:

=== RUN   TestLinuxSandboxContainerSpec
sandbox_run_linux_test.go:241: TestCase "spec should reflect original 
config"
sandbox_run_linux_test.go:71: Check PodSandbox annotations
sandbox_run_linux_test.go:124:
Error Trace:
/<>/_build/src/github.com/containerd/containerd/pkg/cri/server/sandbox_run_linux_test.go:124

/<>/_build/src/github.com/containerd/containerd/pkg/cri/server/sandbox_run_linux_test.go:259
Error:  "" does not contain "0 2147483647"
Test:   TestLinuxSandboxContainerSpec
sandbox_run_linux_test.go:241: TestCase "host namespace"
sandbox_run_linux_test.go:71: Check PodSandbox annotations
sandbox_run_linux_test.go:241: TestCase "should set supplemental groups 
correctly"
sandbox_run_linux_test.go:71: Check PodSandbox annotations
sandbox_run_linux_test.go:241: TestCase "should overwrite default sysctls"
sandbox_run_linux_test.go:71: Check PodSandbox annotations
sandbox_run_linux_test.go:241: TestCase "sandbox sizing annotations should 
be set if LinuxContainerResources were provided"
sandbox_run_linux_test.go:71: Check PodSandbox annotations
sandbox_run_linux_test.go:241: TestCase "sandbox sizing annotations should 
not be set if LinuxContainerResources were not provided"
sandbox_run_linux_test.go:71: Check PodSandbox annotations
sandbox_run_linux_test.go:241: TestCase "sandbox sizing annotations are 
zero if the resources are set to 0"
sandbox_run_linux_test.go:71: Check PodSandbox annotations
--- FAIL: TestLinuxSandboxContainerSpec (0.00s)

https://salsa.debian.org/go-team/packages/containerd/-/blob/debian/sid/pkg/cri/server/sandbox_run_linux_test.go#L124

This makes the build fail, for example, in the sbuild unshare backend.

Cheers Jochen



Bug#1070410: golang-github-pion-webrtc.v3 accesses the internet during build

2024-05-04 Thread Jochen Sprickerhof
Source: golang-github-pion-webrtc.v3
Version: 3.1.56-2
Severity: serious
Justification: Policy 4.9
X-Debbugs-Cc: d...@debian.org, wb-t...@buildd.debian.org
Control: affects -1 buildd.debian.org

Hi,

golang-github-pion-webrtc.v3 attempts network access during build.
This is forbidden by Policy 4.9:

  For packages in the main archive, required targets must not attempt
  network access, except, via the loopback interface, to services on the
  build host that have been started by the build.

This can be tested with the sbuild unshare backend:

=== NAME  TestDataChannelParamters_Go
util.go:41: Unexpected routines on test end:
goroutine 34 [select]:

github.com/pion/interceptor/pkg/nack.(*GeneratorInterceptor).loop(0xc240a0, 
{0x9f4b80, 0xc3ec30})

/<>/_build/src/github.com/pion/interceptor/pkg/nack/generator_interceptor.go:139
 +0x12d
created by 
github.com/pion/interceptor/pkg/nack.(*GeneratorInterceptor).BindRTCPWriter in 
goroutine 16

/<>/_build/src/github.com/pion/interceptor/pkg/nack/generator_interceptor.go:74
 +0x115

goroutine 35 [select]:

github.com/pion/interceptor/pkg/report.(*ReceiverInterceptor).loop(0xc0001303c0,
 {0x9f4b80, 0xc3ec30})

/<>/_build/src/github.com/pion/interceptor/pkg/report/receiver_interceptor.go:97
 +0x19c
created by 
github.com/pion/interceptor/pkg/report.(*ReceiverInterceptor).BindRTCPWriter in 
goroutine 16

/<>/_build/src/github.com/pion/interceptor/pkg/report/receiver_interceptor.go:86
 +0x115

goroutine 36 [select]:

github.com/pion/interceptor/pkg/report.(*SenderInterceptor).loop(0xc000130420, 
{0x9f4b80, 0xc3ec30})

/<>/_build/src/github.com/pion/interceptor/pkg/report/sender_interceptor.go:98
 +0x19c
created by 
github.com/pion/interceptor/pkg/report.(*SenderInterceptor).BindRTCPWriter in 
goroutine 16

/<>/_build/src/github.com/pion/interceptor/pkg/report/sender_interceptor.go:87
 +0x115

[...]

Cheers Jochen




Bug#1070409: golang-github-pion-ice.v2: accesses the internet during build

2024-05-04 Thread Jochen Sprickerhof
v2/transport_test.go:219
#   0x746424
github.com/pion/ice/v2.TestConnectionStateCallback+0x344
/<>/_build/src/github.com/pion/ice/v2/agent_test.go:653
#   0x4fa01atesting.tRunner+0xfa
/usr/lib/go-1.22/src/testing/testing.go:1689

1 @ 0x43f36e 0x4510c5 0x73a965 0x766d45 0x766d46 0x476061
#   0x73a964github.com/pion/ice/v2.(*Agent).connect+0x124   
/<>/_build/src/github.com/pion/ice/v2/transport.go:53
#   0x766d44github.com/pion/ice/v2.(*Agent).Accept+0x64 
/<>/_build/src/github.com/pion/ice/v2/transport.go:21
#   0x766d45github.com/pion/ice/v2.connect.func1+0x65   
/<>/_build/src/github.com/pion/ice/v2/transport_test.go:213

panic: timeout

goroutine 195 [running]:
github.com/pion/ice/v2.TestConnectionStateCallback.TimeOut.func2()
/<>/_build/src/github.com/pion/transport/test/util.go:24 
+0x8c
created by time.goFunc
/usr/lib/go-1.22/src/time/sleep.go:177 +0x2d
FAILgithub.com/pion/ice/v2  8.728s

Cheers Jochen



Bug#1070334: libnet-frame-device-perl needs network access during build

2024-05-03 Thread Jochen Sprickerhof
Source: libnet-frame-device-perl
Version: 1.12-1
Severity: serious
Justification: Policy 4.9
X-Debbugs-Cc: d...@debian.org, wb-t...@buildd.debian.org
Control: affects -1 buildd.debian.org

Hi,

libnet-frame-device-perl fails to build with no network connection:

1..1
# Running under perl version 5.038002 for linux
# Current time local: Sat Apr 27 12:53:04 2024
# Current time GMT:   Sat Apr 27 12:53:04 2024
# Using Test.pm version 1.31
ok 1 # skip Test::Pod 1.00 required for testing
ok
Net::Frame::Device: updateFromDefault: unable to get dnet

This can be tested with the sbuild unshare backend.

Cheers Jochen



Bug#1070333: python-eventlet fails to build with an empty /etc/resolv.conf

2024-05-03 Thread Jochen Sprickerhof
Source: python-eventlet
Version: 0.35.1-1
Severity: normal
X-Debbugs-Cc: d...@debian.org, wb-t...@buildd.debian.org
Control: affects -1 buildd.debian.org

Hi,

python-eventlet fails to build with no nameserver specified in
/etc/resolv.conf:

=== FAILURES ===
_ TestProxyResolver.test_clear _

self = 

def test_clear(self):
rp = greendns.ResolverProxy()
assert rp._cached_resolver is None
>   resolver = rp._resolver

tests/greendns_test.py:304:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
eventlet/support/greendns.py:347: in _resolver
self.clear()
eventlet/support/greendns.py:355: in clear
self._resolver = dns.resolver.Resolver(filename=self._filename)
/usr/lib/python3/dist-packages/dns/resolver.py:944: in __init__
self.read_resolv_conf(filename)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
f = <_io.TextIOWrapper name='/etc/resolv.conf' mode='r' encoding='UTF-8'>

def read_resolv_conf(self, f: Any) -> None:
"""Process *f* as a file in the /etc/resolv.conf format.  If f is
a ``str``, it is used as the name of the file to open; otherwise it
is treated as the file itself.

Interprets the following items:

- nameserver - name server IP address

- domain - local domain name

- search - search list for host-name lookup

- options - supported options are rotate, timeout, edns0, and ndots

"""

nameservers = []
if isinstance(f, str):
try:
cm: contextlib.AbstractContextManager = open(f)
except OSError:
# /etc/resolv.conf doesn't exist, can't be read, etc.
raise NoResolverConfiguration(f"cannot open {f}")
else:
cm = contextlib.nullcontext(f)
with cm as f:
for l in f:
if len(l) == 0 or l[0] == "#" or l[0] == ";":
continue
tokens = l.split()

# Any line containing less than 2 tokens is malformed
if len(tokens) < 2:
continue

if tokens[0] == "nameserver":
nameservers.append(tokens[1])
elif tokens[0] == "domain":
self.domain = dns.name.from_text(tokens[1])
# domain and search are exclusive
self.search = []
elif tokens[0] == "search":
# the last search wins
self.search = []
for suffix in tokens[1:]:
self.search.append(dns.name.from_text(suffix))
# We don't set domain as it is not used if
# len(self.search) > 0
   elif tokens[0] == "options":
for opt in tokens[1:]:
if opt == "rotate":
self.rotate = True
elif opt == "edns0":
self.use_edns()
elif "timeout" in opt:
try:
self.timeout = int(opt.split(":")[1])
except (ValueError, IndexError):
pass
elif "ndots" in opt:
try:
self.ndots = int(opt.split(":")[1])
except (ValueError, IndexError):
pass
if len(nameservers) == 0:
>   raise NoResolverConfiguration("no nameservers")
E   dns.resolver.NoResolverConfiguration: no nameservers

/usr/lib/python3/dist-packages/dns/resolver.py:1038: NoResolverConfiguration
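
The quoted parser treats a resolv.conf without any usable "nameserver" line
as fatal, which is exactly what an empty file produces. A minimal sketch of
that logic, for illustration only (the package and function names here are
invented; this is neither dnspython nor any stdlib code):

package resolvconf

import (
    "bufio"
    "errors"
    "io"
    "strings"
)

// nameserversFrom mirrors the quoted read_resolv_conf behavior: skip
// comments and malformed lines, collect nameserver entries, and fail
// when none are found.
func nameserversFrom(r io.Reader) ([]string, error) {
    var ns []string
    sc := bufio.NewScanner(r)
    for sc.Scan() {
        line := sc.Text()
        if line == "" || line[0] == '#' || line[0] == ';' {
            continue
        }
        fields := strings.Fields(line)
        if len(fields) < 2 {
            continue // any line with fewer than 2 tokens is malformed
        }
        if fields[0] == "nameserver" {
            ns = append(ns, fields[1])
        }
    }
    if err := sc.Err(); err != nil {
        return nil, err
    }
    if len(ns) == 0 {
        // the "no nameservers" condition seen in the build log
        return nil, errors.New("no nameservers")
    }
    return ns, nil
}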


This fails in sbuild with the unshare backend.

Cheers Jochen



Bug#1070332: designate fails to build with no nameserver specified in /etc/resolv.conf

2024-05-03 Thread Jochen Sprickerhof
Source: designate
Version: 1:18.0.0-1
Severity: normal
X-Debbugs-Cc: d...@debian.org, wb-t...@buildd.debian.org
Control: affects -1 buildd.debian.org

Hi,

designate fails to build with no nameserver specified in
/etc/resolv.conf:

==
FAIL: designate.tests.unit.mdns.test_handler.MdnsHandleTest.test_notify
designate.tests.unit.mdns.test_handler.MdnsHandleTest.test_notify
--
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/usr/lib/python3.12/unittest/mock.py", line 1390, in patched
return func(*newargs, **newkeywargs)
   ^
  File "/<>/designate/tests/unit/mdns/test_handler.py", line 79, 
in test_notify
self.assertEqual(dns.rcode.NOERROR, tuple(response)[0].rcode())
^^^
  File "/<>/designate/mdns/handler.py", line 142, in _handle_notify
resolver = dns.resolver.Resolver()
   ^^^
  File "/usr/lib/python3/dist-packages/dns/resolver.py", line 944, in __init__
self.read_resolv_conf(filename)
  File "/usr/lib/python3/dist-packages/dns/resolver.py", line 1038, in 
read_resolv_conf
raise NoResolverConfiguration("no nameservers")
dns.resolver.NoResolverConfiguration: no nameservers


This fails in sbuild with the unshare backend. Please disable the
failing tests:

designate.tests.unit.mdns.test_handler.MdnsHandleTest.test_notify
designate.tests.unit.mdns.test_handler.MdnsHandleTest.test_notify_same_serial

Cheers Jochen



Bug#1070325: fails to build without a non local IP

2024-05-03 Thread Jochen Sprickerhof
Source: servefile
Version: 0.5.4-3
Severity: normal
Tags: patch
X-Debbugs-Cc: d...@debian.org, wb-t...@buildd.debian.org
Control: affects -1 buildd.debian.org

Hi,

servefile fails to build when self.getIPs() does not return an IP:

Traceback (most recent call last):
  File "", line 198, in _run_module_as_main
  File "", line 88, in _run_code
  File 
"/<>/.pybuild/cpython3_3.12_servefile/build/servefile/__main__.py",
 line 3, in 
servefile.main()
  File 
"/<>/.pybuild/cpython3_3.12_servefile/build/servefile/servefile.py",
 line 1289, in main
server.serve()
  File 
"/<>/.pybuild/cpython3_3.12_servefile/build/servefile/servefile.py",
 line 1008, in serve
self.server.append(self._createServer(self.handler))
   
  File 
"/<>/.pybuild/cpython3_3.12_servefile/build/servefile/servefile.py",
 line 982, in _createServer
self.genKeyPair()
  File 
"/<>/.pybuild/cpython3_3.12_servefile/build/servefile/servefile.py",
 line 927, in genKeyPair
for ip in self.getIPs() + ["127.0.0.1", "::1"]:
  ~~^~
TypeError: unsupported operand type(s) for +: 'NoneType' and 'list'


This fails in sbuild with the unshare backend. A simple fix would be:

--- servefile-0.5.4.orig/servefile/servefile.py
+++ servefile-0.5.4/servefile/servefile.py
@@ -890,7 +890,7 @@ class ServeFile():
 ips = [ip for ip in ips if ':' in ip]

 return ips
-return None
+return []

 def setSSLKeys(self, cert, key):
 """ Set SSL cert/key. Can be either path to file or pyopenssl 
X509/PKey object. """


Cheers Jochen



Bug#1070324: fails to build when no local ssh server is running

2024-05-03 Thread Jochen Sprickerhof
Source: python-scrapli
Version: 2023.7.30-2
Severity: normal
X-Debbugs-Cc: d...@debian.org, wb-t...@buildd.debian.org
Control: affects -1 buildd.debian.org

Hi,

python-scrapli has a test that tries to connect to localhost port 22:

https://sources.debian.org/src/python-scrapli/2023.7.30-2/tests/unit/transport/base/test_base_socket.py/#L6

This fails in sbuild with the unshare backend:


=== FAILURES ===
 test_socket_open_close_isalive 

self = 
socket_address_families = {}

def _connect(self, socket_address_families: Set["socket.AddressFamily"]) -> 
None:
"""
Try to open socket to host using all possible address families

It seems that very occasionally when resolving a hostname (i.e. 
localhost during functional
tests against vrouter devices), a v6 address family will be the first 
af the socket
getaddrinfo returns, in this case, because the qemu hostfwd is not 
listening on ::1, instead
only listening on 127.0.0.1 the connection will fail. Presumably this 
is something that can
happen in real life too... something gets resolved with a v6 address 
but is denying
connections or just not listening on that ipv6 address. This little 
connect wrapper is
intended to deal with these weird scenarios.

Args:
socket_address_families: set of address families available for the 
provided host
really only should ever be v4 AND v6 if providing a hostname 
that resolves with
both addresses, otherwise if you just provide a v4/v6 address 
it will just be a
single address family for that type of address

Returns:
None

Raises:
ScrapliConnectionNotOpened: if socket refuses connection on all 
address families
ScrapliConnectionNotOpened: if socket connection times out on all 
address families

"""
for address_family_index, address_family in 
enumerate(socket_address_families, start=1):
self.sock = socket.socket(address_family, socket.SOCK_STREAM)
self.sock.settimeout(self.timeout)

try:
>   self.sock.connect((self.host, self.port))
E   ConnectionRefusedError: [Errno 111] Connection refused

scrapli/transport/base/base_socket.py:82: ConnectionRefusedError

The above exception was the direct cause of the following exception:

socket_transport = 

def test_socket_open_close_isalive(socket_transport):
"""Test socket initialization/opening"""
assert socket_transport.host == "localhost"
assert socket_transport.port == 22
assert socket_transport.timeout == 10.0

>   socket_transport.open()
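
As an aside, the _connect docstring quoted above describes a plain fallback
loop over address families. A hedged sketch of that idea, written in Go for
neutrality (dialFirstFamily is an invented name, unrelated to scrapli's
actual implementation):

package dial

import (
    "net"
    "strconv"
    "time"
)

// dialFirstFamily tries IPv4 first, then IPv6, and returns the first
// connection that succeeds; if both fail, the last error is reported.
func dialFirstFamily(host string, port int, timeout time.Duration) (net.Conn, error) {
    addr := net.JoinHostPort(host, strconv.Itoa(port))
    var lastErr error
    for _, network := range []string{"tcp4", "tcp6"} {
        conn, err := net.DialTimeout(network, addr, timeout)
        if err == nil {
            return conn, nil
        }
        lastErr = err
    }
    return nil, lastErr
}

Either way, the test above still needs a listening server, hence the failure
in an isolated build environment.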


Please disable those tests:

tests/unit/transport/base/test_base_socket.py::test_socket_open_close_isalive
tests/unit/transport/base/test_base_socket.py::test_socket_bool

Cheers Jochen



Bug#1070319: fails to build without a non lo IP address

2024-05-03 Thread Jochen Sprickerhof
Source: google-guest-agent
Version: 2026.00-6
Severity: normal
X-Debbugs-Cc: d...@debian.org, wb-t...@buildd.debian.org
Control: affects -1 buildd.debian.org

Hi,

google-guest-agent has a test that depends on having an IP address
available in the build environment:

https://sources.debian.org/src/google-guest-agent/2026.00-6/google_guest_agent/wsfc_test.go/#L206

This fails in sbuild with the unshare backend:

=== RUN   TestWsfcRunAgentE2E
wsfc_test.go:207: health check failed with , got = , want 1
wsfc_test.go:209: EOF
--- FAIL: TestWsfcRunAgentE2E (1.00s)

Cheers Jochen



Bug#1070317: fails to build without a non lo IP address

2024-05-03 Thread Jochen Sprickerhof

* Jochen Sprickerhof  [2024-05-03 18:55]:

This fails in sbuild with the chroot backend:


I mean the unshare backend.


signature.asc
Description: PGP signature


Bug#1070317: fails to build without a non lo IP address

2024-05-03 Thread Jochen Sprickerhof
Source: golang-github-likexian-gokit
Version: 0.25.9-3
Severity: normal
X-Debbugs-Cc: d...@debian.org, wb-t...@buildd.debian.org
Control: affects -1 buildd.debian.org


Hi,

golang-github-likexian-gokit has a test that depends on having an IP
address available in the build environment:

https://sources.debian.org/src/golang-github-likexian-gokit/0.25.9-3/xip/xip_test.go/#L213

This fails in sbuild with the chroot backend:

=== RUN   TestGetEthIPv4
assert.go:197: 
/<>/obj-x86_64-linux-gnu/src/github.com/likexian/gokit/xip/xip_test.go:213
assert.go:172: ! expected true, but got false
--- FAIL: TestGetEthIPv4 (0.00s)
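
One way such environment-dependent tests are commonly made robust, sketched
here purely as an illustration (hasNonLoopbackIPv4 and the guarded test name
are invented, not part of gokit), is to skip when no usable address exists:

package xip_test

import (
    "net"
    "testing"
)

// hasNonLoopbackIPv4 reports whether any interface carries a
// non-loopback IPv4 address.
func hasNonLoopbackIPv4() bool {
    addrs, err := net.InterfaceAddrs()
    if err != nil {
        return false
    }
    for _, a := range addrs {
        ipnet, ok := a.(*net.IPNet)
        if ok && !ipnet.IP.IsLoopback() && ipnet.IP.To4() != nil {
            return true
        }
    }
    return false
}

func TestGetEthIPv4Guarded(t *testing.T) {
    if !hasNonLoopbackIPv4() {
        t.Skip("no non-loopback IPv4 address in the build environment")
    }
    // the original assertions would run here
}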

Cheers Jochen



Re: Vulnerability in dropwizard-client

2024-04-29 Thread Jochen Schalanda
Hi Manuel,

Your dependency check is taking a sh*t on you and your valuable time. I would 
ditch it for something actually working.

For the record, Dropwizard 4.0.7 is not using any of the vulnerable versions of 
Apache HttpClient.

https://github.com/dropwizard/dropwizard/blob/v4.0.7/dropwizard-dependencies/pom.xml#L37-L38

The message mentions "metrics-httpclient5" which is an entirely different thing 
*and also not vulnerable*.

https://github.com/dropwizard/metrics/blob/v4.2.25/metrics-httpclient5/pom.xml#L21


Cheers,
Jochen

> On 24.04.2024 at 14:38, 'Manuel Baden' via dropwizard-dev
> wrote:
> 
> Hello there,
> 
> i am using dropwizard (version 4.0.7) and when i run a dependency check it 
> shows the following (transitive) vulnerability:
> 
> metrics-httpclient5-4.2.25.jar 
> (pkg:maven/io.dropwizard.metrics/metrics-httpclient5@4.2.25, 
> cpe:2.3:a:apache:httpclient:4.2.25:*:*:*:*:*:*:*) : CVE-2014-3577, 
> CVE-2020-13956
> 
> Is this problem getting fixed?
> 
> Thank you for your help
> Manuel

-- 
You received this message because you are subscribed to the Google Groups 
"dropwizard-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dropwizard-dev+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/dropwizard-dev/546E5471-CB71-4840-9B25-7682F692EEAA%40schalanda.name.


[Freeipa-users] Fedora 40: new warning in ipa-healthcheck

2024-04-26 Thread Jochen Kellner via FreeIPA-users

Hi,

I've upgraded my freeipa server to Fedora 40 (the system was installed
several releases ago). After the upgrade I get the following new warning
from ipa-healthcheck:

  {
"source": "ipahealthcheck.ds.backends",
"check": "BackendsCheck",
"result": "WARNING",
"uuid": "875db8e3-029c-46f7-87e5-bf9a216d9637",
"when": "20240426184431Z",
"duration": "0.031642",
"kw": {
  "key": "DSBLE0005",
  "items": [
"nsslapd-dbcachesize",
"nsslapd-db-logdirectory",
"nsslapd-db-transaction-wait",
"nsslapd-db-checkpoint-interval",
"nsslapd-db-compactdb-interval",
"nsslapd-db-compactdb-time",
"nsslapd-db-transaction-batch-val",
"nsslapd-db-transaction-batch-min-wait",
"nsslapd-db-transaction-batch-max-wait",
"nsslapd-db-logbuf-size",
"nsslapd-db-page-size",
"nsslapd-db-locks",
"nsslapd-db-locks-monitoring-enabled",
"nsslapd-db-locks-monitoring-threshold",
"nsslapd-db-locks-monitoring-pause",
"nsslapd-db-private-import-mem",
"nsslapd-db-deadlock-policy"
  ],
  "msg": "Found configuration attributes that are not applicable for the 
configured backend type."
}
  },

According to
https://www.port389.org/docs/389ds/FAQ/Berkeley-DB-deprecation.html the
bdb backend is deprecated. The system was installed with
389-ds-base < 1.4.4.9-1.fc33.x86_64 (I see the upgrade to that version
in /var/log/dnf.rpm.log*). Since 3.0 new installations should use LMDB as
the backend. Is that true for new installations?

What is the desired action that I should take?

I can remove the options from the dirsrv configuration. Should I?

Shall I switch to lmdb manually? Or is that something that
ipa-server-upgrade should be doing?

Otherwise I can suppress the message in ipa-healthcheck for now. But I
guess I should fix my installation before the deprecated support really
gets dropped... Is deploying a new replica and decommissioning the old
server the preferred action?

Jochen

-- 
This space is intentionally left blank.
--
___
FreeIPA-users mailing list -- freeipa-users@lists.fedorahosted.org
To unsubscribe send an email to freeipa-users-le...@lists.fedorahosted.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedorahosted.org/archives/list/freeipa-users@lists.fedorahosted.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


Bug#1069809: xhtml2pdf accesses network resources during the build

2024-04-25 Thread Jochen Sprickerhof
Source: xhtml2pdf
Version: 0.2.15+dfsg-1
Severity: serious
Tags: sid trixie ftbfs

xhtml2pdf accesses network resources during the build:

==
FAIL: test_document_cannot_identify_image 
(tests.test_document.DocumentTest.test_document_cannot_identify_image)
Test that images which cannot be identified don't cause stack trace to be 
printed
--
Traceback (most recent call last):
  File 
"/build/package/package/.pybuild/cpython3_3.11_xhtml2pdf/build/tests/test_document.py",
 line 189, in test_document_cannot_identify_image
self.assertEqual(
AssertionError: Lists differ: ['WAR[16 chars]ags:Could not get image data from 
src attribut[265 chars]>\''] != ['WAR[16 chars]ags:Cannot identify image 
file:\n\'\''

+ ['WARNING:xhtml2pdf.tags:Cannot identify image file:\n'
- ['WARNING:xhtml2pdf.tags:Could not get image data from src attribute: '
-  
'https://raw.githubusercontent.com/python-pillow/Pillow/7921da54a73dd4a30c23957369b79cda176005c6/Tests/images/zero_width.gif\n'
   "'https://raw.githubusercontent.com/python-pillow/Pillow/7921da54a73dd4a30c23957369b79cda176005c6/Tests/images/zero_width.gif"/>\'']

==
FAIL: test_document_with_broken_image 
(tests.test_document.DocumentTest.test_document_with_broken_image)
Test that broken images don't cause unhandled exception
--
Traceback (most recent call last):
  File 
"/build/package/package/.pybuild/cpython3_3.11_xhtml2pdf/build/tests/test_document.py",
 line 169, in test_document_with_broken_image
self.assertEqual(
AssertionError: Lists differ: [] != 
["WARNING:xhtml2pdf.xhtml2pdf_reportlab:SV[151 chars]ml'"]

Second list contains 1 additional elements.
First extra element 0:
"WARNING:xhtml2pdf.xhtml2pdf_reportlab:SVG drawing could not be resized: 
'https://raw.githubusercontent.com/xhtml2pdf/xhtml2pdf/b01b1d8f9497dedd0f2454409d03408bdeea997c/tests/samples/images.html'"

- []
+ ['WARNING:xhtml2pdf.xhtml2pdf_reportlab:SVG drawing could not be resized: '
+  
"'https://raw.githubusercontent.com/xhtml2pdf/xhtml2pdf/b01b1d8f9497dedd0f2454409d03408bdeea997c/tests/samples/images.html'"]



Bug#1069805: scikit-build tries pip install during build

2024-04-25 Thread Jochen Sprickerhof
Source: scikit-build
Version: 0.17.6-1
Severity: serious
Tags: trixie sid ftbfs

scikit-build accesses network resources during the build:

process = 
stdout = None, stderr = None, retcode = 1

def run(*popenargs,
input=None, capture_output=False, timeout=None, check=False, 
**kwargs):
"""Run command with arguments and return a CompletedProcess instance.

The returned instance will have attributes args, returncode, stdout and
stderr. By default, stdout and stderr are not captured, and those 
attributes
will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture 
them,
or pass capture_output=True to capture both.

If check is True and the exit code was non-zero, it raises a
CalledProcessError. The CalledProcessError object will have the return 
code
in the returncode attribute, and output & stderr attributes if those 
streams
were captured.

If timeout is given, and the process takes too long, a TimeoutExpired
exception will be raised.

There is an optional argument "input", allowing you to
pass bytes or a string to the subprocess's stdin.  If you use this 
argument
you may not also use the Popen constructor's "stdin" argument, as
it will be used internally.

By default, all communication is in bytes, and therefore any "input" 
should
be bytes, and the stdout and stderr will be bytes. If in text mode, any
"input" should be a string, and stdout and stderr will be strings 
decoded
according to locale encoding, or by "encoding" if set. Text mode is
triggered by setting any of text, encoding, errors or 
universal_newlines.

The other arguments are the same as for the Popen constructor.
"""
if input is not None:
if kwargs.get('stdin') is not None:
raise ValueError('stdin and input arguments may not both be 
used.')
kwargs['stdin'] = PIPE

if capture_output:
if kwargs.get('stdout') is not None or kwargs.get('stderr') is not 
None:
raise ValueError('stdout and stderr arguments may not be used '
 'with capture_output.')
kwargs['stdout'] = PIPE
kwargs['stderr'] = PIPE

with Popen(*popenargs, **kwargs) as process:
try:
stdout, stderr = process.communicate(input, timeout=timeout)
except TimeoutExpired as exc:
process.kill()
if _mswindows:
# Windows accumulates the output in a single blocking
# read() call run on child threads, with the timeout
# being done in a join() on those threads.  communicate()
# _after_ kill() is required to collect that and add it
# to the exception.
exc.stdout, exc.stderr = process.communicate()
else:
# POSIX _communicate already populated the output so
# far into the TimeoutExpired exception.
process.wait()
raise
except:  # Including KeyboardInterrupt, communicate handled that.
process.kill()
# We don't call process.wait() as .__exit__ does that for us.
raise
retcode = process.poll()
if check and retcode:
>   raise CalledProcessError(retcode, process.args,
 output=stdout, stderr=stderr)
E   subprocess.CalledProcessError: Command '['/usr/bin/python3.12', 
'-m', 'pip', 'wheel', '--wheel-dir', 
'/tmp/pytest-of-jspricke/pytest-21/wheelhouse0', '/build/package/package']' 
returned non-zero exit status 1.



Bug#1069804: rust-mio-0.6 accesses network resources during the build

2024-04-25 Thread Jochen Sprickerhof
Source: rust-mio-0.6
Version: 0.6.23-3
Severity: serious
Tags: sid trixie ftbfs

rust-mio-0.6 accesses network resources during the build:

Test executable failed (exit status: 101).

stderr:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Os { 
code: 101, kind: NetworkUnreachable, message: "Network is unreachable" }', 
src/sys/unix/ready.rs:22:16
stack backtrace:
   0: rust_begin_unwind
 at /usr/src/rustc-1.70.0/library/std/src/panicking.rs:578:5
   1: core::panicking::panic_fmt
 at /usr/src/rustc-1.70.0/library/core/src/panicking.rs:67:14
   2: core::result::unwrap_failed
 at /usr/src/rustc-1.70.0/library/core/src/result.rs:1687:5
   3: core::result::Result::unwrap
   4: rust_out::main
   5: core::ops::function::FnOnce::call_once
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose 
backtrace.



failures:
src/poll.rs - poll::Poll (line 267)
src/poll.rs - poll::Poll::deregister (line 877)
src/poll.rs - poll::Poll::register (line 735)
src/poll.rs - poll::Poll::reregister (line 820)
src/sys/unix/ready.rs - sys::unix::ready::UnixReady (line 66)

test result: FAILED. 74 passed; 5 failed; 0 ignored; 0 measured; 0 filtered 
out; finished in 4.37s



Bug#1069803: php-codeigniter-framework tries pip install during build

2024-04-25 Thread Jochen Sprickerhof
Source: php-codeigniter-framework
Version: 3.1.13+dfsg1-3
Severity: serious
Tags: sid trixie ftbfs

php-codeigniter-framework accesses network resources during the build:

python3 -m venv --system-site-packages --without-pip debian/build-doc/pythonvenv
if ! debian/build-doc/pythonvenv/bin/python -m pip show pycilexer; then \
  echo "Installing pycilexer" && \
  cd user_guide_src/cilexer && \
  ../../debian/build-doc/pythonvenv/bin/python -m pip install .; \
fi
WARNING: Package(s) not found: pycilexer
Installing pycilexer
Processing /build/package/package/user_guide_src/cilexer
  Installing build dependencies: started
  Installing build dependencies: finished with status 'error'
  error: subprocess-exited-with-error

  × pip subprocess to install build dependencies did not run successfully.
  │ exit code: 1
  ╰─> [7 lines of output]
  WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, 
status=None)) after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -3] Temporary 
failure in name resolution')': /simple/setuptools/
  WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, 
status=None)) after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -3] Temporary 
failure in name resolution')': /simple/setuptools/
  WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, 
status=None)) after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -3] Temporary 
failure in name resolution')': /simple/setuptools/
  WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, 
status=None)) after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -3] Temporary 
failure in name resolution')': /simple/setuptools/
  WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, 
status=None)) after connection broken by 
'NewConnectionError(': Failed to establish a new connection: [Errno -3] Temporary 
failure in name resolution')': /simple/setuptools/
  ERROR: Could not find a version that satisfies the requirement 
setuptools>=40.8.0 (from versions: none)
  ERROR: No matching distribution found for setuptools>=40.8.0
  [end of output]

  note: This error originates from a subprocess, and is likely not a problem 
with pip.
error: subprocess-exited-with-error

× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with 
pip.


Re: [ClusterLabs] Remote nodes in an opt-in cluster

2024-04-23 Thread Jochen



> On 23. Apr 2024, at 17:41, Andrei Borzenkov  wrote:
> 
> On 23.04.2024 10:02, Jochen wrote:
>> When trying to add a remote node to an opt-in cluster, the cluster does not 
>> start the remote resource. When I change the cluster to opt-out the remote 
>> resource is started.
> 
> It's not clear what do you mean. Is "remote resource" the resource used to 
> integrate the remote node (i.e. ocf:pacemaker:remote) or is the "remote 
> resource" a resource you want to start on the remote node itself?

The "ocf:pacemaker:remote" resource to integrate the remote node.

> 
>> I guess I have to add a location constraint to allow the cluster to schedule 
>> the resource. Is that correct?
>> And if yes, how do I create a location constraint to allow the cluster to 
>> start the remote resource anywhere on the cluster? Since I don't want to 
>> name each node in the constraint, I looked for a rule that always is true, 
>> or an attribute that is defined by default, but did not find one. I then 
>> tried
>>  crm configure location skylla-location skylla rule 
>> skylla-location-rule: defined '#uname'
>> But this did not work either. Any help would be greatly appreciated.
>> Regards
>> Jochen
>> ___
>> Manage your subscription:
>> https://lists.clusterlabs.org/mailman/listinfo/users
>> ClusterLabs home: https://www.clusterlabs.org/
> 

___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


RE: No physical disk in FAI 6.2.2 plain text

2024-04-23 Thread Paul, Jochen
Hi there,

now from my business account. The fixed version is working wonderfully. Thanks
for solving this issue.
Freundliche Grüße / Kind regards,

i.A. Jochen Paul
Softwareentwickler / Softwaredeveloper
Research & Development

Fon: +49 6274 932 341
jochen.p...@mosca.com


Re: [FRIAM] Daniel Dennett (1942-2024)

2024-04-23 Thread Jochen Fromm
Jennifer Ouellette's piece on Daniel is well written too - like his books very
readable and always interesting. Jennifer is the wife of Sean Carroll.
https://arstechnica.com/science/2024/04/philosopher-daniel-dennett-dead-at-82/

-J.

 Original message 
From: Russ Abbott
Date: 4/23/24 12:16 AM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Daniel Dennett (1942-2024)

Stephen,

Thanks for the link to Michael Levin's piece on Dennett. Levin, also at Tufts,
is one of the most insightful and creative biologists around. The article
points to a joint Aeon piece Levin wrote with Dennett. Even though it was
published 3 1/2 years ago, it's very much worth reading. It will give you a
sense of the kind of work Levin does. It also illustrates Dennett's notion of
competence without comprehension from his From Bacteria to Bach and Back,
published 3 years earlier. The article is a very nice example of the melding
of two minds.

Many will want to maintain that real cognition is what brains do, and what
happens in biochemistry only seems like it's doing similar things. We propose
an inversion of this familiar idea; the point is not to anthropomorphise
morphogenesis – the point is to naturalise cognition. There is nothing magic
that humans (or other smart animals) do that doesn't have a phylogenetic
history. Taking evolution seriously means asking what cognition looked like
all the way back. Modern data in the field of basal cognition makes it
impossible to maintain an artificial dichotomy of 'real' and 'as-if'
cognition. There is one continuum along which all living systems (and many
nonliving ones) can be placed, with respect to how much thinking they can do.

--
Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles

On Sun, Apr 21, 2024 at 1:15 PM Stephen Guerin wrote:

Michael Levin's farewell to Dan Dennett:
https://thoughtforms.life/farewell-dan-dennett-i-will-really-miss-you/

On Fri, Apr 19, 2024 at 2:46 PM Stephen Guerin wrote:

Such a loss :-(
https://www.telegraph.co.uk/obituaries/2024/04/19/daniel-dennett-philosopher-atheist-darwinist/

yes a loss of a great person and intellectual. Though in the loss is the
possibility of progress, if you consider the above "atheist-darwinist" url
representing a certain paradigm in which Dennett has been cast as a central
figure. Planck's Principle on paradigms and funerals:

An important scientific innovation rarely makes its way by gradually winning
over and converting its opponents: it rarely happens that Saul becomes Paul.
What does happen is that its opponents gradually die out, and that the growing
generation is familiarized with the ideas from the beginning: another instance
of the fact that the future lies with the youth.
— Max Planck, Scientific autobiography, 1950, p. 33, 97

Colloquially, this is often paraphrased as "Science progresses one funeral at
a time".

_
Stephen Guerin
CEO, Founder https://simtable.com
stephen.gue...@simtable.com
stephenguerin@fas.harvard.edu
Harvard Visualization Research and Teaching Lab
mobile: (505)577-5828

On Fri, Apr 19, 2024 at 1:03 PM Jochen Fromm wrote:

Such a loss :-(
https://www.telegraph.co.uk/obituaries/2024/04/19/daniel-dennett-philosopher-atheist-darwinist/

I will put his autobiography "I've Been Thinking" from last year on my reading
list:
https://www.theguardian.com/books/2023/oct/01/ive-been-thinking-by-daniel-c-dennett-review-an-engaging-vexing-memoir-with-a-humility-bypass

-J.

 Original message 
From: Jochen Fromm
Date: 4/19/24 7:32 PM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group
Subject: [FRIAM] Daniel Dennett (1942-2024)

A sad day today. Daniel Dennett has died :-( For every big question in
philosophy there is at least one Daniel Dennett book:

"Consciousness Explained" (1991) about consciousness
"Darwin's Dangerous Idea" (1995) about evolution
"Freedom Evolves" (2003) about free will
"Breaking the spell" (2006) about religion

https://dailynous.com/2024/04/19/daniel-dennett-death-1942-2024/

-J.
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/



[ClusterLabs] Remote nodes in an opt-in cluster

2024-04-23 Thread Jochen
When trying to add a remote node to an opt-in cluster, the cluster does not 
start the remote resource. When I change the cluster to opt-out the remote 
resource is started.

I guess I have to add a location constraint to allow the cluster to schedule 
the resource. Is that correct?

And if yes, how do I create a location constraint to allow the cluster to start 
the remote resource anywhere on the cluster? Since I don't want to name each 
node in the constraint, I looked for a rule that always is true, or an 
attribute that is defined by default, but did not find one. I then tried

crm configure location skylla-location skylla rule 
skylla-location-rule: defined '#uname'

But this did not work either. Any help would be greatly appreciated.

Regards
Jochen



___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/


Re: [go-nuts] generics question

2024-04-23 Thread Jochen Voss
Thank you both!

This works, see my code below.  Followup question: is there a way to refer 
to the new type without having to list both the element type and the 
pointer type separately?  Below I have to write C[A, *A], which looks 
slightly ugly.  And in my real application something like Creator 
OrderedArray[ProperName] would look much more readable than Creator 
OrderedArray[ProperName, *ProperName].

Many thanks,
Jochen

type Type interface{}

type Settable[E Type] interface {
    *E
    Set(int)
}

type C[E any, P Settable[E]] struct {
    Val []E
}

func (c *C[E, P]) Set(v int) {
    for i := range c.Val {
        var ptr P = &(c.Val[i])
        ptr.Set(v)
    }
}

var c = C[A, *A]{Val: []A{1, 2, 3}}
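
As a possible answer to the follow-up question, sketched here rather than
taken from the thread: a plain (non-generic) alias can name a concrete
instantiation once, so call sites no longer repeat the pointer parameter.
CA is a name invented for the example and assumes the definitions above:

// Alias to the fully instantiated type; valid since Go 1.18.
type CA = C[A, *A]

var c2 = CA{Val: []A{4, 5, 6}}

A parameterized alias like "type D[E any] = C[E, *E]" would remove the
repetition entirely, but alias declarations could not take type parameters
in Go versions current at the time of this thread.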

On Tuesday 23 April 2024 at 03:36:25 UTC+1 Nagaev Boris wrote:

> On Mon, Apr 22, 2024 at 9:54 PM Ian Lance Taylor  wrote:
> >
> > On Mon, Apr 22, 2024 at 2:25 PM Jochen Voss  wrote:
> > >
> > > Using generics, can I somehow write a constraint which says that *T 
> (instead of T) implements a certain interface? The following code 
> illustrated what I'm trying to do:
> > >
> > > type A int
> > >
> > > func (a *A) Set(x int) {
> > > *a = A(x)
> > > }
> > >
> > > type B string
> > >
> > > func (b *B) Set(x int) {
> > > *b = B(strconv.Itoa(x))
> > > }
> > >
> > > type C1 struct {
> > > Val []A
> > > }
> > >
> > > func (c *C1) Set(v int) {
> > > for i := range c.Val {
> > > c.Val[i].Set(v)
> > > }
> > > }
> > >
> > > type C2 struct {
> > > Val []B
> > > }
> > >
> > > func (c *C2) Set(v int) {
> > > for i := range c.Val {
> > > c.Val[i].Set(v)
> > > }
> > > }
> > >
> > > I would like to use generics to use a single definition for the 
> methods which here are func (c *C1) Set(v int) and func (c *C2) Set(v int). 
> (My real code has many base types, instead of just A and B.) How can I do 
> this?
> > >
> > > I tried the naive approach:
> > >
> > > type C[T interface{ Set(int) }] struct {
> > > Val []T
> > > }
> > >
> > > but when I try to use the type C[A] now, I get the error message "A 
> does not satisfy interface{Set(int)} (method Set has pointer receiver)".
> >
> >
> > type C[P interface {
> > *E
> > Set(int)
> > }, E any] struct {
> > Val []P
> > }
> >
> > Ian
> >
>
> I think it should be this (s/Val []P/Val []E/):
>
> type C[P interface {
> *E
> Set(int)
> }, E any] struct {
> Val []E
> }
>
>
>
> -- 
> Best regards,
> Boris Nagaev
>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/c5571d6c-d4d1-4cb9-9617-d5405f68d66an%40googlegroups.com.


[go-nuts] generics question

2024-04-22 Thread Jochen Voss
Using generics, can I somehow write a constraint which says that *T 
(instead of T) implements a certain interface?  The following code 
illustrated what I'm trying to do:

type A int

func (a *A) Set(x int) {
    *a = A(x)
}

type B string

func (b *B) Set(x int) {
    *b = B(strconv.Itoa(x))
}

type C1 struct {
    Val []A
}

func (c *C1) Set(v int) {
    for i := range c.Val {
        c.Val[i].Set(v)
    }
}

type C2 struct {
    Val []B
}

func (c *C2) Set(v int) {
    for i := range c.Val {
        c.Val[i].Set(v)
    }
}

I would like to use generics to use a single definition for the methods 
which here are func (c *C1) Set(v int) and func (c *C2) Set(v int).  (My 
real code has many base types, instead of just A and B.)  How can I do this?

I tried the naive approach:

type C[T interface{ Set(int) }] struct {
    Val []T
}

but when I try to use the type C[A] now, I get the error message "A does 
not satisfy interface{Set(int)} (method Set has pointer receiver)".

Many thanks,
Jochen

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/e6c1647a-9ade-496e-85b5-0e96563b5e22n%40googlegroups.com.


Bug#1065193: RFS: libhx/4.23-1 [RC] -- C library providing queue, tree, I/O and utility functions

2024-04-21 Thread Jochen Sprickerhof

Hi Jörg,

I have fixed the old debian/changelog entry and uploaded the new version 
to DELAYED/7. Please feel free to tell me if I should delay it longer.


The patch is attached.

Cheers Jochen

* Jochen Sprickerhof  [2024-04-10 07:25]:

Hi Jörg,

is there anything I can help with to get libhx updated?
(I agree with what Tobias said below.)

Cheers Jochen


* Tobias Frost  [2024-03-17 20:22]:

Control: tags -1 moreinfo

On Sun, Mar 17, 2024 at 04:57:54PM +0100, Jörg Frings-Fürst wrote:

tags 1065193 - moreinfo
thanks

Hi Tobias,



Am Sonntag, dem 17.03.2024 um 14:39 +0100 schrieb Tobias Frost:

Hi Jörg,

"debcheckout libhx" still gives me 4.17-1 as head.

After looking at your repo, I think I should point you to DEP-14
as a recommended git layout for packaging.
As the branch name indicates you are using per-package-revision
branches:
IMHO It makes little sense to have one branch per debian package
version/revision; (I had a similar discussion on vzlogger lately,
so to avoid repetitions: #1064344#28)

Please clarify how you want to manage the package in git, as
this needs to be reflected in d/control's VCS-* fields correctly.
(this is now blocking the upload.)


I have been using Vincent Driessen's branching model and the corresponding git
extension gitflow-avh for at least 7 years with Debian and for a long time at
work.

The default branch is master and development takes place in the develop branch.

The release candidates are managed in the branch release. The extension
debian/debian-version is used as a tag during the transfer.

The master branch always contains the last published executable version to which
the git tag in debian/control points.


> The procedure is described in the README.debian.

ok, won't further argue about how to organize your git repo, but I can
only tell the Debian perspective: It is breaking expectations as a
debcheckout libhx does today NOT give me a state that represents the
package in unstable. VCS-Git specifies where the (package)
development is happening [1].

[1] Policy 5.6.26

But as I am not using the git repo as base for the sponsoring, lets put
that topic to rest.



(The current fields say the package is maintained in the default branch
of your repo. I see a debian/ directory there, but that one is missing
released (it is at 4.17-1, while unstable is at 4.19-1.1)

The review is based on the .dsc, timestamped on mentors.d.n
2024-03-17 12:00

d/changelog is *STILL* changing history for the 4.19-1 changelog
block. (This issue must be fixed before upload.)



I think these were artifacts because my changes to the NMU were not adopted. This
has been corrected.


No it has not. I expect old changelog entries to be *identical* to
the ones that have been uploaded, and they still are not, so I fear
we are talking past each other. Please let me know what you think that
you have fixed, so that we can spot the different expectations.

For my perspective:
This is the diff from debian/changelog from the current
version in the archives and the dsc uploaded to mentors at 2024-03-17 14:45
You are still rewriting history (second hunk of this diff), this hunk
should not exists.

diff -Naur archive/libhx-4.19/debian/changelog 
mentors/libhx-4.23/debian/changelog
--- archive/libhx-4.19/debian/changelog 2024-02-28 13:48:09.0 +0100
+++ mentors/libhx-4.23/debian/changelog 2024-03-17 15:23:31.0 +0100
@@ -1,3 +1,17 @@
+libhx (4.23-1) unstable; urgency=medium
+
+  * New upstream release (Closes: #1064734).
+  * Add debian/.gitignore
+  * Remove not longer needed debian/libhx-dev.lintian-overrides.
+  * Fix debian/libhx32t64.lintian-overrides.
+  * debian/copyright:
+- Add 2024 to myself.
+  * debian/control:
+- Readd BD dpkg-dev (>= 1.22.5).
+  Thanks to Tobias Frost 
+
+ -- Jörg Frings-Fürst   Sun, 17 Mar 2024 15:23:31 +0100
+
libhx (4.19-1.1) unstable; urgency=medium

 * Non-maintainer upload.
@@ -9,11 +23,8 @@

 * New upstream release.
   - Refresh symbols files.
-  * Remove empty debian/libhx-dev.symbols.
-  * debian/rules:
-- Remove build of libhx-dev.symbols.

- -- Jörg Frings-Fürst   Sun, 17 Dec 2023 14:44:39 +0100
+ -- Jörg Frings-Fürst   Tue, 21 Nov 2023 10:41:07 +0100

libhx (4.14-1) unstable; urgency=medium



From e82f0d623be16aad21807a7a5089fbfdbbd8ba05 Mon Sep 17 00:00:00 2001
From: Jochen Sprickerhof 
Date: Sun, 21 Apr 2024 09:03:28 +0200
Subject: [PATCH] Fix old debian/changelog entry

---
 debian/changelog | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/debian/changelog b/debian/changelog
index 5471467..e4c0c41 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -23,8 +23,11 @@ libhx (4.19-1) unstable; urgency=medium
 
   * New upstream release.
 - Refresh symbols files.
+  * Remove empty debian/libhx-dev.symbols.
+  * debian/rules:
+- Remove build of libhx-dev.symbols.
 
- -- Jörg Frings-Fürst   Tue, 21 Nov 2023 10:41:07 +0100
+ -- Jörg Frings-Fürst   Sun, 17 Dec 2023 14:44:39 +0100
 
 libhx (

Bug#1065193: RFS: libhx/4.23-1 [RC] -- C library providing queue, tree, I/O and utility functions

2024-04-21 Thread Jochen Sprickerhof

Hi Jörg,

I have fixed the old debian/changelog entry and uploaded the new version 
to DELAYED/7. Please feel free to tell me if I should delay it longer.


The patch is attached.

Cheers Jochen

* Jochen Sprickerhof  [2024-04-10 07:25]:

Hi Jörg,

is there anything I can help with to get libhx updated?
(I agree with what Tobias said below.)

Cheers Jochen


* Tobias Frost  [2024-03-17 20:22]:

Control: tags -1 moreinfo

On Sun, Mar 17, 2024 at 04:57:54PM +0100, Jörg Frings-Fürst wrote:

tags 1065193 - moreinfo
thanks

Hi Tobias,



Am Sonntag, dem 17.03.2024 um 14:39 +0100 schrieb Tobias Frost:

Hi Jörg,

"debcheckout libhx" still gives me 4.17-1 as head.

After looking at your repo, I think I should point you to DEP-14
as a recommended git layout for packaging.
As the branch name indicates you are using per-package-revision
branches:
IMHO It makes little sense to have one branch per debian package
version/revision; (I had a similar discussion on vzlogger lately,
so to avoid repetitions: #1064344#28)

Please clarify how you want to manage the package in git, as
this needs to be reflected in d/control's VCS-* fields correctly.
(this is now blocking the upload.)


I have been using Vincent Driessen's branching model and the corresponding git
extension gitflow-avh for at least 7 years with Debian and for a long time at
work.

The default branch is master and development takes place in the develop branch.

The release candidates are managed in the branch release. The extension
debian/debian-version is used as a tag during the transfer.

The master branch always contains the last published executable version to which
the git tag in debian/control points.


> The procedure is described in the README.debian.

ok, won't further argue about how to organize your git repo, but I can
only tell the Debian perspective: it is breaking expectations, as a
"debcheckout libhx" today does NOT give me a state that represents the
package in unstable. VCS-Git specifies where the (package)
development is happening [1].

[1] Policy 5.6.26

But as I am not using the git repo as base for the sponsoring, lets put
that topic to rest.



(The current fields say the package is maintained in the default branch
of your repo. I see a debian/ directory there, but that one is missing
released (it is at 4.17-1, while unstable is at 4.19-1.1)

The review is based on the .dsc, timestamped on mentors.d.n
2024-03-17 12:00

d/changelog is *STILL* changing history for the 4.19-1 changelog
block. (This issue must be fixed before upload.)



I think these were artifacts because my changes to the NMU were not
adopted. This has been corrected.


No it has not. I expect old changelog entries to be *identical* to
the ones that have been uploaded, and they still are not, so I fear
we are talking past each other. Please let me know what you think
you have fixed, so that we can spot the different expectations.

For my perspective:
This is the diff from debian/changelog from the current
version in the archives and the dsc uploaded to mentors at 2024-03-17 14:45
You are still rewriting history (second hunk of this diff); this hunk
should not exist.

diff -Naur archive/libhx-4.19/debian/changelog 
mentors/libhx-4.23/debian/changelog
--- archive/libhx-4.19/debian/changelog 2024-02-28 13:48:09.0 +0100
+++ mentors/libhx-4.23/debian/changelog 2024-03-17 15:23:31.0 +0100
@@ -1,3 +1,17 @@
+libhx (4.23-1) unstable; urgency=medium
+
+  * New upstream release (Closes: #1064734).
+  * Add debian/.gitignore
+  * Remove not longer needed debian/libhx-dev.lintian-overrides.
+  * Fix debian/libhx32t64.lintian-overrides.
+  * debian/copyright:
+- Add 2024 to myself.
+  * debian/control:
+- Readd BD dpkg-dev (>= 1.22.5).
+  Thanks to Tobias Frost 
+
+ -- Jörg Frings-Fürst   Sun, 17 Mar 2024 15:23:31 +0100
+
libhx (4.19-1.1) unstable; urgency=medium

 * Non-maintainer upload.
@@ -9,11 +23,8 @@

 * New upstream release.
   - Refresh symbols files.
-  * Remove empty debian/libhx-dev.symbols.
-  * debian/rules:
-- Remove build of libhx-dev.symbols.

- -- Jörg Frings-Fürst   Sun, 17 Dec 2023 14:44:39 +0100
+ -- Jörg Frings-Fürst   Tue, 21 Nov 2023 10:41:07 +0100

libhx (4.14-1) unstable; urgency=medium
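
To make the expectation concrete: stanzas that have already been uploaded
must stay byte-identical. A quick check for that could look like this
(a sketch; the stanza-splitting heuristic and the two file paths are
assumptions matching the diff above, not an existing Debian tool):

# Sketch: flag debian/changelog stanzas that differ between the archived
# and the candidate source package.
import difflib

def stanzas(path):
    """Split a debian/changelog into {version: stanza text}."""
    out, cur = {}, []
    for line in open(path, encoding="utf-8"):
        if not cur and not line.strip():
            continue                    # skip blank lines between stanzas
        cur.append(line)
        if line.startswith(" -- "):     # trailer line closes a stanza
            out[cur[0].split()[1]] = "".join(cur)   # key e.g. "(4.19-1.1)"
            cur = []
    return out

old = stanzas("archive/libhx-4.19/debian/changelog")
new = stanzas("mentors/libhx-4.23/debian/changelog")
for version in sorted(old.keys() & new.keys()):
    if old[version] != new[version]:
        print("history rewritten in stanza", version)
        print("".join(difflib.unified_diff(
            old[version].splitlines(True), new[version].splitlines(True))))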



From e82f0d623be16aad21807a7a5089fbfdbbd8ba05 Mon Sep 17 00:00:00 2001
From: Jochen Sprickerhof 
Date: Sun, 21 Apr 2024 09:03:28 +0200
Subject: [PATCH] Fix old debian/changelog entry

---
 debian/changelog | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/debian/changelog b/debian/changelog
index 5471467..e4c0c41 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -23,8 +23,11 @@ libhx (4.19-1) unstable; urgency=medium
 
   * New upstream release.
 - Refresh symbols files.
+  * Remove empty debian/libhx-dev.symbols.
+  * debian/rules:
+- Remove build of libhx-dev.symbols.
 
- -- Jörg Frings-Fürst   Tue, 21 Nov 2023 10:41:07 +0100
+ -- Jörg Frings-Fürst   Sun, 17 Dec 2023 14:44:39 +0100
 
 libhx (

Re: [FRIAM] Daniel Dennett (1942-2024)

2024-04-19 Thread Jochen Fromm
Such a loss :-(
https://www.telegraph.co.uk/obituaries/2024/04/19/daniel-dennett-philosopher-atheist-darwinist/
I will put his autobiography "I’ve Been Thinking" from last year on my
reading list:
https://www.theguardian.com/books/2023/oct/01/ive-been-thinking-by-daniel-c-dennett-review-an-engaging-vexing-memoir-with-a-humility-bypass
-J.

---------- Original message ----------
From: Jochen Fromm
Date: 4/19/24 7:32 PM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group
Subject: [FRIAM] Daniel Dennett (1942-2024)

A sad day today. Daniel Dennett has died :-( For every big question in
philosophy there is at least one Daniel Dennett book:
"Consciousness Explained" (1991) about consciousness
"Darwin's Dangerous Idea" (1995) about evolution
"Freedom Evolves" (2003) about free will
"Breaking the Spell" (2006) about religion
https://dailynous.com/2024/04/19/daniel-dennett-death-1942-2024/
-J.
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


[FRIAM] Daniel Dennett (1942-2024)

2024-04-19 Thread Jochen Fromm
A sad day today. Daniel Dennett has died :-( For every big question in
philosophy there is at least one Daniel Dennett book:
"Consciousness Explained" (1991) about consciousness
"Darwin's Dangerous Idea" (1995) about evolution
"Freedom Evolves" (2003) about free will
"Breaking the Spell" (2006) about religion
https://dailynous.com/2024/04/19/daniel-dennett-death-1942-2024/
-J.
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [darktable-user] Feature idea : DTtimelapse ? (similar to LRtimelapse in essence)

2024-04-18 Thread Jochen Keil
Hi Martin,

The exposure module offers an "automatic" mode which works reasonably well
but in my experience there are outliers that need manual correction. Apart
from exposure flicker I've also noticed what I call "White Balance
Flicker". That usually happens with changing light situations (sunrise /
sunset) when the camera tries to adjust the white balance automatically. Of
course one could set the white balance to a fixed value but there's no WB
that fits daylight and night, which means you need WB ramping.

The color calibration tool offers a functionality that's a bit in the right
direction: You can set a target from another image and use it to adjust
other pictures based on it. Works well, but there's no automation. I helped
myself with xdotool (a program for key macros) but that's clunky.

As you suggest, it's possible, just not very streamlined and
straightforward.

Cheers!



On Wed, Apr 17, 2024 at 9:35 PM Martin Straeten 
wrote:

> with recent darktable there can be a further approach: using exposure and
> color mapping.
> These functions might support setting of smooth transitions for exposure
> as well as color calibration based on keyframes
>
> On 17.04.2024 at 19:08, William Ferguson wrote:
>
> 
> The issue here is that we are trying to solve a darktable problem with a
> lightroom solution.  If we approach it from a darktable perspective, then
> we are looking at (probably) less than 100 lines of lua code and 30 minutes
> worth of work.
>
> Using lua I can load an image into darkroom and read or modify the
> exposure setting.
>
> So, if I go through and pick my "key frames", adjust the exposure, then
> assign a keyframe tag I can run a script that:
> * selects the images that are keyframes, loads each one sequentially, and
> reads and stores the exposure information
> * loads each image in the collection and
> -- determines the limiting keyframes
> -- computes the exposure difference
> -- applies the exposure difference
> -- loads the next image after the pixelpipe completes.
>
> Advantages of this approach
> * No decoding, extrapolating, interpolating, guessing, etc of XMP files
> * Don't have to scan for updated XMP files when starting darktable
> * No worries about XMP file corruption
> * No worries about database corruption from loading a bad XMP
> * Don't have to worry about XMP format changes
> * If the collection gets messed up, you simply select all, discard the
> history stacks and you've recovered.
> * You're not limited to just modifying exposure
>
> Disadvantages:
> * It's slower.  It has to load each image into darktable and process it.
>   I tested loading images and changing the exposure and IIRC darktable
>   processed roughly 2 images/sec.  The images were on an SSD.
>
> Bill
>
> On Wed, Apr 17, 2024 at 11:04 AM Jochen Keil 
> wrote:
>
>> Hi Sébastien,
>>
>> I wrote dtlapse back then and I'm happy to see that there's still
>> interest in it. Unfortunately, due to time constraints I cannot put much
>> work into it. Therefore, in its current state it's pretty unusable, since
>> darktable evolves faster than I can keep up.
>>
>> The basic functionality is very close to what you describe. Pick some
>> keyframes, adjust them as desired and interpolate the values in between.
>> This can be done by using the XMP files as interface, once you get around
>> decoding the module parameters. That's all in dtlapse and worked pretty
>> well for its rather hackish state.
>>
>> However, the biggest problem is that modules tend to change regularly,
>> which means that you have to manually adapt the interface description for
>> every new darktable release while keeping old versions for compatibility.
>> I've made it somewhat easy to update XMP interface descriptions by moving
>> them to JSON files separately from the code. Still, for every new release
>> you have to reengineer the XMP file because they're not documented, at
>> least last time I looked. Given the amount of modules (and even if you
>> would limit yourself to the most interesting ones) it's tedious and by the
>> time you're done a new release comes around the corner.
>>
>> I've had the idea to generate the interface description directly from the
>> source code, but you'd need to use a C/C++ parser to get it. I've just
>> checked the code and the XMP interface is STILL hardcoded in the modules.
>>
>> So, to sum it up, it can be done, but it's quite hard. It'd be much
>> easier if the darktable developers would separate the XMP interface
>> definition for each module from the code which would greatly increase
>> interoperability and is good practice anyway (separate data from cod

Re: [darktable-user] Feature idea : DTtimelapse ? (similar to LRtimelapse in essence)

2024-04-18 Thread Jochen Keil
Hi Bill,

when I started to work on dtlapse it was not possible to modify module
parameters through the Lua interface; that might have changed.

On Wed, Apr 17, 2024 at 7:08 PM William Ferguson 
wrote:

> The issue here is that we are trying to solve a darktable problem with a
> lightroom solution.  If we approach it from a darktable perspective, then
> we are looking at (probably) less than 100 lines of lua code and 30 minutes
> worth of work.
>
> Using lua I can load an image into darkroom and read or modify the
> exposure setting.
>
> So, if I go through and pick my "key frames", adjust the exposure, then
> assign a keyframe tag I can run a script that:
> * selects the images that are keyframes, loads each one sequentially, and
> reads and stores the exposure information
> * loads each image in the collection and
> -- determines the limiting keyframes
> -- computes the exposure difference
> -- applies the exposure difference
> -- loads the next image after the pixelpipe completes.
>
> Advantages of this approach
> * No decoding, extrapolating, interpolating, guessing, etc of XMP files
>

You'll still need an extrapolation or interpolation algorithm to generate
the values between the keyframes. That's simple though, at least with
Python. Lua probably offers similar libraries.
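
For illustration, a minimal linear version in Python (a sketch; the
keyframe format, (frame_index, value) pairs, is an assumption):

# Sketch: linearly interpolate a module value (e.g. exposure) between keyframes.
def interpolate(keyframes, n_frames):
    values = [None] * n_frames
    for (i0, v0), (i1, v1) in zip(keyframes, keyframes[1:]):
        span = i1 - i0
        for i in range(i0, i1 + 1):
            t = (i - i0) / span      # 0.0 at the left keyframe, 1.0 at the right
            values[i] = v0 + t * (v1 - v0)
    return values

# The toy example from this thread: 5 frames, keyframed at 0.5 and 0.9,
# give the in-between exposures 0.6, 0.7 and 0.8 (up to float rounding).
print(interpolate([(0, 0.5), (4, 0.9)], 5))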


> * Don't have to scan for updated XMP files when starting darktable
> * No worries about XMP file corruption
>

That's why I've made the recommendation to not work with dtlapse and
darktable in parallel :)


> * No worries about database corruption from loading a bad XMP
>

Another recommendation would be to use darktable's `--library :memory:`
option, bypassing the DB (which makes sense in another way: hundreds of
timelapse photos will a) clutter your library b) hinder performance)


> * Don't have to worry about XMP format changes
> * If the collection gets messed up, you simply select all, discard the
> history stacks and you've recovered.
> * You're not limited to just modifying exposure
>
> Disadvantages:
> * It's slower.  It has to load each image into darktable and process it.
>   I tested loading images and changing the exposure and IIRC darktable
>   processed roughly 2 images/sec.  The images were on an SSD.
>

That's actually reasonable. 1000 images in less than 10 minutes. That's
just a fraction of the time necessary for a complete and proper timelapse
workflow if you add in some video post production (music, zooming, panning,
etc).

Cheers!




>
> Bill
>
> On Wed, Apr 17, 2024 at 11:04 AM Jochen Keil 
> wrote:
>
>> Hi Sébastien,
>>
>> I wrote dtlapse back then and I'm happy to see that there's still
>> interest in it. Unfortunately, due to time constraints I cannot put much
>> work into it. Therefore, in its current state it's pretty unusable, since
>> darktable evolves faster than I can keep up.
>>
>> The basic functionality is very close to what you describe. Pick some
>> keyframes, adjust them as desired and interpolate the values in between.
>> This can be done by using the XMP files as interface, once you get around
>> decoding the module parameters. That's all in dtlapse and worked pretty
>> well for its rather hackish state.
>>
>> However, the biggest problem is that modules tend to change regularly,
>> which means that you have to manually adapt the interface description for
>> every new darktable release while keeping old versions for compatibility.
>> I've made it somewhat easy to update XMP interface descriptions by moving
>> them to JSON files separately from the code. Still, for every new release
>> you have to reengineer the XMP file because they're not documented, at
>> least last time I looked. Given the amount of modules (and even if you
>> would limit yourself to the most interesting ones) it's tedious and by the
>> time you're done a new release comes around the corner.
>>
>> I've had the idea to generate the interface description directly from the
>> source code, but you'd need to use a C/C++ parser to get it. I've just
>> checked the code and the XMP interface is STILL hardcoded in the modules.
>>
>> So, to sum it up, it can be done, but it's quite hard. It'd be much
>> easier if the darktable developers would separate the XMP interface
>> definition for each module from the code which would greatly increase
>> interoperability and is good practice anyway (separate data from code).
>> However, I think there's not much incentive for them to do it, it'd even be
>> a rather elaborate redesign.
>>
>> Best,
>>
>>   Jochen
>>
>> On Wed, Apr 17, 2024 at 2:23 PM Sébastien Chaurin <
>> sebastien.chau...@gmail.com> wrote:
>>

Re: [darktable-user] Feature idea : DTtimelapse ? (similar to LRtimelapse in essence)

2024-04-17 Thread Jochen Keil
Hi Sébastien,

I wrote dtlapse back then and I'm happy to see that there's still interest
in it. Unfortunately, due to time constraints I cannot put much work into
it. Therefore, in its current state it's pretty unusable, since darktable
evolves faster than I can keep up.

The basic functionality is very close to what you describe. Pick some
keyframes, adjust them as desired and interpolate the values in between.
This can be done by using the XMP files as interface, once you get around
decoding the module parameters. That's all in dtlapse and worked pretty
well for its rather hackish state.

However, the biggest problem is that modules tend to change regularly,
which means that you have to manually adapt the interface description for
every new darktable release while keeping old versions for compatibility.
I've made it somewhat easy to update XMP interface descriptions by moving
them to JSON files separately from the code. Still, for every new release
you have to reengineer the XMP file because they're not documented, at
least last time I looked. Given the amount of modules (and even if you
would limit yourself to the most interesting ones) it's tedious and by the
time you're done a new release comes around the corner.

I've had the idea to generate the interface description directly from the
source code, but you'd need to use a C/C++ parser to get it. I've just
checked the code and the XMP interface is STILL hardcoded in the modules.

So, to sum it up, it can be done, but it's quite hard. It'd be much easier
if the darktable developers would separate the XMP interface definition for
each module from the code which would greatly increase interoperability and
is good practice anyway (separate data from code). However, I think there's
not much incentive for them to do it, it'd even be a rather elaborate
redesign.

Best,

  Jochen

On Wed, Apr 17, 2024 at 2:23 PM Sébastien Chaurin <
sebastien.chau...@gmail.com> wrote:

> omg thanks for that ! I knew somehow that I couldn't be the only one
> thinking that it'd be great to have...
>
> I'll have a closer look at that repo.
>
> Thanks again.
>
> On Tue, 16 Apr 2024 at 13:48, Martin Straeten 
> wrote:
>
>> Have a look at https://discuss.pixls.us/t/annoucement-of-dtlapse/19522
>>
>> On 16.04.2024 at 09:59, Sébastien Chaurin wrote: <
>> sebastien.chau...@gmail.com>:
>>
>> 
>> Hello all,
>>
>> Have any one of you guys wondered about how hard it'd be to implement
>> something similar to LRTimelapse ?
>> For those of you not aware of what this is, it's an additional app that
>> looks at xmp files from LR. It looks first within a folder with hundreds of
>> pics for a timelapse (in real life), at those images with only 5 stars. In
>> this example let's say we only have 5 images for our timelapse.
>> Let's imagine that we only have 2 of those, the first and the last, rated
>> 5 stars. And let's also assume there is only one module with one parameter
>> that has changed : exposure.
>> This app would look at the first 5 stars rated image and see the exposure
>> value of +0.5, and the second with a value of +0.9 hypothetically.
>> It would then look at how many images there are in between in the folder
>> (those not rated 5 stars) and divide the difference of the current setting
>> by the number of pics that sit in between these. 5 pics total minus the 2
>> rated 5 stars leaves us with 3.
>> So in this toy example we only have 3 photos in between the key images (5
>> stars), then we have
>> - difference in exposure : 0.9 - 0.5 = 0.4
>> - 4 pics to arrive at that 0.9 value if we start from the first one : 0.4
>> / 4 = 0.1 incremental step of exposure to be applied.
>> it would build xmp files for the 3 non 5 star rated pic with exposure
>> values respectively of 0.6, 0.7 and 0.8. The first one being 0.5, and the
>> last 0.9.
>> This is assuming we have a linear progression, but I'm sure you can
>> imagine other types than linear.
>>
>> The idea is to adjust every parameter for the pics in between key images
>> (5 stars pics) so that in the end for the timelapse, there are smooth
>> transitions for every setting, exposure is usually one.
>>
>> Hopefully this little example makes sense ? The concept is I think easy
>> to understand : avoid editing possibly thousands of pictures with varying
>> needs in editing. You would only edit key images, and then it would ensure
>> smooth transitioning for all the in-between images, working out the
>> incremental steps it needs to put for every single setting to ensure that.
>>
>> I've used that a lot during my time with LR, and I've been thinking about
>> bringing thi

Re: Uppercase username emails are rejected

2024-04-17 Thread Jochen Bern via dovecot

On 17.04.24 08:43, Aki Tuomi wrote:

  On 17/04/24 00:51, John Stoffel via dovecot wrote:
  >>>>>> "Peter" == Peter via dovecot  writes:
Generally speaking you want auth to be case-sensitive, but go ahead
and try it to see if it fixes the issue.
   Umm... not for emails you don't. Since the j...@stoffel.org
   and j...@stoffel.org and j...@stoffel.org are all the same
   email address


No they aren't. The *host part* is case insensitive because the DNS is, 
but nothing in the RFCs suggests that the *user part* may be (generally) 
treated as such. That only came about when the makers of a certain, 
famously case insensitive OS started selling a mail server software 
better aligned with their habits.
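
A minimal sketch of address normalization that respects this, in Python
(the helper is hypothetical, not dovecot code): lowercase the host part
only, and leave the local part alone.

# Hypothetical sketch: normalize only the host part of an address.
def normalize(address: str) -> str:
    local, _, host = address.rpartition("@")
    return local + "@" + host.lower()

assert normalize("Bern@Example.ORG") == "Bern@example.org"
# "bern" and "Bern" remain distinct local parts:
assert normalize("bern@example.org") != normalize("Bern@example.org")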


(Back with SunOS, when account names automatically yielded deliverable 
e-mail addresses, my dpt. had a standing rule that admins would have an 
unprivileged account like, e.g., "bern" and a separate UID=0 account 
"Bern" for the admin work. Luckily, the login(1) triggered its OH, IT 
SEEMS THAT THIS TERMINAL SUPPORTS ONLY SINGLE CASE mode only if the 
username was *entirely* in uppercase, not on the first character ...)


Having that said, nothing keeps you from setting up your MTA/MDA so as 
to ignore case entirely (because people manually entering addresses 
never make typos, but erroneously slip onto  or  all 
the time, I suppose ...), but it's a major no-no for (intermediate) MTAs.



Unfortunately some systems uppercase (or downcase) your email when sending mail
to you.


In particular, websites you create an account on, apparently in fear 
that joe@shmoe would otherwise be able to create multiple accounts with 
Joe@shmoe, jOe@shmoe etc. etc.. They rarely object to plussed user 
addresses or single-person-owned domains that could have a catchall 
configured, though ...


(I *should* have tried a user part with "ß" on an upcasing online 
service back when that umlaut officially *didn't have* an uppercase 
version ... ;-)


Kind regards,
--
Jochen Bern
Systemingenieur

Binect GmbH
___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


[FRIAM] How the mind works

2024-04-15 Thread Jochen Fromm
On "Hacker News" someone wrote today [1] that transformers in LLMs work like 
the Hamiltonian in Quantum Mechnics because prediction of the next token in the 
sequence is determined by computing the next context-aware embedding vector 
from the last context-aware embedding vector alone, similar to the Hamiltonian 
matrix which is applied in Quantum Mechanics in the Schrödinger equation to the 
high dimensional state vector in a Hilbert space to predict the next state of 
the quantum system [2]. This would imply that the human mind works like Quantum 
Mechanics if LLMs describe the functionality of the prefontal cortex. If the 
Hamiltonian represents the “energy dynamics”, what do the LLM values represent? 
The learned experience?Robert Watson who is a Professor of the Humanities at 
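
To make the analogy concrete, in rough notation (the transformer line is
the generic residual-stream update, not a quote from the linked post):

i\hbar \, \partial_t \, |\psi(t)\rangle = \hat{H} \, |\psi(t)\rangle
\qquad \text{vs.} \qquad
x_{l+1} = x_l + \mathrm{Attn}(x_l) + \mathrm{MLP}(x_l)

In both pictures the next state follows from the current state alone under
a fixed operator; the analogy is loose, though, since Schrödinger evolution
is linear and unitary, while the transformer update is neither.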
Robert Watson, who is a Professor of the Humanities at the University of
California Los Angeles, writes in his fascinating book "Cultural Evolution
and its Discontents" (2018) that "the human mind resembles a nuclear
reactor: amazingly generative within the little containment dome, but
always at risk of running fatally amok in a meltdown, and ceaselessly
producing toxic waste" [3]. The toxic waste can be for example
hallucination, imagination or simply lies.

So is the human mind similar to Quantum Mechanics or does it resemble a
nuclear reactor?

-J.

[1] https://news.ycombinator.com/item?id=40038352
[2] https://en.wikipedia.org/wiki/Hamiltonian_(quantum_mechanics)
[3] Robert N. Watson, Cultural Evolution and its Discontents, Routledge, 2018
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


[Bug 2061171] Re: [FFe] Update sbuild to 0.85.7 for Noble

2024-04-12 Thread Jochen Sprickerhof
** Description changed:

  The new sbuild version contains a number of fixes for the unshare
- backend that would be good to have in Noble. In Debian, we have rebuilt
- all of Debian stable (bookworm) with sbuild+unshare and found that
- adding dumb-init fixed 46 package builds, see:
+ backend that would be good to have in Noble.
+ 
+ We have rebuilt all of Debian stable (bookworm) with sbuild+unshare and
+ found that adding dumb-init fixed 46 package builds, see:
  https://salsa.debian.org/debian/sbuild/-/merge_requests/60.
  
- In addition, this upload contains the following features:
-  - Allow the usage of 4M OVMF images if available.
-  - New CLI options --enable-network and --no-enable-network to allow more 
granular network access control during builds.
-  - Run post-build-failed-commands before cleanup.
+ The other changes are:
+ 
+ Add an --enable-network option for the unshare backend, useful to build
+ packages that need a network connection (debian-installer and debian-
+ installer-netboot-images in Debian).
+ 
+ Additionally, support 4M OVMF images in sbuild-qemu.
+ 
+ Run --post-build-failed-commands before cleanup: This makes sure that
+ %SBUILD_PKGBUILD_DIR is still available (as documented in sbuild(1)) and
+ allows to retrieve files from a failed build.
+ 
+ Documentation changes.

** Description changed:

  The new sbuild version contains a number of fixes for the unshare
  backend that would be good to have in Noble.
  
  We have rebuilt all of Debian stable (bookworm) with sbuild+unshare and
  found that adding dumb-init fixed 46 package builds, see:
  https://salsa.debian.org/debian/sbuild/-/merge_requests/60.
  
  The other changes are:
  
  Add an --enable-network option for the unshare backend, useful to build
  packages that need a network connection (debian-installer and debian-
  installer-netboot-images in Debian).
  
  Additionally, support 4M OVMF images in sbuild-qemu.
  
  Run --post-build-failed-commands before cleanup: This makes sure that
  %SBUILD_PKGBUILD_DIR is still available (as documented in sbuild(1)) and
  allows to retrieve files from a failed build.
  
  Documentation changes.
+ 
+ Note that the sbuild.log contains the changelog.diff (debian/changelog
+ as this is a native package), build log and install log.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2061171

Title:
  [FFe] Update sbuild to 0.85.7 for Noble

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sbuild/+bug/2061171/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 2061171] [NEW] [FFe] Update sbuild to 0.85.7 for Noble

2024-04-12 Thread Jochen Sprickerhof
Public bug reported:

The new sbuild version contains a number of fixes for the unshare
backend that would be good to have in Noble.

We have rebuilt all of Debian stable (bookworm) with sbuild+unshare and
found that adding dumb-init fixed 46 package builds, see:
https://salsa.debian.org/debian/sbuild/-/merge_requests/60.

** Affects: sbuild (Ubuntu)
 Importance: Undecided
 Status: New

** Attachment added: "sbuild.log"
   https://bugs.launchpad.net/bugs/2061171/+attachment/5764364/+files/sbuild.log

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2061171

Title:
  [FFe] Update sbuild to 0.85.7 for Noble

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sbuild/+bug/2061171/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

Re: [PATCH 14/64] i2c: cpm: reword according to newest specification

2024-04-12 Thread Jochen Friedrich

out_8(&cpm->i2c_reg->i2mod, 0x00);
-   out_8(&cpm->i2c_reg->i2com, I2COM_MASTER);/* Master mode */
+   out_8(&cpm->i2c_reg->i2com, I2COM_MASTER);/* Host mode */

I2COM_MASTER might be coming from the datasheet.

Maybe we can just drop the comment? The value we write is pretty
self-explaining.

indeed.

Andi


I also agree. You might add my Acked-By:  here. Jochen



Bug#1068798: bookworm-pu: package fdroidserver/2.2.1-1

2024-04-11 Thread Jochen Sprickerhof

Forgot the patch..
diff --git a/debian/changelog b/debian/changelog
index a990dc45..05aabd67 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,10 @@
+fdroidserver (2.2.1-1+deb12u1) bookworm; urgency=medium
+
+  * Team upload.
+  * Add patch to fix security issue in certificate checks
+
+ -- Jochen Sprickerhof   Thu, 11 Apr 2024 11:20:33 +0200
+
 fdroidserver (2.2.1-1) unstable; urgency=medium
 
   * New upstream version 2.2.1
diff --git a/debian/patches/0004-Fix-signer-certificate-checks.patch b/debian/patches/0004-Fix-signer-certificate-checks.patch
new file mode 100644
index ..8830d788
--- /dev/null
+++ b/debian/patches/0004-Fix-signer-certificate-checks.patch
@@ -0,0 +1,72 @@
+From: "FC (Fay) Stegerman" 
+Date: Thu, 11 Apr 2024 11:11:46 +0200
+Subject: Fix signer certificate checks
+
+This fixes the order the signatures are checked to be the same as
+Android does them and monkey patches androguard to handle duplicate
+signing blocks.
+
+This was reported as:
+
+https://www.openwall.com/lists/oss-security/2024/04/08/8
+
+Patch taken from:
+
+https://github.com/obfusk/fdroid-fakesigner-poc/blob/master/fdroidserver.patch
+---
+ fdroidserver/common.py | 33 -
+ 1 file changed, 20 insertions(+), 13 deletions(-)
+
+diff --git a/fdroidserver/common.py b/fdroidserver/common.py
+index bc4265e..bd1a4c8 100644
+--- a/fdroidserver/common.py
++++ b/fdroidserver/common.py
+@@ -3001,28 +3001,35 @@ def signer_fingerprint(cert_encoded):
+ 
+ def get_first_signer_certificate(apkpath):
+ """Get the first signing certificate from the APK, DER-encoded."""
++class FDict(dict):
++def __setitem__(self, k, v):
++if k not in self:
++super().__setitem__(k, v)
++
+ certs = None
+ cert_encoded = None
+-with zipfile.ZipFile(apkpath, 'r') as apk:
+-cert_files = [n for n in apk.namelist() if SIGNATURE_BLOCK_FILE_REGEX.match(n)]
+-if len(cert_files) > 1:
+-logging.error(_("Found multiple JAR Signature Block Files in {path}").format(path=apkpath))
+-return None
+-elif len(cert_files) == 1:
+-cert_encoded = get_certificate(apk.read(cert_files[0]))
+-
+-if not cert_encoded and use_androguard():
++if use_androguard():
+ apkobject = _get_androguard_APK(apkpath)
+-certs = apkobject.get_certificates_der_v2()
++apkobject._v2_blocks = FDict()
++certs = apkobject.get_certificates_der_v3()
+ if len(certs) > 0:
+-logging.debug(_('Using APK Signature v2'))
++logging.debug(_('Using APK Signature v3'))
+ cert_encoded = certs[0]
+ if not cert_encoded:
+-certs = apkobject.get_certificates_der_v3()
++certs = apkobject.get_certificates_der_v2()
+ if len(certs) > 0:
+-logging.debug(_('Using APK Signature v3'))
++logging.debug(_('Using APK Signature v2'))
+ cert_encoded = certs[0]
+ 
++if not cert_encoded:
++with zipfile.ZipFile(apkpath, 'r') as apk:
++cert_files = [n for n in apk.namelist() if SIGNATURE_BLOCK_FILE_REGEX.match(n)]
++if len(cert_files) > 1:
++logging.error(_("Found multiple JAR Signature Block Files in {path}").format(path=apkpath))
++return None
++elif len(cert_files) == 1:
++cert_encoded = get_certificate(apk.read(cert_files[0]))
++
+ if not cert_encoded:
+ logging.error(_("No signing certificates found in {path}").format(path=apkpath))
+ return None
diff --git a/debian/patches/series b/debian/patches/series
index ab17e6df..8e2df116 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -1,3 +1,4 @@
 debian-java-detection.patch
 ignore-irrelevant-test.patch
 scanner-tests-need-dexdump.patch
+0004-Fix-signer-certificate-checks.patch


signature.asc
Description: PGP signature
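
The FDict class in the patch above is just a first-write-wins dict:
androguard is monkey-patched to keep the first signing block it sees for a
given ID instead of letting a later duplicate overwrite it. A standalone
illustration (the block-ID constant is only an example value):

# First-write-wins dict, as monkey-patched into androguard by the patch above.
class FDict(dict):
    def __setitem__(self, k, v):
        if k not in self:
            super().__setitem__(k, v)

blocks = FDict()
blocks[0x7109871a] = b"genuine v2 signing block"      # first write is kept
blocks[0x7109871a] = b"attacker-appended duplicate"   # later write is ignored
assert blocks[0x7109871a] == b"genuine v2 signing block"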


Bug#1068798: bookworm-pu: package fdroidserver/2.2.1-1

2024-04-11 Thread Jochen Sprickerhof
Package: release.debian.org
Severity: normal
Tags: bookworm
X-Debbugs-Cc: fdroidser...@packages.debian.org, Hans-Christoph Steiner 

Control: affects -1 + src:fdroidserver
User: release.debian@packages.debian.org
Usertags: pu

[ Reason ]
There was a security problem reported against fdroidserver:

https://www.openwall.com/lists/oss-security/2024/04/08/8

[ Impact ]
Stable users of fdroidserver running their own repo could be tricked
into providing wrongly signed files.

[ Tests ]
Manual test on F-Droid internal datasets as well as automated tests
inside fdroidserver.

[ Risks ]
Low, the relevant code is only used to extract and verify signatures.

[ Checklist ]
  [X] *all* changes are documented in the d/changelog
  [X] I reviewed all changes and I approve them
  [X] attach debdiff against the package in (old)stable
  [ ] the issue is verified as fixed in unstable

[ Changes ]
The patch reorders the code as well as changes the code of the imported
androguard library.

[ Other info ]
Upstream is still working on a long term fix that will be uploaded to
unstable later. I agreed with upstream to use the patch provided in
the mail on oss-security already now.



Bug#1065193: RFS: libhx/4.23-1 [RC] -- C library providing queue, tree, I/O and utility functions

2024-04-09 Thread Jochen Sprickerhof

Hi Jörg,

is there anything I can help with to get libhx updated?
(I agree with what Tobias said below.)

Cheers Jochen


* Tobias Frost  [2024-03-17 20:22]:

Control: tags -1 moreinfo

On Sun, Mar 17, 2024 at 04:57:54PM +0100, Jörg Frings-Fürst wrote:

tags 1065193 - moreinfo
thanks

Hi Tobias,



On Sunday, 17.03.2024 at 14:39 +0100, Tobias Frost wrote:
> Hi Jörg,
>
> "debcheckout libhx" still gives me 4.17-1 as head.
>
> After looking at your repo, I think I should point you to DEP-14
> as a recommended git layout for packaging.
> As the branch name indicates you are using per-package-revision
> branches:
> IMHO It makes little sense to have one branch per debian package
> version/revision; (I had a similar discussion on vzlogger lately,
> so to avoid repetitions: #1064344#28)
>
> Please clarify how you want to manage the package in git, as
> this needs to be reflected in d/control's VCS-* fields correctly.
> (this is now blocking the upload.)

I have been using Vincent Driessen's branching model and the corresponding git
extension gitflow-avh for at least 7 years with Debian and for a long time at
work.

The default branch is master and development takes place in the develop branch.

The release candidates are managed in the branch release. The extension
debian/debian-version is used as a tag during the transfer.

The master branch always contains the last published executable version to which
the git tag in debian/control points.


> The procedure is described in the README.debian.

ok, won't further argue about how to organize your git repo, but I can
only tell the Debian perspective: it is breaking expectations, as a
"debcheckout libhx" today does NOT give me a state that represents the
package in unstable. VCS-Git specifies where the (package)
development is happening [1].

[1] Policy 5.6.26

But as I am not using the git repo as base for the sponsoring, lets put
that topic to rest.


>
> (The current fields say the package is maintained in the default branch
> of your repo. I see a debian/ directory there, but that one is missing
> released (it is at 4.17-1, while unstable is at 4.19-1.1)
>
> The review is based on the .dsc, timestamped on mentors.d.n
> 2024-03-17 12:00
>
> d/changelog is *STILL* changing history for the 4.19-1 changelog
> block. (This issue must be fixed before upload.)
>

I think these were artifacts because my changes to the NMU were not
adopted. This has been corrected.


No it has not. I expect old changelog entries to be *identical* to
the ones that have been uploaded, and they still are not, so I fear
we are talking past each other. Please let me know what you think
you have fixed, so that we can spot the different expectations.

For my perspective:
This is the diff from debian/changelog from the current
version in the archives and the dsc uploaded to mentors at 2024-03-17 14:45
You are still rewriting history (second hunk of this diff); this hunk
should not exist.

diff -Naur archive/libhx-4.19/debian/changelog 
mentors/libhx-4.23/debian/changelog
--- archive/libhx-4.19/debian/changelog 2024-02-28 13:48:09.0 +0100
+++ mentors/libhx-4.23/debian/changelog 2024-03-17 15:23:31.0 +0100
@@ -1,3 +1,17 @@
+libhx (4.23-1) unstable; urgency=medium
+
+  * New upstream release (Closes: #1064734).
+  * Add debian/.gitignore
+  * Remove not longer needed debian/libhx-dev.lintian-overrides.
+  * Fix debian/libhx32t64.lintian-overrides.
+  * debian/copyright:
+- Add 2024 to myself.
+  * debian/control:
+- Readd BD dpkg-dev (>= 1.22.5).
+  Thanks to Tobias Frost 
+
+ -- Jörg Frings-Fürst   Sun, 17 Mar 2024 15:23:31 +0100
+
libhx (4.19-1.1) unstable; urgency=medium

  * Non-maintainer upload.
@@ -9,11 +23,8 @@

  * New upstream release.
- Refresh symbols files.
-  * Remove empty debian/libhx-dev.symbols.
-  * debian/rules:
-- Remove build of libhx-dev.symbols.

- -- Jörg Frings-Fürst   Sun, 17 Dec 2023 14:44:39 +0100
+ -- Jörg Frings-Fürst   Tue, 21 Nov 2023 10:41:07 +0100

libhx (4.14-1) unstable; urgency=medium


signature.asc
Description: PGP signature



Re: End-of-life date for `log4j-1.2-api`

2024-04-08 Thread Jochen Wiedmann
On Mon, Apr 8, 2024 at 1:11 PM Apache  wrote:

> My opinion is to drop it from 3.0.0. 2.x is going to live a long time still. 
> By the time it dies Log4J 1.x will have been dead well over 15 years, maybe 
> even 20. That would give users plenty of time to be aware that they need to 
> plan to upgrade.

How long ago was it that all these JNDI- and JMS-related issues were
found? Yes, three years. And I remember very well how customers
basically stormed my employer's house, because some ancient code (which
should have been updated years ago) is using these "dead" libraries.

And, do you remember also, how long it took at the time, to push out
1.2.18? Wait, that was never published? Instead, we have
https://github.com/qos-ch/reload4j.

Please, just because you think that you can master these things:
don't assume that others can.

Jochen

-- 
The woman was born in a full-blown thunderstorm. She probably told it
to be quiet. It probably did. (Robert Jordan, Winter's heart)


Re: Deadlock in Java9.getDefaultImportClasses() ?

2024-04-08 Thread Jochen Theodorou

On 08.04.24 16:48, Mutter, Florian wrote:

After updating Kubernetes from 1.27 to 1.28 one of our applications is
not working anymore.

The application uses the thymeleaf templating engine that uses Groovy
under the hood. The application does not respond to requests that require a
template to be rendered. Looking at the stacktrace did not give us any
hint what is causing this. In the profiler it looks like a lot of time
is spent waiting in Java9.getDefaultImportClasses() method. We could not
find any code in there or in ClassFinder.find() that looks like it could
cause a dead lock.

When attaching the debugger and adding some break points in
Java9.getDefaultImportClasses() it did work 🤷‍♂️.


This is really weird. Java, Groovy and thymeleaf versions are the same?
What Groovy version are you using btw? What version of Java?



The only thing that we could see that is different between a working
setup and a non-working one is the updated Kubernetes with updated node
images using a newer linux kernel. No idea how this could impact the code.


The linux kernel should not impact that unless it is a bug in Java.


Does anyone have an idea what could cause this or what we could do to
identify the cause of the dead lock?

I attached a screenshot of the profiler.


It would be nice to know the line number, then we know at least which of
the 3 possible supplyAsync().get() fails. I was thinking maybe this can
happen if there is an exception... but normally get() should produce an
ExecutionException in that case.

Right now I am a bit clueless

bye Jochen



Re: Potential enhancement to type checking extensions

2024-04-08 Thread Jochen Theodorou

On 08.04.24 13:56, Paul King wrote:
[...]

What I wanted to show is the same examples but using the '==' and '!='
operators, since that would be the typical Groovy style for this
scenario. Unfortunately, using the type checking extension DSL doesn't
currently work for binary operators. The swap from '==' to the
'equals' method call occurs well after type checking is finished. The
same would apply to operators eventually falling back to 'compareTo'.


the replacements may be even on ScriptBytecodeAdapter.


You can still make it work by not using the DSL and writing your own
type checking extension, but that's significantly more work.

Our options seem to be:
(1) not trying to make this work
(2) modify operators to method call expressions earlier (might remove
some optimization steps)
(3) tweak StaticTypeCheckingVisitor#visitBinaryExpression to support
before/after method call hooks for known cases like equals/compareTo
with a pretend method call
(4) alter the TypeCheckingExtension interface with before/after binary
expression calls.


[...]


Does anyone have strong opinions on this before I start having a play
and seeing what might work? In particular, a preference for option 3
or 4?


Doing that only for special cases does not sound right to me. I would be
for option 4... is there anything speaking against that?

bye Jochen



Re: [Openvpn-users] PC connects to the server but not Android

2024-04-08 Thread Jochen Bern

On 08.04.24 12:03, Peter Davis via Openvpn-users wrote:

Hello,
I can connect to OpenVPN server through PC, but it is not possible from Android.
2024-04-08 13:25:58 read UDPv4 [EMSGSIZE Path-MTU=1476]: Message too long 
(fd=6,code=90)


Well, *if* we can take that log line at face value, you might want to 
try reducing the MTU configured in your client.


Other than that, do you see any packets of a connection *attempt* arrive 
on the server, or corresponding log entries?


Kind regards,
--
Jochen Bern
Systemingenieur

Binect GmbH


smime.p7s
Description: S/MIME Cryptographic Signature
___
Openvpn-users mailing list
Openvpn-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/openvpn-users


Re: [Openvpn-users] [ext] Re: DNS Round-robin-records vs. "Preserving recently used remote address"

2024-04-03 Thread Jochen Bern

On 03.04.24 13:30, Ralf Hildebrandt via Openvpn-users wrote:

I don't see such an option in the docs (for 2.6, to be precise), but let me
ask a question for clarification: Does your setup answer requests to a
now-disabled IP with some explicit denial (ICMP UNREACHABLE, RST, whatever),


No, since the machine might still be active and serving existing
openvpn sessions (basically we'd like to keep serving existing clients
and disallow new clients)


... well, that wouldn't keep me from trying something along the lines of

iptables -I INPUT -p tcp --dport $MYPORT -m state --state NEW -j REJECT
iptables -I INPUT -p udp --dport $MYPORT -m state --state NEW -j REJECT

but YMDOPMV¹ ...

Note, however, that this interprets your term "new client" so as to 
include clients that *were* connected seconds ago, but choose to 
*re*connect for whatever reason.


¹ "Your Mileage, Distro, and Other Parameters May Vary"

Kind regards,
--
Jochen Bern
Systemingenieur

Binect GmbH


smime.p7s
Description: S/MIME Cryptographic Signature
___
Openvpn-users mailing list
Openvpn-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/openvpn-users


Re: [Openvpn-users] DNS Round-robin-records vs. "Preserving recently used remote address"

2024-04-03 Thread Jochen Bern

On 03.04.24 11:31, Ralf Hildebrandt via Openvpn-users wrote:

We're using DNS Round-robin-records with a TTL of 300s for our openvpn
endpoint servers.

Yet, clients seem to reconnect to the same IP, although the DNS entry
has expired; the log usually shows something like:

2024-02-21 11:37:04 TCP/UDP: Preserving recently used remote address: 
[AF_INET]193.175.73.xxx:1194

Yes, it makes perfect sense to re-use a known IP, especially in the
VPN context (DNS settings might just be off while dropping out of the
VPN etc.), but this does really clash with our intentionally low TTL -
at least when we're removeing one endpoint from the DNS for maintenance.


I shall assume that your question is "how do I tell the client *not* to 
try sticking to the last IP used?". ;-)


I don't see such an option in the docs (for 2.6, to be precise), but let 
me ask a question for clarification: Does your setup answer requests to 
a now-disabled IP with some explicit denial (ICMP UNREACHABLE, RST, 
whatever), in which case I'd be surprised if the client takes more than 
a second or two to give up on the old server, or are we talking about 
one or more minute-or-so timeout delays?


If the latter, would it be possible to extend your 
going-down-for-maintenance routines so as to tell some firewall to 
generate such denial packets?


On 03.04.24 12:40, Marek Zarychta via Openvpn-users wrote:

in your case setting "explicit-exit-notify 2" on the servers should solve the 
problem.


... as long as the VPNs are running in UDP mode, and the server goes 
through an *orderly* shutdown ...


Kind regards,
--
Jochen Bern
Systemingenieur

Binect GmbH


smime.p7s
Description: S/MIME Cryptographic Signature
___
Openvpn-users mailing list
Openvpn-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/openvpn-users


(creadur-rat) 01/01: Merge pull request #234 from apache/dependabot/maven/org.apache.maven.plugins-maven-invoker-plugin-3.6.1

2024-04-02 Thread jochen
This is an automated email from the ASF dual-hosted git repository.

jochen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/creadur-rat.git

commit 4813ac31d76f324573cfcbbdc610fd891d9a224d
Merge: cb0eccea 8ed4c16b
Author: Jochen Wiedmann 
AuthorDate: Tue Apr 2 23:50:46 2024 +0200

Merge pull request #234 from 
apache/dependabot/maven/org.apache.maven.plugins-maven-invoker-plugin-3.6.1

Bump org.apache.maven.plugins:maven-invoker-plugin from 3.6.0 to 3.6.1

 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



(creadur-rat) branch master updated (cb0eccea -> 4813ac31)

2024-04-02 Thread jochen
This is an automated email from the ASF dual-hosted git repository.

jochen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/creadur-rat.git


from cb0eccea Merge pull request #231 from 
apache/dependabot/maven/commons-io-commons-io-2.16.0
 add 8ed4c16b Bump org.apache.maven.plugins:maven-invoker-plugin from 3.6.0 
to 3.6.1
 new 4813ac31 Merge pull request #234 from 
apache/dependabot/maven/org.apache.maven.plugins-maven-invoker-plugin-3.6.1

The 1 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



Re: [GSoC 2024] Idea: A Basic Superset of Java | Java+

2024-03-31 Thread Jochen Theodorou

On 29.03.24 21:10, Caleb Brandt wrote:

Hey all,

I'm very grateful for your feedback, and it's clear that you aren't
convinced that this will be helpful enough to be an Apache-sponsored
project.


What do you understand by "Apache-sponsored"? I do, for example, use
Project Lombok if I have the chance to do so and it has to be in Java. I
think many here would. The things people have been pointing out are that
you probably underestimate some of the problems and that there can be
really silly reasons to reject something.

I, for one, would not speak of that as a language, but just as a tool. And
next you need good integration. It must be easy to use from an IDE and
in a build... at least potentially.

I can see how you could make the build variant a GSOC project. But under
the Apache Groovy flag? I mean there is no advantage for Groovy we would
get out of this. Right now it is mostly unrelated.

If you wanted to make this a transform and with this a compiler
extension, then this is a whole different matter. It would then be at
least related to Groovy. But is there a path forward from this?

Just to explain why I mention transforms... they are annotation-based
pre-processors to some extent, just on the AST level instead of the
source level. Unlike in Java, transforms are allowed to mutate the
classes in almost any way.
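
A minimal sketch of what that looks like in practice, using the
@ToString transform that ships with Groovy (the Point class is just
made up for illustration):

  import groovy.transform.ToString

  // The annotation triggers an AST transformation at compile time:
  // the compiler mutates the class node and adds a toString() method,
  // with no preprocessing step and no generated source file involved.
  @ToString
  class Point {
      int x, y
  }

  assert new Point(x: 1, y: 2).toString() == 'Point(1, 2)'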

bye Jochen


Bug#1067934: jameica: Jameica cannot load plugins | service "database" not found

2024-03-31 Thread Jochen Sprickerhof

Control: severity -1 normal

Hi Ferdinand,

* Ferdinand Rau  [2024-03-29 08:51]:

  * What led up to the situation?
The plugin Hibiscus, arguably the most important plugin for jameica, was
updated.
Version 2.10.15 used to work fine and still does work fine with the jameica
packages in Debian Bookworm and Trixie. Versions 2.10.16 and the current
version 2.10.17 do not work with either Debian package, but do work fine with
the latest version of jameica (2.10.4) downloaded from the upstream source
willuhn.de: https://www.willuhn.de/products/hibiscus/download.php
Plugin-updates/installs were performed using the integrated plugin management
of jameica.


That is expected: software in stable is only supported with other 
software in stable, i.e. with the Hibiscus plugin in stable.



Staying with hibiscus 2.10.15 is not an option in the long term, because
updates introduce bug fixes and, most importantly, fix access for several
German banks that changed their online access to different servers and
protocols. Without the update to hibiscus 2.10.17, lots of bank accounts are
inaccessible for the users.


This is too unspecific to call for an update in stable. From my 
understanding all updates are configuration changes that can be done 
with the stable version as well. Downgrading the bug accordingly.

Note that Debian stable updates have a strict policy noted here:

https://lists.debian.org/debian-devel-announce/2019/08/msg0.html

* Ferdinand Rau  [2024-03-29 12:41]:

The upstream author suggests that the issue may be related to the "H2 Database 
Engine" (Debian package: jameica-h2database) in a similar, but not identical case 
here:
https://homebanking-hilfe.de/forum/topic.php?p=170111#real170111

Upstream H2 is at version 2.2.224, whereas Debian is at 1.4.197-7, which is 
approx. six years old.

An update of jameica-h2database will likely fix this issue?


Sadly no: while upstream H2 is at 2.2.224 and the Debian package 
(libh2-java) is at 2.2.220, Jameica ships 1.4.199; this is why the 
jameica-h2database package was created in the first place. I will update 
it to 1.4.199 together with Jameica in unstable.


Cheers Jochen


signature.asc
Description: PGP signature



__
This is the maintainer address of Debian's Java team
<https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/pkg-java-maintainers>.
 Please use
debian-j...@lists.debian.org for discussions and questions.


Bug#1067297: marked as pending in taskw

2024-03-28 Thread Jochen Sprickerhof
Control: tag -1 pending

Hello,

Bug #1067297 in taskw reported by you has been fixed in the
Git repository and is awaiting an upload. You can see the commit
message below and you can check the diff of the fix at:

https://salsa.debian.org/python-team/packages/taskw/-/commit/8a061fb95ba06d5abae05248efab04d2ebb70ed5


Fix tests with new pytest

Closes: #1067297


(this message was generated automatically)
-- 
Greetings

https://bugs.debian.org/1067297



RE: No physical disk in FAI 6.2.2 plain text

2024-03-28 Thread Paul, Jochen (DerPaul) via linux-fai
This message was wrapped to be DMARC compliant. The actual message
text is therefore in an attachment.
--- Begin Message ---

P.S.: Your mail got stuck because it had several pics in it,
so the size was over the limit for this mailing list. Please
try not to include pics that are not needed for debugging and
are only advertising.


Until our IT department disables the automatic attachments, I'm
using my private account for this mailing list.


I cannot confirm that the GNOME desktop installation works.
For me, both ISOs I've just created are broken. I guess you
are using an older ISO for GNOME and a new one for the plain
console installation.


I was surprised by your reply. I'm using the FAI.me service on
your page. After recreating the GNOME image with the same
settings, I discovered that this image had been created
by FAI 6.0.5 and not FAI 6.2.2.


The bug in the FAIme service is now fixed.


I can confirm that a fresh created image works.

Dear Thomas, thank you so much for your fast reply and
for solving this issue. Well done!

Regards

Jochen
--- End Message ---


Re: [GSoC 2024] Idea: A Basic Superset of Java | Java+

2024-03-27 Thread Jochen Theodorou

On 25.03.24 21:17, Caleb Brandt wrote:
[...]

If you do

class X {
  String foo = "bar"
  get() { return this.foo; }
  set(x) { this.foo = x; doSomethingElse()}
}

then the call to doSomethingElse is on line 4. But what will happen if
the IDE finds doSomethingElse does not exist and wants to mark the call?
If the IDE works on the Java code for this, the position will be wrong.
Yes, you would get a fitting error message, as in
"doSomethingElse not found"... maybe even with the correct line. This means
your IDE would have to know how to translate back to the code you
actually wrote. That is why this probably ends up as an extension for the
compiler... which the IDEs may or may not support. I think Manifold is
doing that.
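
To make the line-shifting concrete, here is a sketch of what a
transpiler might emit for the class above (hypothetical output, not
any actual tool's):

  class X {
      private String foo = "bar";

      public String getFoo() {
          return this.foo;
      }

      public void setFoo(String x) {
          this.foo = x;
          doSomethingElse();   // line 4 in the J+ source, line 10 here
      }
  }

Any tool that reports positions against this output has to map them
back before they are useful to the user.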


We all know what a getter and a setter is, so condensing them into one
place is easy to understand.


Sure, then I can write
private String foo = ("bar", {-> this.foo}, {x -> this.foo = x;
doSomethingElse()})
and you will totally understand... right.. right?? I mean there is so
much in one place, you can barely compress this more and still write the
code. Well... to get more serious again: you should not underestimate the
challenges of language design. And yes, this can be as bad as: "eh... it
is using begin/end instead of curly braces? Nah, don't like it".. "why
are there no blocks? There are? Oh, the space is significant for this!?".
And I am not kidding: the first is a C guy disliking anything Pascal,
the second is all about Python.


But what's a "closure"? What's a
"delegate"? What's a "virtual thread"??? (I'm kidding.)  People don't
have a basis for these things in Java, so it's just /more/ stuff to
learn instead of already making sense to a Java-only programmer.


Closures are quite similar to lambdas. The delegate concept is not the
difficult part. The difficult part is that you attach a code block to a
method and then the method does something with the code block. Ah yes...
conceptually those lambdas are also very close to anonymous inner
classes, a concept that has existed in Java for a long time already. It
is all about the approach really. Of course, if the people you are
showing this to are not open to the concepts and different views, then
you can forget about it. I, for example, never discuss static
compilation with people if I get the impression they think the false
security of static compilation solves all their problems in life.
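
A minimal Groovy sketch of both points (the names are made up for
illustration):

  // closures are quite similar to lambdas and can even be coerced
  // to functional interfaces:
  Runnable r = { println 'hello' }
  def twice = { int x -> x * 2 }
  assert twice(21) == 42

  // the delegate part: a method takes a code block and attaches an
  // object that unqualified names inside the block resolve against
  def greet(Closure body) {
      body.delegate = [who: 'world']
      body()
  }
  greet { println "hello ${who}" }   // prints: hello world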

[...]

Every addition needs to scan like Java at face value and
feel like something Oracle themselves would have added.


With lambdas they did quite a style break, in my opinion. They did it
before, though... with enums, for example. And don't get me started on
generics or the switch-case. I really doubt I would be able to tell
whether a new feature is as if Oracle themselves had added it. In the
end that is not the key point. It is how the new features work and look
together with the overall language. For example, we had a feature request
to support "statement if expression", meaning "if (expression)
statement", which exists in some languages, but we decided against it
because it does not fit the style of the Groovy language.

[...]

*1. WE CAN USE JAVA'S TOOLS*
Yeah so remember what I said about how "nothing will ever match Java in
a Java space"?  If you can't beat 'em, join 'em.  By acting as a
transparent frontend to real Java source code, we can leverage existing
Java analyzers to do our linting for us.


But it is not fully transparent. It is one-way.


The IntelliJ system already
has a variant of this for CLion's macro processing, being able to detect
errors in the applied macro and display the code it'll be replaced with
in a hover popup.


But the preprocessor in C/C++ was made part of the compiler even though
it originally existed and was intended as an independent tool. It just did
not work, because the line numbering got messed up. Back in the day
there existed quite a few languages that transpiled to C; all of them
eventually got their own compiler. In the world of gcc, for example, you
have many languages that compile to a shared representation in memory,
which is then actually compiled to the target platform. This way you
have a front-end for Modula and one for C++, but a lot of the back-end
part is still the same. In the end it is still like transpiling to C,
just on a different level, where the source does not matter.


We can also use Java debuggers to their fullest extent too: it'll link
to the processed Java8 sources when we freeze on a breakpoint instead of
the J+ sources, but once again that's a /feature/, not a bug.  With a
good enough formatter on the processor, they'll even thank us for it.


For this there is actually support in the Java bytecode world. If the
IDE can figure out from the file name attribute and the class name what
file to open, then the correct position in that source can be opened.
In your case the opened source would have to be the generated Java
source, since the IDE would not automatically know how that 

Bug#1067444: FileNotFoundError: [Errno 2] ... /usr/lib/python3/dist-packages/bugwarrior/docs/configuration.rst

2024-03-27 Thread Jochen Sprickerhof

Control: severity -1 normal

Hi Andreas,

* Andreas Beckmann  [2024-03-24 10:23]:
On Thu, 21 Mar 2024 18:39:00 +0100 Jochen Sprickerhof 
 wrote:

dpkg -S configuration.rst
bugwarrior: /usr/share/doc/bugwarrior/html/_sources/common_configuration.rst.txt
bugwarrior: /usr/share/doc/bugwarrior/html/_sources/configuration.rst.txt

The second one is the same file. I will push a new version with a 
fixed path.


Does the package work with /usr/share/doc excluded from installation?
https://www.debian.org/doc/debian-policy/ch-docs.html#additional-documentation.
"Packages must not require the existence of any files in 
/usr/share/doc/ in order to function. [6] Any files that are used or 
read by programs but are also useful as stand alone documentation 
should be installed elsewhere, such as under /usr/share/package/, and 
then included via symbolic links in /usr/share/doc/package."


It works fine. The only missing bit is printing an example configuration 
if the config is wrong, which I think is not RC and fine if the 
documentation is not installed. Thus downgrading the bug.


Cheers Jochen


signature.asc
Description: PGP signature



