Re: [VOTE] KIP-959 Add BooleanConverter to Kafka Connect

2023-07-28 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hi everyone,

Thanks to everyone who has reviewed and voted for this KIP.

So far it has received 3 non-binding votes (Andrew Schofield, Yash Mayya, Kamal 
Chandraprakash) and 2 binding votes (Chris Egerton, Greg Harris) - still one 
binding vote shy of passing.

Can we get help from a committer to push it through?

Thank you!
Hector

Sent from Bloomberg Professional for iPhone

- Original Message -
From: Greg Harris 
To: dev@kafka.apache.org
At: 07/26/23 12:23:20 UTC-04:00


Hey Hector,

Thanks for the straightforward and clear KIP!
+1 (binding)

Thanks,
Greg

On Wed, Jul 26, 2023 at 5:16 AM Chris Egerton  wrote:
>
> +1 (binding)
>
> Thanks Hector!
>
> On Wed, Jul 26, 2023 at 3:18 AM Kamal Chandraprakash <
> kamal.chandraprak...@gmail.com> wrote:
>
> > +1 (non-binding). Thanks for the KIP!
> >
> > On Tue, Jul 25, 2023 at 11:12 PM Yash Mayya  wrote:
> >
> > > Hi Hector,
> > >
> > > Thanks for the KIP!
> > >
> > > +1 (non-binding)
> > >
> > > Thanks,
> > > Yash
> > >
> > > On Tue, Jul 25, 2023 at 11:01 PM Andrew Schofield <
> > > andrew_schofield_j...@outlook.com> wrote:
> > >
> > > > Thanks for the KIP. As you say, not that controversial.
> > > >
> > > > +1 (non-binding)
> > > >
> > > > Thanks,
> > > > Andrew
> > > >
> > > > > On 25 Jul 2023, at 18:22, Hector Geraldino (BLOOMBERG/ 919 3RD A) <
> > > > hgerald...@bloomberg.net> wrote:
> > > > >
> > > > > Hi everyone,
> > > > >
> > > > > The changes proposed by KIP-959 (Add BooleanConverter to Kafka
> > Connect)
> > > > have a limited scope and shouldn't be controversial. I'm opening a
> > voting
> > > > thread with the hope that it can be included in the next upcoming 3.6
> > > > release.
> > > > >
> > > > > Here are some links:
> > > > >
> > > > > KIP:
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-959%3A+Add+BooleanConverter+to+Kafka+Connect
> > > > > JIRA: https://issues.apache.org/jira/browse/KAFKA-15248
> > > > > Discussion thread:
> > > > https://lists.apache.org/thread/15c2t0kl9bozmzjxmkl5n57kv4l4o1dt
> > > > > Pull Request: https://github.com/apache/kafka/pull/14093
> > > > >
> > > > > Thanks!
> > > >
> > > >
> > > >
> > >
> >


[jira] [Updated] (HDFS-17128) RBF: SQLDelegationTokenSecretManager should use version of tokens updated by other routers

2023-07-26 Thread Hector Sandoval Chaverri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Sandoval Chaverri updated HDFS-17128:

Description: 
The SQLDelegationTokenSecretManager keeps tokens that it has interacted with in 
a memory cache. This prevents routers from connecting to the SQL server for 
each token operation, improving performance.

We've noticed issues with some tokens being loaded in one router's cache and 
later renewed on a different one. If clients try to use the token in the 
outdated router, it will throw an "Auth failed" error when the cached token's 
expiration has passed.

This can also affect cancelation scenarios since a token can be removed from 
one router's cache and still exist in another one.

A possible solution is already implemented on the 
ZKDelegationTokenSecretManager, which consists of having an executor refreshing 
each router's cache on a periodic basis. We should evaluate whether this will 
work with the volume of tokens expected to be handled by the 
SQLDelegationTokenSecretManager.
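
A minimal sketch of the idea, assuming a hypothetical TokenStore view of the 
SQL table (illustrative only, not the actual Hadoop change):

    // Hypothetical sketch: periodically reload the in-memory token cache from
    // SQL so renewals/cancellations made by other routers become visible.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class TokenCacheRefresher {
        /** Hypothetical view of the SQL-backed token store. */
        public interface TokenStore {
            Map<String, byte[]> loadAllTokens(); // token id -> latest serialized token
        }

        private final Map<String, byte[]> cache = new ConcurrentHashMap<>();
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        public void start(TokenStore store, long intervalSeconds) {
            // A full reload per interval; whether this scales to the expected
            // token volume is exactly the open question raised above.
            scheduler.scheduleWithFixedDelay(() -> {
                Map<String, byte[]> latest = store.loadAllTokens();
                cache.keySet().retainAll(latest.keySet()); // drop cancelled tokens
                cache.putAll(latest);                      // pick up renewals
            }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
        }

        public byte[] lookup(String tokenId) {
            return cache.get(tokenId);
        }
    }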

  was:
The SQLDelegationTokenSecretManager keeps tokens that it has interacted with in 
a memory cache. This prevents routers from connecting to the SQL server for 
each token operation.

We've noticed issues with some tokens being loaded in one router's cache and 
later renewed on a different one. If clients try to use the token in the 
outdated router, it will throw an "Auth failed" error when the cached token's 
expiration has passed.

This can also affect cancelation scenarios since a token can be removed from 
one router's cache and still exist in another one.

A possible solution is already implemented on the 
ZKDelegationTokenSecretManager, which consists of having an executor refreshing 
each router's cache on a periodic basis. We should evaluate whether this will 
work with the volume of tokens expected to be handled by the 
SQLDelegationTokenSecretManager.


> RBF: SQLDelegationTokenSecretManager should use version of tokens updated by 
> other routers
> --
>
> Key: HDFS-17128
> URL: https://issues.apache.org/jira/browse/HDFS-17128
> Project: Hadoop HDFS
>  Issue Type: Improvement
>      Components: rbf
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>
> The SQLDelegationTokenSecretManager keeps tokens that it has interacted with 
> in a memory cache. This prevents routers from connecting to the SQL server 
> for each token operation, improving performance.
> We've noticed issues with some tokens being loaded in one router's cache and 
> later renewed on a different one. If clients try to use the token in the 
> outdated router, it will throw an "Auth failed" error when the cached token's 
> expiration has passed.
> This can also affect cancelation scenarios since a token can be removed from 
> one router's cache and still exist in another one.
> A possible solution is already implemented on the 
> ZKDelegationTokenSecretManager, which consists of having an executor 
> refreshing each router's cache on a periodic basis. We should evaluate 
> whether this will work with the volume of tokens expected to be handled by 
> the SQLDelegationTokenSecretManager.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17128) RBF: SQLDelegationTokenSecretManager should use version of tokens updated by other routers

2023-07-26 Thread Hector Sandoval Chaverri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Sandoval Chaverri updated HDFS-17128:

Summary: RBF: SQLDelegationTokenSecretManager should use version of tokens 
updated by other routers  (was: SQLDelegationTokenSecretManager should use 
version of tokens updated by other routers)

> RBF: SQLDelegationTokenSecretManager should use version of tokens updated by 
> other routers
> --
>
> Key: HDFS-17128
> URL: https://issues.apache.org/jira/browse/HDFS-17128
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>
> The SQLDelegationTokenSecretManager keeps tokens that it has interacted with 
> in a memory cache. This prevents routers from connecting to the SQL server 
> for each token operation.
> We've noticed issues with some tokens being loaded in one router's cache and 
> later renewed on a different one. If clients try to use the token in the 
> outdated router, it will throw an "Auth failed" error when the cached token's 
> expiration has passed.
> This can also affect cancelation scenarios since a token can be removed from 
> one router's cache and still exist in another one.
> A possible solution is already implemented on the 
> ZKDelegationTokenSecretManager, which consists of having an executor 
> refreshing each router's cache on a periodic basis. We should evaluate 
> whether this will work with the volume of tokens expected to be handled by 
> the SQLDelegationTokenSecretManager.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-17128) SQLDelegationTokenSecretManager should use version of tokens updated by other routers

2023-07-26 Thread Hector Sandoval Chaverri (Jira)
Hector Sandoval Chaverri created HDFS-17128:
---

 Summary: SQLDelegationTokenSecretManager should use version of 
tokens updated by other routers
 Key: HDFS-17128
 URL: https://issues.apache.org/jira/browse/HDFS-17128
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rbf
Reporter: Hector Sandoval Chaverri


The SQLDelegationTokenSecretManager keeps tokens that it has interacted with in 
a memory cache. This prevents routers from connecting to the SQL server for 
each token operation.

We've noticed issues with some tokens being loaded in one router's cache and 
later renewed on a different one. If clients try to use the token in the 
outdated router, it will throw an "Auth failed" error when the cached token's 
expiration has passed.

This can also affect cancelation scenarios since a token can be removed from 
one router's cache and still exist in another one.

A possible solution is already implemented on the 
ZKDelegationTokenSecretManager, which consists of having an executor refreshing 
each router's cache on a periodic basis. We should evaluate whether this will 
work with the volume of tokens expected to be handled by the 
SQLDelegationTokenSecretManager.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: Apache Kafka 3.6.0 release

2023-07-26 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Yes, still need one more binding vote to pass. I'll send a reminder if the vote 
is still pending after the waiting period.

Cheers,

From: dev@kafka.apache.org At: 07/26/23 12:17:10 UTC-4:00 To: 
dev@kafka.apache.org
Subject: Re: Apache Kafka 3.6.0 release

Hi Hector/Yash,
Are you planning to reach out to other committers to vote on the KIP
and close the vote in the next couple of days?

Thanks,
Satish.

On Wed, 26 Jul 2023 at 20:08, Yash Mayya  wrote:
>
> Hi Hector,
>
> KIP-959 actually still requires 2 more binding votes to be accepted (
> https://cwiki.apache.org/confluence/display/KAFKA/Bylaws#Bylaws-Approvals).
> The non-binding votes from people who aren't committers (including myself)
> don't count towards the required lazy majority.
>
> Thanks,
> Yash
>
> On Wed, Jul 26, 2023 at 7:35 PM Satish Duggana 
> wrote:
>
> > Hi Hector,
> > Thanks for the update on KIP-959.
> >
> > ~Satish.
> >
> > On Wed, 26 Jul 2023 at 18:38, Hector Geraldino (BLOOMBERG/ 919 3RD A)
> >  wrote:
> > >
> > > Hi Satish,
> > >
> > > I added KIP-959 [1] to the list. The KIP has received enough votes to
> > pass, but I'm waiting the 72 hours before announcing the results. There's
> > also a (small) PR with the implementation for this KIP that hopefully will
> > get reviewed/merged soon.
> > >
> > > Best,
> > >
> > > [1]
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-959%3A+Add+BooleanConverter+to+Kafka+Connect
> > >
> > > From: dev@kafka.apache.org At: 06/12/23 06:22:00 UTC-4:00To:
> > dev@kafka.apache.org
> > > Subject: Re: Apache Kafka 3.6.0 release
> > >
> > > Hi,
> > > I have created a release plan for Apache Kafka version 3.6.0 on the
> > > wiki. You can access the release plan and all related information by
> > > following this link:
> > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+3.6.0
> > >
> > > The release plan outlines the key milestones and important dates for
> > > version 3.6.0. Currently, the following dates have been set for the
> > > release:
> > >
> > > KIP Freeze: 26th July 23
> > > Feature Freeze : 16th Aug 23
> > > Code Freeze : 30th Aug 23
> > >
> > > Please review the release plan and provide any additional information
> > > or updates regarding KIPs targeting version 3.6.0. If you have
> > > authored any KIPs that are missing a status or if there are incorrect
> > > status details, please make the necessary updates and inform me so
> > > that I can keep the plan accurate and up to date.
> > >
> > > Thanks,
> > > Satish.
> > >
> > > On Mon, 17 Apr 2023 at 21:17, Luke Chen  wrote:
> > > >
> > > > Thanks for volunteering!
> > > >
> > > > +1
> > > >
> > > > Luke
> > > >
> > > > On Mon, Apr 17, 2023 at 2:03 AM Ismael Juma  wrote:
> > > >
> > > > > Thanks for volunteering Satish. +1.
> > > > >
> > > > > Ismael
> > > > >
> > > > > On Sun, Apr 16, 2023 at 10:08 AM Satish Duggana <
> > satish.dugg...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > > I would like to volunteer as release manager for the next release,
> > > > > > which will be Apache Kafka 3.6.0.
> > > > > >
> > > > > > If there are no objections, I will start a release plan a week
> > after
> > > > > > 3.5.0 release(around early May).
> > > > > >
> > > > > > Thanks,
> > > > > > Satish.
> > > > > >
> > > > >
> > >
> > >
> >




Re: Apache Kafka 3.6.0 release

2023-07-26 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Sorry, my bad (you can tell this is the first time one of my KIPs has made it 
this far :))

From: dev@kafka.apache.org At: 07/26/23 10:38:21 UTC-4:00 To: 
dev@kafka.apache.org
Subject: Re: Apache Kafka 3.6.0 release

Hi Hector,

KIP-959 actually still requires 2 more binding votes to be accepted (
https://cwiki.apache.org/confluence/display/KAFKA/Bylaws#Bylaws-Approvals).
The non-binding votes from people who aren't committers (including myself)
don't count towards the required lazy majority.

Thanks,
Yash

On Wed, Jul 26, 2023 at 7:35 PM Satish Duggana 
wrote:

> Hi Hector,
> Thanks for the update on KIP-959.
>
> ~Satish.
>
> On Wed, 26 Jul 2023 at 18:38, Hector Geraldino (BLOOMBERG/ 919 3RD A)
>  wrote:
> >
> > Hi Satish,
> >
> > I added KIP-959 [1] to the list. The KIP has received enough votes to
> pass, but I'm waiting the 72 hours before announcing the results. There's
> also a (small) PR with the implementation for this KIP that hopefully will
> get reviewed/merged soon.
> >
> > Best,
> >
> > [1]
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-959%3A+Add+BooleanConverter+to+Kafka+Connect
> >
> > From: dev@kafka.apache.org At: 06/12/23 06:22:00 UTC-4:00To:
> dev@kafka.apache.org
> > Subject: Re: Apache Kafka 3.6.0 release
> >
> > Hi,
> > I have created a release plan for Apache Kafka version 3.6.0 on the
> > wiki. You can access the release plan and all related information by
> > following this link:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+3.6.0
> >
> > The release plan outlines the key milestones and important dates for
> > version 3.6.0. Currently, the following dates have been set for the
> > release:
> >
> > KIP Freeze: 26th July 23
> > Feature Freeze : 16th Aug 23
> > Code Freeze : 30th Aug 23
> >
> > Please review the release plan and provide any additional information
> > or updates regarding KIPs targeting version 3.6.0. If you have
> > authored any KIPs that are missing a status or if there are incorrect
> > status details, please make the necessary updates and inform me so
> > that I can keep the plan accurate and up to date.
> >
> > Thanks,
> > Satish.
> >
> > On Mon, 17 Apr 2023 at 21:17, Luke Chen  wrote:
> > >
> > > Thanks for volunteering!
> > >
> > > +1
> > >
> > > Luke
> > >
> > > On Mon, Apr 17, 2023 at 2:03 AM Ismael Juma  wrote:
> > >
> > > > Thanks for volunteering Satish. +1.
> > > >
> > > > Ismael
> > > >
> > > > On Sun, Apr 16, 2023 at 10:08 AM Satish Duggana <
> satish.dugg...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi,
> > > > > I would like to volunteer as release manager for the next release,
> > > > > which will be Apache Kafka 3.6.0.
> > > > >
> > > > > If there are no objections, I will start a release plan a week
> after
> > > > > 3.5.0 release(around early May).
> > > > >
> > > > > Thanks,
> > > > > Satish.
> > > > >
> > > >
> >
> >
>




Re: Apache Kafka 3.6.0 release

2023-07-26 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hi Satish,

I added KIP-959 [1] to the list. The KIP has received enough votes to pass, but 
I'm waiting out the 72 hours before announcing the results. There's also a (small) 
PR with the implementation for this KIP that hopefully will get reviewed/merged 
soon.

Best,

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-959%3A+Add+BooleanConverter+to+Kafka+Connect

From: dev@kafka.apache.org At: 06/12/23 06:22:00 UTC-4:00 To: 
dev@kafka.apache.org
Subject: Re: Apache Kafka 3.6.0 release

Hi,
I have created a release plan for Apache Kafka version 3.6.0 on the
wiki. You can access the release plan and all related information by
following this link:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+3.6.0

The release plan outlines the key milestones and important dates for
version 3.6.0. Currently, the following dates have been set for the
release:

KIP Freeze: 26th July 23
Feature Freeze : 16th Aug 23
Code Freeze : 30th Aug 23

Please review the release plan and provide any additional information
or updates regarding KIPs targeting version 3.6.0. If you have
authored any KIPs that are missing a status or if there are incorrect
status details, please make the necessary updates and inform me so
that I can keep the plan accurate and up to date.

Thanks,
Satish.

On Mon, 17 Apr 2023 at 21:17, Luke Chen  wrote:
>
> Thanks for volunteering!
>
> +1
>
> Luke
>
> On Mon, Apr 17, 2023 at 2:03 AM Ismael Juma  wrote:
>
> > Thanks for volunteering Satish. +1.
> >
> > Ismael
> >
> > On Sun, Apr 16, 2023 at 10:08 AM Satish Duggana 
> > wrote:
> >
> > > Hi,
> > > I would like to volunteer as release manager for the next release,
> > > which will be Apache Kafka 3.6.0.
> > >
> > > If there are no objections, I will start a release plan a week after
> > > 3.5.0 release(around early May).
> > >
> > > Thanks,
> > > Satish.
> > >
> >




[VOTE] KIP-959 Add BooleanConverter to Kafka Connect

2023-07-25 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hi everyone,

The changes proposed by KIP-959 (Add BooleanConverter to Kafka Connect) have a 
limited scope and shouldn't be controversial. I'm opening a voting thread with 
the hope that it can be included in the upcoming 3.6 release.

Here are some links:

KIP: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-959%3A+Add+BooleanConverter+to+Kafka+Connect
JIRA: https://issues.apache.org/jira/browse/KAFKA-15248
Discussion thread: 
https://lists.apache.org/thread/15c2t0kl9bozmzjxmkl5n57kv4l4o1dt
Pull Request: https://github.com/apache/kafka/pull/14093

Thanks!

Re: [DISCUSS] KIP-959 Add BooleanConverter to Kafka Connect

2023-07-25 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Thanks Chris for your quick reply.

Your suggestions make sense, I amended the KIP and added a note to the class 
JavaDocs. Also added unit tests to the companion PR 
[https://github.com/apache/kafka/pull/14093], and will mark it as "Ready for 
Review" in a few.

Cheers

From: dev@kafka.apache.org At: 07/25/23 10:42:58 UTC-4:00 To: 
dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-959 Add BooleanConverter to Kafka Connect

Hi Hector,

Thanks for the KIP! Really appreciate the tight scope, hoping this will be
easy to review :)

I only have one question: it looks like our existing primitive converters
(string converter + subclasses of NumberConverter) are hardcoded to play
nicely with null values during deserialization by always providing an
optional schema. If that's the intent with this KIP, can we specify that
explicitly? (Could be as simple as saying "the schema returned during
deserialization will always be an optional boolean schema" with a link to
https://kafka.apache.org/35/javadoc/org/apache/kafka/connect/data/Schema.html#OPTIONAL_BOOLEAN_SCHEMA).
I don't think we have to say anything else about null handling since
FWICT the rest is already handled by the BooleanSerializer and
BooleanDeserializer introduced in KIP-907.

Cheers,

Chris

On Tue, Jul 25, 2023 at 9:52 AM Hector Geraldino (BLOOMBERG/ 919 3RD A) <
hgerald...@bloomberg.net> wrote:

> Hi everyone,
>
> I'd like to start a discussion of KIP-959, which aims to add a
> BooleanConverter to Kafka Connect:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-959%3A+Add+BooleanConverter+to+Kafka+Connect
>
> This KIP is a counterpart of KIP-907: Add Boolean Serde to public
> interface [
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-907%3A+Add+Boolean+Serde+to+public+interface],
> which added Boolean SerDes to the Kafka serialization APIs.
>
> The scope of this KIP is very limited, and will help us close a small gap
> that exists on the list of included converters for connect's "primitive"
> types.
>
> Looking forward for your feedback.
>
> Regards,
> Hector




[jira] [Created] (KAFKA-15248) Add BooleanConverter to Kafka Connect

2023-07-25 Thread Hector Geraldino (Jira)
Hector Geraldino created KAFKA-15248:


 Summary: Add BooleanConverter to Kafka Connect
 Key: KAFKA-15248
 URL: https://issues.apache.org/jira/browse/KAFKA-15248
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Hector Geraldino
Assignee: Hector Geraldino


KIP-959: Add BooleanConverter to Kafka Connect -> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-959%3A+Add+BooleanConverter+to+Kafka+Connect



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[DISCUSS] KIP-959 Add BooleanConverter to Kafka Connect

2023-07-25 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hi everyone,

I'd like to start a discussion of KIP-959, which aims to add a BooleanConverter 
to Kafka Connect: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-959%3A+Add+BooleanConverter+to+Kafka+Connect

This KIP is a counterpart of KIP-907: Add Boolean Serde to public interface 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-907%3A+Add+Boolean+Serde+to+public+interface],
 which added Boolean SerDes to the Kafka serialization APIs. 

The scope of this KIP is very limited, and it will help us close a small gap in 
the list of converters included for Connect's "primitive" types.

Looking forward to your feedback.

Regards,
Hector

Bug#1041013:

2023-07-15 Thread Hector Cao
Thanks, Bastian, for your help.
How can I get the list of bugs that were opened at the time of the package
removal?

-- 
Hector CAO
Software Engineer – Partner Engineering Team
hector@canonical.com
https://launchpad.net/~hectorcao


Bug#1041013: RFS: woof/20220202 -- File sharing utility through http protocol

2023-07-13 Thread Hector Cao
Package: sponsorship-requests
Severity: normal

Dear mentors,

I am looking for a sponsor for my package "woof":

 * Package name : woof
   Version  : 20220202
   Upstream contact : Simon Budig
 * URL  : https://github.com/simon-budig/woof
 * License  : GPL-3.0+
 * Vcs  : [fill in URL of packaging vcs]
   Section  : net

The source builds the following binary packages:

  woof - File sharing utility through http protocol

To access further information about this package, please visit the
following URL:

  https://mentors.debian.net/package/woof/

Alternatively, you can download the package with 'dget' using this command:

  dget -x https://mentors.debian.net/debian/pool/main/w/woof/woof_20220202.dsc

Changes since the last upload:

 woof (20220202) unstable; urgency=medium
 .
   * Initial release.

Regards,
-- 
  Hector CAO



-- 
Hector CAO
Software Engineer – Partner Engineering Team
hector@canonical.com
https://launchpad.net/~hectorcao


Re: [ANNOUNCE] New committer: Greg Harris

2023-07-10 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Congrats Greg! Well deserved

From: dev@kafka.apache.org At: 07/10/23 12:18:48 UTC-4:00 To: 
dev@kafka.apache.org
Subject: Re: [ANNOUNCE] New committer: Greg Harris

Congratulations!

On Mon, Jul 10, 2023 at 9:17 AM Randall Hauch  wrote:
>
> Congratulations, Greg.
>
> On Mon, Jul 10, 2023 at 11:13 AM Mickael Maison 
> wrote:
>
> > Congratulations Greg!
> >
> > On Mon, Jul 10, 2023 at 6:08 PM Bill Bejeck 
> > wrote:
> > >
> > > Congrats Greg!
> > >
> > > -Bill
> > >
> > > On Mon, Jul 10, 2023 at 11:53 AM Divij Vaidya 
> > > wrote:
> > >
> > > > Congratulations Greg! I am going through a new committer teething
> > process
> > > > right now and would be happy to get you up to speed. Looking forward to
> > > > working with you in your new role.
> > > >
> > > > --
> > > > Divij Vaidya
> > > >
> > > >
> > > >
> > > > On Mon, Jul 10, 2023 at 5:51 PM Josep Prat  > >
> > > > wrote:
> > > >
> > > > > Congrats Greg!
> > > > >
> > > > >
> > > > > ———
> > > > > Josep Prat
> > > > >
> > > > > Aiven Deutschland GmbH
> > > > >
> > > > > Alexanderufer 3-7, 10117 Berlin
> > > > >
> > > > > Amtsgericht Charlottenburg, HRB 209739 B
> > > > >
> > > > > Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> > > > >
> > > > > m: +491715557497
> > > > >
> > > > > w: aiven.io
> > > > >
> > > > > e: josep.p...@aiven.io
> > > > >
> > > > > On Mon, Jul 10, 2023, 17:47 Matthias J. Sax 
> > wrote:
> > > > >
> > > > > > Congrats!
> > > > > >
> > > > > > On 7/10/23 8:45 AM, Chris Egerton wrote:
> > > > > > > Hi all,
> > > > > > >
> > > > > > > The PMC for Apache Kafka has invited Greg Harris to become a
> > > > committer,
> > > > > > and
> > > > > > > we are happy to announce that he has accepted!
> > > > > > >
> > > > > > > Greg has been contributing to Kafka since 2019. He has made over
> > 50
> > > > > > commits
> > > > > > > mostly around Kafka Connect and Mirror Maker 2. His most notable
> > > > > > > contributions include KIP-898: "Modernize Connect plugin
> > discovery"
> > > > > and a
> > > > > > > deep overhaul of the offset syncing logic in MM2 that addressed
> > > > several
> > > > > > > technically-difficult, long-standing, high-impact issues.
> > > > > > >
> > > > > > > He has also been an active participant in discussions and
> > reviews on
> > > > > the
> > > > > > > mailing lists and on GitHub.
> > > > > > >
> > > > > > > Thanks for all of your contributions, Greg. Congratulations!
> > > > > > >
> > > > > >
> > > > >
> > > >
> >




Re: [dmarc-ietf] Another p=reject text proposal

2023-07-08 Thread Hector Santos

*Note: The following is a general rule of thumb for me.*

From my functional specification protocol language coding standpoint:

MUST --> NO OPTION. Technically enabled with no switch to disable.
SHOULD -> OPTIONAL, Default is ON, enabled
MAY -->  OPTIONAL, Default is ON or OFF depending on implementation

The special RFC documentation format is designed and written for a very wide 
audience: managers, system admins, coders, engineers, etc. It offers semantics 
with lower- and upper-case guidelines, sometimes purposely ambiguous (to keep 
things open-ended), and in IETF RFCs we have used the UPPER CASE language as a 
guide for writing code, especially for standards-track items.


Those who choose to ignore the UPPER CASE interop advice often risk 
having the proverbial book thrown at them.


With lower case semantics, this has been my overall implementation 
method:


may, should --> may be implemented as options as with upper case

must --> may be implemented and enabled with hidden option to disable.
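
As a toy rendering of the rule of thumb above (nothing normative, just how the 
keywords can map to shipping switches):

    enum Requirement { MUST, SHOULD, MAY }

    final class FeatureSwitch {
        final boolean configurable; // does the operator get a switch?
        final boolean defaultOn;    // shipped enabled?

        FeatureSwitch(Requirement level) {
            switch (level) {
                case MUST:
                    configurable = false; defaultOn = true;  break; // no option
                case SHOULD:
                    configurable = true;  defaultOn = true;  break; // opt-out
                default: // MAY
                    configurable = true;  defaultOn = false; break; // implementation's call
            }
        }
    }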

--
HLS


On 7/8/2023 12:49 PM, Richard Clayton wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

In message, Murray S. Kucherawy  writes


Some of my IETF mentors (ahem) taught me some stuff about the use
of SHOULD [NOT] that have stuck with me, and I'm going to pressure
test this against that advice. Let's see how this goes. :-)

"SHOULD" leaves the implementer with a choice.� You really ought to
do what it says in the general case, but there might be
circumstances where you could deviate from that advice, with some
possible effect on interoperability.

I noted that one of the earlier messages which endorsed MUST NOT said
that of course some people might know better -- which is what I always
understood SHOULD NOT was for !


If you do that, it is
expected that you fully understand the possible impact you're about
to have on the Internet before proceeding. To that end, we like
the use of SHOULD [NOT] to be accompanied by some prose explaining
when one might deviate in this manner, such as an example of when
it might be okay to do so.

not so much "OK" as "necessary".  Yahoo's original statement (I'm
prepared to name names rather than pussyfoot around this) on the
deployment of p=reject is discussed here:

  https://wordtothewise.com/2014/04/yahoo-statement-dmarc-policy/

and I believe it is widely understood that it was particularly deployed
to counter the "address book spammers" of the time. These bad guys
originally persuaded people to open their email by forging it to appear
to come from their friends (having compromised relevant address books).

They then moved on to just using random identities from the same domain
as the recipient. This led a great many Yahoo users to believe that a
great many other Yahoo users had been compromised, leading to a
significant loss of faith in the integrity of the platform.


Does anyone have such an example in mind that could be included
here? Specifically: Can we describe a scenario where (a) a sender
publishes p=reject (b) with users that post to lists (c) that the
community at large would be willing to accept/tolerate?

So the example you seek might be phrased along the lines of when failing
to set p=reject means that significant quantities of forged email will
be delivered and this will cause damage.

Personally (not a $DAYJOB$ opinion) I think SHOULD NOT is still too
strong, mailing lists (and other forwarders that mangle email) have been
coping with p=reject for nearly a decade -- so that trying
(ineffectually in practice) to make their lives easier at this point is
a snare and a delusion.

- -- 
richard   Richard Clayton


Those who would give up essential Liberty, to purchase a little temporary
Safety, deserve neither Liberty nor Safety. Benjamin Franklin 11 Nov 1755




--
Hector Santos,
https://santronics.com
https://winserver.com


___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] Another p=reject text proposal

2023-07-07 Thread Hector Santos
Barry, 

I did a quick review and comparison for changes.  

Overall, it appears this document is clearer in key specific areas but also 
more complex.  At this point, DMARCbis is about local policy, system 
administrative choices, and suggested guidelines, codified and/or new.  As 
such, I don’t think this document is "Standards Track" material, given its many 
ambiguous considerations and changes.

Consider the following:

Is this document backward compatible with existing DMARC1 behavior?  If not, 
what are the key protocol changes implementers need to consider for updating 
DMARC1 to DMARCbis?  Can this be summarized?  

Thanks

—
HLS


> On Jul 7, 2023, at 9:17 AM, Barry Leiba  wrote:
> 
> I, too, prefer MUST to SHOULD there, but it was clear to me that we
> will not reach rough consensus on MUST, but that we can reach rough
> consensus on SHOULD.
> 
> I do like your suggestion of silent discard rather than bounce, and I
> would want to see that change made -- perhaps with a note that
> aggregate reports will still include information about those discards.
> 
> Barry
> 
> On Fri, Jul 7, 2023 at 9:03 AM Baptiste Carvello
>  wrote:
>> 
>> Hi,
>> 
>> I consider this a step backwards. The MUST requirement on the author
>> domain finally made it clear, after a lost decade, *who* is responsible
>> for solving the breakage of indirect mailflows. Problem solving starts
>> with acknowledging one's responsibilities.
>> 
>> This proposal goes back to a muddy shared responsibility between the
>> author domain and the mail receiver. This is the best way to make sure
>> nothing changes, as each waits for the other to act. Mailing lists can
>> expect to suffer for more long years. No wonder the From-munging
>> proponents are rejoicing!
>> 
>> If this goes in, at least something has to be done to reduce bounces,
>> such as:
>> 
>> — Section 8.3 —
>> 
>> ADD
>> The Mail Receiver MUST reject with "silent discard" when rejecting
>> messages with a List-Id header.
>> END
>> 
>> That way, each party's choices will mostly impact their own messages.
>> Mailing list operators can then take a step back, undo any ugly
>> workarounds, and let DMARC participants decide between themselves, on a
>> case by case basis, how they solve *their* deliverability problems.
>> 
>> Cheers,
>> Baptiste
>> 
>> Le 06/07/2023 à 16:55, Barry Leiba a écrit :
>>> I had some off-list discussions with Seth, who was very much against
>>> my original proposed text, and he suggested an alternative
>>> organization that would be more palatable to him.  I've attempted to
>>> set that out below.  The idea is to remove the normative requirements
>>> about using p=reject from Sections 5.5.6 and 5.8, and instead put a
>>> broader discussion of the issues, along with the normative
>>> requirements, into a new "Interoperability Considerations" section.
>>> This makes it explicitly clear that any MUST/SHOULD stuff with regard
>>> to using and honoring p=reject is an issue of interoperating with
>>> existing Internet email features.  I can accept that mechanism also,
>>> and so, below is my attempt at writing that proposal up.
>>> 
>>> Barry
>> 
>> 
>> ___
>> dmarc mailing list
>> dmarc@ietf.org
>> https://www.ietf.org/mailman/listinfo/dmarc
> 
> ___
> dmarc mailing list
> dmarc@ietf.org
> https://www.ietf.org/mailman/listinfo/dmarc



___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [Koha] hourly fines not calculating

2023-07-04 Thread Hector Gonzalez Jaime
Tom, you should check your circulation rules and verify that "overdue fines 
cap" is not zero (the default), as that would limit your fines to nothing.


Make sure you have "CalculateFinesOnReturn" on, and "FinesMode" set to 
"Calculate and Charge"


On 7/4/23 10:24, Tom Obrien wrote:

Hi all,
I installed koha 22 on ubuntu 22. Everything works except that Koha is not
calculating hourly fines. The long loan is working very well.
Kindly assist.
Tom
___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


--
Hector Gonzalez
ca...@genac.org

___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


Re: [Ietf-dkim] DMARC's auth=dkim+spf tag

2023-07-03 Thread Hector Santos


> On Jul 3, 2023, at 10:06 AM, Barry Leiba  wrote:
> 
>> Anyway, discussing whether spf+dkim verification can mitigate DKIM replay
>> belongs to the ietf-dkim list.  (In case, it could also be expressed outside
>> DMARC, for example by an additional DKIM tag.)
> 
> I do agree with this, yes.
> 

+1

There may be additional integrated protocol considerations for ESMTP, SPF and 
DKIM that may go beyond what DMARCbis is willing to consider.

—
HLS








___
Ietf-dkim mailing list
Ietf-dkim@ietf.org
https://www.ietf.org/mailman/listinfo/ietf-dkim


Re: [dmarc-ietf] easier DKIM, DMARC2 & SPF Dependency Removal

2023-06-30 Thread Hector Santos
A small follow-up on my DMARC view:

> On Jun 30, 2023, at 4:02 PM, Hector Santos 
>  wrote:
> 
> Overall, imo, it is never a good idea for bis specs to force changes on 
> domains, requiring them to change their current DMARC record to retain the 
> security level they want from SPF in DMARC evaluation. 
> 


I don’t want surprises; they mean higher support costs.  But is DMARC that 
“messed up?”  I mean, just like ADSP, it is easy abandonment material, honestly.

But DMARC is big, and it did one thing for the mail industry: the lookup added 
to the SMTP process.  Most SMTP receivers will do the _dmarc.from-domain 
lookup.

DMARC is the #1 lookup record for this purpose: a DKIM policy model.

We said very early on that it would take a while for a DKIM policy model to 
get traction; lookups need to come with a good payoff, otherwise they are just 
wasted calls. 

Let’s leverage the lookup using a protocol language for wide security 
coverage, one that offers dynamic rejection to clean the mail stream before 
passing it to local, proprietary reputation databases.

Happy July 4th, Be safe.

—
HLS

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] Idle Musings - Why Is It DMARC and not DMARD?

2023-06-30 Thread Hector Santos
Great question.

I’ve been around since the beginning as a very strong DKIM Policy advocate, 
watching everything. My dumb attempt to summarize:

1) The idea of “reporting” was considered a testing thing.  Redundant; 
DomainKeys and DKIM had -t test keys.  I, and others as well, felt that 
reporting was an attack vector. I included reporting ideas in DSAP but the 
format was not defined. The section was left TBD.

2) Murray was working on reporting methods with a format.  He was obviously 
filling a need out there.

3) I did not hear of anyone honoring ADSP rejects because of the known indirect 
mail problems.

4) ADSP was abandoned and replaced with Super ADSP, aka DMARC, which introduced 
a reporting and compliance concept.  It had a strong policy idea.

5) I totally underestimated the administrator desire for reports.  But I still 
didn’t believe in it.  It’s for testing only, right? 

6) I did not hear of anyone rejecting on DMARC p=reject. So it was just about 
Reporting & Conformance.

7) Then YAHOO.COM, the patent-holding inventor of all of this starting with 
DomainKeys, the first with a built-in `o=` tag policy concept, was the first 
big system to honor published DMARC strict policies.

8) Now DMARC became about handling and proper SMTP integration with SPF.

The end!

Happy July 4th Weekend. Be safe!!

—
HLS




> On Jun 30, 2023, at 2:22 PM, Todd Herr 
>  wrote:
> 
> Genuine curiosity question here for those who were around at the beginning...
> 
> Why is the mechanism called "Domain-based Message Authentication, Reporting, 
> and Conformance" and not "Domain-based Message Authentication, Reporting, and 
> Disposition"? Perhaps a better question, why is "conformance" in the name of 
> the mechanism?
> 
> I ask because I'm writing up some stuff for internal use, and I got curious 
> as to how conformance is defined or explained in RFC 7489, and well, it's 
> not. The word appears five times in RFC 7489, and each occurrence is in the 
> context of spelling out the full name of the mechanism.
> 
> I am not looking to change the name of the mechanism; I'm just genuinely 
> curious how the name was arrived at.
> 
> -- 
> Todd Herr  | Technical Director, Standards & Ecosystem
> e: todd.h...@valimail.com 
> p: 703-220-4153
> m: 703.220.4153
> 
> This email and all data transmitted with it contains confidential and/or 
> proprietary information intended solely for the use of individual(s) 
> authorized to receive it. If you are not an intended and authorized recipient 
> you are hereby notified of any use, disclosure, copying or distribution of 
> the information included in this transmission is prohibited and may be 
> unlawful. Please immediately notify the sender by replying to this email and 
> then delete it from your system.
> ___
> dmarc mailing list
> dmarc@ietf.org
> https://www.ietf.org/mailman/listinfo/dmarc

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] easier DKIM, DMARC2 & SPF Dependency Removal

2023-06-30 Thread Hector Santos

> On Jun 30, 2023, at 3:32 PM, Murray S. Kucherawy  wrote:
> 
> On Fri, Jun 30, 2023 at 12:21 AM Jan Dušátko wrote:
>> Scott, Barry,
>> as far as I understand, SPF are historic technology,
> 
> Not in any official capacity.  RFC 7208 is a Proposed Standard.  In fact, in 
> IETF terms, it enjoys higher status than DMARC does right now.
> 
> The status of these protocols is not under discussion.  The only question is 
> whether DMARC should continue to factor SPF results into its output.


If I am reading the group right, using the suggested `auth=` tag for 
explanation, it appears the editor wants the new DMARCbis default to be:

auth=dkim

And it would require an explicit tag like:

auth=spf,dkim

to express a desire for SPF to be in the evaluation.  This offers DMARCbis 
backward compatibility.  This would be the one “upgrade” change a domain would 
need to make: an optional “extended behavior” to make it behave like DMARC 
today.  The default behavior today is auth=spf,dkim; DMARCbis’s default would 
be auth=dkim.

I am saying it sounds like this.  
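
For illustration only (auth= is the proposed extension tag, not part of RFC 
7489, and example.com is a placeholder), such a record could look like:

    _dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject; auth=dkim; rua=mailto:dmarc@example.com"

Since unknown tags must be ignored, a legacy DMARC1 verifier would treat this 
record as a plain p=reject and keep evaluating SPF as it does today.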

Overall, imo, it is never a good idea for bis specs to force changes on 
domains, requiring them to change their current DMARC record to retain the 
security level they want from SPF in DMARC evaluation. 

—
HLS

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] easier DKIM, DMARC2 & SPF Dependency Removal

2023-06-27 Thread Hector Santos
Doug, this is Wildcat! SMTP - one of the oldest SMTP packages around. Summary 
here:

Without some signal at wcSMTP about DMARC, SPF will most likely remain a hard 
rejection at WCSAP/SMTP (at RCPT state) before DMARC at DATA.

Background:

Since 2003, out of the box, the mail flow for Wildcat! SMTP with the add-on 
tools wcSAP and wcDKIM is:

(Note: for the record, email is a small part of winserver.com, but a supportive 
part for many customer operations)

At SMTP, starting with remote client connection:

1) If enabled, run a DNS-RBL IP check; respond at step 8 in order to 
collect envelope data.
2.0) Check for smtpfilter-connect.wcx script, run it if found
2.1) Check for Geo-Location IP Blocks, very unethical practice.
3) HELO/EHLO: Check EHLO/HELO on technical merit like IP Literal matching 
Connect IP (CIP)
4) MAIL FROM: Check for Local domain MAIL FROM spoof
5) MAIL FROM: Accept 250 pending RCPT validation
6) RCPT TO: not valid 550 
7) RCPT TO: Valid starts WCSAP.WCX (p-code) Wait for Response.

At WCSAP (Wildcat! Sender Authorization Protocol):

8)  WCSAP: Record if DNS-RBL IP blocked at 1, exit response 55z
9) WCSAP: Run sysop-defined text-based simple White/Block Accept/Reject If rules
10) WCSAP: Run SPF.  Failure return 55z or 45z
11) WCSAP: Run CBV,  Failure return 55z or 45z
12) WCSAP: return 250

Back to WCSMTP RCPT state

8) RCPT response provided, reject 55z/45z or 250 continue, Receiver-SPF header 
prepended.
9) DATA is transferred, Received: trace line prepended.
10) Before the DATA response, run a stack of SMTP mail filters written 
in-house or by 3rd parties:

At  SMTPFILTER-xxx.wcx, specifically SMTPFILTER-DKIM-VERIFY.WCX

11) Check for ADSP + ATPS
12) Check For DMARC + ATPS
13) Record Authentication-Results
14) DMARC Rejection Failures are DISABLED. Auth-Res Prepended.
15) Return any 55z, 45z or 250 based on SMTPFILTER-,wcx filters.

Back to DATA, 

16) DATA response is provided.
17) 250 mail is accepted, router signaled for MDA import or MTA forwarding.
18) Wait 30 seconds for client new transaction command, QUIT and/or DROP

Pretty much it.  
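
To illustrate the RCPT-stage decision described above (a hypothetical sketch, 
not actual Wildcat! code; the enum and inputs are stand-ins):

    public class RcptSpfGate {
        enum Action { ACCEPT, REJECT_550, DEFER_TO_DATA }

        // spfResult: "pass", "fail", "none", ...; dmarcAuthTag: value of the
        // proposed auth= tag from the sender domain's DMARC record, or null.
        static Action decide(String spfResult, String dmarcAuthTag) {
            if (!"fail".equals(spfResult)) {
                return Action.ACCEPT;        // SPF pass/none/neutral: continue
            }
            if ("dkim".equals(dmarcAuthTag)) {
                // The domain asked for DKIM-only evaluation: let the message
                // reach DATA so the DKIM filter renders the final verdict.
                return Action.DEFER_TO_DATA;
            }
            return Action.REJECT_550;        // today's hard rejection at RCPT
        }
    }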

We have too much invested in our integrated Wildcat! Internet Net Server mail 
platform — one of the oldest platforms since the 80s. 

My long-time customers, sysops, and sysadmins love the flexibility and the 
p-code, low-code programmability for operators and 3rd-party developers!

I have provided the DMARC option to the SMTPFILTER-DKIM-VERIFY.WCX to return 
55z response for DMARC p=reject but it is compiler disabled.  However, anyone 
can write a new mail filter script to run at DATA for DMARC, but it has yet to 
happen.  No surprise there.  If we don’t provide it, I don’t expect them to 
do it.

In WCSAP, the SPF handling of the result can be set to not reply with 55z for an 
SPF hard failure.  

This will allow SMTPFILTER-DKIM-VERIFY.WCX to see a SPF=fail Received-SPF 
header. But at the moment, out of the box, DMARC will never see a SPF reject.

From an innocent IETF implementor, my integration enhancement for DMARC might 
be:

With SUBMITTER enabled, you will MOST DEFINITELY see this from supportive 
clients:

C: MAIL FROM: SUBMITTER=pra 

where pra is the Purported Responsible Address, normally the 5322.From.  Low 
volume or not, you will see it.

I plan to use the PRA to get DMARC information. 

First I will assume the signer is aligned so the next check is for return-path 
alignment.  

Second, if an auth=dkim tag exists, I could offer an option to relax SPF in both 
WCSAP.WCX and SMTPFILTER-DKIM-VERIFY.WCX.

Without some signal at wcSMTP about DMARC,  SPF will most likely remain a hard 
rejection at WCSAP/SMTP (at RCPT state) before DMARC at DATA.

—
HLS

> On Jun 27, 2023, at 7:58 AM, Douglas Foster 
>  > wrote:
> 
> Ale, here is an attempt at a formal model.   Application to the current 
> question is at the very end.
> 
> Any test (DKIM, SPF, ARC) has these result possibilities:
> Pass
> No data or uncertain result
> Fail
> 
> The test results are imperfect, so we have to consider these probabilities
> 
> Probability that PASS is a correct result
>Probability that a false PASS will be whitelisted or not blocked in 
> content filtering
>  Net result that a false PASS is delivered to the user
> 
> Probability that a NoData / Uncertain result is correct (presumed 100%).
>   Probability that an Uncertain message is wanted or unwanted.
>   Probability that an unwanted message is or is not blocked by 
> content filtering
>Net probability that an unauthenticated and unwanted message 
> is delivered to the user
> 
> Probability that FAIL is a correct result
>   Probability that a FAIL result is blocked by local policy (presumed 
> 100%)
>Probability that a false FAIL is actually wanted
>   Net probability that false FAIL blocks a wanted message
> 
> My strategy is documented in my "Best Practices" draft submission.   To 
> 

Re: [dmarc-ietf] easier DKIM, DMARC2 & SPF Dependency Removal

2023-06-27 Thread Hector Santos
+1

> On Jun 27, 2023, at 11:06 AM, Tobias Herkula 
>  wrote:
> 
> Signing That, nothing to add.
> 
> -Original Message-
> From: dmarc  On Behalf Of Barry Leiba
> Sent: Tuesday, June 27, 2023 4:24 PM
> To: Alessandro Vesely 
> Cc: dmarc@ietf.org
> Subject: Re: [dmarc-ietf] easier DKIM, DMARC2 & SPF Dependency Removal
> 
> I don't understand how most of your message fits into this discussion:
> you're comparing SPF's policy points with DMARC policy.  we're talking about 
> SPF as an authentication mechanism together with DKIM (not
> DMARC) as an authentication mechanism... and then using those authentication 
> results in DMARC policy evaluation.
> 
> But here: I've said all this before in separate places, so I'll put it in one 
> place, here, one more time:
> 
> Given that SPF and DKIM are both configured properly:
> 1. If SPF passes, DKIM will always pass.
> 2. If DKIM fails, SPF will always fail.
> 3. In some scenarios, DKIM will pass when SPF fails.

Yes. Since SPF comes first, in my empirical field experience, if SPF fails, 
odds are good DKIM will fail too.  But if DKIM passes, then it can be 
interesting to see whether it can fix a false positive with SPF.

—
HLS

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] easier DKIM, DMARC2 & SPF Dependency Removal

2023-06-27 Thread Hector Santos
Since 2003, here is the out-of-the-box mail flow for Wildcat! SMTP with wcSAP 
and wcDKIM add-on support:

(Note: for the record, email is a small part, but a supportive part for many 
customer operations)

At SMTP, starting with connection

1) If enabled, run a DNS-RBL IP check; respond at step 8
2.0) Check for smtpfilter-connect.wcx script, run it if found
2.1) Check for Geo-Location IP Blocks, very unethical practice.
3) HELO/EHLO: Check EHLO/HELO on technical merit like IP Literal matching 
Connect IP (CIP)
4) MAIL FROM: Check for Local domain MAIL FROM spoof
5) MAIL FROM: Accept 250 pending RCPT validation
6) RCPT TO: not valid 550 
7) RCPT TO: Valid starts WCSAP.WCX (p-code) Wait for Response.

At WCSAP:

8)  WCSAP: Record if DNS-RBL IP blocked at 1, exit response 55z
9) WCSAP: Run sysop-defined text-based simple White/Block Accept/Reject If rules
10) WCSAP: Run SPF.  Failure return 55z or 45z
11) WCSAP: Run CBV,  Failure return 55z or 45z
12) WCSAP: return 250

Back to WCSMTP RCPT state

8) RCPT response provided, reject 55z/45z or 250 continue, Receiver-SPF header 
prepended.
9) DATA is transferred, Received: trace line prepended.
10) Before the DATA response, run a stack of SMTP mail filters written 
in-house or by 3rd parties:

At  SMTPFILTER-xxx.wcx, specifically SMTPFILTER-DKIM-VERIFY.WCX

11) Check for ADSP + ATPS
12) Check For DMARC + ATPS
13) Record Authentication-Results
14) DMARC Rejection Failures are DISABLED. Auth-Res Prepended.
15) Return any 55z, 45z or 250 based on the SMTPFILTER-xxx.wcx filters.

Back to DATA, 

16) DATA response is provided.
17) 250 mail is accepted, router signaled for MDA import or MTA forwarding.
18) Wait 30 seconds for client new transaction command, QUIT and/or DROP

Pretty much it.  

We have too much invested in the integrated Wildcat! Internet Net Server 
platform — one of the oldest platforms since the 80s.

My long-time customers, sysops, and sysadmins love the flexibility and the 
p-code, low-code programmability for 3rd-party developers!

I have provided the DMARC option to the SMTPFILTER-DKIM-VERIFY.WCX to return 
55z response for DMARC p=reject but it is compiler disabled.  However, anyone 
can write a new mail filter script to run at DATA for DMARC, but it has yet to 
happen.  No surprise there.  If we don’t provide it, I don’t expect them to 
do it.

In WCSAP, the SPF handling of the result can be set to not reply with 55z for an 
SPF hard failure.  

This will allow SMTPFILTER-DKIM-VERIFY.WCX to see a SPF=fail Received-SPF 
header. But at the moment, out of the box, DMARC will never see a SPF reject.

From an innocent IETF implementor, my integration enhancement for DMARC might 
be:

With SUBMITTER enabled, you will MOST DEFINITELY see this from supportive 
clients:

C: MAIL FROM: SUBMITTER=pra 

where pra is the Purported Responsible Address, normally the 5322.From.  Low 
volume or not, you will see it.

I plan to use the PRA to get DMARC information. 

First I will assume the signer is aligned so the next check is for return-path 
alignment.  

Second, if an auth=dkim tag exists, I could offer an option to relax SPF in both 
WCSAP.WCX and SMTPFILTER-DKIM-VERIFY.WCX.

Without some signal at wcSMTP about DMARC,  SPF will most likely remain a hard 
rejection at WCSAP/SMTP (at RCPT state) before DMARC at DATA.

—
HLS

> On Jun 27, 2023, at 7:58 AM, Douglas Foster 
>  wrote:
> 
> Ale, here is an attempt at a formal model.   Application to the current 
> question is at the very end.
> 
> Any test (DKIM, SPF, ARC) has these result possibilities:
> Pass
> No data or uncertain result
> Fail
> 
> The test results are imperfect, so we have to consider these probabilities
> 
> Probability that PASS is a correct result
>Probability that a false PASS will be whitelisted or not blocked in 
> content filtering
>  Net result that a false PASS is delivered to the user
> 
> Probability that a NoData / Uncertain result is correct (presumed 100%).
>   Probability that an Uncertain message is wanted or unwanted.
>   Probability that an unwanted message is or is not blocked by 
> content filtering
>Net probability that an unauthenticated and unwanted message 
> is delivered to the user
> 
> Probability that FAIL is a correct result
>   Probability that a FAIL result is blocked by local policy (presumed 
> 100%)
>Probability that a false FAIL is actually wanted
>   Net probability that false FAIL blocks a wanted message
> 
> My strategy is documented in my "Best Practices" draft submission.   To 
> summarize my experience:
> - The frequency of a true PASS is high, so the probability of a false PASS is 
> low
> - The probability of a false PASS being detected by content filtering is 
> pretty good.
> - The frequency of a true FAIL is low, so the probability of a false FAIL is 
> high.
> - Uncertain messages have a high percentage of unwanted messages, but also a 
> non-trivial volume of wanted messages.
> 

Re: [dmarc-ietf] easier DKIM, DMARC2 & SPF Dependency Removal

2023-06-24 Thread Hector Santos
> If the DMARC spec makes that clear, I think we win.  And recipients
> can still do what they want: if DMARCbis goes out with "use DKIM only"
> and a recipient wants to use SPF anyway, they can do that... just as a
> recipient that decides to use best-guess-SPF in the absence of actual
> SPF records is free to make that choice.

When said that way, I believe that requires a version bump to v2, which would 
inherently mean “use DKIM only.”

So supporters all do a version check:


   bUseDKIMOnly = (DMARC["v="] == "DMARC2") ? 1 : 0


And the new supporter will use the flag bUseDKIMOnly throughout its current 
DMARC1 evaluation accordingly.  

Or via “Add-on” tag extension:

   bUseDKIMOnly = (DMARC["auth="] == "dkim") ? 1 : 0

Six of one, half a dozen of the other?

The problem is that there are implementations that check for v=DMARC1 and 
will not recognize DMARC2 as a valid record, when in fact a DMARC2 record would 
exist whose only purpose in life is to signify a relaxed DMARC1 evaluation 
regarding SPF.

I like the tag extension instead.  It makes coding life easier, I think.  But 
if v=DMARC2 is the way Levine wishes to go, I’m OK.  I see issues with just 
changing the inherent behavior without any protocol negotiation signals.
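
A sketch of how a verifier might take the tag-extension route while staying 
compatible (names are illustrative, not an existing API):

    import java.util.HashMap;
    import java.util.Map;

    public class DmarcAuthTag {
        /** Parse "v=DMARC1; p=reject; auth=dkim" into a tag map. */
        static Map<String, String> parse(String record) {
            Map<String, String> tags = new HashMap<>();
            for (String part : record.split(";")) {
                String[] kv = part.trim().split("=", 2);
                if (kv.length == 2) {
                    tags.put(kv[0].trim().toLowerCase(), kv[1].trim());
                }
            }
            return tags;
        }

        /** True if SPF results should count toward DMARC alignment. */
        static boolean useSpf(Map<String, String> tags) {
            // No auth= tag means today's default: both SPF and DKIM count.
            return tags.getOrDefault("auth", "spf,dkim").contains("spf");
        }
    }

An unknown auth= tag is simply never looked up by old code, so legacy verifiers 
keep today's behavior, which is the backward compatibility argued for above.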

—
HLS


 



___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] easier DKIM, DMARC2 & SPF Dependency Removal

2023-06-24 Thread Hector Santos
Alessandro, I believe we are on the same wavelength.  I support the DMARC1 tag 
extension `auth=` idea.  Do you have any suggestions for the text?

Technically we don’t need DMARC1-Bis.   That document can move forward as is 
and a new draft proposal I-D called “DMARC1-EXTENSION-AUTH” can be written for 
relaxing the original DMARC1 (RFC 7489) and also the current DMARC1-bis.

—
HLS

> On Jun 24, 2023, at 12:17 PM, Alessandro Vesely  wrote:
> 
> On Fri 23/Jun/2023 20:13:27 +0200 Hector Santos wrote:
>>> On Jun 23, 2023, at 12:52 PM, John R Levine  wrote:
>>> On Thu, 22 Jun 2023, Emanuel Schorsch wrote:
>>>> I agree with John's point that dkim+spf doesn't make sense in the context
>>>> of strict DMARC enforcement (I think it provides value for p=none domains
>>> Since the aggregate reports tell you what authentication worked, I don't 
>>> even see that as a benefit.  There's also the question how many people 
>>> would even look at a DMARC v2 tag which would be a prerequisite for the 
>>> auth tag.
>> DMARC v1 supports extended tags.  See section 3.1.3 in RFC 7489:
>> 3.1.3 <https://datatracker.ietf.org/doc/html/rfc7489#section-3.1.3>.  
>> Alignment and Extension Technologies
>>If in the future DMARC is extended to include the use of other
>>authentication mechanisms, the extensions will need to allow for
>>domain identifier extraction so that alignment with the RFC5322 
>> <https://datatracker.ietf.org/doc/html/rfc5322>.From
>>domain can be verified.
> 
> 
> Eh?  Dkim+spf wouldn't be a new mechanism, as the domain identifier would 
> have to be the same.  I'd have cited 6.3:
> 
> 6.3.  General Record Format
> https://datatracker.ietf.org/doc/html/rfc7489#section-6.3
> 
>   Section 11 creates a registry for known DMARC tags and registers the
>   initial set defined in this document.  Only tags defined in this
>   document or in later extensions, and thus added to that registry, are
>   to be processed; unknown tags MUST be ignored.
> 
> Of course, there will be lots of verifiers who ignore auth=, t=, and ed25519. 
> Unfortunately, while we have so many blog posts, we're still missing DMARC 
> verifier checks.
> 
> 
>>> The idea is that auth=dkim means you'd publish SPF records but hope people 
>>> will ignore them, or vice versa for auth=dkim?  I still don't get it.
>> The immediate benefit would be forwarders. I believe Wei labeled this form 
>> of forwarding REM in the PDF analysis posted recently.
>> With REM forwarders, in SMTP transport terms, it is a passthru message 
>> forwarded to a recorded address given by the local domain or locally hosted 
>> domain Recipient, untouched data.  MTA inbound to MTA outbound. The MDA, 
>> like gmail.com, would see an SPF failure so the DMARC 
>> auth=dkim relaxed option tells GMAIL that the hard fail with SPF is 
>> acceptable, ignore it, but expect the DKIM to be valid from the author 
>> signer domain.
> 
> 
> SPF hard fail is acceptable even with the default auth=.  (SPF hard fail is a 
> problem for those who reject before DATA.  Receiving MXes have no DKIM clue 
> at that stage.  The only way forwarding might work without replacing the 
> bounce address is if either the receiver or the SPF record provide for 
> whitelisting. As a side note, let me add that I'm rejecting way more spam 
> thanks to spf-all than DMARC and DNSBL together.)
> 
> The auth=dkim (excluding SPF) setting is needed by domains who /have/ to 
> include a bloated SPF record in order to deliver at sites which only verify 
> that.  However, since the bloated record allows impersonation, they don't 
> want that DMARC verifiers consider it.  They probably wish that everybody 
> used DMARC so that they could avoid publishing an SPF record, but there's not 
> much they can do about it.
> 
> 
> Best
> Ale
> -- 


___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] easier DKIM, DMARC2 & SPF Dependency Removal

2023-06-23 Thread Hector Santos
On Jun 23, 2023, at 1:54 PM, John R Levine  wrote:
> 
>> My understanding is that if `auth=dkim` then SPF would be ignored from the
>> perspective of DMARC. So  if a receiver sees DKIM is not DMARC aligned and
>> only SPF is DMARC aligned then it would still be treated as a DMARC fail.
> 
> That's my understanding.
> 
>> It would be a way for senders to say "yes I checked that all my DKIM
>> signatures are working and aligned, I don't need you to look at SPF and
>> don't want to have the risk of SPF Upgrades.
> 
> So why do you publish an SPF record?  Presumably so someone will accept your 
> mail who wouldn't otherwise, except you just said they shouldn't. Still not 
> making sense to me.

I believe because the domain may still want the restrictive SPF -ALL  and DMARC 
p=reject or p=quarantine for normal direct messages but they recognize users 
will be contacting people where a SPF will fail due to a forward.

If you remove the SPF record or weaken it with ~ALL or ?ALL, then it weakens 
the majority of non-forwarded direct transactions. The proposed tag `auth=dkim` 
will indicate to gmail that SPF failing is ok as long as the first party DKIM 
signature is still intact.   It’s weaker but would be less problematic than it 
is today.

Today, we can modify the return path for the forward or don’t allow for forward 
and make the (gmail) user pick up the mail via POP3/IMAP.  No forwarding.

—
HLS

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] easier DKIM, DMARC2 & SPF Dependency Removal

2023-06-23 Thread Hector Santos


> On Jun 23, 2023, at 12:52 PM, John R Levine  wrote:
> 
> On Thu, 22 Jun 2023, Emanuel Schorsch wrote:
>> I agree with John's point that dkim+spf doesn't make sense in the context
>> of strict DMARC enforcement (I think it provides value for p=none domains
> 
> Since the aggregate reports tell you what authentication worked, I don't even 
> see that as a benefit.  There's also the question how many people would even 
> look at a DMARC v2 tag which would be a prerequisite for the auth tag.

DMARC v1 supports extended tags.  See section 3.1.3 in RFC 7489:

https://datatracker.ietf.org/doc/html/rfc7489#section-3.1.3



3.1.3.  Alignment and Extension Technologies

   If in the future DMARC is extended to include the use of other
   authentication mechanisms, the extensions will need to allow for
   domain identifier extraction so that alignment with the RFC5322.From
   domain can be verified.





> 
> The idea is that auth=dkim means you'd publish SPF records but hope people 
> will ignore them, or vice versa for auth=dkim?  I still don't get it.
> 

The immediate benefit would be forwarders. I believe Wei labeled this form of 
forwarding REM in the PDF analysis posted recently.

With REM forwarders, in SMTP transport terms, it is a passthru message 
forwarded to a recorded address given by the local domain or locally hosted 
domain Recipient, untouched data.  MTA inbound to MTA outbound. The MDA, like 
gmail.com, would see an SPF failure so the DMARC auth=dkim 
relaxed option tells GMAIL that the hard fail with SPF is acceptable, ignore 
it, but expect the DKIM to be valid from the author signer domain.

Who sets this tag?  The initial sender, unbeknownst to whom the MX is not the 
final MDA.  We will never know where a contact can actually be reached.  The 
hosted-domain market is very big and important.

So it will be a matter of training system admins that, for domains with any 
chance of being forwarded indirectly, it is probably a good idea to use a 
relaxed SPF evaluation for DMARC1.

We will not need a version bump. 

—
HLS



___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] easier DKIM, DMARC2 & SPF Dependency Removal

2023-06-23 Thread Hector Santos

Levine makes a good point. A less complex option would be:

auth=dkim   # apply dkim only, ignore spf, dkim failure is dmarc=fail
auth=spf    # apply spf only, ignore dkim, spf failure is dmarc=fail


The default, auth=dkim,spf, SHOULD NOT be explicitly required; it adds no 
additional security value.  I would like to note that some DNS zone managers 
with DMARC record support will add the complete set of tags available for the 
protocol with their default values, making the record look more complex than it 
really is.


Other system integration options would be (forgive me, for I have sinned):

atps=1 # we support ATPS protocol for 3rd party signer.
rewrite=1  # we are perfectly fine with Author Rewrite
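
For illustration, records carrying these proposed tags might look like the 
following (hypothetical examples; none of these tags are registered today):

   v=DMARC1; p=reject; auth=dkim           (evaluate DKIM only)
   v=DMARC1; p=quarantine; auth=spf        (evaluate SPF only)
   v=DMARC1; p=reject; atps=1; rewrite=1   (ATPS supported, rewrite tolerated)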

--
HLS





On 6/22/2023 10:18 PM, John Levine wrote:

It appears that Emil Gustafsson   said:

I don't know if there is a better way to encode that, but I'm supportive of
making a change that that would allow domains to tell us (gmail) that they
prefer us to require both dkim and spf for DMARC evaluation (or whatever
combination of DKIM and SPF they desire).

I really don't understand what problem this solves. More likely people
will see blog posts telling them auth=dkim+spf is "more secure",
they'll add that without understanding what it means, and all that
will happen is that more of their legit mail will disappear.

If you're worried about DKIM replay attacks, let's fix that rather
than trying to use SPF, which as we know has all sorts of problems of
its own, as a band-aid.

R's,
John

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc





--
Hector Santos,
https://santronics.com
https://winserver.com



___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] easier DKIM, DMARC2 & SPF Dependency Removal

2023-06-22 Thread Hector Santos
> On Jun 22, 2023, at 9:54 AM, Scott Kitterman  wrote:
> 
> My conclusion (it won't surprise you to learn) from this thread is precisely 
> the opposite.  
> 
> In theory, DKIM is enough for DMARC (this was always true), but in practice 
> it 
> is not.
> 
> I don't think there's evidence of a systemic weakness in the protocol.  We've 
> seen evidence of poor deployment of the protocol for SPF, but I think the 
> solution is to fix that (see the separate thread on data hygiene).
> 
> Scott K
> 

Scott, this all started as a way to add weight to an SPF=SOFTFAIL using ADSP.  
Microsoft started it, and DMARC came out with a surprising, even tighter rule 
for DKIM+SPF alignment.

SPF rejects immediately issued a 55z reply to the transaction, which confused 
DMARCers.  Let's keep in mind that SPF pre-dated DMARC.

SPF softfail results were interesting for seeing how a DKIM signature may help; 
that was Microsoft's idea before DMARC.

Overall, DMARC created a link with SPF that wasn't thoroughly reviewed with the 
IETF; it skipped the process as an Informational proposal.  Now, as a 
standards-track DMARCbis, we see all the problems.

How is this problem fixed with client/server protocol negotiating software?

—
HLS

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] easier DKIM, DMARC2 & SPF Dependency Removal

2023-06-22 Thread Hector Santos

> On Jun 22, 2023, at 1:08 PM, Barry Leiba  wrote:
> 
>> I concur that this isn't really a problem for either working group to solve 
>> as part of a standard,
> 
> Well, the part that the working group needs to solve is whether the
> challenges of getting DKIM right are such that we need to retain SPF
> to fill that gap, or whether the issues with relying on SPF are more
> significant.  I think that's an important part of the decision we're
> discussing, and will be a significant part of judging consensus on
> that discussion.
> 
> Barry, as chair
> 

Barry, this is obviously a new relaxation option.  From a mail system 
integration standpoint, the options are:

1) A version bump to DMARC2 with new semantics with backward DMARC1 
compatibility, or

2) Use a DMARC1 extended tag option allowed by DMARC1.  Alessandro cited an 
excellent backward-compatible extended tag option:

auth=dkim|spf (default value), auth=dkim+spf, auth=dkim, auth=spf

Of course, this would need to be discussed, and I know Levine sees this as too 
late for DMARCbis, but in my opinion, why the rush?  IETF San Francisco next month?

DMARCbis is highly contentious and remains problematic. You know what's 
happening. I put my IETF faith in you.

—
HLS

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] DMARC2 & SPF Dependency Removal

2023-06-17 Thread Hector Santos


> On Jun 17, 2023, at 9:50 PM, John Levine  wrote:
> 
> It appears that Hector Santos   said:
>>> Can these senders not accomplish the same thing by removing the SPF record 
>>> altogether?
>>> 
>>> -MSK, participating
>> 
>> 
>> Aren't SPF, DKIM and alignment all required for DMARC1 passage? Failure 
>> if any are missing?
> 
> No, that has never been the case.  Please reread RFC 7489.
> 



Everything in that doc, all angles of reading this Informational-status RFC, 
suggests SPF is a natural part of the DMARC consideration.

A domain with a DMARC1 record is expected to have SPF and DKIM.  The 
authenticated identifiers need to be aligned as well. The DMARC1 policy defines 
how failures are handled.  If the policy p=none allows for failures by not 
having an SPF record, I would agree that is technically true, but not all 
receivers behave the same.  With restrictive DMARC policies, SPF is pretty much 
required.  Senders risked failures with receivers who may have applied it 
inconsistently.

Section 4.3 has items 1, 6, 7 and 8 describing SPF as a factor in the 
established procedure and flow, and a consideration in policy result evaluation.

Let's also consider the huge DMARC marketing industry, where SPF and DKIM are 
described as necessary email security preparation for DMARC.

Section 10.1, 2nd paragraph, confirms my main point that SPF may be processed 
separately, with reject (-all) results preempting payload processing:


   Some receiver architectures might implement SPF in advance of any
   DMARC operations.  This means that a "-" prefix on a sender's SPF
   mechanism, such as "-all", could cause that rejection to go into
   effect early in handling, causing message rejection before any DMARC
   processing takes place.  Operators choosing to use "-all" should be
   aware of this.
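
Concretely, that early-rejection path looks something like this illustrative 
session (not a real server trace):

    C: MAIL FROM:<user@example.com>
    S: 550 5.7.1 SPF check failed: example.com "-all" does not permit this host

The session never reaches DATA, so the DMARC evaluation never runs.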


Anyway, I support removing SPF from the DMARCbis or DMARC2 evaluation.  Section 
10.1 2nd para semantics need to remain.

Thanks

—
HLS




___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] DMARC2 & SPF Dependency Removal

2023-06-17 Thread Hector Santos

> On Jun 17, 2023, at 8:41 PM, Murray S. Kucherawy  wrote:
> 
> On Sat, Jun 17, 2023 at 2:40 PM Ken Simpson wrote:
>> FWIW, I'd like to chuck my hat in the ring on the side of removing SPF from 
>> the next iteration of DMARC. As the operator of an email delivery service 
>> with tens of millions of primarily uncontrolled senders on web hosting 
>> servers, it would be great if domain owners could assert via their DMARC 
>> record that receivers should only trust DKIM-signed email.
> 
> Can these senders not accomplish the same thing by removing the SPF record 
> altogether?
> 
> -MSK, participating


Aren't SPF, DKIM and alignment all required for DMARC1 passage? Failure if 
any are missing?

Even then, with no SPF, what would remain for a reduced DMARC2 requirement is a 
first-party DKIM signature only.  No 3rd party. When we resolve this part, “I can 
die and finally go to heaven.”

Note, from my POV, SPF was always separate from any payload DKIM-based policy 
protocol process, because there are receivers who will reject at SMTP before 
DATA or any DMARC consideration. For these optimized systems, DMARC would only 
ever see an SPF result of pass, softfail, neutral or none/unknown, but never a 
reject, unless the implementation delayed SPF rejects until DMARC could be 
processed.

—
HLS

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] DMARC2 & SPF Dependency Removal

2023-06-12 Thread Hector Santos


> On Jun 12, 2023, at 6:02 PM, Jim Fenton  wrote:
> 
> On 9 Jun 2023, at 22:35, Murray S. Kucherawy wrote:
> 
>> 
>> You were previously talking about inserting ">" before a line starting
>> "From ", which is typically done on delivery when writing to an
>> mbox-formatted mailbox file, because in that format, "From " at the front
>> of a line has a specific meaning (i.e., "this is a new message").  If that
>> insertion is happening in transport, then a local mailbox convention is
>> leaking out into the transport environment, which means something is
>> misconfigured, and all bets are off.
>> 
>> In any case, it is not a transport conversion anticipated by the section
>> you're quoting, so I've no idea why a DKIM signer might opt to handle it
>> specially.
> 
> I’m not as definite that this is a misconfiguration, but might be a 
> historical artifact.

Very historic: UUCP days, and it didn't come with the “>” prefix. That's 
something new, perhaps to mask and avoid stripping at the MDA.




___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Jami on Sony Xperia X

2023-06-11 Thread Hector Espinoza
App crashes when hitting the make call button or video call button.
Other aspects work well: text messages send/receive, photo send/receive,
picture send/receive.


Re: Jami on Sony Xperia X

2023-06-11 Thread Hector Espinoza
log attached

On Sun, Jun 11, 2023, 13:46 Hector Espinoza  wrote:

> App crashes when hitting the make call button or video call button.
> Other aspects work well: text messages send/receive, photo send/receive,
> picture send/receive.
>


log_20230611_134826_1390526763.log
Description: Binary data


Re: [dmarc-ietf] DMARC2 & SPF Dependency Removal

2023-06-09 Thread Hector Santos
Barry,

Whoa! Take it easy.  

We are on the DMARC2 thread, per the topic: a proposal, not anything for the 
current DMARCbis.

Is the chair suggesting the current charter for DMARCbis should change to 
remove SPF? Was the charter changed for this?

To be clear, DMARC2 is not DMARCbis right now, are you wishing this now?

Hector


> On Jun 9, 2023, at 8:27 PM, Barry Leiba  wrote:
> 
> Hector, did you not understand this?:
> 
>>> We will *not* consider what should happen to
>>> SPF outside of DMARC, and any discussion of that is *out of scope* for
>>> this working group under its current charter.
> 
> Please stop discussing it.
> 
> Barry
> 
> On Fri, Jun 9, 2023 at 8:23 PM Hector Santos  wrote:
>> 
>>> On Jun 9, 2023, at 4:41 AM, Barry Leiba  wrote:
>>> 
>>> Repeating this one point as chair, to make it absolutely clear:
>>> 
>>> The proposal we're discussing is removing SPF authentication from
>>> DMARC evaluation *only*.  We will *not* consider what should happen to
>>> SPF outside of DMARC, and any discussion of that is *out of scope* for
>>> this working group under its current charter.
>>> 
>>> Barry, as chair
>> 
>> For the record, from a long-time SMTP implementer standpoint, DMARC would 
>> be ignored, dropped, turned off, etc. first, before any consideration to stop 
>> SPF support.  As a transporter, SPF works. As an administrator, ADSP, I 
>> mean “Super ADSP,” aka DMARC, has been horrible.  I, and most people, could 
>> easily deprecate Wildcat! DMARC with no harm, in fact less harm, because the 
>> false positives will disappear.  My product add-on for wcSMTP, wcDMARC, 
>> never did honor the p=reject|quarantine. It was left for filters and no one 
>> had any confidence to make it work.
>> 
>> SPF, on the other hand, I don't see dropped in the name of DMARC.  So if it's 
>> about separating, but not abandoning, that I can support, because it is 
>> already separate.  SPF preempts DMARC or any payload protocol.
>> 
>> Thanks
>> 
> 
> ___
> dmarc mailing list
> dmarc@ietf.org
> https://www.ietf.org/mailman/listinfo/dmarc



___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] DMARC2 & SPF Dependency Removal

2023-06-09 Thread Hector Santos
> On Jun 9, 2023, at 4:41 AM, Barry Leiba  > wrote:
> 
> Repeating this one point as chair, to make it absolutely clear:
> 
> The proposal we're discussing is removing SPF authentication from
> DMARC evaluation *only*.  We will *not* consider what should happen to
> SPF outside of DMARC, and any discussion of that is *out of scope* for
> this working group under its current charter.
> 
> Barry, as chair

For the record, from a long-time SMTP implementer standpoint, DMARC would be 
ignored, dropped, turned off, etc. first, before any consideration to stop SPF 
support.  As a transporter, SPF works. As an administrator, ADSP, I mean 
“Super ADSP,” aka DMARC, has been horrible.  I, and most people, could easily 
deprecate Wildcat! DMARC with no harm, in fact less harm, because the false 
positives will disappear.  My product add-on for wcSMTP, wcDMARC, never did 
honor the p=reject|quarantine. It was left for filters and no one had any 
confidence to make it work.

SPF, on the other hand, I don't see dropped in the name of DMARC.  So if it's 
about separating, but not abandoning, that I can support, because it is already 
separate.  SPF preempts DMARC or any payload protocol.

Thanks

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] DMARC2 & SPF Dependency Removal

2023-06-08 Thread Hector Santos


> On Jun 8, 2023, at 10:20 AM, Murray S. Kucherawy  wrote:
> 
> On Thu, Jun 8, 2023 at 6:00 AM Tobias Herkula wrote:
>> My team recently concluded an extensive study on the current use and 
>> performance of DMARC. We analyzed a staggering 3.2 billion emails, and the 
>> insights drawn are quite enlightening. Of these, 2.2 billion emails 
>> (approximately 69%) passed the DMARC check successfully. It's quite an 
>> achievement, reflective of our collective hard work in fostering a safer, 
>> more secure email environment.
>>  
>> 
>> However, upon further analysis, it's evident that a mere 1.6% (or thirty-six 
>> million) of these DMARC-passed emails relied exclusively on the Sender 
>> Policy Framework (SPF) for validation. This is a remarkably low volume 
>> compared to the overall DMARC-passed traffic, raising questions about SPF's 
>> relevancy and the load it imposes on the DNS systems.
>> 
>>  
>> 
>> Given the current use case scenarios and the desire to optimize our 
>> resources, I propose that we explore the possibility of removing the SPF 
>> dependency from DMARC. This step could result in a significant reduction in 
>> DNS load, increased efficiency, and an accurate alignment with our 
>> predominant use cases.
>> 
>> [...]
>> 
> 
> Does anyone have consonant (or dissonant) data? 
> 



Thank you for inviting feedback on Mr. Herkula's interesting DMARC2 proposal, 
focusing on detaching SPF from DMARC's evaluation process. I would like to 
share my thoughts on this matter.

While the principle behind the proposed DMARC2 is valuable, I remain sceptical 
about the effectiveness of separating SPF from DMARC to alleviate DNS load. For 
various reasons, notably the layer issue – SPF being an SMTP protocol rather 
than a payload protocol – SPF is unlikely to be fully discarded.

It's worth recalling that SPF's contribution has served the email community 
well, despite certain node transition issues such as relays and forwards. The 
optional integration of SPF within DMARC1 from the onset might have simplified 
the process, mitigating the challenges associated with the merged results and 
reducing the occurrence of false positives, which in many cases have begun to 
give domains second thoughts about using p=reject|quarantine policies.

A potential DMARC2 proposal should, in my opinion, maintain backward 
compatibility, making SPF an optional requirement.  The real gap with DMARC1 
has been the lack of diversity in policies, and effectively, the DMARC2 
proposal could add a new "policy" that doesn't require SPF evaluation.

For context, we've amassed nearly two decades worth of data on SPF, DKIM, ADSP, 
and DMARC, providing considerable insight into the longevity and effectiveness 
of these measures.

It's crucial to establish that SPF was deliberately designed to, and in many 
cases will, use a HARDFAIL result to preempt payload transfer or its acceptance 
(at DATA). This is precisely why the SUBMITTER add-on ESMTP protocol was 
introduced as part of SenderID – the payload version of SPF – to relay the 
Purported Responsible Address (PRA) at the SMTP MAIL FROM command.

By reviving the SUBMITTER protocol for DMARC purposes, servers/receivers can 
check the DMARC policy at SMTP, offering valuable foresight into the email 
domain's expectations prior to payload delivery. This approach allows for a 
more optimized process, ensuring that SPF is evaluated at MAIL FROM or RCPT TO, 
once the recipient address's acceptability is established per RFC 5321, Section 
3.3 Mail Transactions' recommendations.
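
As an illustrative exchange (hypothetical addresses), SUBMITTER hands the 
receiver the PRA early enough to fetch that domain's DMARC record before the 
payload:

    C: MAIL FROM:<bounces@esp.example> SUBMITTER=author@brand.example
    S: 250 OK

The receiver can now look up _dmarc.brand.example and apply policy before ever 
accepting DATA.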

It's important to note that SPF-compliant servers – as evidenced recently by 
gmail.com – can reject SPF failures at SMTP independent of DMARC. For domains 
using SPF hard failures (-ALL), DMARC is not mandatory, and in many cases a 
hard SPF policy and a hard DMARC policy are mutually exclusive.  Odds are good 
that, if SPF were disabled, DMARC2 would yield the same result. I suggest that 
the proposed DMARC2 revisit the coupling of SOFTFAIL SPF results with DMARC2 
policy analysis.

While DMARC has introduced complexities and some uncertainty, my recent 
analysis reveals that domains without DMARC, but implementing hard SPF 
policies, experienced minimal issues, except for gmail.com domain members. The 
problem appears to be more prominent with ESPs, particularly those with a 
lenient DMARC p=none, since ESPs with strong policies are restricted from 
subscriptions and submissions.

In conclusion, the evolution to a DMARC2 should focus on addressing these 
concerns, potentially including a "rewrite=1" option for mailing lists with 
appropriate permissions. This could make it more palatable to modify the 
author's address, while respecting hard-won email security principles.

Thanks

---
Hector Santos
Santronics Software, Inc.


___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] DMARC2 & SPF Dependency Removal

2023-06-08 Thread Hector Santos
My #1 concern is how the bigger ESPs are contributing to the delivery problems, 
causing chaos for business users and customer-relationship problems with mail 
hosting providers.  I am seeing uncertainty and inconsistency among different 
receivers; ESP gmail.com seems to be the most aggressive, and I am seeing the 
bigger ESPs using different methods, including proprietary ones.

As a side note, I will venture that email overhead has skyrocketed to a new 
level of hard-to-follow headers for tech support. I don't think we can continue 
down this path in the name of trying to make DMARC a part of everyone's domain 
when, in fact, it is becoming more apparent that a domain may be better off 
with no DMARC record; not even p=none may help. Using SPF is good enough, it 
seems.

—
HLS

> On Jun 8, 2023, at 11:02 AM, Barry Leiba  wrote:
> 
> I disagree with the premise (the last sentence of your first paragraph).  
> Broken or ineffective authentication is worse than none, because it causes 
> deliverability problems.  I’d rather have no authentication and rely on other 
> means of filtering.
> 
> Barry
> 
> On Thu, Jun 8, 2023 at 3:54 PM Seth Blank wrote:
>> Participating, while running around so apologies for terseness:
>> 
>> Sophisticated senders do DKIM. The long tail, we're lucky if they do SPF. 
>> Some authentication is better than none.
>> 
>> The problem isn't people evaluating SPF vs DKIM and choosing the easier 
>> option. It's people who have a business, who bolt on email, and then 
>> struggle to authenticate through any means. Again, we're lucky when we get 
>> SPF from them, and I'll still take that over no auth all day every day.
>> 
>> Don't disagree at all with the myriad problems with SPF, and that the goal 
>> should be to eliminate it. I just don't believe we're anywhere close to that 
>> being a reality yet.
>> 
>> The data that led to DMARC showed that SPF and DKIM were both necessary to 
>> determine legitimacy broadly. What would we need to understand now to see if 
>> only DKIM is necessary?
>> 
>> On Thu, Jun 8, 2023 at 3:44 PM Barry Leiba wrote:
>>> See, I don't look at it as "harmed".  Rather, I think they're using "we use 
>>> SPF" as a *reason* not to use DKIM, and I think that *causes* harm.
>>> 
>>> SPF is, as I see it, worse than useless, as it adds no value to domain that 
>>> use DKIM -- any time DKIM fails SPF will also fail -- and actually impedes 
>>> the adoption of DKIM.  Reliance on SPF causes DMARC failures that result in 
>>> deliverability problems for legitimate mail.  I wholeheartedly support 
>>> removal of SPF as an authentication mechanism that DMARC accepts.
>>> 
>>> Barry, as participant
>>> 
>>> On Thu, Jun 8, 2023 at 3:30 PM Seth Blank wrote:
 Participating, I have data that I believe points to a long tail of 
 businesses who predominantly only authenticate on behalf of others using 
 SPF, and would be harmed by such a change. It will take me a little while 
 to confirm and share.
 
 I also know a predominant ccTLD with millions of registrations, that has 
 SPF on roughly 80% of them, but DMARC on barely 5%. I don't have data on 
 DKIM for those, but I assume it's closer to the DMARC penetration than the 
 SPF one. I'll see if I can get this data to share more publically, and 
 also get the DKIM answer.
 
 Of course the goal is aligned dkim with a stated policy, but I don't think 
 the data supports us being anywhere close to that realistically. 
 
 As Chair, this is a valuable conversation to have with real data on 
 problems and opportunities at scale, and am excited to see Tobias share 
 and see what others have to say.
 
 Seth
 
> On Thu, Jun 8, 2023 at 3:21 PM Murray S. Kucherawy wrote:
>> On Thu, Jun 8, 2023 at 6:00 AM Tobias Herkula wrote:
>> My team recently concluded an extensive study on the current use and 
>> performance of DMARC. We analyzed a staggering 3.2 billion emails, and 
>> the insights drawn are quite enlightening. Of these, 2.2 billion emails 
>> (approximately 69%) passed the DMARC check successfully. It's quite an 
>> achievement, reflective of our collective hard work in fostering a 
>> safer, more secure email environment.
>>  
>> 
>> However, upon further analysis, it's evident that a mere 1.6% (or 
>> thirty-six million) of these DMARC-passed emails relied exclusively on 
>> the Sender Policy Framework (SPF) for validation. This is a remarkably 
>> low volume compared to the overall DMARC-passed traffic, raising 
>> questions about SPF's relevancy and the load it imposes on the DNS 
>> systems.
>> 
>>  
>> 
>> Given the current use case scenarios and the desire to 

Bug#1037186: debian-installer: bookworm d-i graphics are not shown on Raptor system

2023-06-07 Thread Hector Oron
Hello

On Wed, 7 Jun 2023, 13:03, Cyril Brulebois wrote:

> Hector Oron  (2023-06-07):
> > and Timothy was able to test that. I could expand the change to ppc64
> > (be) and cdrom targets and test that.
> >
> > Note, the ppc64el installer images are unusable without that change, at
> > least on the Raptor systems
>
> I don't think you answered my question about fbdev.
>

I defer to Timothy since I do not have a machine myself, but since RaptorCS
is a Debian partner I had assumed we, Debian, would like to ship a working
product that works for them.

Thanks for your support

>


Bug#1037186: debian-installer: bookworm d-i graphics are not shown on Raptor system

2023-06-07 Thread Hector Oron
Hello,

  To be honest, I only updated ppc64el netboot image with the following patch:

--- a/build/pkg-lists/netboot/ppc64el.cfg
+++ b/build/pkg-lists/netboot/ppc64el.cfg
@@ -1,5 +1,6 @@
 input-modules-${kernel:Version}
 nic-modules-${kernel:Version}
+fb-modules-${kernel:Version} ?
 usb-modules-${kernel:Version}
 virtio-modules-${kernel:Version} ?

I built the installer posted at:
https://people.debian.org/~zumbi/netboot_ppc64el/

and Timothy was able to test that. I could expand the change to ppc64
(be) and cdrom targets and test that.

Note, the ppc64el installer images are unusable without that change, at
least on the Raptor systems, and since the change only affects power
package lists, I'd consider it as low impact/risk. I know we are close
to a release, and it is very bad timing, however if it can be merged,
it'd be great, otherwise we'll have to wait until the first point
release.

Thanks for your support.

On Wed, 7 Jun 2023 at 12:29, Cyril Brulebois  wrote:
>
> Hi,
>
> Hector Oron Martinez  (2023-06-07):
> > We found latest installer for bookworm is missing ast DRM kernel
> > module, causing graphical failure on ppc64el Raptor machines. Could
> > you please consider the following change or similar for the
> > debian-installer bookworm release.
> >
> >   
> > https://salsa.debian.org/installer-team/debian-installer/-/merge_requests/34
>
> Thanks for the patch, but this is too late. We can consider that for
> unstable, and later on via a point release.
>
> I know for a fact we have fbdev in ppc64el builds, as we've suffered a
> regression there (#1033058, for which someone still needs to find a real
> fix on the kernel side, hint hint wink wink); isn't that generic driver
> sufficient to get some basic output?
>
> Also, I don't understand what's going on with the build/Makefile part:
>  1. This is a temporary workaround for 3 architectures that run into
> issues under UEFI/SB, which isn't quite relevant for ppc64el. (The
> idea was to avoid having to re-upload linux and linux-signed-* for
> just a few additions to the fb-modules udeb.)
>  2. If you look at the following commit, I suppose you'll get the very
> same impression as I have: this merge request is very likely to
> trigger a direct FTBFS on ppc64el.
>   
> https://salsa.debian.org/installer-team/debian-installer/-/commit/32e4d58c263fc5454067a7217ee7103cfb12bc1b
>
>
> Cheers,
> --
> Cyril Brulebois (k...@debian.org)<https://debamax.com/>
> D-I release manager -- Release team member -- Freelance Consultant



-- 
 Héctor Orón  -.. . -... .. .- -.   -.. . ...- . .-.. --- .--. . .-.



Bug#1037186: debian-installer: bookworm d-i graphics are not shown on Raptor system

2023-06-07 Thread Hector Oron Martinez
Source: debian-installer
Version: 20230526
Severity: important
X-Debbugs-Cc: tpear...@raptorcs.com, zu...@debian.org

Hello,

We found latest installer for bookworm is missing ast DRM kernel module, 
causing graphical failure on ppc64el Raptor machines. Could you please consider 
the following change or similar for the debian-installer bookworm release.

  https://salsa.debian.org/installer-team/debian-installer/-/merge_requests/34

Regards

-- System Information:
Debian Release: 12.0
  APT prefers unstable
  APT policy: (500, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)

Kernel: Linux 6.1.0-9-amd64 (SMP w/8 CPU threads; PREEMPT)
Kernel taint flags: TAINT_USER, TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=ca_ES.UTF-8, LC_CTYPE=ca_ES.UTF-8 (charmap=UTF-8), 
LANGUAGE=ca_ES:ca
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled



[jira] [Commented] (FLINK-25920) Allow receiving updates of CommittableSummary

2023-06-05 Thread Hector Rios (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729290#comment-17729290
 ] 

Hector Rios commented on FLINK-25920:
-

Hello all.

To add more context to this issue, I was working with a customer who was 
experiencing this error. I found the content in the following issue really 
helpful.

https://issues.apache.org/jira/browse/FLINK-30238

 

In the specific case of this customer, the issue was being caused by including 
--drain on their call to stop-with-savepoint. I was able to recreate the issue 
using a very simple job reading from a Kafka source and sinking back to Kafka. 
Unfortunately, it was not consistent across versions. I was able to reproduce 
it on 1.15.3 but not on 1.15.4. Granted, it was a quick test, and I wanted to 
do a more thorough test to reproduce the issue consistently.

One interesting wrinkle on this one is that it occurs in 1.15.x, but the same 
job deployed into 1.14.x does not produce the issue.

Thanks.

 

 

> Allow receiving updates of CommittableSummary
> -
>
> Key: FLINK-25920
> URL: https://issues.apache.org/jira/browse/FLINK-25920
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream, Connectors / Common
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Fabian Paul
>Priority: Major
>
> In the case of unaligned checkpoints, it might happen that the checkpoint 
> barrier overtakes the records and an empty committable summary is emitted 
> that needs to be correct at a later point when the records arrive.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HDFS-17026) RBF: NamenodeHeartbeatService should update JMX report with configurable frequency

2023-06-03 Thread Hector Sandoval Chaverri (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729013#comment-17729013
 ] 

Hector Sandoval Chaverri commented on HDFS-17026:
-

[~hexiaoqiao] added PR for branch-3.3: 
[https://github.com/apache/hadoop/pull/5714]

Thanks!

 

> RBF: NamenodeHeartbeatService should update JMX report with configurable 
> frequency
> --
>
> Key: HDFS-17026
> URL: https://issues.apache.org/jira/browse/HDFS-17026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Hector Sandoval Chaverri
>Assignee: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-17026-branch-3.3.patch
>
>
> The NamenodeHeartbeatService currently calls each of the Namenode's JMX 
> endpoint every time it wakes up (default value is every 5 seconds).
> In a cluster with 40 routers, we have observed service degradation on some of 
> the  Namenodes, since the JMX request obtains Datanode status and blocks 
> other RPC requests. However, JMX report data doesn't seem to be used for 
> critical paths on the routers.
> We should configure the NamenodeHeartbeatService so it updates the JMX 
> reports on a slower frequency than the Namenode states or to disable the 
> reports completely.
> The class calls out the JMX request being optional even though there is no 
> implementation to turn it off:
> {noformat}
> // Read the stats from JMX (optional)
> updateJMXParameters(webAddress, report);{noformat}
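
The fix amounts to skipping the JMX call unless a configured interval has 
elapsed. A minimal sketch of that logic in Python (the interval knob is 
invented for illustration, not an actual Hadoop property):

    import time

    JMX_UPDATE_INTERVAL_SECS = 300.0   # hypothetical knob; <= 0 disables updates
    _last_jmx_update = float("-inf")

    def update_jmx_parameters(web_address, report):
        pass  # stand-in for the actual JMX fetch against the Namenode

    def maybe_update_jmx(web_address, report):
        """Refresh the JMX report only when the configured interval has elapsed."""
        global _last_jmx_update
        if JMX_UPDATE_INTERVAL_SECS <= 0:
            return  # JMX reports disabled entirely
        now = time.monotonic()
        if now - _last_jmx_update >= JMX_UPDATE_INTERVAL_SECS:
            update_jmx_parameters(web_address, report)
            _last_jmx_update = now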



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



Re: [Launchpad-reviewers] [Merge] ~lool/git-build-recipe:fetch-pristine-tar-tags into git-build-recipe:master

2023-06-01 Thread Hector CAO
LGTM, +1
Tested on my local machine, the tags needed by pristine-tar are fetched 
correctly
-- 
https://code.launchpad.net/~lool/git-build-recipe/+git/git-build-recipe/+merge/443942
Your team Launchpad code reviewers is requested to review the proposed merge of 
~lool/git-build-recipe:fetch-pristine-tar-tags into git-build-recipe:master.


___
Mailing list: https://launchpad.net/~launchpad-reviewers
Post to : launchpad-reviewers@lists.launchpad.net
Unsubscribe : https://launchpad.net/~launchpad-reviewers
More help   : https://help.launchpad.net/ListHelp


Re: [Launchpad-reviewers] [Merge] ~lool/git-build-recipe:fallback-on-pristine-tar-checkout-failures into git-build-recipe:master

2023-06-01 Thread Hector CAO
Review: Needs Information

great work, some questions

Diff comments:

> diff --git a/gitbuildrecipe/deb_util.py b/gitbuildrecipe/deb_util.py
> index d0a..4724de6 100644
> --- a/gitbuildrecipe/deb_util.py
> +++ b/gitbuildrecipe/deb_util.py
> @@ -68,11 +69,20 @@ def extract_upstream_tarball(path, package, version, 
> dest_dir):
>  finally:
>  pristine_tar_list.wait()
>  if dest_filename is not None:
> -subprocess.check_call(
> -["pristine-tar", "checkout",
> - os.path.abspath(os.path.join(dest_dir, dest_filename))],
> -cwd=path)
> -else:
> +try:
> +subprocess.check_call(
> +["pristine-tar", "checkout",
> + os.path.abspath(os.path.join(dest_dir, dest_filename))],
> +cwd=path)
> +# ideally we'd triage between pristine-tar issues
> +except subprocess.CalledProcessError as e:
> +print("pristine-tar exception")
> +if not fallback:
> +raise e
> +if os.path.exists(dest_filename):

is it worth moving the file removal a little higher (before raising the 
exception)?

> +os.remove(dest_filename)

is it better to use os.path.abspath(os.path.join(dest_dir, dest_filename)) 
instead of dest_filename ?

> +# no pristine-tar data or pristine-tar failed
> +if not os.path.exists(dest_filename):
>  tag_names = ["upstream/%s" % version, "upstream-%s" % version]
>  git_tag_list = subprocess.Popen(
>  ["git", "tag"], stdout=subprocess.PIPE, cwd=path)


-- 
https://code.launchpad.net/~lool/git-build-recipe/+git/git-build-recipe/+merge/443943
Your team Launchpad code reviewers is requested to review the proposed merge of 
~lool/git-build-recipe:fallback-on-pristine-tar-checkout-failures into 
git-build-recipe:master.


___
Mailing list: https://launchpad.net/~launchpad-reviewers
Post to : launchpad-reviewers@lists.launchpad.net
Unsubscribe : https://launchpad.net/~launchpad-reviewers
More help   : https://help.launchpad.net/ListHelp


[jira] [Commented] (HDFS-17026) RBF: NamenodeHeartbeatService should update JMX report with configurable frequency

2023-05-31 Thread Hector Sandoval Chaverri (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17728021#comment-17728021
 ] 

Hector Sandoval Chaverri commented on HDFS-17026:
-

[~hexiaoqiao] would you be able to commit the attached patch to branch-3.3?

> RBF: NamenodeHeartbeatService should update JMX report with configurable 
> frequency
> --
>
> Key: HDFS-17026
> URL: https://issues.apache.org/jira/browse/HDFS-17026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Hector Sandoval Chaverri
>Assignee: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: HDFS-17026-branch-3.3.patch
>
>
> The NamenodeHeartbeatService currently calls each of the Namenode's JMX 
> endpoint every time it wakes up (default value is every 5 seconds).
> In a cluster with 40 routers, we have observed service degradation on some of 
> the  Namenodes, since the JMX request obtains Datanode status and blocks 
> other RPC requests. However, JMX report data doesn't seem to be used for 
> critical paths on the routers.
> We should configure the NamenodeHeartbeatService so it updates the JMX 
> reports on a slower frequency than the Namenode states or to disable the 
> reports completely.
> The class calls out the JMX request being optional even though there is no 
> implementation to turn it off:
> {noformat}
> // Read the stats from JMX (optional)
> updateJMXParameters(webAddress, report);{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-17026) RBF: NamenodeHeartbeatService should update JMX report with configurable frequency

2023-05-30 Thread Hector Sandoval Chaverri (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17727661#comment-17727661
 ] 

Hector Sandoval Chaverri commented on HDFS-17026:
-

[~elgoiri] I added a patch for branch-3.3 if you can take a look as well

> RBF: NamenodeHeartbeatService should update JMX report with configurable 
> frequency
> --
>
> Key: HDFS-17026
> URL: https://issues.apache.org/jira/browse/HDFS-17026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-17026-branch-3.3.patch
>
>
> The NamenodeHeartbeatService currently calls each of the Namenode's JMX 
> endpoint every time it wakes up (default value is every 5 seconds).
> In a cluster with 40 routers, we have observed service degradation on some of 
> the  Namenodes, since the JMX request obtains Datanode status and blocks 
> other RPC requests. However, JMX report data doesn't seem to be used for 
> critical paths on the routers.
> We should configure the NamenodeHeartbeatService so it updates the JMX 
> reports on a slower frequency than the Namenode states or to disable the 
> reports completely.
> The class calls out the JMX request being optional even though there is no 
> implementation to turn it off:
> {noformat}
> // Read the stats from JMX (optional)
> updateJMXParameters(webAddress, report);{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-17026) RBF: NamenodeHeartbeatService should update JMX report with configurable frequency

2023-05-30 Thread Hector Sandoval Chaverri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Sandoval Chaverri updated HDFS-17026:

Attachment: HDFS-17026-branch-3.3.patch

> RBF: NamenodeHeartbeatService should update JMX report with configurable 
> frequency
> --
>
> Key: HDFS-17026
> URL: https://issues.apache.org/jira/browse/HDFS-17026
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-17026-branch-3.3.patch
>
>
> The NamenodeHeartbeatService currently calls each of the Namenode's JMX 
> endpoint every time it wakes up (default value is every 5 seconds).
> In a cluster with 40 routers, we have observed service degradation on some of 
> the  Namenodes, since the JMX request obtains Datanode status and blocks 
> other RPC requests. However, JMX report data doesn't seem to be used for 
> critical paths on the routers.
> We should configure the NamenodeHeartbeatService so it updates the JMX 
> reports on a slower frequency than the Namenode states or to disable the 
> reports completely.
> The class calls out the JMX request being optional even though there is no 
> implementation to turn it off:
> {noformat}
> // Read the stats from JMX (optional)
> updateJMXParameters(webAddress, report);{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



Re: [Koha] Fwd: Koha 22.11.05 released

2023-05-29 Thread Hector Gonzalez Jaime

I see this behavior in two unrelated installations.  Both with overdues.

On 5/27/23 17:47, Tomas Cohen Arazi wrote:

Update using the packages.

On Sat, 27 May 2023 16:37, Hector Gonzalez Jaime wrote:


Sorry, I didn't copy the list:

Hi, it is strange, I was testing in english, anyway,
koha-translate does
not list english as it is the default language, and doesn't let me
update it.

The system also had another language, which was updated but it still
gives the same set of errors for both:

[Sat May 27 13:27:32.849826 2023] [cgi:error] [pid 6125] [client
192.168.1.1:50078] AH01215: Template process failed: undef error - :
filter not found at /usr/share/koha/lib/C4/Templates.pm line 127.:
/usr/share/koha/intranet/cgi-bin/circ/branchoverdues.pl, referer:
http://bibliotecacho-intra.locales/cgi-bin/koha/circ/circulation-home.pl
[Sat May 27 13:27:32.902038 2023] [cgi:error] [pid 6125] [client
192.168.1.1:50078] End of script output before headers:
branchoverdues.pl, referer:
http://bibliotecacho-intra.locales/cgi-bin/koha/circ/circulation-home.pl

This is a test system, so I'm certain there are no other missing
messages in the logs.

I'll try deleting koha and installing it without an upgrade to see if
this still is a problem or if it needs overdue fines to be triggered.


On 5/27/23 01:06, Mason James wrote:
> hi Hector
>
> this problem tests OK for me, on a 22.11.06 KTD instance
>
>
> some suggestions...
>
> 1/ switch your language to 'en', and test problem again
>
> 2/ manually rebuild your koha languages, and test problem again
>
>
>   $ sudo koha-translate --list
>   en
>   fr-FR
>
>
>   $ sudo koha-translate -v --update en
>   $ sudo koha-translate -v --update fr-FR
>
>
>
> On 27/05/23 5:27 am, Hector Gonzalez Jaime wrote:
>> Hello, thanks for all the hard work!
>>
>> I just tried this version on a development server, and get a 500
>> error trying to check overdue fines, this is in the logs:
>>
>> "GET /intranet/circ/branchoverdues.pl
<http://branchoverdues.pl> HTTP/1.1" 500
>>
>> Template process failed: undef error - : filter not found at
>> /usr/share/koha/lib/C4/Templates.pm line 127.
>>
>> It worked well with 22.11.05, and got broken with 22.11.06
>>
>>
>> On 5/26/23 09:00, Renvoize, Martin wrote:
>>> Hello, Bonjour, Kia ora,
>>>
>>> The Koha community is pleased to announce the release of version
>>> 22.11.05.
>>>
>>> This is a security and bugfix maintenance release, including 160
>>> bugfixes!
>>>
>>> Full release notes are available here:
>>> https://koha-community.org/koha-22-11-06-released/
>>>
>>> This is our last release as the stable branch maintainers,
with the
>>> community heading into the next cycle imminently. However,
22.11.x
>>> will be
>>> in the capable hands of Pedro and Matt for the next cycle having
>>> been voted
>>> in as the oldstable maintainers 
>>>
>>> Thanks to anybody involved! 
>>>
>>>
>>> Martin Renvoize, MPhys (Hons)
>>>
>>> Head of Development and Community Engagement
>>>
>>>
>>>
>>> E: martin.renvo...@ptfs-europe.com
>>>
>>> P: +44 (0) 1483 378728
>>>
>>> M: +44 (0) 7725 985636
>>>
>>> www.ptfs-europe.com <http://www.ptfs-europe.com>
>>> ___
>>>
>>> Koha mailing list http://koha-community.org
>>> Koha@lists.katipo.co.nz
>>> Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha
>>
>
-- 
Hector Gonzalez

ca...@genac.org

___

Koha mailing list http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


--
Hector Gonzalez
ca...@genac.org
___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


Re: [Koha] Fwd: Koha 22.11.05 released

2023-05-29 Thread Hector Gonzalez Jaime

I have updated with the packages, it's now on 22.11.06-3

It only fails if there are pending fines; I removed the pending fines, 
and it shows the page.  But that page is only for showing overdues.  Do 
you have overdues in your test system?



On 5/27/23 17:47, Tomas Cohen Arazi wrote:

Update using the packages.

On Sat, 27 May 2023 16:37, Hector Gonzalez Jaime wrote:


Sorry, I didn't copy the list:

Hi, it is strange, I was testing in english, anyway,
koha-translate does
not list english as it is the default language, and doesn't let me
update it.

The system also had another language, which was updated but it still
gives the same set of errors for both:

[Sat May 27 13:27:32.849826 2023] [cgi:error] [pid 6125] [client
192.168.1.1:50078] AH01215: Template process failed: undef error - :
filter not found at /usr/share/koha/lib/C4/Templates.pm line 127.:
/usr/share/koha/intranet/cgi-bin/circ/branchoverdues.pl, referer:
http://bibliotecacho-intra.locales/cgi-bin/koha/circ/circulation-home.pl
[Sat May 27 13:27:32.902038 2023] [cgi:error] [pid 6125] [client
192.168.1.1:50078] End of script output before headers:
branchoverdues.pl, referer:
http://bibliotecacho-intra.locales/cgi-bin/koha/circ/circulation-home.pl

This is a test system, so I'm certain there are no other relevant
messages missing from the logs.

I'll try deleting Koha and installing it fresh, without an upgrade, to see if
this is still a problem or if it needs overdue fines to be triggered.


On 5/27/23 01:06, Mason James wrote:
> hi Hector
>
> this problem tests OK for me, on a 22.11.06 KTD instance
>
>
> some suggestions...
>
> 1/ switch your language to 'en', and test problem again
>
> 2/ manually rebuild your koha languages, and test problem again
>
>
>   $ sudo koha-translate --list
>   en
>   fr-FR
>
>
>   $ sudo koha-translate -v --update en
>   $ sudo koha-translate -v --update fr-FR
>
>
>
> On 27/05/23 5:27 am, Hector Gonzalez Jaime wrote:
>> Hello, thanks for all the hard work!
>>
>> I just tried this version on a development server, and get a 500
>> error trying to check overdue fines; this is in the logs:
>>
>> "GET /intranet/circ/branchoverdues.pl HTTP/1.1" 500
>>
>> Template process failed: undef error - : filter not found at
>> /usr/share/koha/lib/C4/Templates.pm line 127.
>>
>> It worked well with 22.11.05, and got broken with 22.11.06
>>
>>
>> On 5/26/23 09:00, Renvoize, Martin wrote:
>>> Hello, Bonjour, Kia ora,
>>>
>>> The Koha community is pleased to announce the release of version
>>> 22.11.06.
>>>
>>> This is a security and bugfix maintenance release, including 160
>>> bugfixes!
>>>
>>> Full release notes are available here:
>>> https://koha-community.org/koha-22-11-06-released/
>>>
>>> This is our last release as the stable branch maintainers, with the
>>> community heading into the next cycle imminently. However, 22.11.x
>>> will be in the capable hands of Pedro and Matt for the next cycle,
>>> having been voted in as the oldstable maintainers 
>>>
>>> Thanks to everybody involved! 
>>>
>>>
>>> Martin Renvoize, MPhys (Hons)
>>>
>>> Head of Development and Community Engagement
>>>
>>>
>>>
>>> E: martin.renvo...@ptfs-europe.com
>>>
>>> P: +44 (0) 1483 378728
>>>
>>> M: +44 (0) 7725 985636
>>>
>>> www.ptfs-europe.com
>>> ___
>>>
>>> Koha mailing list http://koha-community.org
>>> Koha@lists.katipo.co.nz
>>> Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha
>>
>
-- 
Hector Gonzalez

ca...@genac.org

___

Koha mailing list http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


--
Hector Gonzalez
ca...@genac.org
___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


[ceph-users] Re: BlueStore fragmentation woes

2023-05-29 Thread Hector Martin
On 29/05/2023 22.26, Igor Fedotov wrote:
> So fragmentation score calculation was improved recently indeed, see 
> https://github.com/ceph/ceph/pull/49885
> 
> 
> And yeah one can see some fragmentation in allocations for the first two
> OSDs. It doesn't look as dramatic as the fragmentation scores suggest, though.
> 
> 
> Additionally you might want to collect a free-extents dump using the 'ceph
> tell osd.N bluestore allocator dump block' command and do more
> analysis on these data.
> 
> E.g. I'd recommend building something like a histogram showing the amount of
> chunks in specific size ranges:
> 
> [1-4K]: N1 chunks
> 
> (4K-16]: N2 chunks
> 
> (16K-64K): N3
> 
> ...
> 
> [16M-inf) : Nn chunks
> 
> 
> This should be even more informative about the fragmentation state -
> particularly if observed as it evolves over time.
> 
> Looking for volunteers to write a script for building such a histogram... ;)

I'm up for that, once I get through some other cluster maintenance I
need to deal with first :)

Backfill is almost done and I was finally able to destroy two OSDs; I'll
be doing a bunch of restructuring in the coming weeks. I can probably
get the script done partway through doing this, so I can see how the
distributions evolve over a bunch of data movement.
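
Something like this is probably where I'd start (just a sketch: it
assumes the allocator dump returns JSON with an "extents" list whose
entries carry a "length" field, so the key names need checking against
the real output):

  import json
  import subprocess
  import sys

  # Bucket one OSD's free extents into Igor's size ranges:
  # [1-4K], (4K-16K], ..., [16M-inf).
  osd = sys.argv[1]  # e.g. "osd.13"
  raw = subprocess.check_output(
      ["ceph", "tell", osd, "bluestore", "allocator", "dump", "block"])
  extents = json.loads(raw)["extents"]  # assumed key name

  bounds = [4096 * 4 ** i for i in range(7)]  # 4K, 16K, ..., 16M
  buckets = [0] * (len(bounds) + 1)
  for ext in extents:
      v = ext["length"]  # assumed key name; may be "0x..." or an int
      length = int(v, 0) if isinstance(v, str) else int(v)
      i = 0
      while i < len(bounds) and length > bounds[i]:
          i += 1
      buckets[i] += 1

  labels = ["<= 4K"] + [
      "%dK-%dK" % (bounds[i] // 1024, bounds[i + 1] // 1024)
      for i in range(len(bounds) - 1)
  ] + ["> 16M"]
  for label, count in zip(labels, buckets):
      print("%12s: %d chunks" % (label, count))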

> 
> 
> Thanks,
> 
> Igor
> 
> 
> On 28/05/2023 08:31, Hector Martin wrote:
>> So chiming in, I think something is definitely wrong with at *least* the
>> frag score.
>>
>> Here's what happened so far:
>>
>> 1. I had 8 OSDs (all 8T HDDs)
>> 2. I added 2 more (osd.0,1) , with Quincy defaults
>> 3. I marked 2 old ones out (the ones that seemed to be struggling the
>> most with IOPS)
>> 4. I added 2 more (osd.2,3), but this time I had previously set
>> bluestore_min_alloc_size_hdd to 16K as an experiment
>>
>> This has all happened in the space of a ~week. That means there was data
>> movement into the first 2 new OSDs, then before that completed I added 2
>> new OSDs. So I would expect some data thrashing on the first 2, but
>> nothing extreme.
>>
>> The fragmentation scores for the 4 new OSDs are, respectively:
>>
>> 0.746, 0.835, 0.160, 0.067
>>
>> That seems ridiculous for the first two, it's only been a week. The
>> newest two seem in better shape, though those mostly would've seen only
>> data moving in, not out. The rebalance isn't done yet, but it's almost
>> done and all 4 OSDs have a similar fullness level at this time.
>>
>> Looking at alloc stats:
>>
>> ceph-0)  allocation stats probe 6: cnt: 2219302 frags: 2328003 size:
>> 1238454677504
>> ceph-0)  probe -1: 1848577,  1970325, 1022324588544
>> ceph-0)  probe -2: 848301,  862622, 505329963008
>> ceph-0)  probe -6: 2187448,  2187448, 1055241568256
>> ceph-0)  probe -14: 0,  0, 0
>> ceph-0)  probe -22: 0,  0, 0
>>
>> ceph-1)  allocation stats probe 6: cnt: 1882396 frags: 1947321 size:
>> 1054829641728
>> ceph-1)  probe -1: 2212293,  2345923, 1215418728448
>> ceph-1)  probe -2: 1471623,  1525498, 826984652800
>> ceph-1)  probe -6: 2095298,  2095298, 165933312
>> ceph-1)  probe -14: 0,  0, 0
>> ceph-1)  probe -22: 0,  0, 0
>>
>> ceph-2)  allocation stats probe 3: cnt: 2760200 frags: 2760200 size:
>> 1554513903616
>> ceph-2)  probe -1: 2584046,  2584046, 1498140393472
>> ceph-2)  probe -3: 1696921,  1696921, 869424496640
>> ceph-2)  probe -7: 0,  0, 0
>> ceph-2)  probe -11: 0,  0, 0
>> ceph-2)  probe -19: 0,  0, 0
>>
>> ceph-3)  allocation stats probe 3: cnt: 2544818 frags: 2544818 size:
>> 1432225021952
>> ceph-3)  probe -1: 2688015,  2688015, 1515260739584
>> ceph-3)  probe -3: 1086875,  1086875, 622025424896
>> ceph-3)  probe -7: 0,  0, 0
>> ceph-3)  probe -11: 0,  0, 0
>> ceph-3)  probe -19: 0,  0, 0
>>
>> So OSDs 2 and 3 (the latest ones to be added, note that these 4 new OSDs
>> are 0-3 since those IDs were free) are in good shape, but 0 and 1 are
>> already suffering from at least some fragmentation of objects, which is
>> a bit worrying when they are only ~70% full right now and only a week old.
>>
>> I did delete a couple million small objects during the rebalance to try
>> to reduce load (I had some nasty directories), but that was cumulatively
>> only about 60GB of data. So while that could explain a high frag score
>> if there are now a million little holes in the free space map of the
>> OSDs (how is it calculated?), it should not actually cause new data
>> moving in to end up fragmented since there should be plenty of
>> unfragmented free space going around still.

[ceph-users] Re: Recoveries without any misplaced objects?

2023-05-29 Thread Hector Martin
On 29/05/2023 20.55, Anthony D'Atri wrote:
> Check the uptime for the OSDs in question

I restarted all my OSDs within the past 10 days or so. Maybe OSD
restarts are somehow breaking these stats?

> 
>> On May 29, 2023, at 6:44 AM, Hector Martin  wrote:
>>
>> Hi,
>>
>> I'm watching a cluster finish a bunch of backfilling, and I noticed that
>> quite often PGs end up with zero misplaced objects, even though they are
>> still backfilling.
>>
>> Right now the cluster is down to 6 backfilling PGs:
>>
>>  data:
>>volumes: 1/1 healthy
>>pools:   6 pools, 268 pgs
>>objects: 18.79M objects, 29 TiB
>>usage:   49 TiB used, 25 TiB / 75 TiB avail
>>pgs: 262 active+clean
>> 6   active+remapped+backfilling
>>
>> But there are no misplaced objects, and the misplaced column in `ceph pg
>> dump` is zero for all PGs.
>>
>> If I do a `ceph pg dump_json`, I can see `num_objects_recovered`
>> increasing for these PGs... but the misplaced count is still 0.
>>
>> Is there something else that would cause recoveries/backfills other than
>> misplaced objects? Or perhaps there is a bug somewhere causing the
>> misplaced object count to be misreported as 0 sometimes?
>>
>> # ceph -v
>> ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy
>> (stable)
>>
>> - Hector
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
> 
> 

- Hector
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Recoveries without any misplaced objects?

2023-05-29 Thread Hector Martin
Hi,

I'm watching a cluster finish a bunch of backfilling, and I noticed that
quite often PGs end up with zero misplaced objects, even though they are
still backfilling.

Right now the cluster is down to 6 backfilling PGs:

  data:
volumes: 1/1 healthy
pools:   6 pools, 268 pgs
objects: 18.79M objects, 29 TiB
usage:   49 TiB used, 25 TiB / 75 TiB avail
pgs: 262 active+clean
 6   active+remapped+backfilling

But there are no misplaced objects, and the misplaced column in `ceph pg
dump` is zero for all PGs.

If I do a `ceph pg dump_json`, I can see `num_objects_recovered`
increasing for these PGs... but the misplaced count is still 0.

Is there something else that would cause recoveries/backfills other than
misplaced objects? Or perhaps there is a bug somewhere causing the
misplaced object count to be misreported as 0 sometimes?
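
For reference, this is roughly how I'm pulling those numbers out (a
sketch; the field names assume the usual layout of the `ceph pg
dump_json` output and may need adjusting for your release):

  import json
  import subprocess

  # Print misplaced vs. recovered counters for every backfilling PG.
  raw = subprocess.check_output(["ceph", "pg", "dump_json"])
  pg_stats = json.loads(raw)["pg_map"]["pg_stats"]  # assumed layout
  for pg in pg_stats:
      if "backfilling" in pg["state"]:
          s = pg["stat_sum"]
          print(pg["pgid"], pg["state"],
                "misplaced:", s["num_objects_misplaced"],
                "recovered:", s["num_objects_recovered"])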

# ceph -v
ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy
(stable)

- Hector
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: BlueStore fragmentation woes

2023-05-27 Thread Hector Martin
So chiming in, I think something is definitely wrong with at *least* the
frag score.

Here's what happened so far:

1. I had 8 OSDs (all 8T HDDs)
2. I added 2 more (osd.0,1) , with Quincy defaults
3. I marked 2 old ones out (the ones that seemed to be struggling the
most with IOPS)
4. I added 2 more (osd.2,3), but this time I had previously set
bluestore_min_alloc_size_hdd to 16K as an experiment

This has all happened in the space of a ~week. That means there was data
movement into the first 2 new OSDs, then before that completed I added 2
new OSDs. So I would expect some data thrashing on the first 2, but
nothing extreme.

The fragmentation scores for the 4 new OSDs are, respectively:

0.746, 0.835, 0.160, 0.067

That seems ridiculous for the first two, it's only been a week. The
newest two seem in better shape, though those mostly would've seen only
data moving in, not out. The rebalance isn't done yet, but it's almost
done and all 4 OSDs have a similar fullness level at this time.

Looking at alloc stats:

ceph-0)  allocation stats probe 6: cnt: 2219302 frags: 2328003 size:
1238454677504
ceph-0)  probe -1: 1848577,  1970325, 1022324588544
ceph-0)  probe -2: 848301,  862622, 505329963008
ceph-0)  probe -6: 2187448,  2187448, 1055241568256
ceph-0)  probe -14: 0,  0, 0
ceph-0)  probe -22: 0,  0, 0

ceph-1)  allocation stats probe 6: cnt: 1882396 frags: 1947321 size:
1054829641728
ceph-1)  probe -1: 2212293,  2345923, 1215418728448
ceph-1)  probe -2: 1471623,  1525498, 826984652800
ceph-1)  probe -6: 2095298,  2095298, 165933312
ceph-1)  probe -14: 0,  0, 0
ceph-1)  probe -22: 0,  0, 0

ceph-2)  allocation stats probe 3: cnt: 2760200 frags: 2760200 size:
1554513903616
ceph-2)  probe -1: 2584046,  2584046, 1498140393472
ceph-2)  probe -3: 1696921,  1696921, 869424496640
ceph-2)  probe -7: 0,  0, 0
ceph-2)  probe -11: 0,  0, 0
ceph-2)  probe -19: 0,  0, 0

ceph-3)  allocation stats probe 3: cnt: 2544818 frags: 2544818 size:
1432225021952
ceph-3)  probe -1: 2688015,  2688015, 1515260739584
ceph-3)  probe -3: 1086875,  1086875, 622025424896
ceph-3)  probe -7: 0,  0, 0
ceph-3)  probe -11: 0,  0, 0
ceph-3)  probe -19: 0,  0, 0

So OSDs 2 and 3 (the latest ones to be added, note that these 4 new OSDs
are 0-3 since those IDs were free) are in good shape, but 0 and 1 are
already suffering from at least some fragmentation of objects, which is
a bit worrying when they are only ~70% full right now and only a week old.

I did delete a couple million small objects during the rebalance to try
to reduce load (I had some nasty directories), but that was cumulatively
only about 60GB of data. So while that could explain a high frag score
if there are now a million little holes in the free space map of the
OSDs (how is it calculated?), it should not actually cause new data
moving in to end up fragmented since there should be plenty of
unfragmented free space going around still.

I am now restarting OSDs 0 and 1 to see whether that makes the frag
score go down over time. I will do further analysis later with the raw
bluestore free space map, since I still have a bunch of rebalancing and
moving data around planned (I'm moving my cluster to new machines).

On 26/05/2023 00.29, Igor Fedotov wrote:
> Hi Hector,
> 
> I can advise two tools for further fragmentation analysis:
> 
> 1) One might want to use ceph-bluestore-tool's free-dump command to get 
> a list of free chunks for an OSD and try to analyze whether it's really 
> highly fragmented and lacks long enough extents. free-dump just returns 
> a list of extents in JSON format; I can take a look at the output if it's
> shared...
> 
> 2) You might want to look for allocation probs in OSD logs and see how 
> fragmentation in allocated chunks has evolved.
> 
> E.g.
> 
> allocation stats probe 33: cnt: 8148921 frags: 10958186 size: 1704348508>
> probe -1: 35168547,  46401246, 1199516209152
> probe -3: 27275094,  35681802, 200121712640
> probe -5: 34847167,  52539758, 271272230912
> probe -9: 44291522,  60025613, 523997483008
> probe -17: 10646313,  10646313, 155178434560
> 
> The first probe refers to the last day while others match days (or 
> rather probes) -1, -3, -5, -9, -17
> 
> 'cnt' column represents the amount of allocations performed in the 
> previous 24 hours and 'frags' one shows amount of fragments in the 
> resulted allocations. So significant mismatch between frags and cnt 
> might indicate some issues with high fragmentation indeed.
> 
> Apart from retrospective analysis you might also want to see how OSD behavior
> changes after a reboot - e.g. whether a rebooted OSD produces less
> fragmentation... which in turn might indicate some issues with the BlueStore
> allocator.
> 
> Just FYI: allocation probe printing interval is controlled by 
> bluestore_alloc_stats_dump_interval parameter.
> 
> 
> Thanks,
> 
> Igor
> 
> 
> 
> On 24/05/2023
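
P.S. A quick way to eyeball the cnt-vs-frags mismatch described above is
something like this over an OSD log (a sketch, keyed to the probe line
format quoted above):

  import re
  import sys

  # Report fragments-per-allocation for each "allocation stats" line,
  # e.g. "... allocation stats probe 6: cnt: 2219302 frags: 2328003 ...".
  pat = re.compile(r"allocation stats probe \d+: cnt: (\d+) frags: (\d+)")
  for line in sys.stdin:
      m = pat.search(line)
      if m:
          cnt, frags = map(int, m.groups())
          print("frags/cnt = %.3f  |  %s" % (frags / cnt, line.strip()))

A ratio well above 1.0 on the recent probes would be the red flag.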

[Koha] Fwd: Koha 22.11.05 released

2023-05-27 Thread Hector Gonzalez Jaime

Sorry, I didn't copy the list:

Hi, it is strange: I was testing in English. In any case, koha-translate does 
not list English, since it is the default language, and doesn't let me 
update it.


The system also had another language, which was updated but it still 
gives the same set of errors for both:


[Sat May 27 13:27:32.849826 2023] [cgi:error] [pid 6125] [client 
192.168.1.1:50078] AH01215: Template process failed: undef error - : 
filter not found at /usr/share/koha/lib/C4/Templates.pm line 127.: 
/usr/share/koha/intranet/cgi-bin/circ/branchoverdues.pl, referer: 
http://bibliotecacho-intra.locales/cgi-bin/koha/circ/circulation-home.pl
[Sat May 27 13:27:32.902038 2023] [cgi:error] [pid 6125] [client 
192.168.1.1:50078] End of script output before headers: 
branchoverdues.pl, referer: 
http://bibliotecacho-intra.locales/cgi-bin/koha/circ/circulation-home.pl


This is a test system, so I'm certain there are no other relevant 
messages missing from the logs.


I'll try deleting Koha and installing it fresh, without an upgrade, to see if 
this is still a problem or if it needs overdue fines to be triggered.



On 5/27/23 01:06, Mason James wrote:

hi Hector

this problem tests OK for me, on a 22.11.06 KTD instance


some suggestions...

1/ switch your language to 'en', and test problem again

2/ manually rebuild your koha languages, and test problem again


  $ sudo koha-translate --list
  en
  fr-FR


  $ sudo koha-translate -v --update en
  $ sudo koha-translate -v --update fr-FR



On 27/05/23 5:27 am, Hector Gonzalez Jaime wrote:

Hello, thanks for all the hard work!

I just tried this version on a development server, and get a 500 
error trying to check overdue fines; this is in the logs:


"GET /intranet/circ/branchoverdues.pl HTTP/1.1" 500

Template process failed: undef error - : filter not found at 
/usr/share/koha/lib/C4/Templates.pm line 127.


It worked well with 22.11.05, and got broken with 22.11.06


On 5/26/23 09:00, Renvoize, Martin wrote:

Hello, Bonjour, Kia ora,

The Koha community is pleased to announce the release of version 
22.11.06.


This is a security and bugfix maintenance release, including 160 
bugfixes!


Full release notes are available here:
https://koha-community.org/koha-22-11-06-released/

This is our last release as the stable branch maintainers, with the
community heading into the next cycle imminently. However, 22.11.x will be
in the capable hands of Pedro and Matt for the next cycle, having been
voted in as the oldstable maintainers 

Thanks to everybody involved! 


Martin Renvoize, MPhys (Hons)

Head of Development and Community Engagement



E: martin.renvo...@ptfs-europe.com

P: +44 (0) 1483 378728

M: +44 (0) 7725 985636

www.ptfs-europe.com
___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha





--
Hector Gonzalez
ca...@genac.org

___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


Re: [Koha] Koha 22.11.05 released

2023-05-26 Thread Hector Gonzalez Jaime
Your problem looks like koha-plack is not running; you should try 
restarting it.


On 5/26/23 13:02, Thomas Sycko-Miller wrote:

I'm getting a similar error when trying to view biblio details in staff
client:

[Fri May 26 13:54:25.843357 2023] [proxy:error] [pid 2474] (2)No such file
or directory: AH02454: http: attempt to connect to Unix domain socket
/var/run/koha/library/plack.sock (localhost) failed
[Fri May 26 13:54:25.843460 2023] [proxy_http:error] [pid 2474] [client
999.999.999.999:55178] AH01114: HTTP: failed to make connection to backend:
httpd-UDS

--


On Fri, May 26, 2023 at 1:28 PM Hector Gonzalez Jaime 
wrote:


Hello, thanks for all the hard work!

I just tried this version on a development server, and get a 500 error
trying to check overdue fines; this is in the logs:

"GET /intranet/circ/branchoverdues.pl HTTP/1.1" 500

Template process failed: undef error - : filter not found at
/usr/share/koha/lib/C4/Templates.pm line 127.

It worked well with 22.11.05, and got broken with 22.11.06


On 5/26/23 09:00, Renvoize, Martin wrote:

Hello, Bonjour, Kia ora,

The Koha community is pleased to announce the release of version
22.11.06.

This is a security and bugfix maintenance release, including 160
bugfixes!

Full release notes are available here:
https://koha-community.org/koha-22-11-06-released/

This is our last release as the stable branch maintainers, with the
community heading into the next cycle imminently. However, 22.11.x will be
in the capable hands of Pedro and Matt for the next cycle, having been
voted in as the oldstable maintainers 

Thanks to everybody involved! 


Martin Renvoize, MPhys (Hons)

Head of Development and Community Engagement



E: martin.renvo...@ptfs-europe.com

P: +44 (0) 1483 378728

M: +44 (0) 7725 985636

www.ptfs-europe.com
___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha

--
Hector Gonzalez
ca...@genac.org

___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


--
Hector Gonzalez
ca...@genac.org

___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


Re: [Koha] Koha 22.11.05 released

2023-05-26 Thread Hector Gonzalez Jaime

Hello, thanks for all the hard work!

I just tried this version on a development server, and get a 500 error 
trying to check overdue fines; this is in the logs:


"GET /intranet/circ/branchoverdues.pl HTTP/1.1" 500

Template process failed: undef error - : filter not found at 
/usr/share/koha/lib/C4/Templates.pm line 127.


It worked well with 22.11.05, and got broken with 22.11.06


On 5/26/23 09:00, Renvoize, Martin wrote:

Hello, Bonjour, Kia ora,

The Koha community is pleased to announce the release of version 22.11.06.

This is a security and bugfix maintenance release, including 160 bugfixes!

Full release notes are available here:
https://koha-community.org/koha-22-11-06-released/

This is our last release as the stable branch maintainers, with the
community heading into the next cycle imminently. However, 22.11.x will be
in the capable hands of Pedro and Matt for the next cycle having been voted
in as the oldstable maintainers 

Thanks to everybody involved! 


Martin Renvoize, MPhys (Hons)

Head of Development and Community Engagement



E: martin.renvo...@ptfs-europe.com

P: +44 (0) 1483 378728

M: +44 (0) 7725 985636

www.ptfs-europe.com
___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


--
Hector Gonzalez
ca...@genac.org

___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


[jira] [Created] (HDFS-17026) NamenodeHeartbeatService should update JMX report with configurable frequency

2023-05-25 Thread Hector Sandoval Chaverri (Jira)
Hector Sandoval Chaverri created HDFS-17026:
---

 Summary: NamenodeHeartbeatService should update JMX report with 
configurable frequency
 Key: HDFS-17026
 URL: https://issues.apache.org/jira/browse/HDFS-17026
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rbf
Reporter: Hector Sandoval Chaverri


The NamenodeHeartbeatService currently calls each Namenode's JMX 
endpoint every time it wakes up (the default is every 5 seconds).

In a cluster with 40 routers, we have observed service degradation on some of 
the  Namenodes, since the JMX request obtains Datanode status and blocks other 
RPC requests. However, JMX report data doesn't seem to be used for critical 
paths on the routers.

We should make the NamenodeHeartbeatService update the JMX reports at a lower 
frequency than the Namenode states, or allow the reports to be disabled 
completely.

The class describes the JMX request as optional even though there is no 
implementation to turn it off:
{noformat}
// Read the stats from JMX (optional)
updateJMXParameters(webAddress, report);{noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


[ceph-users] Re: BlueStore fragmentation woes

2023-05-24 Thread Hector Martin
On 25/05/2023 01.40, 胡 玮文 wrote:
> Hi Hector,
> 
> Not related to fragmentation. But I see you mentioned CephFS, and your OSDs 
> are at high utilization. Is your pool NEAR FULL? CephFS write performance is 
> severely degraded if the pool is NEAR FULL. Buffered writes will be disabled, 
> and every single write() system call needs to wait for a reply from the OSD.
> 
> If this is the case, use “ceph osd set-nearfull-ratio” to get normal 
> performance.
> 

I learned about this after the issue; they did become nearfull at one
point and I changed the threshold, but I don't think this explains the
behavior I was seeing because I was trying to do bulk writes (which
should use very large write sizes even without buffering). What happened
was usually a single OSD would immediately go to 100% utilization, but
not the rest, which is what I'd expect if that one OSD was the one with
the most fragmented free space ending up pathologically slowing down writes.

- Hector
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: BlueStore fragmentation woes

2023-05-24 Thread Hector Martin
On 24/05/2023 22.07, Mark Nelson wrote:
> Yep, bluestore fragmentation is an issue.  It's sort of a natural result 
> of using copy-on-write and never implementing any kind of 
> defragmentation scheme.  Adam and I have been talking about doing it 
> now, probably piggybacking on scrub or other operations that are 
> already reading all of the extents for an object anyway.
> 
> 
> I wrote a very simple prototype for clone to speed up the rbd mirror use 
> case here:
> 
> https://github.com/markhpc/ceph/commit/29fc1bfd4c90dd618eb9e0d4ae6474d8cfa5dfdf
> 
> 
> Adam ended up going the extra mile and completely changed how shared 
> blobs works which probably eliminates the need to do defrag on clone 
> anymore from an rbd-mirror perspective, but I think we still need to 
> identify any times we are doing full object reads of fragmented objects 
> and consider defragmenting at that time.  It might be clone, or scrub, 
> or other things, but the point is that if we are already doing most of 
> the work (seeks on HDD especially!) the extra cost of a large write to 
> clean it up isn't that bad, especially if we are doing it over the 
> course of months or years and can help keep freespace less fragmented.

Note that my particular issue seemed to specifically be free space
fragmentation. I don't use RBD mirror and I would not *expect* most of
my cephfs use cases to lead to any weird cow/fragmentation issues with
objects other than those forced by the free space becoming fragmented
(unless there is some weird pathological use case I'm hitting). Most of
my write workloads are just copying files in bulk and incrementally
writing out files.

Would simply defragging objects during scrub/etc help with free space
fragmentation itself? Those seem like two somewhat unrelated issues...
note that if free space is already fragmented, you wouldn't even have a
place to put down a defragmented object.

Are there any stats I can look at to figure out how bad object and free
space fragmentation is? It would be nice to have some clearer data
beyond my hunch/deduction after seeing the I/O patterns and the sole
fragmentation number :). Also would be interesting to get some kind of
trace of the bluestore ops the OSD is doing, so I can find out whether
it's doing something pathological that causes more fragmentation for
some reason.

> Mark
> 
> 
> On 5/24/23 07:17, Hector Martin wrote:
>> Hi,
>>
>> I've been seeing relatively large fragmentation numbers on all my OSDs:
>>
>> ceph daemon osd.13 bluestore allocator score block
>> {
>>  "fragmentation_rating": 0.77251526920454427
>> }
>>
>> These aren't that old, as I recreated them all around July last year.
>> They mostly hold CephFS data with erasure coding, with a mix of large
>> and small files. The OSDs are at around 80%-85% utilization right now.
>> Most of the data was written sequentially when the OSDs were created (I
>> rsynced everything from a remote backup). Since then more data has been
>> added, but not particularly quickly.
>>
>> At some point I noticed pathologically slow writes, and I couldn't
>> figure out what was wrong. Eventually I did some block tracing and
>> noticed the I/Os were very small, even though CephFS-side I was just
>> writing one large file sequentially, and that's when I stumbled upon the
>> free space fragmentation problem. Indeed, deleting some large files
>> opened up some larger free extents and resolved the problem, but only
>> until those get filled up and I'm back to fragmented tiny extents. So
>> effectively I'm stuck at the current utilization, as trying to fill them
>> up any more just slows down to an absolute crawl.
>>
>> I'm adding a few more OSDs and plan on doing the dance of removing one
>> OSD at a time and replacing it with another one to hopefully improve the
>> situation, but obviously this is going to take forever.
>>
>> Is there any plan for offering a defrag tool of some sort for bluestore?
>>
>> - Hector
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
> 

- Hector
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph Pacific - MDS activity freezes when one the MDSs is restarted

2023-05-24 Thread Hector Martin
Hi,

On 24/05/2023 22.02, Emmanuel Jaep wrote:
> Hi Hector,
> 
> thank you very much for the detailed explanation and link to the
> documentation.
> 
> Given our current situation (7 active MDSs and 1 standby MDS):
> RANK  STATE  MDS ACTIVITY DNSINOS   DIRS   CAPS
>  0active  icadmin012  Reqs:   82 /s  2345k  2288k  97.2k   307k
>  1active  icadmin008  Reqs:  194 /s  3789k  3789k  17.1k   641k
>  2active  icadmin007  Reqs:   94 /s  5823k  5369k   150k   257k
>  3active  icadmin014  Reqs:  103 /s   813k   796k  47.4k   163k
>  4active  icadmin013  Reqs:   81 /s  3815k  3798k  12.9k   186k
>  5active  icadmin011  Reqs:   84 /s   493k   489k  9145176k
>  6active  icadmin015  Reqs:  374 /s  1741k  1669k  28.1k   246k
>   POOL TYPE USED  AVAIL
> cephfs_metadata  metadata  8547G  25.2T
>   cephfs_data  data 223T  25.2T
> STANDBY MDS
>  icadmin006
> 
> I would probably be better off having:
> 
>1. having only 3 active MDSs (rank 0 to 2)
>2. configure 3 standby-replay to mirror the ranks 0 to 2
>3. have 2 'regular' standby MDSs
> 
> Of course, this raises the question of storage and performance.
> 
> Since I would be moving from 7 active MDSs to 3:
> 
>1. each new active MDS will have to store more than twice the data
>2. the load will be more than twice as high
> 
> Am I correct?

Yes, that is correct. The MDSes don't store data locally but do
cache/maintain it in memory, so you will either have higher memory load
for the same effective cache size, or a lower cache size for the same
memory load.

If you have 8 total MDSes, I'd go for 4+4. You don't need non-replay
standbys if you have a standby replay for each active MDS. As far as I
know, if you end up with an active and its standby both failing, some
other standby-replay MDS will still be stolen to take care of that rank,
so the cluster will eventually become healthy again after the replay time.

With 4 active MDSes down from the current 7, the load per MDS will be a
bit less than double.

> 
> Emmanuel
> 
> On Wed, May 24, 2023 at 2:31 PM Hector Martin  wrote:
> 
>> On 24/05/2023 21.15, Emmanuel Jaep wrote:
>>> Hi,
>>>
>>> we are currently running a ceph fs cluster at the following version:
>>> MDS version: ceph version 16.2.10
>>> (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)
>>>
>>> The cluster is composed of 7 active MDSs and 1 standby MDS:
>>> RANK  STATE  MDS ACTIVITY DNSINOS   DIRS   CAPS
>>>  0active  icadmin012  Reqs:   73 /s  1938k  1880k  85.3k  92.8k
>>>  1active  icadmin008  Reqs:  206 /s  2375k  2375k  7081171k
>>>  2active  icadmin007  Reqs:   91 /s  5709k  5256k   149k   299k
>>>  3active  icadmin014  Reqs:   93 /s   679k   664k  40.1k   216k
>>>  4active  icadmin013  Reqs:   86 /s  3585k  3569k  12.7k   197k
>>>  5active  icadmin011  Reqs:   72 /s   225k   221k  8611164k
>>>  6active  icadmin015  Reqs:   87 /s  1682k  1610k  27.9k   274k
>>>   POOL TYPE USED  AVAIL
>>> cephfs_metadata  metadata  8552G  22.3T
>>>   cephfs_data  data 226T  22.3T
>>> STANDBY MDS
>>>  icadmin006
>>>
>>> When I restart one of the active MDSs, the standby MDS becomes active and
>>> its state becomes "replay". So far, so good!
>>>
>>> However, only one of the other "active" MDSs seems to remain active. All
>>> activities drop from the other ones:
>>> RANK  STATE  MDS ACTIVITY DNSINOS   DIRS   CAPS
>>>  0active  icadmin012  Reqs:0 /s  1938k  1881k  85.3k  9720
>>>  1active  icadmin008  Reqs:0 /s  2375k  2375k  7080   2505
>>>  2active  icadmin007  Reqs:2 /s  5709k  5256k   149k  26.5k
>>>  3active  icadmin014  Reqs:0 /s   679k   664k  40.1k  3259
>>>  4replay  icadmin006  801k   801k  1279  0
>>>  5active  icadmin011  Reqs:0 /s   225k   221k  8611   9241
>>>  6active  icadmin015  Reqs:0 /s  1682k  1610k  27.9k  34.8k
>>>   POOL TYPE USED  AVAIL
>>> cephfs_metadata  metadata  8539G  22.8T
>>>   cephfs_data  data 225T  22.8T
>>> STANDBY MDS
>>>  icadmin013
>>>
>>> In effect, the cluster becomes almost unavailable until the newly
>> promoted
>>> MDS finishes rejoining the cluster.
>>>
>>> Obviously, this defeats the purpose of having 7 MDSs.
>>> Is this expected behavior?
>>> If not, what configuration items should I check to 

[ceph-users] Re: ceph Pacific - MDS activity freezes when one the MDSs is restarted

2023-05-24 Thread Hector Martin
On 24/05/2023 21.15, Emmanuel Jaep wrote:
> Hi,
> 
> we are currently running a ceph fs cluster at the following version:
> MDS version: ceph version 16.2.10
> (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)
> 
> The cluster is composed of 7 active MDSs and 1 standby MDS:
> RANK  STATE  MDS ACTIVITY DNSINOS   DIRS   CAPS
>  0active  icadmin012  Reqs:   73 /s  1938k  1880k  85.3k  92.8k
>  1active  icadmin008  Reqs:  206 /s  2375k  2375k  7081171k
>  2active  icadmin007  Reqs:   91 /s  5709k  5256k   149k   299k
>  3active  icadmin014  Reqs:   93 /s   679k   664k  40.1k   216k
>  4active  icadmin013  Reqs:   86 /s  3585k  3569k  12.7k   197k
>  5active  icadmin011  Reqs:   72 /s   225k   221k  8611164k
>  6active  icadmin015  Reqs:   87 /s  1682k  1610k  27.9k   274k
>   POOL TYPE USED  AVAIL
> cephfs_metadata  metadata  8552G  22.3T
>   cephfs_data  data 226T  22.3T
> STANDBY MDS
>  icadmin006
> 
> When I restart one of the active MDSs, the standby MDS becomes active and
> its state becomes "replay". So far, so good!
> 
> However, only one of the other "active" MDSs seems to remain active. All
> activities drop from the other ones:
> RANK  STATE  MDS ACTIVITY DNSINOS   DIRS   CAPS
>  0active  icadmin012  Reqs:0 /s  1938k  1881k  85.3k  9720
>  1active  icadmin008  Reqs:0 /s  2375k  2375k  7080   2505
>  2active  icadmin007  Reqs:2 /s  5709k  5256k   149k  26.5k
>  3active  icadmin014  Reqs:0 /s   679k   664k  40.1k  3259
>  4replay  icadmin006  801k   801k  1279  0
>  5active  icadmin011  Reqs:0 /s   225k   221k  8611   9241
>  6active  icadmin015  Reqs:0 /s  1682k  1610k  27.9k  34.8k
>   POOL TYPE USED  AVAIL
> cephfs_metadata  metadata  8539G  22.8T
>   cephfs_data  data 225T  22.8T
> STANDBY MDS
>  icadmin013
> 
> In effect, the cluster becomes almost unavailable until the newly promoted
> MDS finishes rejoining the cluster.
> 
> Obviously, this defeats the purpose of having 7 MDSs.
> Is this expected behavior?
> If not, what configuration items should I check to go back to "normal"
> operations?
> 

Please ignore my previous email, I read too quickly. I see you do have a
standby. However, that does not allow fast failover with multiple MDSes.

For fast failover of any active MDS, you need one standby-replay daemon
for *each* active MDS. Each standby-replay MDS follows one active MDS's
rank only, you can't have one standby-replay daemon following all ranks.
What you have right now is probably a regular standby daemon, which can
take over any failed MDS, but requires waiting for the replay time.

See:

https://docs.ceph.com/en/latest/cephfs/standby/#configuring-standby-replay

My explanation for the zero ops from the previous email still holds:
it's likely that most clients will hang if any MDS rank is down/unavailable.

- Hector
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: ceph Pacific - MDS activity freezes when one the MDSs is restarted

2023-05-24 Thread Hector Martin
On 24/05/2023 21.15, Emmanuel Jaep wrote:
> Hi,
> 
> we are currently running a ceph fs cluster at the following version:
> MDS version: ceph version 16.2.10
> (45fa1a083152e41a408d15505f594ec5f1b4fe17) pacific (stable)
> 
> The cluster is composed of 7 active MDSs and 1 standby MDS:
> RANK  STATE  MDS ACTIVITY DNSINOS   DIRS   CAPS
>  0active  icadmin012  Reqs:   73 /s  1938k  1880k  85.3k  92.8k
>  1active  icadmin008  Reqs:  206 /s  2375k  2375k  7081171k
>  2active  icadmin007  Reqs:   91 /s  5709k  5256k   149k   299k
>  3active  icadmin014  Reqs:   93 /s   679k   664k  40.1k   216k
>  4active  icadmin013  Reqs:   86 /s  3585k  3569k  12.7k   197k
>  5active  icadmin011  Reqs:   72 /s   225k   221k  8611164k
>  6active  icadmin015  Reqs:   87 /s  1682k  1610k  27.9k   274k
>   POOL TYPE USED  AVAIL
> cephfs_metadata  metadata  8552G  22.3T
>   cephfs_data  data 226T  22.3T
> STANDBY MDS
>  icadmin006
> 
> When I restart one of the active MDSs, the standby MDS becomes active and
> its state becomes "replay". So far, so good!
> 
> However, only one of the other "active" MDSs seems to remain active. All
> activities drop from the other ones:
> RANK  STATE  MDS ACTIVITY DNSINOS   DIRS   CAPS
>  0active  icadmin012  Reqs:0 /s  1938k  1881k  85.3k  9720
>  1active  icadmin008  Reqs:0 /s  2375k  2375k  7080   2505
>  2active  icadmin007  Reqs:2 /s  5709k  5256k   149k  26.5k
>  3active  icadmin014  Reqs:0 /s   679k   664k  40.1k  3259
>  4replay  icadmin006  801k   801k  1279  0
>  5active  icadmin011  Reqs:0 /s   225k   221k  8611   9241
>  6active  icadmin015  Reqs:0 /s  1682k  1610k  27.9k  34.8k
>   POOL TYPE USED  AVAIL
> cephfs_metadata  metadata  8539G  22.8T
>   cephfs_data  data 225T  22.8T
> STANDBY MDS
>  icadmin013
> 
> In effect, the cluster becomes almost unavailable until the newly promoted
> MDS finishes rejoining the cluster.
> 
> Obviously, this defeats the purpose of having 7 MDSs.
> Is this expected behavior?
> If not, what configuration items should I check to go back to "normal"
> operations?

If *any* active MDS is down, at least chunks of your filesystem will be
down, which means clients will likely hang and stop doing anything, even
if the other MDSes are capable of serving subsets of the filesystem.
Active MDSes do not randomly balance requests, they are each in charge
of a subset of the filesystem and they all must be up for the filesystem
to work.

If you want reliability with fast failover, you need standby MDSes.

- Hector
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] BlueStore fragmentation woes

2023-05-24 Thread Hector Martin
Hi,

I've been seeing relatively large fragmentation numbers on all my OSDs:

ceph daemon osd.13 bluestore allocator score block
{
"fragmentation_rating": 0.77251526920454427
}

These aren't that old, as I recreated them all around July last year.
They mostly hold CephFS data with erasure coding, with a mix of large
and small files. The OSDs are at around 80%-85% utilization right now.
Most of the data was written sequentially when the OSDs were created (I
rsynced everything from a remote backup). Since then more data has been
added, but not particularly quickly.

At some point I noticed pathologically slow writes, and I couldn't
figure out what was wrong. Eventually I did some block tracing and
noticed the I/Os were very small, even though CephFS-side I was just
writing one large file sequentially, and that's when I stumbled upon the
free space fragmentation problem. Indeed, deleting some large files
opened up some larger free extents and resolved the problem, but only
until those get filled up and I'm back to fragmented tiny extents. So
effectively I'm stuck at the current utilization, as trying to fill them
up any more just slows down to an absolute crawl.

I'm adding a few more OSDs and plan on doing the dance of removing one
OSD at a time and replacing it with another one to hopefully improve the
situation, but obviously this is going to take forever.

Is there any plan for offering a defrag tool of some sort for bluestore?

- Hector
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


Bug#1036623: libclang-common-16-dev: missing LLVM_VERSION_FULL in include path

2023-05-23 Thread Hector Oron Martinez
Package: libclang-common-16-dev
Version: 1:16.0.4-1~exp1
Severity: normal
X-Debbugs-Cc: zu...@debian.org

Hello,

  On clang 16 the include files are broken links:

$ ls -l /usr/include/clang/16*/include
lrwxrwxrwx 1 root root 45 17 de maig  09:25 /usr/include/clang/16.0.4/include 
-> ../../../lib/llvm-16/lib/clang/16.0.4/include
lrwxrwxrwx 1 root root 45 17 de maig  09:25 /usr/include/clang/16/include -> 
../../../lib/llvm-16/lib/clang/16.0.4/include

  The links are broken because /usr/lib/llvm-16/lib/clang/16 exists but the
  /usr/lib/llvm-16/lib/clang/16.0.4 path does not.

$ LANG=C ls -l /usr/lib/llvm-16/lib/clang/16.0.4/
ls: cannot access '/usr/lib/llvm-16/lib/clang/16.0.4/': No such file or 
directory

Regards

-- System Information:
Debian Release: 12.0
  APT prefers unstable
  APT policy: (500, 'unstable'), (1, 'experimental')
Architecture: amd64 (x86_64)

Kernel: Linux 6.1.0-9-amd64 (SMP w/8 CPU threads; PREEMPT)
Kernel taint flags: TAINT_USER, TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=ca_ES.UTF-8, LC_CTYPE=ca_ES.UTF-8 (charmap=UTF-8), 
LANGUAGE=ca_ES:ca
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages libclang-common-16-dev depends on:
ii  libllvm16  1:16.0.4-1~exp1

Versions of packages libclang-common-16-dev recommends:
ii  libclang-rt-16-dev  1:16.0.4-1~exp1

libclang-common-16-dev suggests no packages.

-- no debconf information



[ceph-users] Re: Slow recovery on Quincy

2023-05-20 Thread Hector Martin
On 17/05/2023 03.07, 胡 玮文 wrote:
> Hi Sake,
> 
> We are experiencing the same. I set “osd_mclock_cost_per_byte_usec_hdd” to 
> 0.1 (default is 2.6) and get about 15 times backfill speed, without 
> significant affect client IO. This parameter seems calculated wrongly, from 
> the description 5e-3 should be a reasonable value for HDD (corresponding to 
> 200MB/s). I noticed this default is originally 5.2, then changed to 2.6 to 
> increase the recovery speed. So I suspect the original author just convert 
> the unit wrongly, he may want 5.2e-3 but wrote 5.2 in code.
> 
> But all this may be not important in the next version. I see the relevant 
> code is rewritten, and this parameter is now removed.
> 
> high_recovery_ops profile works very poorly for us. It increase the average 
> latency of client IO from 50ms to about 1s.
> 
> Weiwen Hu
> 

Thank you for this; that parameter indeed seems completely wrong
(assuming it means what it says on the tin). After changing it, my
Quincy cluster is now recovering at a much more reasonable speed.
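
For anyone who wants to sanity-check the unit argument, the arithmetic
is simple (a sketch, assuming ~200 MB/s of sequential HDD throughput as
in Weiwen's mail):

  # Expected cost per byte, in microseconds, for a ~200 MB/s HDD:
  usec_per_second = 1e6
  hdd_bytes_per_second = 200e6  # assumed throughput
  print(usec_per_second / hdd_bytes_per_second)  # -> 0.005, i.e. 5e-3

  # versus the shipped default of 2.6: roughly 500x too costly.
  # Weiwen's workaround from this thread:
  #   ceph config set osd osd_mclock_cost_per_byte_usec_hdd 0.1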

- Hector
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


Re: [Koha] Fwd: Dead Koha Resuscitation

2023-05-19 Thread Hector Gonzalez Jaime


On 5/18/23 20:46, Bruce A. Metcalf wrote:

On 5/18/23 21:35, Hector Gonzalez Jaime wrote:


What does sudo koha-plack --start instancename Do?


root@store:/usr/sbin# koha-plack --start library
bash: koha-plack: command not found

Which seems even more weird. Are the permissions wrong?


You are still missing /usr/sbin in your PATH variable.



That's been added.

It might not have been added permanently.  start-stop-daemon is at 
/sbin, which means that one is missing from your PATH.


to verify what is on your path, you can type:  echo $PATH

If it is missing any of the directories, add it with: 
PATH=/sbin:/usr/sbin:$PATH, followed by export PATH


It might be part of .bashrc, but if you are editing root's, you 
might want to check your regular user's .bashrc first.





The command would have run if you had typed:

/usr/sbin/koha-plack --start library



Let me try:

root@store:/usr/sbin# /usr/sbin/koha-plack --start library
/usr/share/koha/bin/koha-functions.sh: line 285: start-stop-daemon: 
command not found

[ ok ] Starting Plack daemon for library:.

Okay, a step forward, but a new challenge. (Been a lot of that in this 
project!)


Line 285 is:

if start-stop-daemon --pidfile 
"/var/run/koha/${instancename}/plack.pid" \


File /var/run/koha/library/plack.pid does exist. It contains a 
five-digit number only.


Okay, I'm stuck again!

Thanks again for all the help.

Regards,
/ Bruce /
Bruce A. Metcalf, Librarian
The Augustan Library


Your system seems to be in a strange situation; you might want to 
uninstall Koha, remove all dependencies, and install it again:


apt-get remove koha-common   <- this does not affect your database or
configuration files.
apt-get autoremove           <- this deletes every dependency that is no
longer needed, which should be a ton of perl libraries.
apt-get install koha-common  <- this should reinstall koha, and every one
of its dependencies.  If your system has problems, this command would
eventually fail.

If this last command fails, I'd like to see the contents of your 
/etc/apt/sources.list file and every file at /etc/apt/sources.list.d




Regards,
/ Bruce /
Bruce A. Metcalf, Librarian
The Augustan Library
___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha



___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


--
Hector Gonzalez
ca...@genac.org

___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


Re: [Koha] Fwd: Dead Koha Resuscitation

2023-05-18 Thread Hector Gonzalez Jaime


On 5/18/23 19:18, Bruce A. Metcalf wrote:

On 5/18/23 17:35, Chris Cormack wrote:


What does
sudo koha-plack --start instancename
Do?



root@store:/home/bruce# koha-plack --start library
bash: koha-plack: command not found

Which I thought was pretty odd, so I went to the /usr/sbin directory, 
and:


root@store:/usr/sbin# ls -l koha-p*
-rwxr-xr-x 1 root root 13538 May 13 23:26 koha-plack

then

root@store:/usr/sbin# koha-plack --start library
bash: koha-plack: command not found

Which seems even more weird. Are the permissions wrong?
You are still missing /usr/sbin in your PATH variable.  You seem to be 
running bash:


export PATH=/sbin:/usr/sbin:$PATH

(bash can do this in one line).  Copy the command exactly, CAPS are 
important here.  Linux does not have the current directory in your PATH, 
which means it will not run a command from the directory you are at 
(this may feel strange if you come from a Windows environment, but is 
normal).  The command would have run if you had typed:


/usr/sbin/koha-plack --start library

Your system seems to be in a strange situation; you might want to 
uninstall Koha, remove all dependencies, and install it again:


apt-get remove koha-common <- this does not affect your database or 
configuration files.
apt-get autoremove <- this deletes every dependency that is 
no longer needed, which should be a ton of perl libraries.
apt-get install koha-common  <- this should reinstall koha, and 
every one of its dependencies.  If your system has problems, this 
command would eventually fail.


If this last command fails, I'd like to see the contents of your 
/etc/apt/sources.list file and every file at /etc/apt/sources.list.d




Regards,
/ Bruce /
Bruce A. Metcalf, Librarian
The Augustan Library
___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


--
Hector Gonzalez
ca...@genac.org

___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


Re: [Koha] Fwd: Dead Koha Resuscitation

2023-05-18 Thread Hector Gonzalez Jaime


On 5/18/23 13:46, Bruce A. Metcalf wrote:

On 5/18/23 14:52, David Liddle wrote:

Hello, Bruce. You've been good about chasing down the information 
folks have asked for. I'm sorry that we haven't brought you closer
to a solution.



I'm no less appreciative of your efforts for that.



I have some follow-up questions:

1.a. Database. Once you listed the databases, did you exit the mysql 
prompt and back up the koha_library database with the mysqldump command?



Yes, and it appeared to work. I have a 450MB file called backup.sql.



1.b.Can you use an FTP client such as FileZilla to download such files?



I assume so.



2.a. Path. What is the result of this command? env | grep -i 'path'



root@store:/# env | grep -i 'path'
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games

I am aware that the path should contain several sbin directories, but I
don't know how to add them.


with these commands:

PATH=/sbin:/usr/sbin:$PATH
export PATH

Assuming you are using some variant of /bin/sh

You might edit your .bashrc and add that at the end of the file, and 
login again, or use "source ~/.bashrc " to process the file in your session.





2.b. Path. Did you already establish that the Koha commands are or 
are not installed? ls /usr/sbin/koha*



Yes, the usual list of commands is there. This suggests to me that the
inadequate PATH command is at least part of my problem.


3. Repository and deb. Did you run "apt update" before running 
"apt-get install --reinstall koha-common"?



Yes. The resultant is:

root@store:/# apt-get update && apt-get install --reinstall koha-common
[some lines skipped]
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 koha-common : Depends: libtest-dbix-class-perl but it is not going to
be installed
E: Unable to correct problems, you have held broken packages.




You should try:  apt-get dist-upgrade, which would install the 
missing dependencies.  Also, apt-get -f install should fix that too.




The current version is 21.11.20.



Right, but I haven't been able to employ any earlier version, either.


Yes, koha-common appears to be running, though maybe I'm reading it 
wrong. Have you checked the Apache config file for your instance, 
probably /etc/apache2/sites-available/library.conf



There are two links in that file, one for the OPAC and one for admin 
access.




SSL Certificate information will be important.



There is no mention of SSL Certificates in the file.


There are also Apache-related config files in /etc/koha, which I 
think could have been overwritten in the upgrade; that may or may

not be important.



There are such files. I don't know what to make of them.



5. This situation is a good opportunity to research and document the
elements on the server that you need to back up manually and download
in the event that you need to rebuild the server from scratch. I'm
guessing you don't do this work full-time, but having this
information written will save you and/or your successor a lot of time
when that day comes.



Yeah, I should be so lucky as to have someone take over for me!  
The organization is a non-profit with declining participation.




Is this hosted server just running: 1. the main website (what
platform or site-building tool?), 2. a wiki (MediaWiki?), and 3.
Koha? That backup will probably come down to the databases you've
listed and one or more directories for each service.



The virtual server is running an Online Store, a Wiki, and Koha. Minor 
file storage is all else that's there.



What I suspect at this point is that I need to add the sbin 
directories to the PATH. Can someone point me to a tutorial?


Regards,
/ Bruce /
Bruce A. Metcalf, Librarian
The Augustan Library
___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


--
Hector Gonzalez
ca...@genac.org

___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


[Touch-packages] [Bug 2020123] [NEW] package keyboard-configuration 1.194ubuntu3 failed to install/upgrade: installed keyboard-configuration package post-installation script subprocess returned error exit status 2

2023-05-18 Thread HECTOR SANTAELLA MARIN
Public bug reported:

Choices:
  1: Launch a browser now
  C: Cancel
Please choose (1/C): 1
sudo: xdg-open: command not found

The updates could not be installed

The upgrade has been cancelled. Your system could have been left in an 
unusable state. A recovery will now be performed 
(dpkg --configure -a). 

Setting up keyboard-configuration (1.194ubuntu3) ...
/var/lib/dpkg/info/keyboard-configuration.config: 1: eval: Syntax error: 
Unterminated quoted string
dpkg: error processing package keyboard-configuration (--configure):
 installed keyboard-configuration package post-installation script 
subprocess returned error exit status 2
dpkg: dependency problems prevent configuration of console-setup:
 console-setup depends on keyboard-configuration (= 1.194ubuntu3); however:
  Package keyboard-configuration is not configured yet.

dpkg: error processing package console-setup (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of ubuntu-minimal:
 ubuntu-minimal depends on console-setup; however:
  Package console-setup is not configured yet.

dpkg: error processing package ubuntu-minimal (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of kbd:
 kbd depends on console-setup | console-setup-mini; however:
  Package console-setup is not configured yet.
  Package console-setup-mini is not installed.

dpkg: error processing package kbd (--configure):
 dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of console-setup-linux:
 console-setup-linux depends on kbd (>= 0.99-12) | console-tools (>= 
1:0.2.3-16); however:
  Package kbd is not configured yet.
  Package console-tools is not installed.
 console-setup-linux depends on keyboard-configuration (= 1.194ubuntu3); however:
  Package keyboard-configuration is not configured yet.

dpkg: error processing package console-setup-linux (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 keyboard-configuration
 console-setup
 ubuntu-minimal
 kbd
 console-setup-linux

Upgrade complete

ProblemType: Package
DistroRelease: Ubuntu 20.04
Package: keyboard-configuration 1.194ubuntu3
ProcVersionSignature: Ubuntu 4.15.0-211.222-generic 4.15.18
Uname: Linux 4.15.0-211-generic x86_64
ApportVersion: 2.20.11-0ubuntu27.26
Architecture: amd64
CasperMD5CheckResult: skip
Date: Thu May 18 10:19:55 2023
ErrorMessage: installed keyboard-configuration package post-installation 
script subprocess returned error exit status 2
PackageArchitecture: all
Python3Details: /usr/bin/python3.8, Python 3.8.10, python3-minimal, 
3.8.2-0ubuntu2
PythonDetails: /usr/bin/python2.7, Python 2.7.18, python-is-python2, 2.7.17-4
RelatedPackageVersions:
 dpkg 1.19.7ubuntu3.2
 apt  2.0.9
SourcePackage: console-setup
Title: package keyboard-configuration 1.194ubuntu3 failed to install/upgrade: 
installed keyboard-configuration package post-installation script 
subprocess returned error exit status 2
UpgradeStatus: Upgraded to focal on 2023-05-18 (0 days ago)

** Affects: console-setup (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-package focal third-party-packages uec-images

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to console-setup in Ubuntu.
https://bugs.launchpad.net/bugs/2020123

Title:
  package keyboard-configuration 1.194ubuntu3 failed to install/upgrade:
  installed keyboard-configuration package post-installation script
  subprocess returned error exit status 2

Status in console-setup package in Ubuntu:
  New


Re: [Koha] Fwd: Dead Koha Resuscitation

2023-05-16 Thread Hector Gonzalez Jaime


On 5/16/23 10:51, Bruce A. Metcalf wrote:

On 5/15/23 16:41, Galen Charlton wrote:


3. Can I install a new instance in the same virtual machine and
transfer the settings and data?


Almost everything worth permanently keeping in a typical Koha system
is stored in the database. Is MySQL/MariaDB running and can you get a
 mysqldump?



I'm not sure. How can one tell? There is a MySQL instance running, but 
I suspect that may be for one of the other servers. There is no 
MariaDB running, which I recall having switched to some time back.


Can I buy a clue about how to run mysqldump?


MariaDB's process name is mysql; it doesn't matter much which one you are using.
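
A quick way to check, as a rough sketch (assuming systemd and the stock
Debian service names):

systemctl status mysql    # or: systemctl status mariadb
pgrep -a mysqld           # shows the running server process, if any

Either one reporting an active mysqld is enough to proceed with a dump.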

You can make a backup with mysqldump like this:

mysqldump -u root -p --all-databases > backup.sql

You should run this command from a directory with enough space to hold a 
backup of your database.
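
As a sanity check on the result (mysqldump appends a completion marker
by default):

tail -n 1 backup.sql                 # should read: -- Dump completed on <date>
grep -c 'CREATE TABLE' backup.sql    # rough count of dumped tables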


You can reinstall koha-common by issuing this command:

sudo apt-get install --reinstall koha-common

It should pull in any missing dependencies.

Have you tried to back up your Koha instance with "koha-dump"? If your 
library instance is called "thisone", use:  sudo koha-dump thisone


It should put the backup in /var/spool/koha/thisone/

This process creates a mysqldump backup of just your database, and 
copies the rest of your Koha files to a tar file.  Those files can be 
used with koha-restore, but if they can be made, your Koha should be good 
enough for you to repair it.
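
Restoring on a fresh install would then look roughly like this (the file
names and argument order here are assumptions; check koha-restore's man
page before relying on it):

sudo koha-restore /var/spool/koha/thisone/thisone-<date>.sql.gz \
    /var/spool/koha/thisone/thisone-<date>.tar.gz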






If not, are there at least database files present in /var/lib/mysql?



If /var/lib/mysql/ibdata1 is the right file, then yes. It's almost a 
GB in size, which seems reasonable for the library's data.




A current copy of the database could be imported into a fresh install
of Koha (either in a new VM or after a wipe and recreation of the
existing one - though if you do a wipe, triple-check that your
database export is good!).



Right. Tried to delete Koha and reinstall, but there was no change in 
the result. Wiping the whole VM is impractical due to the other 
servers on it, though that may prove necessary.


I continue to suspect that the problem lies with the upgrades from 
Debian 9 to 10 and Koha 21.05 to 21.11. Something didn't upgrade 
correctly, or there's a hidden incompatibility that's stopping it from working.


Thanks for the ideas.

Respectfully,
/ Bruce /
Bruce A. Metcalf, Librarian
The Augustan Library
___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


--
Hector Gonzalez
ca...@genac.org

___

Koha mailing list  http://koha-community.org
Koha@lists.katipo.co.nz
Unsubscribe: https://lists.katipo.co.nz/mailman/listinfo/koha


Re: [dmarc-ietf] Third party signatures

2023-05-15 Thread Hector Santos

Wei,

Have you studied the past R&D and functional specifications, and the 
architecture surrounding SPF and DKIM leading up to DMARC?


   RFC5598  Internet Mail Architecture
   RFC5322  Internet Message Format
   RFC5321  Simple Mail Transfer Protocol
   RFC4405  SUBMITTER SMTP Service Extension
   RFC4406  Sender ID: Authenticating E-Mail
   RFC4407  Purported Responsible Address (PRA)
   RFC4408  Sender Policy Framework (SPF)
   RFC4686  Analysis of Threats Motivating DKIM
   RFC4870  DomainKeys
   RFC4871  DKIM (RFC5672.TXT,  RFC6376.TXT)
   RFC5016  Requirements for a DKIM Signing Practices Protocol
   RFC5451  Message Header Field for Indicating Message 
Authentication Status

   RFC5518  Vouch By Reference
   RFC5585  DKIM Service Overview
   RFC5617  DKIM Author Domain Signing Practices (ADSP)
   RFC5863  DKIM Development, Deployment, and Operations
   RFC5965  An Extensible Format for Email Feedback Reports
   RFC6376  DomainKeys Identified Mail (DKIM) Signatures
   RFC6377  DomainKeys Identified Mail (DKIM) and Mailing Lists
   RFC6541  DomainKeys Identified Mail (DKIM) Authorized Third-Party 
Signatures


I find it technically infeasible and illogical to support a 
high-overhead, complex ARC concept that has no promise of any solution 
for the DKIM Policy model we have been seeking since 2005.


What are we solving in the first place with ARC?  The ability to 
revert to original integrity despite unknown middleware changes? 
Whatever happened to passthru mail transports?


In my technical view, it has been the PORT 25 unsolicited 3rd party 
signature unauthorized by the author domain due to the dearth of 
scaled AUTHOR::SIGNER Authorization methods.   ARC is not resolving 
this problem. The overhead is horrendous.


We have been seeking deterministic protocols to filter out failures 
with zero to low false positives.  Diffusion by Osmosis!
We don't have it today.  It has been made more complex than it really 
is.  I recommend studying the past work.


Thank you.

--
Hector Santos, CEO/CTO
Santronics Software, Inc.




On 5/15/2023 5:02 AM, Wei Chuang wrote:
That's a good point around ARC as that's what it was meant to do. 
And that got me thinking that it might be helpful to systematically 
compare the different proposed approaches and their pros and cons.  
The next approach would be to consider the general approach of the 
reversible transform idea that several folks have proposed, and how 
that would look.  And that got me rethinking the "DARA" work that 
we're already prototyping for DKIM replay mitigation, whether it can 
relate to mailing-list and forwarder modifications, and I now think 
DARA can help here too. The three different approaches have distinct 
pros and cons.


The following is a summary of the comparison.  Attached is a more 
detailed comparison as PDF that tries to work through what the 
algorithms would likely do.



ARC

Use ARC to override the SPF and DKIM results at the Receiver with those 
found at the Forwarder.


Pros:

 * Uses existing SPF, DKIM and ARC standards.
 * Tolerates header and body modifications.

Cons:

 * Receiver must trust the ARC headers generated by the forwarder.
 * Receiver must trust the modifications the forwarder made.
 * Receiver must trust that no ARC replay occurred.


Transform

Proposed in draft-kucherawy-dkim-transform 
<https://datatracker.ietf.org/doc/draft-kucherawy-dkim-transform/02/>, 
where the forwarder can add a "tf=" tag to the DKIM-Signature that 
specifies addition of a Subject header prefix, addition of a text 
footer to a mime-part, mimifying plaintext, and adding a mime-part 
representing a footer to an existing mime-tree.  These all may be 
reversed to recover the original signature.


DKIM-Signature: d=...; tf=subject,mime-wrap,

The ideas in draft-vesely-dmarc-mlm-transform-07 
<https://datatracker.ietf.org/doc/html/draft-vesely-dmarc-mlm-transform-07> 
are conceptually similar, though they add support for ARC.


Pros:

 * Tolerates header and body modifications.
 * Identifies the modifications.
 * Does not require particular trust of the forwarder to be
   non-malicious.

Cons:

 * Receiver must trust that no DKIM replay occurred.
 * Might not compose across multiple modifying forwarders.
 * Might not support all possible modifications by forwarder.
 * Reversing all possible draft transformations is potentially
   complicated.


DARA

This clarifies the DARA ideas in draft-chuang-replay-resistant-arc 
<https://datatracker.ietf.org/doc/draft-chuang-replay-resistant-arc/>, 
which calls for authenticating a path from the Originator through the 
Forwarder to the Receiver that's tolerant of modifications.  Some 
ideas of a validated path are explored in 
draft-levine-dkim-conditional 
<https://datatracker.ietf.org/doc/html/draft-levine-dkim-conditional-04>. 



Pros:

 * Tolerates header and body modifications.
 * Does not require particular tr

dosfstools for EFI partition?

2023-05-14 Thread Richard Hector

Hi,

Hopefully this is the right, or close enough, place ...

Given that EFI is common, should dosfstools now be a standard package, 
so that we can fsck the partition when required?


Happy to file as a bug, if I know what to file it against.

Cheers,
Richard



[Desktop-packages] [Bug 1816497] Re: [snap] vaapi chromium no video hardware decoding

2023-05-12 Thread Hector CAO
With the beta testing release, hardware decoding is working for
several recent Intel GPU architectures (Tigerlake, Alderlake,
Raptorlake). So far, we have only experienced a problem on Broadwell (Gen8)
GPU, the oldest GPU generation we aim to support for this release; we
will work to get it working very soon ... Let's keep the faith, Michel!

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to chromium-browser in Ubuntu.
https://bugs.launchpad.net/bugs/1816497

Title:
  [snap] vaapi chromium no video hardware decoding

Status in chromium-browser package in Ubuntu:
  In Progress

Bug description:
  To test the snap with VA-API changes,

  1. Install the Chromium snap,

     sudo snap install --channel candidate/hwacc chromium

  2. Start Chromium,

     snap run chromium

  3. Open a video, e.g. one from https://github.com/chthomos/video-
  media-samples.

  4. Enter about:media-internals in the address bar, click the
  corresponding box (if the video is playing, it will have the "(kPlay)"
  identifier) and note if the page says VaapiVideoDecoder for
  kVideoDecoderName. You can alternatively check with intel_gpu_top
  (from intel-gpu-tools package) that the video engine bars are non zero
  during playback.
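
  As a rough illustration of that last check (package name as above;
  the output layout varies by version):

     sudo apt install intel-gpu-tools
     sudo intel_gpu_top   # the Video engine row should show non-zero busy % during playback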

  -- Original Bug report --

  Libva is no longer working for snap installed chromium 72.0.3626.109
  (Official Build) snap (64-bit)

  I followed this instruction
  sudo snap install --channel=candidate/vaapi chromium

  My amdgpu can use libva

  `vainfo: Driver version: Mesa Gallium driver 18.3.3 for AMD STONEY (DRM 
3.27.0, 4.20.0-10.1-liquorix-amd64, LLVM 7.0.1)
  vainfo: Supported profile and entrypoints
    VAProfileMPEG2Simple:   VAEntrypointVLD
    VAProfileMPEG2Main  :   VAEntrypointVLD
    VAProfileVC1Simple  :   VAEntrypointVLD
    VAProfileVC1Main:   VAEntrypointVLD
    VAProfileVC1Advanced:   VAEntrypointVLD
    VAProfileH264ConstrainedBaseline:   VAEntrypointVLD
    VAProfileH264ConstrainedBaseline:   VAEntrypointEncSlice
    VAProfileH264Main   :   VAEntrypointVLD
    VAProfileH264Main   :   VAEntrypointEncSlice
    VAProfileH264High   :   VAEntrypointVLD
    VAProfileH264High   :   VAEntrypointEncSlice
    VAProfileHEVCMain   :   VAEntrypointVLD
    VAProfileHEVCMain10 :   VAEntrypointVLD
    VAProfileJPEGBaseline   :   VAEntrypointVLD
    VAProfileNone   :   VAEntrypointVideoProc`

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/chromium-browser/+bug/1816497/+subscriptions


-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


[Desktop-packages] [Bug 1816497] Re: [snap] vaapi chromium no video hardware decoding

2023-05-12 Thread Hector CAO
GPU . Intel® HD Graphics 5500 / Gen8 / Broadwell

Chromium : 113.0.5672.24 hwacc/candidate

Hardware decoding is not working !

Log:

[46128:46128:0511/183800.564351:WARNING:chrome_main_delegate.cc(589)] This is 
Chrome version 113.0.5672.24 (not a warning)
[46128:46128:0511/183800.609573:WARNING:chrome_browser_cloud_management_controller.cc(87)]
 Could not create policy manager as CBCM is not enabled.
[46128:46128:0511/183800.628662:WARNING:wayland_object.cc(157)] Binding to 
gtk_shell1 version 4 but version 5 is available.
[46128:46128:0511/183800.628726:WARNING:wayland_object.cc(157)] Binding to 
zwp_pointer_gestures_v1 version 1 but version 3 is available.
[46128:46128:0511/183800.628780:WARNING:wayland_object.cc(157)] Binding to 
zwp_linux_dmabuf_v1 version 3 but version 4 is available.
[46128:46128:0511/183800.889036:ERROR:chrome_browser_cloud_management_controller.cc(162)]
 Cloud management controller initialization aborted as CBCM is not enabled.
[46128:46128:0511/183800.904533:WARNING:account_consistency_mode_manager.cc(73)]
 Desktop Identity Consistency cannot be enabled as no OAuth client ID and 
client secret have been configured.
[46128:46128:0511/183800.952551:WARNING:wayland_surface.cc(163)] Server doesn't 
support zcr_alpha_compositing_v1.
[46128:46128:0511/183800.952572:WARNING:wayland_surface.cc(178)] Server doesn't 
support overlay_prioritizer.
[46128:46128:0511/183800.952579:WARNING:wayland_surface.cc(192)] Server doesn't 
support surface_augmenter.
[46128:46128:0511/183800.952584:WARNING:wayland_surface.cc(207)] Server doesn't 
support wp_content_type_v1
[46128:46128:0511/183800.952589:WARNING:wayland_surface.cc(226)] Server doesn't 
support zcr_color_management_surface.
[46128:46128:0511/183800.952946:WARNING:cursor_loader.cc(122)] Failed to load a 
platform cursor of type kNull
libva info: VA-API version 1.17.0
libva info: Trying to open 
/snap/chromium/2444/va-driver-non-free/dri/iHD_drv_video.so
libva info: Trying to open 
/snap/chromium/2444/usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_17
libva info: va_openDriver() returns 0
[minigbm:drv_helpers.c(364)] DRM_IOCTL_MODE_CREATE_DUMB failed (12, 13)
[46281:46281:0511/183800.988362:ERROR:gbm_pixmap_wayland.cc(75)] Cannot create 
bo with format= YUV_420_BIPLANAR and usage=GPU_READ_CPU_READ_WRITE



VAINFO

$ export LD_LIBRARY_PATH=/snap/chromium/current/usr/lib/x86_64-linux-gnu/
$ export LIBVA_DRIVERS_PATH=/snap/chromium/current/usr/lib/x86_64-linux-gnu/dri
$ vainfo


libva info: VA-API version 1.17.0
libva info: Trying to open 
/snap/chromium/current/usr/lib/x86_64-linux-gnu/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_17
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.17 (libva 2.12.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 22.6.6 
(b51ffe5)
vainfo: Supported profile and entrypoints
  VAProfileNone   : VAEntrypointVideoProc
  VAProfileNone   : VAEntrypointStats
  VAProfileMPEG2Simple: VAEntrypointVLD
  VAProfileMPEG2Simple: VAEntrypointEncSlice
  VAProfileMPEG2Main  : VAEntrypointVLD
  VAProfileMPEG2Main  : VAEntrypointEncSlice
  VAProfileH264Main   : VAEntrypointVLD
  VAProfileH264Main   : VAEntrypointEncSlice
  VAProfileH264Main   : VAEntrypointFEI
  VAProfileH264High   : VAEntrypointVLD
  VAProfileH264High   : VAEntrypointEncSlice
  VAProfileH264High   : VAEntrypointFEI
  VAProfileVC1Simple  : VAEntrypointVLD
  VAProfileVC1Main: VAEntrypointVLD
  VAProfileVC1Advanced: VAEntrypointVLD
  VAProfileJPEGBaseline   : VAEntrypointVLD
  VAProfileH264ConstrainedBaseline: VAEntrypointVLD
  VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
  VAProfileH264ConstrainedBaseline: VAEntrypointFEI
  VAProfileVP8Version0_3  : VAEntrypointVLD

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to chromium-browser in Ubuntu.
https://bugs.launchpad.net/bugs/1816497

Title:
  [snap] vaapi chromium no video hardware decoding

Status in chromium-browser package in Ubuntu:
  In Progress


Re: [VOTE] KIP-864: Add End-To-End Latency Metrics to Connectors

2023-05-11 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
This will help us greatly. +1 (non-binding) 

From: dev@kafka.apache.org At: 05/10/23 17:32:35 UTC-4:00 To: 
dev@kafka.apache.org
Subject: Re: [VOTE] KIP-864: Add End-To-End Latency Metrics to Connectors

Hi everyone,

Bumping this vote thread. 2 +1 binding and 1 +1 non-binding so far.

Cheers,
Jorge.

On Mon, 27 Feb 2023 at 18:56, Knowles Atchison Jr 
wrote:

> +1 (non binding)
>
> On Mon, Feb 27, 2023 at 11:21 AM Chris Egerton 
> wrote:
>
> > Hi all,
> >
> > I could have sworn I +1'd this but I can't seem to find a record of that.
> >
> > In the hopes that this action is idempotent, +1 (binding). Thanks for the
> > KIP!
> >
> > Cheers,
> >
> > Chris
> >
> > On Mon, Feb 27, 2023 at 6:28 AM Mickael Maison  >
> > wrote:
> >
> > > Thanks for the KIP
> > >
> > > +1 (binding)
> > >
> > > On Thu, Jan 26, 2023 at 4:36 PM Jorge Esteban Quilcate Otoya
> > >  wrote:
> > > >
> > > > Hi all,
> > > >
> > > > I'd like to call for a vote on KIP-864, which proposes to add metrics
> > to
> > > > measure end-to-end latency in source and sink connectors.
> > > >
> > > > KIP:
> > > >
> > >
> >
> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-864%3A+Add+End-To-End+Latency+Metrics+to+Connectors
> > > >
> > > > Discussion thread:
> > > > https://lists.apache.org/thread/k6rh2mr7pg94935fgpqw8b5fj308f2n7
> > > >
> > > > Many thanks,
> > > > Jorge.
> > >
> >
>




Re: The role of FOSS in preventing a recurrence of vehicle emissions scandals

2023-05-09 Thread Hector Espinoza
   Very good initiative Lars.
   It is possible (but very difficult in practice) to create a device as
   "simple" as an open source, open hardware counter, embedded in every
   sensor or controller, that counts how many times it was re-configured.
   Again, proprietary controllers modified through a backdoor (the
   defective-by-design concept) could circumvent that counter.
   Emission control should be done for a representative sample of
   a certain model year or generation, not for all, nor for one. The
   representative sample should be taken from across the geography of the
   world and from the year/month.
   And then, emission control should be done randomly on the street ...
   And then there could be more "job" for some corrupt policemen from
   certain cities of some countries, stopping people and asking for money
   because they "do not comply with emissions". Other policemen will sell
   that info to a law firm that sues the car manufacturer and gets some
   money from them in an out-of-court settlement or ... exposes the
   manufacturer to public opinion.

   On Mon, 8 May 2023 at 11:24, Matt Ivie <[1]m0dese...@mykolab.com>
   wrote:

 On Sat, 2023-05-06 at 16:58 +0300, Lars Noodén wrote:
 > Recent news¹ reminds us that back in 2015 a whistleblower exposed
 the
 > VW/Audi emissions scandal, which I guess had been going on since
 > 1999.
 > The companies executives used closed source, proprietary software
 in
 > the
 > vehicles to hide the fact that the vehicles were emitting 40 times
 > the
 > allowed NOx when actually out on the roads and not in the testing
 > centers.  Even with fines and prison sentences, there is no way to
 be
 > sure the companies are not working on more of the same -- unless
 the
 > development is done out in the open.
 >
 > Clearly we see both physical and economic harm from neglecting to
 > require FOSS even in embedded computers, such as the 100+ now
 found
 > in
 > each new car.  Because these companies have already shown that the
 > closed source model *cannot* be trusted, such style of development
 > should
 > not be allowed any more in regards to vehicles.  Surely a
 FOSS-based
 > workflow can be figured out.
 >
 > Perhaps it is a timely and appropriate topic for institutions like
 > FSF,
 > OSI, EFFI, and so on to address that publicly?  Even a short
 > statement
 > in passing would at least raise awareness and provide an
 opportunity
 > to
 > ratchet things forward in regard to Software Freedom.
 >
 > /Lars
 >
 I remember this scandal very well. There is a large incentive for car
 companies not to use Free Software on their embedded controllers.
 The
 emissions problem you highlight actually has a reverse effect if
 ANYONE
 can change or modify those programs. The intention of using Free
 Software on the controller to allow everyone to see what the code is
 telling the vehicle to do is good but given the ability for anyone
 to
 change the code and install their changes opens the door for those
 that
 don't care about emissions to tune their engine for performance
 instead
 of emissions. It could be argued that there are ways to avoid that,
 and
 I'm sure there are but how complex does that become?
 The car manufacturers also have a business model setup for repair
 of
 vehicles so allowing just anyone to tinker with the way their ECM
 works
 destroys their "control". While Free Software advocates realize the
 benefits of having Free Software, it will take a lot of effort to
 get a
 corporation to give up one of their revenue streams. Look at John
 Deere
 (
 [2]https://stallman.org/archives/2022-nov-feb.html#18_January_2023_(
 Right_to_repair,_John_Deere) )
 for example.
 Back in the day, before ECMs and computer control, one could tune
 their
 engine any way they chose. If you needed to pass an emissions test
 you
 would make sure your engine was setup to do just that, but then you
 could change it back after the test was passed. The inaccurate fuel
 and
 air metering that allowed that just isn't efficient enough to even
 make
 a car reliable without constant tuning let alone allow accurate
 emissions controls. Computer control was really the only way to get
 the
 job done. If we want control of those computers through Free
 Software
 we have a long battle ahead. I think there are solutions to be
 talked
 about. The next frontier though, is electric. With Electric has come
 the concept of "subscription features" and self driving. I think we
 need to address those issues every bit as much as we would need to
 regulate the management of software on ICE (Internal Combustion
 Engine)
 

Re: [Sursound] [Proposal] for HOA web-streaming-format

2023-05-09 Thread Hector Centeno
Additionally, on the same Android device I get the same results with Chrome
and Firefox:

Audio codecs
PCM audio support
Yes ✔
MP3 support
Yes ✔
AAC support
Yes ✔
Dolby Digital support
No ✘
Dolby Digital Plus support
No ✘
Ogg Vorbis support
Yes ✔
Ogg Opus support
Yes ✔
WebM with Vorbis support
Yes ✔
WebM with Opus support
Yes ✔


On Tue, May 9, 2023, 8:42 a.m. Hector Centeno  wrote:

> Hello,
>
> I use Edge on Android and Windows machines. This is what I get on my
> Android device (latest Samsung Galaxy S23 Ultra):
>
> Audio codecs
> PCM audio support
> Yes ✔
> MP3 support
> Yes ✔
> AAC support
> Yes ✔
> Dolby Digital support
> No ✘
> Dolby Digital Plus support
> No ✘
> Ogg Vorbis support
> Yes ✔
> Ogg Opus support
> Yes ✔
> WebM with Vorbis support
> Yes ✔
> WebM with Opus support
> Yes ✔
>
> Best,
> Hector Centeno
>
>
>
>
>
> On Mon, May 8, 2023, 12:38 p.m. Stefan Schreiber 
> wrote:
>
>> Amendment:
>>
>> “EAC-3 (DD+) is natively supported by Edge and all Safari browsers.”
>>
>> I did a fast html5test (.com) on my iPad running on some very old iOS
>> 13.x (for some stupid reason I can’t update my iPad Air 2 to version
>> 15.x, which would be the last supported one; I probably need to reset my
>> Apple ID to “recover” my lost password), and even in this outdated
>> configuration:
>>
>> DD+ is definitely supported by mobile Safari, see result list (here
>> posted in text format, sursound might not like html5 text... I hope
>> there won't be any "optical breakdown"...):
>>
>> Audio codecs
>>
>>
>>
>> PCM audio support
>>
>> No
>>
>> MP3 support
>>
>> Yes ✔
>>
>> AAC support
>>
>> Yes ✔
>>
>> Dolby Digital support
>>
>> Yes ✔
>>
>> Dolby Digital Plus support
>>
>> Yes ✔
>>
>> Ogg Vorbis support
>>
>> No ✘
>>
>> Ogg Opus support
>>
>> No ✘
>>
>> WebM with Vorbis support
>>
>> No ✘
>>
>> WebM with Opus support
>>
>> No ✘
>>
>> I don't think that iOS/iPadOS 16.x would show different results, by the
>> way.
>>
>> As there is no Safari adaptation for Windows, Linux etc., my statement
>> that any (more or less recent) Safari browser would support DD+ should
>> be correct.
>>
>> Edge: Supports all tested codecs (above) on Win10. (I don't know which
>> codecs the Edge browsers would support or better "would not support"
>> if running on other operating systems than Windows, but you can always
>> test via html5test.com)
>>
>> (Maybe this last question is a bit academic anyway... Edge is in the
>> very most cases used as desktop browser for Windows 10/11.)
>>
>> Best,
>>
>> Stefan
>>
>> - - - -
>>
>> > I think that mobile Safari (so the Safari version for iOS and
>> > iPadOS) should also support DD+ (since iOS 14/iPadOS 14 probably),
>> > because Apple Spatial Audio supports DD+/Atmos.
>> >
>> > I will try to test this. ;-)
>> >
>> > Otherwise, agreed.
>> >
>> > Best,
>> >
>> > Stefan
>> >
>> > - Mensagem de Fersch, Christof 
>> -
>> >
>> > Data: Mon, 8 May 2023 06:01:56 +
>> >
>> > De: "Fersch, Christof" 
>> >
>> > Assunto: Re: [Sursound] [Proposal] for HOA web-streaming-format
>> >
>> > Para: Surround Sound discussion group 
>> >
>> >> You are right on EDGE and Safari on *PC platforms*. Firefox,
>> >> Chrome, … are a different story. And there is more differences
>> >> depending on which platform the browser is running (Windows,
>> >> Android, iOS, MacOS, …). What I wanted to say is that you would
>> >> need to be more specific for a statement on browser support (is
>> >> always a combination of browser, OS, maybe even HW).
>> >>
>> >> Nevertheless, I of course agree MPEG-H support on Browser/OS is not
>> >> “very widespread”. However, it also offers features/compression
>> >> which are much more advanced than what currently deployed codecs
>> >> can do.
>> >>
>> >> Ok, I see, thanks for clarifying. The statement below refers to DD
>> >> (not DD+).
>> >>
>> >> //Christof
>> >>
>> >> From: Sursound  on behalf of Stefan
>> >> Schreiber 
>> >>
>> >> Date: Saturday, 6. May 

Re: [Sursound] [Proposal] for HOA web-streaming-format

2023-05-09 Thread Hector Centeno
Hello,

I use Edge on Android and Windows machines. This is what I get on my
Android device (latest Samsung Galaxy S23 Ultra):

Audio codecs
PCM audio support
Yes ✔
MP3 support
Yes ✔
AAC support
Yes ✔
Dolby Digital support
No ✘
Dolby Digital Plus support
No ✘
Ogg Vorbis support
Yes ✔
Ogg Opus support
Yes ✔
WebM with Vorbis support
Yes ✔
WebM with Opus support
Yes ✔

Best,
Hector Centeno





On Mon, May 8, 2023, 12:38 p.m. Stefan Schreiber 
wrote:

> Amendment:
>
> “EAC-3 (DD+) is natively supported by Edge and all Safari browsers.”
>
> I did a fast html5test (.com) on my iPad running on some very old iOS
> 13.x (for some stupid reason I can’t update my iPad Air 2 to version
> 15.x, which would be the last supported one; I probably need to reset my
> Apple ID to “recover” my lost password), and even in this outdated
> configuration:
>
> DD+ is definitely supported by mobile Safari, see result list (here
> posted in text format, sursound might not like html5 text... I hope
> there won't be any "optical breakdown"...):
>
> Audio codecs
>
>
>
> PCM audio support
>
> No
>
> MP3 support
>
> Yes ✔
>
> AAC support
>
> Yes ✔
>
> Dolby Digital support
>
> Yes ✔
>
> Dolby Digital Plus support
>
> Yes ✔
>
> Ogg Vorbis support
>
> No ✘
>
> Ogg Opus support
>
> No ✘
>
> WebM with Vorbis support
>
> No ✘
>
> WebM with Opus support
>
> No ✘
>
> I don't think that iOS/iPadOS 16.x would show different results, by the
> way.
>
> As there is no Safari adaptation for Windows, Linux etc., my statement
> that any (more or less recent) Safari browser would support DD+ should
> be correct.
>
> Edge: Supports all tested codecs (above) on Win10. (I don't know which
> codecs the Edge browsers would support or better "would not support"
> if running on other operating systems than Windows, but you can always
> test via html5test.com)
>
> (Maybe this last question is a bit academic anyway... Edge is in the
> very most cases used as desktop browser for Windows 10/11.)
>
> Best,
>
> Stefan
>
> - - - -
>
> > I think that mobile Safari (so the Safari version for iOS and
> > iPadOS) should also support DD+ (since iOS 14/iPadOS 14 probably),
> > because Apple Spatial Audio supports DD+/Atmos.
> >
> > I will try to test this. ;-)
> >
> > Otherwise, agreed.
> >
> > Best,
> >
> > Stefan
> >
> > - Mensagem de Fersch, Christof  -
> >
> > Data: Mon, 8 May 2023 06:01:56 +
> >
> > De: "Fersch, Christof" 
> >
> > Assunto: Re: [Sursound] [Proposal] for HOA web-streaming-format
> >
> > Para: Surround Sound discussion group 
> >
> >> You are right on EDGE and Safari on *PC platforms*. Firefox,
> >> Chrome, … are a different story. And there is more differences
> >> depending on which platform the browser is running (Windows,
> >> Android, iOS, MacOS, …). What I wanted to say is that you would
> >> need to be more specific for a statement on browser support (is
> >> always a combination of browser, OS, maybe even HW).
> >>
> >> Nevertheless, I of course agree MPEG-H support on Browser/OS is not
> >> “very widespread”. However, it also offers features/compression
> >> which are much more advanced than what currently deployed codecs
> >> can do.
> >>
> >> Ok, I see, thanks for clarifying. The statement below refers to DD
> >> (not DD+).
> >>
> >> //Christof
> >>
> >> From: Sursound  on behalf of Stefan
> >> Schreiber 
> >>
> >> Date: Saturday, 6. May 2023 at 00:41
> >>
> >> To: Surround Sound discussion group 
> >>
> >> Subject: Re: [Sursound] [Proposal] for HOA web-streaming-format
> >>
> >> Short answer:
> >>
> >> EAC-3 (DD+) is natively supported by Edge and all Safari browsers.
> >>
> >> I  really was refering to this one...
> >>
> >> AC-3 patents should have expired by now, but of course this codec is a
> >>
> >> bit old. (And won’t support even 7.1, by the way. The highest channel
> >>
> >> count would be 6.1, and the B channel would be matrixed into 5.1. If
> >>
> >> my memory is correctly working. But probably yes... ;-)
> >>
> >> Thanks,
> >>
> >> Stefan
> >>
> >> ...
> -- next part --
> An HTML attachment was scrubbed...
> URL: <
> https://mail.music.vt.edu/mailman/private/sursound/attachments/20230508/d30442e2/attachment.htm
> >
> ___
> Sursound mailing list
> Sursound@music.vt.edu
> https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here,
> edit account or options, view archives and so on.
>
-- next part --
An HTML attachment was scrubbed...
URL: 
<https://mail.music.vt.edu/mailman/private/sursound/attachments/20230509/16c55f2f/attachment.htm>
___
Sursound mailing list
Sursound@music.vt.edu
https://mail.music.vt.edu/mailman/listinfo/sursound - unsubscribe here, edit 
account or options, view archives and so on.


Bug#1035515: [pre-approval] unblock: gdb/13.1-2.1

2023-05-04 Thread Hector Oron
Hello,

  While I agree we should get this fixed on bookworm, I believe that to
be able to unblock a package, the package should exist in the archive.

  Since you have not uploaded the package yet, are you fine if I do a
regular upload with the patch, then use this unblock request to add
the package to bookworm?

Regards

On Thu, 4 May 2023 at 16:21, Emanuele Rocca  wrote:
>
> Package: release.debian.org
> Severity: normal
> User: release.debian@packages.debian.org
> Usertags: unblock
> X-Debbugs-Cc: g...@packages.debian.org, debian-...@lists.debian.org
> Control: affects -1 + src:gdb
>
> Hello release team,
>
> Please unblock package gdb.
>
> [ Reason ]
> The most basic functionality of GDB, namely debugging a hello world C
> program, is currently broken in Bookworm for arm64 systems with pointer
> authentication enabled. See https://bugs.debian.org/1034611.
>
> There is a patch merged upstream addressing the issue [0]. I've tested
> it on a arm64 system and can confirm that it works. See also:
> https://bugs.debian.org/1034611#15
>
> I've prepared an NMU, not yet uploaded. Please find the debdiff
> attached.
>
> [ Impact ]
> GDB entirely unusable for most arm64 users.
>
> [ Tests ]
> Upstream test suite passes. Manually verified that #1034611 can be
> reproduced with gdb 13.1-2 from Bookworm, and it cannot with the
> proposed changes.
>
> [ Risks ]
> Minimal, the patch is small and targeted. Additionally, it only touches
> arm64-specific code.
>
> [ Checklist ]
>   [x] all changes are documented in the d/changelog
>   [x] I reviewed all changes and I approve them
>   [x] attach debdiff against the package in testing
>
> unblock gdb/13.1-2.1
>
> Thanks,
>   Emanuele
>
> [0] 
> https://sourceware.org/git/?p=binutils-gdb.git;a=patch;h=b3eff3e15576229af9bae026c5c23ee694b90389



-- 
 Héctor Orón  -.. . -... .. .- -.   -.. . ...- . .-.. --- .--. . .-.




Re: [dmarc-ietf] Add MLS/MLM subscription/submissions controls to DMARCbis

2023-05-01 Thread Hector Santos

Alex,

I agree with the suggestion to have a separate document; a great 
starting point is to update the ATPS RFC document.  However, DMARCbis 
MUST open the door for it and address the potential new security 
issues with From Rewrite.


1) Address the MUST NOT p=reject with a new small section: a few 
paragraphs citing the basic non-compliance issues with legacy MLS/MLM 
verifiers not following DMARC policy and instead creating a new 
potential security threat, which may require a security threat section 
or an addition to the current "Display Attack" security section.  I don't 
believe we can get by this by saying it will "never happen."


2) Update section 4.4.3, Extended Tag Extensions, to open the door up 
to 3rd party authorization: ATPS and possibly others.


Thanks

--
HLS



On 5/1/2023 9:49 AM, Brotman, Alex wrote:

This sounds like a separate document to me. (yes, I see Ale's draft below) And 
IMO, I don't think we should hold up DMARCbis for that work.

--
Alex Brotman
Sr. Engineer, Anti-Abuse & Messaging Policy
Comcast


-Original Message-
From: dmarc  On Behalf Of Hector Santos
Sent: Monday, May 1, 2023 9:26 AM
To: dmarc@ietf.org
Subject: Re: [dmarc-ietf] Add MLS/MLM subscription/submissions controls to
DMARCbis

On 5/1/2023 6:51 AM, Alessandro Vesely wrote:

Been there, done that.  For the message I'm replying to, I have:

Authentication-Results: wmail.tana.it;
   spf=pass smtp.mailfrom=ietf.org;
   dkim=pass reason="Original-From: transformed" header.d=google.com;
   dkim=pass (whitelisted) header.d=ietf.org
 header.b=jAsjjtsp (ietf1);
   dkim=fail (signature verification failed, whitelisted)
header.d=ietf.org
 header.b=QuwLQGvz (ietf1)

However, not all signatures can be verified.  Mailman tries to
preserve most header fields, but not all.  For example, they rewrite
MIME-Version: from scratch and don't save the old one.  So if a poster
signs that field and writes it differently (e.g. with a
comment) MLM transformation cannot be undone.
https://datatracker.ietf.org/doc/html/draft-vesely-dmarc-mlm-transform


And this was my result for your message, separating lines for easier
reading:

Authentication-Results: dkim.winserver.com;
   dkim=pass header.d=ietf.org header.s=ietf1 header.i=ietf.org;
   adsp=none author.d=tana.it signer.d=ietf.org;
   dmarc=fail policy=none author.d=tana.it signer.d=ietf.org (unauthorized
signer);

   dkim=pass header.d=ietf.org header.s=ietf1 header.i=ietf.org;
   adsp=none author.d=tana.it signer.d=ietf.org;
   dmarc=fail policy=none author.d=tana.it signer.d=ietf.org (unauthorized
signer);

   dkim=fail (DKIM_BAD_SYNTAX) header.d=none header.s=none header.i=none;
   adsp=dkim-fail author.d=tana.it signer.d=;
   dmarc=dkim-fail policy=none author.d=tana.it signer.d= (unauthorized signer);

   dkim=fail (DKIM_BODY_HASH_MISMATCH) header.d=tana.it header.s=delta
header.i=tana.it;
 adsp=dkim-fail author.d=tana.it signer.d=tana.it;
 dmarc=dkim-fail policy=none author.d=tana.it signer.d=tana.it
(originating signer);

Four signatures were added to your submission and the only one that counts is
the top one, the last one added.

It failed DMARC because tana.it did not authorize ietf.org.   You can
easily resolve this by adding atps=y to your DMARC record:

  v=DMARC1; p=none; atps=y; rua=mailto:dmarca...@tana.it;
ruf=mailto:dmarcf...@tana.it;

and add an ATPS sub-domain record authorizing ietf.org in your tana.it
zone:

  pq6xadozsi47rluiq5yohg2hy3mvjyoo._atps  TXT ("v=atps01; d=ietf.org;")

Do that and all ATPS compliant verifiers should show a DMARC=pass:

Authentication-Results: dkim.winserver.com;
   dkim=pass header.d=ietf.org header.s=ietf1 header.i=ietf.org;
   adsp=none author.d=tana.it signer.d=ietf.org;
   dmarc=pass policy=none author.d=tana.it signer.d=ietf.org (ATPS signer);


For a short list of signers, I updated my DMARC evaluator to also support ASL
("Authorized Signer List") to avoid the extra ATPS record.
So doing this will work across my evaluator for smaller-scale mail senders:

  v=DMARC1; p=none; atps=y; asl=ietf.org; rua=mailto:dmarca...@tana.it;
ruf=mailto:dmarcf...@tana.it;


This will skip atps=y because asl=ietf.org was satisfied. It will show
how it was authorized:

   dmarc=pass policy=none author.d=tana.it signer.d=ietf.org (ASL signer);


Any ATPS or ASL idea will give us the author-defined trust of ietf.org
as a 3rd party signer.

That said, keeping with the suggestion that DMARCbis should add MLS/MLM
semantics, I believe when the Receiver is receiving mail for an
MLS/MLM, it should apply the following updated modern considerations:

1) It should honor policy first, by checking for restrictive domains

2) It should honor the domain's restrictive policy to avoid creating new
security problems and delivery problems.  This means implementing
subscription and submission controls.  DMARCbis should pass
Re: [dmarc-ietf] Add MLS/MLM subscription/submissions controls to DMARCbis

2023-05-01 Thread Hector Santos

On 5/1/2023 6:51 AM, Alessandro Vesely wrote:


Been there, done that.  For the message I'm replying to, I have:

Authentication-Results: wmail.tana.it;
  spf=pass smtp.mailfrom=ietf.org;
  dkim=pass reason="Original-From: transformed" header.d=google.com;
  dkim=pass (whitelisted) header.d=ietf.org
header.b=jAsjjtsp (ietf1);
  dkim=fail (signature verification failed, whitelisted) 
header.d=ietf.org

header.b=QuwLQGvz (ietf1)

However, not all signatures can be verified.  Mailman tries to 
preserve most header fields, but not all.  For example, they rewrite 
MIME-Version: from scratch and don't save the old one.  So if a 
poster signs that field and writes it differently (e.g. with a 
comment) MLM transformation cannot be undone.

https://datatracker.ietf.org/doc/html/draft-vesely-dmarc-mlm-transform



And this was my result for your message, separating lines for easier 
reading:


Authentication-Results: dkim.winserver.com;
 dkim=pass header.d=ietf.org header.s=ietf1 header.i=ietf.org;
 adsp=none author.d=tana.it signer.d=ietf.org;
 dmarc=fail policy=none author.d=tana.it signer.d=ietf.org (unauthorized 
signer);

 dkim=pass header.d=ietf.org header.s=ietf1 header.i=ietf.org;
 adsp=none author.d=tana.it signer.d=ietf.org;
 dmarc=fail policy=none author.d=tana.it signer.d=ietf.org (unauthorized 
signer);

 dkim=fail (DKIM_BAD_SYNTAX) header.d=none header.s=none header.i=none;
 adsp=dkim-fail author.d=tana.it signer.d=;
 dmarc=dkim-fail policy=none author.d=tana.it signer.d= (unauthorized signer);

 dkim=fail (DKIM_BODY_HASH_MISMATCH) header.d=tana.it header.s=delta 
header.i=tana.it;
 adsp=dkim-fail author.d=tana.it signer.d=tana.it;
 dmarc=dkim-fail policy=none author.d=tana.it signer.d=tana.it 
(originating signer);

Four signatures were added to your submission and the only one that 
counts is the top one, the last one added.


It failed DMARC because tana.it did not authorize ietf.org.   You can 
easily resolve this by adding atps=y to your DMARC record:


v=DMARC1; p=none; atps=y; rua=mailto:dmarca...@tana.it; 
ruf=mailto:dmarcf...@tana.it;


and add an ATPS sub-domain record authorizing ietf.org in your tana.it 
zone:


pq6xadozsi47rluiq5yohg2hy3mvjyoo._atps  TXT ("v=atps01; d=ietf.org;")

Do that and all ATPS compliant verifiers should show a DMARC=pass:

Authentication-Results: dkim.winserver.com;
 dkim=pass header.d=ietf.org header.s=ietf1 header.i=ietf.org;
 adsp=none author.d=tana.it signer.d=ietf.org;
 dmarc=pass policy=none author.d=tana.it signer.d=ietf.org (ATPS signer);
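
To see what a verifier sees, both records can be checked directly (a 
small sketch using plain dig, with the domains from the example above):

dig +short TXT _dmarc.tana.it
dig +short TXT pq6xadozsi47rluiq5yohg2hy3mvjyoo._atps.tana.it

Per RFC 6541 the _atps label is derived from the signer domain; assuming 
the default SHA-1/base32 derivation and GNU coreutils' base32, something 
like this should reproduce the label above:

printf '%s' "ietf.org" | openssl sha1 -binary | base32 | tr 'A-Z' 'a-z'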


For a short list of signers, I updated my DMARC evaluator to also 
support ASL ("Authorized Signer List") to avoid the extra ATPS record. 
So doing this will work across my evaluator for smaller-scale mail senders:


v=DMARC1; p=none; atps=y; asl=ietf.org; 
rua=mailto:dmarca...@tana.it; ruf=mailto:dmarcf...@tana.it;



This will skip atps=y because asl=ietf.org was satisfied. It will show 
how it was authorized:


 dmarc=pass policy=none author.d=tana.it signer.d=ietf.org (ASL signer);


Any ATPS or ASL idea will give us the author-defined trust of ietf.org 
as a 3rd party signer.


That said, keeping with the suggestion that DMARCbis should add MLS/MLM 
semantics, I believe when the Receiver is receiving mail for an 
MLS/MLM, it should apply the following updated modern considerations:


1) It should honor policy first, by checking for restrictive domains

2) It should honor the domain's restrictive policy to avoid creating new 
security problems and delivery problems.  This means implementing 
subscription and submission controls.  DMARCbis should pass 
the buck back to the restrictive domain, which must deal with its users' 
needs or not.


3) It should check if the submission's author domain authorizes the 
MLM signing domain by finding an ATPS record. If so:


3.1) it can continue as the 3rd party signer and also keep the From as 
is, unchanged, or


3.2) it can also consider a rewrite.  If a rewrite is performed, the 
signing domain should have security that does not allow any Display 
Attack replays with the now-altered 5322.From identity.



--
Hector Santos,
https://santronics.com
https://winserver.com



___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


[dmarc-ietf] Add MLS/MLM subscription/submissions controls to DMARCbis

2023-04-30 Thread Hector Santos


> On Apr 29, 2023, at 4:42 PM, Douglas Foster 
>  wrote:
> 
> ...
> 
> But I need to clarify whether I understand your point.   What I am hearing is:
> For the short term, mailing lists should refuse postings from DMARC-enforcing 
> domains.   That position can be relaxed only if all participating domains 
> have agreed to ignore DMARC Fail for messages from the list  (Ale floated 
> some ideas about that approach.)
> For the longer term, we need a non-DKIM method for delegating rights to a 
> third party.

Ideally, the goal is to eliminate "From Rewrite" to return to the "good ol' 
days."  So the first step is to recognize that having subscription and submission 
controls is a natural consideration for the DKIM Policy "Protocol Complete" 
model. If the MLS supports the protocol, it would consider this method rather 
than a destructive method that tears down security. This will also pass the 
buck back to the domain owner to deal with its users' needs or not.

> You talk about "incomplete protocol" as if this is a commonly understood and 
> accepted term.  I interpret it to mean a third-party authentication method 
> other than DKIM.  DKIM does serve for third-party authentication where it has 
> been embraced and deployed.   So the issue is that it has not been practical 
> for many situations and we do need another option.

Protocol complete is a client/server protocol negotiation concept.  It 
basically means the "State Machine", the conversation between the client and 
server, is well-defined, with no loopholes. That is a very key concept in protocol design.

In terms of DKIM Signing Practices, you can read "Requirements for a DomainKeys 
Identified Mail (DKIM) Signing Practices Protocol" 
(https://www.rfc-editor.org/rfc/rfc5016.txt) for its definition.

DKIM Signing Complete: a practice where the domain holder asserts
that all legitimate mail will be sent with a valid first party 
signature.

But I believe it is not Protocol Complete, and to achieve this with DKIM Policy 
Modeling, you have to cover the other signing scenarios, which include 3rd 
party signing scenarios. 

ATPS is the best we have got and it works.  You don’t have to worry; you are using 
gmail.com: relaxed policy, minimal security.  The ietf.org 
rewrite destroys my isdg.net domain 
security even though I have ietf.org authorized as an ATPS 
signer.  

It should honor policy and reject my submissions.  Idea: add the option to 
the subscription. If you don’t care, let it rewrite to join, or use another 
unsecured address.

Not hard.

—
HLS


___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] Messages from the dmarc list for the week ending Sun Apr 30 06:00:04 2023

2023-04-30 Thread Hector iPhone6


> On Apr 30, 2023, at 8:53 AM, Eliot Lear  wrote:
> 
> 
> 
> 
>> On 30.04.23 13:49, Hector Santos wrote:
>> What is the count based on?  Is the count the amount of mail created since 
>> the last date of this report which was 1 week ago? 
>> 
>> Did Scott create 25 messages and myself 14 messages in one week? I don't 
>> think so. 
> 
> I do.
> 
> Here's what I learned after a few minutes of review.  The point of the script 
> is to help you self-moderate, so perhaps there's something for you to 
> discover in these numbers. At the very least, you could check the IETF mail 
> archives before complaining.
> 
> Eliot
> 

Fair enough.  I count 12-13 so there is a bug. 

So what is the point of this report? 

To shame people for participating, like Scott, myself and the other top 10?  

How about shaming those that seem to only post to shame others? The low posters 
that don't really care what top posters are saying? 

How about showing a report with the top posters and top topics, and most 
important, the response rate, to compare which posts have no responses. That 
would be a better representation of the WG than trying to shame 
people.

I have such a groupware report generator.

—-
Winserver Support___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] Messages from the dmarc list for the week ending Sun Apr 30 06:00:04 2023

2023-04-30 Thread Hector Santos
What is the count based on?  Is the count the amount of mail created 
since the last date of this report which was 1 week ago?


Did Scott create 25 messages and myself 14 messages in one week? I 
don't think so.



On 4/30/2023 5:59 AM, John Levine wrote:

Count|  Bytes |  Who
++---
  94 ( 100%) | 946980 ( 100%) | Total
  25 (26.6%) | 200417 (21.2%) | Scott Kitterman 
  14 (14.9%) | 190300 (20.1%) | Hector Santos 
  12 (12.8%) |  81505 ( 8.6%) | Alessandro Vesely 
   9 ( 9.6%) | 102937 (10.9%) | Jesse Thompson 
   7 ( 7.4%) | 123062 (13.0%) | Brotman, Alex 
   6 ( 6.4%) |  95933 (10.1%) | Douglas Foster 

   6 ( 6.4%) |  31018 ( 3.3%) | John Levine 
   3 ( 3.2%) |  30536 ( 3.2%) | Dotzero 
   3 ( 3.2%) |  17389 ( 1.8%) | Matthäus Wander 
   2 ( 2.1%) |  25665 ( 2.7%) | Barry Leiba 
   2 ( 2.1%) |  12106 ( 1.3%) | John R. Levine 
   2 ( 2.1%) |   5589 ( 0.6%) |  
   1 ( 1.1%) |  20637 ( 2.2%) | Seth Blank 
   1 ( 1.1%) |   5569 ( 0.6%) | Benny Pedersen 
   1 ( 1.1%) |   4317 ( 0.5%) | Jim Fenton 

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc





--
Hector Santos,
https://santronics.com
https://winserver.com



___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] Summary: Search for some consensus, was: Proposed text for p=reject and indirect mail flows

2023-04-29 Thread Hector Santos

> Given that lists are expected to (A) continue making content changes, and (B) 
> continue accepting all comers, I think we need to embrace From Rewrite as a 
> necessary consequence of A and B.    Unlike Hector, I don't have a problem 
> with From Rewrite because the act of altering the content makes it a new 
> message, and the modifying entity becomes responsible for the whole thing.   
> So we need a caveat to list owners which lays out the real risks and the 
> better alternatives.


Douglas,

Just a few points.

It is more accurate to state, "Unlike others," because I am not the only one 
who has a problem with altered mail authorship, and worse, when done for the 
purpose of a security teardown it potentially introduces a new security threat 
with Display Name attacks.  I believe I am “IETF” correct to raise this 
security concern where IETF security folks would agree.
 
It is often stated that it is unfair to MLS/MLM folks, whose software has worked 
unchanged for over 30 years, to be required to change.  Please understand I have had a 
commercial MLS product since 1996 and I don't like changes, just like the next 
MLS developer. I'm extremely conservative, but I do adapt when necessary. My 
MLS is a legacy product but it is still actively supported. 

Well, the MLS/MLM refusal to adopt the protocol, the refusal to adopt 
measures known to resolve the DKIM-secured-with-policy mail stream, caused an 
immediate need by one MLM to create a hack to alter list submissions from 
restrictive domains. It resolved the immediate problem. The MLM could have 
adopted subscription/submission controls as outlined in 2006 and discussed many 
times in the WGs. It was not unknown. These correct methods would have pushed 
the burden back to the domain seeking exclusive mail security once it began 
to publish and honor p=reject. The MLM could have supported any of the many 
ADID::SDID association authorization proposals too, but it did not. So here we 
are with the DMARC rewrite problem, which in my view needs to be explained and 
corrected. 

The "new message" angle is one view, but not the definitive one to suggest it 
is okay to alter list submission copyrighted authorships. It is not a normal 
thing to do, but what you can do as an MLS/MLM developer depends widely on the 
type of list distribution. If you are just broadcasting to a list of people as 
a read-only list, then the preparation of required headers is a legitimate 
instance where it completes a new secured message with the proper secured 
business addresses.


—
HLS

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] I-D Action: draft-ietf-dmarc-aggregate-reporting-09.txt

2023-04-28 Thread Hector Santos
Douglas, 

In general, you can’t impose or mandate TLS on PORT 25 unsolicited, 
unauthenticated sessions. You can do this with ESMTP AUTH, a.k.a. the SUBMISSION 
protocol (RFC 6409), which is Port 587. On this port, you can mandate more 
authentication/authorization and mail format correctness than on Port 25 
without ESMTP AUTH.

So for example, for PCI, you must use A/A mechanisms probably under Port 587 
because it can be mandated. But not under Port 25.
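
As a quick illustration of the difference (hypothetical host name; 
standard openssl usage):

openssl s_client -starttls smtp -connect mail.example.com:25    # STARTTLS is optional here
openssl s_client -starttls smtp -connect mail.example.com:587   # server may require TLS before AUTH

On Port 587 the server can refuse to proceed until STARTTLS and AUTH 
succeed; on Port 25 you generally cannot insist on that without losing 
legitimate mail.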

—
HLS

> On Apr 27, 2023, at 7:04 AM, Douglas Foster 
>  wrote:
> 
> There are options on TLS failure.  
> 
> Mandatory TLS is actually pretty common, since PCI DSS, HIPAA and GDPR have 
> all been interpreted as requiring TLS on email.  For outbound mail, our MTA 
> is configured to drop the connection if encryption cannot be established.  I 
> think this configuration option has become pretty common in commercial 
> products.Domains that cannot accept encrypted traffic are handled with 
> secure web relay (Zixmail or one of its many imitators.)  In the case of a 
> report recipient that cannot accept TLS traffic, we would simply drop the 
> destination.
> 
> For inbound mail, my organization has concluded that data security is the 
> responsibility of the sender, so we do accept unencrypted messages.  
> 
> By and large, mandatory TLS will be implemented consistently, rather than on 
> a specific message like a DMARC report, so I don't know how much needs to be 
> said in this document.
> 
> Doug 
> 



___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


[dmarc-ietf] Proposed Updates for DMARCbis - Section 4.4.3 and New Appendix A.8

2023-04-28 Thread Hector Santos
I would like to propose updates to the DMARCbis documentation, specifically for 
Section 4.4.3 and a new Appendix A.8. Please find the suggested revisions 
below.  Your input would be greatly appreciated.  It is just a starting point.

Proposed update for Section 4.4.3:

4.4.3. Alignment and Extension Technologies

DMARC can be extended to incorporate authentication and authorization 
mechanisms that aid in the evaluation of DMARC policy. Any new authentication 
extensions must facilitate domain identifier extraction to enable verification 
of alignment with the RFC5322.From domain.

Authorization extensions address situations where the author domain differs 
from the signer domain, known as 3rd party signatures. The following 
Author::Signer domain authorization methods have been explored:

- DomainKeys Identified Mail (DKIM) Authorized Third-Party Signatures (ATPS) 
  [RFC6541]
- Third-Party Authorization Label (TPA) [draft-otis-tpa-label-08]
- Mandatory Tags for DKIM Signatures [draft-levine-dkim-conditional-04]
- Delegating DKIM Signing Authority [draft-kucherawy-dkim-delegate-02]
The first two methods are DNS-based, while the latter two are non-DNS-based. 
All share the common objective of authorizing the 3rd party signature. The ATPS 
proposal is the simplest method and has demonstrated success in practice by 
reducing false positive failure results when an otherwise unverified but 
ATPS-authorized 3rd party signer is present in a message. MDA receivers should 
consider using ATPS to verify 3rd party signatures.
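
(Illustrative aside, not proposed text: a rough sketch of the DNS-based ATPS 
lookup in Python, assuming the dnspython package. RFC 6541 parameterizes the 
hash via the signature's atpsh= tag; this sketch assumes SHA-1 with base32 
encoding and is not a complete implementation:

    import base64, hashlib
    import dns.resolver

    def atps_authorized(adid, sdid):
        # Hash the signer domain (SDID) and query under the author
        # domain's (ADID) _atps subtree, per the RFC 6541 scheme.
        digest = hashlib.sha1(sdid.lower().encode("ascii")).digest()
        label = base64.b32encode(digest).decode("ascii").rstrip("=").lower()
        try:
            answers = dns.resolver.resolve(f"{label}._atps.{adid}", "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        return any(b"v=atps1" in b"".join(r.strings).lower()
                   for r in answers)

A hit means the author domain has explicitly authorized that signer.)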

Proposed new Appendix A.8:

A.8 Mailing List Servers

Mailing List Server (MLS) applications that are compliant with DMARC 
operations SHOULD adhere to the following guidelines for DMARC integration:

Subscription and Submission Controls:

The MLS subscription process should perform a DMARC check to determine whether 
the subscribing or submitting email domain's DMARC policy is restrictive 
regarding mail integrity changes or 3rd party signatures. The MLS SHOULD only 
allow subscriptions and submissions from originating domains whose policy 
permits 3rd party signatures, i.e. p=none (see the sketch following these 
guidelines).

Message Content Integrity Change:

List Servers that alter message content SHOULD do so only for original domains 
with optional DKIM signing practices. If the List Server does not alter the 
message, it SHOULD NOT remove an existing signature.

Security Tear Down:

The MLS SHOULD NOT compromise the author's security by changing the authorship 
address (From:) domain; instead, it should apply subscription/submission 
controls. However, if circumstances necessitate a From rewrite, the rewritten 
address SHOULD maintain the same level of security as the original submission 
to avoid potential Replay and Display Name attacks.
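
(Illustrative aside, not proposed text: a minimal sketch of the 
subscription-time policy check above, in Python, assuming the dnspython 
package; the function names are mine, not part of the proposal:

    import dns.resolver

    def dmarc_policy(domain):
        # Fetch the domain's DMARC record and return its p= tag,
        # defaulting to "none" when no record is published.
        try:
            answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return "none"
        for rdata in answers:
            txt = b"".join(rdata.strings).decode("ascii", "replace")
            if txt.lower().startswith("v=dmarc1"):
                for tag in txt.split(";"):
                    name, _, value = tag.strip().partition("=")
                    if name.strip().lower() == "p":
                        return value.strip().lower()
        return "none"

    def may_subscribe(address):
        # Per the guideline: only p=none domains may subscribe or submit.
        return dmarc_policy(address.rsplit("@", 1)[-1]) == "none"

The burden then falls on the restrictive domain, not on the list.)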
Please let me know your thoughts on these proposed updates and whether they can 
be integrated into the DMARCbis documentation.

Best regards,

Hector Santos






Re: [dmarc-ietf] Proposed text for p=reject and indirect mail flows

2023-04-28 Thread Hector Santos

On 4/28/2023 5:19 AM, Alessandro Vesely wrote:
> On Sun 02/Apr/2023 20:13:48 +0200 Scott Kitterman wrote:
>> Mailing list changes to ameliorate damage due to DMARC are in no
>> way an improvement.  Absent DMARC, they would not be needed at all.
>
> +1

In my view, when an incomplete protocol is introduced, it creates 
gaps. If there are no guidelines for addressing these gaps in a 
graceful and elegant manner, solutions can vary widely. As developers, 
it's important to have the ability to make adjustments to our software.


Here are a few suggestions to "ameliorate damages" caused by an 
incomplete protocol:


1) Address the gaps with proper protocol negotiation guidelines and a 
well-defined state machine. This includes addressing third-party 
signers and providing guidance for groupware, one-to-many, mailing 
lists, and newsletter distribution mailers. This would make the 
protocol more complete.


2) If option #1 is not viable and continues to be a non-starter for the 
editor of this standards-track bis document, the document's status and 
endorsement become technically challenged in many ways. It then becomes 
critical to have a field operations status report on a) DMARC 
p=reject problems, b) current problem mitigations, and c) any new 
security threats introduced by the mitigations, particularly a 
DMARC security teardown.


There aren't many options for MLS developers when dealing with an 
incomplete protocol.


I have been developing mail software since the '80s, with products 
such as Silver Xpress, Platinum Xpress, Gold Xpress, and Wildcat! 
These early experiences have informed my current understanding of the 
challenges in integrating DMARC into systems.


Regarding rewrite, we don't want to promote it, but it may now be 
necessary to describe it as a new potential "Display Name" security 
threat. However, there are legitimate situations where a rewrite is 
both technically necessary and ethically acceptable. For example:


An MLM newsletter list domain MUST have a From: domain of example-biz.com 
for security. This is a read-only list, and only a few authorized 
submitters can post newsletters. They use their ESP's MUA to submit 
using their ESP's domain address.


In this case, it is about the content payload, not the submitter. This 
is a legitimate situation where a complete rewrite of the incoming 
header is warranted; in the case of DMARC, it becomes necessary. The 
ESP's headers are removed, and RFC 5322 headers are applied for a 
secure, unambiguous result. It would be as if the customer logged into 
wcWEB and posted the newsletter directly in their MLM list storage 
area, but they prefer to do it via ESP email.


Therefore, rewrite can be described as BAD when used intentionally to 
tear down DMARC security, or GOOD when used to create a DMARC-secured 
distribution.
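

For the GOOD case above, a minimal sketch using Python's email library; 
the header names beyond From: are illustrative conventions of mine, not 
any standard, and this shows one common rewrite shape rather than the 
complete header replacement described above:

    from email.message import EmailMessage

    def rewrite_from(msg: EmailMessage, list_addr: str) -> EmailMessage:
        # Replace the author domain with the list's secured domain,
        # preserving the original submitter for replies and auditing.
        original = msg["From"]
        del msg["From"]
        msg["From"] = list_addr
        if "Reply-To" not in msg:
            msg["Reply-To"] = original
        msg["X-Original-From"] = original   # illustrative audit header
        return msg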


Thanks

--
Hector Santos,
https://santronics.com
https://winserver.com





Re: [dmarc-ietf] I-D Action: draft-ietf-dmarc-aggregate-reporting-10.txt

2023-04-27 Thread Hector Santos

+1

On 4/27/2023 10:11 AM, Brotman, Alex wrote:


In summary:

“Report senders SHOULD attempt delivery via SMTP using STARTTLS to 
all receivers.  Transmitting these reports via a secured session is 
preferable.”


I don’t think we should add this in, but receivers could deploy 
DANE/MTA-STS if they wanted to ensure senders who honor those will 
use TLS.


--

Alex Brotman

Sr. Engineer, Anti-Abuse & Messaging Policy

Comcast

*From:* dmarc  *On Behalf Of * Hector Santos
*Sent:* Wednesday, April 26, 2023 4:29 PM
*To:* Scott Kitterman 
*Cc:* IETF DMARC WG 
*Subject:* Re: [dmarc-ietf] I-D Action: 
draft-ietf-dmarc-aggregate-reporting-10.txt





On Apr 26, 2023, at 3:50 PM, Scott Kitterman
<skl...@kitterman.com> wrote:

I think it would be crazy in 2023 not to use STARTTLS if offered.


+1


Personally I interpreted it more as employ a secure transport
and think through if you really want to be sending the report if
you can't.

I think there's some room for interpretation and I think that's
fine.


I believe connectivity is independent of the application.

All connections SHOULD assume the highest possible security 
available today.


For unsolicited email, the presumption would be:

Port 25
STARTTLS

If I start generating reports (and I think I will), that is how I 
would begin, naturally: outbound SMTP clients with optional TLS 
when offered.
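
A sketch of that opportunistic behavior with Python's smtplib, securing 
the Port 25 session when STARTTLS is offered and continuing in the 
clear otherwise (host and envelope details are placeholders):

    import smtplib, ssl

    def send_report(host, mail_from, rcpt_to, data):
        with smtplib.SMTP(host, 25, timeout=30) as smtp:
            smtp.ehlo()
            if smtp.has_extn("starttls"):    # offered: secure the session
                smtp.starttls(context=ssl.create_default_context())
                smtp.ehlo()                  # re-identify over TLS
            smtp.sendmail(mail_from, rcpt_to, data)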


Sorry if I was not focused on the main question.

—
HLS






--
Hector Santos,
https://santronics.com
https://winserver.com




