Re: Request to Establish a Debian Mirror Server for Bangladeshi Users

2023-11-07 Thread Richard Hector

On 8/11/23 17:10, Md Shehab wrote:

Dear Debian Community,

I hope this email finds you well. I am writing to propose the 
establishment of a Debian mirror server in Bangladesh


I am confident that a Debian mirror server in Bangladesh would be a 
valuable resource for the local tech community


I would like to request your support for this proposal. I am open to any 
suggestions or feedback you may have.


I suggest starting by reading here:

https://www.debian.org/mirror/ftpmirror

Cheers,
Richard



Re: systemd service oddness with openvpn

2023-11-06 Thread Richard Hector

On 7/11/23 12:41, Richard Hector wrote:

Hi all,

I have a machine that runs as an openvpn server. It works fine; the VPN 
stays up.


However, after running for a while, I get these repeatedly in syslog:


I should also have mentioned - this is Debian bookworm (12.2)

Richard



systemd service oddness with openvpn

2023-11-06 Thread Richard Hector

Hi all,

I have a machine that runs as an openvpn server. It works fine; the VPN 
stays up.


However, after running for a while, I get these repeatedly in syslog:

Nov 07 12:17:24 ovpn2 openvpn[213741]: Options error: In [CMD-LINE]:1: 
Error opening configuration file: opvn2.conf

Nov 07 12:17:24 ovpn2 openvpn[213741]: Use --help for more information.
Nov 07 12:17:24 ovpn2 systemd[1]: openvpn-server@opvn2.service: Main 
process exited, code=exited, status=1/FAILURE
Nov 07 12:17:24 ovpn2 systemd[1]: openvpn-server@opvn2.service: Failed 
with result 'exit-code'.
Nov 07 12:17:24 ovpn2 systemd[1]: Failed to start 
openvpn-server@opvn2.service - OpenVPN service for opvn2.
Nov 07 12:17:29 ovpn2 openvpn[213770]: Options error: In [CMD-LINE]:1: 
Error opening configuration file: opvn2.conf

Nov 07 12:17:29 ovpn2 openvpn[213770]: Use --help for more information.
Nov 07 12:17:29 ovpn2 systemd[1]: openvpn-server@opvn2.service: Main 
process exited, code=exited, status=1/FAILURE
Nov 07 12:17:29 ovpn2 systemd[1]: openvpn-server@opvn2.service: Failed 
with result 'exit-code'.
Nov 07 12:17:29 ovpn2 systemd[1]: Failed to start 
openvpn-server@opvn2.service - OpenVPN service for opvn2.


This is the openvpn-server@.service:

[Unit]
Description=OpenVPN service for %I
After=network-online.target
Wants=network-online.target
Documentation=man:openvpn(8)
Documentation=https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage
Documentation=https://community.openvpn.net/openvpn/wiki/HOWTO

[Service]
Type=notify
PrivateTmp=true
WorkingDirectory=/etc/openvpn/server
ExecStart=/usr/sbin/openvpn --status %t/openvpn-server/status-%i.log 
--status-version 2 --suppress-timestamps --config %i.conf
CapabilityBoundingSet=CAP_IPC_LOCK CAP_NET_ADMIN CAP_NET_BIND_SERVICE 
CAP_NET_RAW CAP_SETGID CAP_SETUID CAP_SETPCAP CAP_SYS_CHROOT 
CAP_DAC_OVERRIDE CAP_AUDIT_WRITE

LimitNPROC=10
DeviceAllow=/dev/null rw
DeviceAllow=/dev/net/tun rw
ProtectSystem=true
ProtectHome=true
KillMode=process
RestartSec=5s
Restart=on-failure

[Install]
WantedBy=multi-user.target

And this is my override.conf:

[Service]
ExecStart=
ExecStart=/usr/sbin/openvpn --status %t/openvpn-server/status-%i.log 
--status-version 2 --config %i.conf


(because I want timestamps)
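A minimal sketch (my own, not from the thread) of how the instance name feeds the unit's ExecStart: systemd sets %i to the text after the '@' in the unit name, and the relative --config path resolves under WorkingDirectory:

```shell
# Sketch, assuming the unit file quoted above: for openvpn-server@ovpn2.service,
# %i expands to "ovpn2", so ExecStart's "--config %i.conf" opens ovpn2.conf
# inside WorkingDirectory=/etc/openvpn/server.
instance="ovpn2"                                # the part after '@' in the unit name
config="/etc/openvpn/server/${instance}.conf"   # what --config %i.conf resolves to
echo "$config"
```

Comparing that expansion against the filenames actually present in /etc/openvpn/server is a quick sanity check in situations like this.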

As I say, the VPN is functioning, and systemctl status shows it's running.

Why would it firstly think it needs starting, and secondly fail to do 
so? The config file /etc/openvpn/server/ovpn2.conf which it "fails 
to open" hasn't gone away ...


Any tips?

Note the machine is quite low powered; it's an old HP thin client. But 
this is all it does, and it seems to perform adequately.


Thanks,
Richard



Re: Issues about removed topics with KafkaSource

2023-11-02 Thread Hector Rios
Hi Emily

One workaround that might help is to leverage the state-processor-api[1].
You would have to do some upfront work to create a state-processor job to
wipe the state (offsets) of the topic you want to remove and use the newly
generated savepoint without the removed state of the topic or topics. It
could even be parameterized to be more generic and thus be reusable across
multiple jobs.

[1]
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/libs/state_processor_api/#state-processor-api

Hope that helps
-Hector


On Thu, Nov 2, 2023 at 7:25 AM Emily Li via user 
wrote:

> Hey Martijn
>
> Thanks for the clarification. Now it makes sense.
>
> I saw this feature FLIP-246 is still a WIP and there's no release date
> yet, and it actually contains quite some changes in it. We noticed there's
> a WIP PR for this change, just wondering if there's any plan in releasing
> this feature?
>
> For our current situation, we are subscribing to hundreds of topics, and
> we add/remove topics quite often (every few days probably), adding topics
> seems to be okay at the moment, but with the current KafkaSource design, if
> removing a topic means we need to change the kafka source id, and restart
> with non-restored state, I assume it means we will lose the states of other
> topics as well, and because we need to do this quite often, it seems quite
> inconvenient to keep restarting the application with non-restored state.
>
> We are thinking of introducing some temporary workaround while waiting for
> this dynamic adding/removing topics feature (probably by forking the
> flink-connector-kafka and adding some custom logic there), just wondering if
> there's any direction you can point us if we are to do the work around, or
> is there any pre-existing work that we could potentially re-use?
>
> On Thu, Nov 2, 2023 at 3:30 AM Martijn Visser 
> wrote:
>
>> Hi,
>>
>> That's by design: you can't dynamically add and remove topics from an
>> existing Flink job that is being restarted from a snapshot. The
>> feature you're looking for is being planned as part of FLIP-246 [1]
>>
>> Best regards,
>>
>> Martijn
>>
>> [1]
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=217389320
>>
>>
>> On Wed, Nov 1, 2023 at 7:29 AM Emily Li via user 
>> wrote:
>> >
>> > Hey
>> >
>> > We have a flinkapp which is subscribing to multiple topics, we recently
>> upgraded our application from 1.13 to 1.15, which we started to use
>> KafkaSource instead of FlinkKafkaConsumer (deprecated).
>> >
>> > But we noticed some weird issue with KafkaSource after the upgrade, we
>> are setting the topics with the kafkaSource builder like this
>> >
>> > ```
>> >
>> > KafkaSource
>> >
>> >   .builder[CustomEvent]
>> >
>> >   .setBootstrapServers(p.bootstrapServers)
>> >
>> >   .setGroupId(consumerGroupName)
>> >
>> >   .setDeserializer(deserializer)
>> >
>> >   .setTopics(topics)
>> > ```
>> >
>> > And we pass in a list of topics to subscribe, but from time to time we
>> will add some new topics or remove some topics (stop consuming them), but
>> we noticed that ever since we upgraded to 1.15, when we remove a topic from
>> the list, it is somehow still consuming the topic (committing offsets to the
>> already unsubscribed topics, we also have some logs and metrics showing
>> that we are still consuming the already removed topic), and from the
>> aws.kafka.sum_offset_lag metric, we can also see the removed topic having
>> negative lag...
>> >
>> >
>> > And if we delete the topic in Kafka, the running flink application will
>> crash and throw an error saying the partition cannot be found (because the
>> topic is already deleted from Kafka).
>> >
>> >
>> > We'd like to understand what could have caused this and if this is a
>> bug in KafkaSource?
>> >
>> >
>> > When we were in 1.13, this never occurred, we were able to remove
>> topics without any issues.
>> >
>> >
>> > We also tried to upgrade to flink 1.17, but the same issue occurred.
>>
>


[KPipeWire] [Bug 476186] Screen recording quality is terrible

2023-10-30 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=476186

--- Comment #3 from Hector Martin  ---
I am in fact using scaling (150%), but the bad quality looks like compression artifacts, so
it shouldn't be related to scaling/resolution, but rather a codec issue. It
also happens when recording the full screen.

@Noah yes, with fine detail like lots of text (especially colored) it falls
apart. Looking at the printed config, `rc_end_usage` is 0 which means VBR,
which is the same problem as libx264: you are telling the encoder to target a
specific bitrate regardless of picture complexity, so if things get too
complex, the whole thing becomes a blockfest since it's physically impossible
to encode in that few bits. Screen recording *really* needs constant quality
mode to be useful for offline recording (not streaming where you have
constraints).
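As a hypothetical illustration of constant-quality mode (my example with a standalone encoder, not KPipeWire's code or API):

```shell
# Hypothetical illustration: libvpx-vp9 constant-quality encoding via ffmpeg.
# "-crf 30 -b:v 0" targets a quality level instead of a bitrate, so complex
# frames (lots of colored text) get the bits they need rather than turning
# into block artifacts.
ffmpeg -i capture.mkv -c:v libvpx-vp9 -crf 30 -b:v 0 screen-recording.webm
```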


[jenkins-infra/jenkins.io] 045672: Added a lighttpd reverse proxy example (#5900)

2023-10-30 Thread 'Hector Vido' via Jenkins Commits
  Branch: refs/heads/master
  Home:   https://github.com/jenkins-infra/jenkins.io
  Commit: 04567273e6bfbe91aef453dfcfdefc0b3c9f3fd4
  
https://github.com/jenkins-infra/jenkins.io/commit/04567273e6bfbe91aef453dfcfdefc0b3c9f3fd4
  Author: Hector Vido <39673799+hector-v...@users.noreply.github.com>
  Date:   2023-10-30 (Mon, 30 Oct 2023)

  Changed paths:
M 
content/doc/book/system-administration/reverse-proxy-configuration-with-jenkins/index.adoc
A 
content/doc/book/system-administration/reverse-proxy-configuration-with-jenkins/reverse-proxy-configuration-lighttpd.adoc

  Log Message:
  ---
  Added a lighttpd reverse proxy example (#5900)

* Added a lighttpd reverse proxy example

* Apply suggestions from code review

Co-authored-by: Kevin Martens <99040580+kmarten...@users.noreply.github.com>

-

Co-authored-by: Hector Vido 
Co-authored-by: Kevin Martens <99040580+kmarten...@users.noreply.github.com>
Co-authored-by: Mark Waite 
Co-authored-by: Kris Stern 




Re: [dmarc-ietf] DMARC session at IETF 118

2023-10-30 Thread Hector Santos

Hi Barry,

Why not both?  A robust discussion on the mailing list coupled with a 
dedicated session at IETF 118. This issue has deep implications for 
everyone from small businesses to the large players in domain hosting 
like Microsoft, Google, and Yahoo.


While these major players hold a disproportionate amount of influence 
given their scale, it's crucial that the IETF remains committed to 
standards that serve the broader community. The far-reaching impact of 
decisions related to SPF, DKIM, and DMARC policies cannot be 
overstated. Moreover, I believe that discussing these issues in a more 
dynamic setting like IETF 118 can bring fresh perspectives into the 
fold, especially from those who may not be regular mailing list 
contributors but have substantial stakes in this.


Specifically, I want to draw attention to the idea of expanding our 
focus to include DKIM Policy Modeling. DMARC, a derivative of the 
incomplete ADSP/ATPS protocols, has its value but has also been 
commercialized to a degree that may not fully align with IETF 
standards. Instead of introducing new proposals, my suggestion aims 
to refocus our current discussions. I believe we could benefit from 
considering DMARC as an Informational document. This would allow us to 
collectively examine existing standards more critically and possibly 
identify areas for improvement that better align with IETF principles.



Thank you for considering this perspective.

Best,
HLS



On 10/28/2023 1:38 PM, Barry Leiba wrote:

I'm starting this in a separate thread that I want to keep for ONLY
the following question:

Do we want to use the session we have scheduled at IETF 118 to talk
about the issue that clearly is still in discussion about adding a tag
to specify which authentication mechanism(s) to use when evaluating
DMARC?

Or shall I cancel the 118 session and just let the discussion continue
on the mailing list?

And being clear here: the "eliminate SPF entirely" suggestion is
definitely out, failing rough consensus.  We're *only* talking about
the suggestion to add a tag to specify what the sender wants.

Barry

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc



--
Hector Santos,
https://santronics.com
https://winserver.com





Re: Default DNS lookup command?

2023-10-30 Thread Richard Hector

On 24/10/23 06:01, Max Nikulin wrote:

On 22/10/2023 18:39, Richard Hector wrote:

But not strictly a DNS lookup tool:

richard@zircon:~$ getent hosts zircon
127.0.1.1   zircon.lan.walnut.gen.nz zircon

That's from my /etc/hosts file, and overrides DNS. I didn't see an 
option in the manpage to ignore /etc/hosts.


getent -s dns hosts zircon

However, /etc/resolv.conf may point to a local systemd-resolved server or 
to dnsmasq started by NetworkManager, and they read /etc/hosts by default.


Ah, thanks. But I don't feel too bad about not finding that ... 
'service' is not defined in that file, 'dns' doesn't occur, and 
searching for 'hosts' doesn't give anything useful either. I guess 
reading nsswitch.conf(5) is required.
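For reference, a small sketch of the lookups being compared (my example; service names as per nsswitch.conf(5)):

```shell
# Plain "getent hosts" follows the hosts: line in /etc/nsswitch.conf, so an
# /etc/hosts entry can shadow DNS. "getent -s SERVICE hosts" pins the query
# to a single NSS service: "-s files" reads only /etc/hosts, "-s dns"
# queries only DNS.
getent -s files hosts localhost   # answered from /etc/hosts alone
```

(`getent -s dns hosts zircon`, as suggested above, is the same idea with the dns service.)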


Thanks,
Richard



Re: [PATCH 4/4] usb: storage: Implement 64-bit LBA support

2023-10-29 Thread Hector Martin
On 29/10/2023 21.11, Marek Vasut wrote:
> On 10/29/23 08:23, Hector Martin wrote:
>> This makes things work properly on devices with >= 2 TiB
>> capacity. If u-boot is built without CONFIG_SYS_64BIT_LBA,
>> the capacity will be clamped at 2^32 - 1 sectors.
>>
>> Signed-off-by: Hector Martin 
>> ---
>>   common/usb_storage.c | 132 
>> ---
>>   1 file changed, 114 insertions(+), 18 deletions(-)
>>
>> diff --git a/common/usb_storage.c b/common/usb_storage.c
>> index 95507ffbce48..3035f2ee9868 100644
>> --- a/common/usb_storage.c
>> +++ b/common/usb_storage.c
>> @@ -66,7 +66,7 @@
>>   static const unsigned char us_direction[256/8] = {
>>  0x28, 0x81, 0x14, 0x14, 0x20, 0x01, 0x90, 0x77,
>>  0x0C, 0x20, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00,
>> -0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x01,
>> +0x00, 0x01, 0x00, 0x40, 0x00, 0x01, 0x00, 0x01,
> 
> What changed here ?

This is an incomplete bitmap specifying the data transfer direction for
every possible SCSI command ID. I'm adding the new commands I'm now using.
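To make that concrete, the bitmap lookup can be replayed by hand — a sketch of mine using the shift/mask from the US_DIRECTION() macro in the patch:

```shell
# Recompute US_DIRECTION(x) = (us_direction[x >> 3] >> (x & 7)) & 1 for the
# two commands the patch adds. After the change, us_direction[17] == 0x01
# (covers commands 0x88-0x8F; bit 0 is 0x88, READ(16)) and us_direction[19]
# == 0x40 (covers 0x98-0x9F; bit 6 is 0x9E, SERVICE ACTION IN(16), used for
# READ CAPACITY(16)). A result of 1 means device-to-host (data-in).
us_direction() { echo $(( ($1 >> ($2 & 7)) & 1 )); }
us_direction 0x01 0x88   # READ(16)
us_direction 0x40 0x9E   # SERVICE ACTION IN(16)
```

Note that WRITE(16) (0x8A, bit 2 of byte 17) stays 0, i.e. host-to-device, as expected.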

> 
>>  0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
>>   };
>>   #define US_DIRECTION(x) ((us_direction[x>>3] >> (x & 7)) & 1)
>> @@ -1073,6 +1073,27 @@ static int usb_read_capacity(struct scsi_cmd *srb, 
>> struct us_data *ss)
>>  return -1;
>>   }
> 
> [...]
> 
>> +#ifdef CONFIG_SYS_64BIT_LBA
> 
> Could you try and use CONFIG_IS_ENABLED() ?

Sure.

> 
>> +if (capacity == 0x1) {
>> +if (usb_read_capacity64(pccb, ss) != 0) {
>> +puts("READ_CAP64 ERROR\n");
>> +} else {
>> +debug("Read Capacity 64 returns: 0x%08x, 0x%08x, 
>> 0x%08x\n",
>> +  cap[0], cap[1], cap[2]);
>> +capacity = be64_to_cpu(*(uint64_t *)cap) + 1;
>> +blksz = be32_to_cpu(cap[2]);
>> +}
>> +    }
>> +#else
>> +/*
>> + * READ CAPACITY will return 0xffffffff when limited,
>> + * which wraps to 0 with the +1 above
>> + */
>> +if (!capacity) {
>> +puts("LBA exceeds 32 bits but 64-bit LBA is disabled.\n");
>> +capacity = ~0;
>> +}
>>   #endif
> 
> 

- Hector



Re: [PATCH] usb: Ignore endpoints in non-zero altsettings

2023-10-29 Thread Hector Martin
On 29/10/2023 21.04, Marek Vasut wrote:
> On 10/29/23 08:24, Hector Martin wrote:
>> We currently do not really handle altsettings properly, and no driver
>> uses them. Ignore the respective endpoint descriptors for secondary
>> altsettings, to avoid creating duplicate endpoint records in the
>> interface.
>>
>> This will have to be revisited if/when we have a driver that needs
>> altsettings to work properly.
>>
>> Signed-off-by: Hector Martin 
>> ---
>>   common/usb.c | 9 +
>>   1 file changed, 9 insertions(+)
>>
>> diff --git a/common/usb.c b/common/usb.c
>> index aad13fd9c557..90f72fda00bc 100644
>> --- a/common/usb.c
>> +++ b/common/usb.c
>> @@ -463,6 +463,15 @@ static int usb_parse_config(struct usb_device *dev,
>>  puts("Endpoint descriptor out of order!\n");
>>  break;
>>  }
>> +if (if_desc->num_altsetting > 1) {
>> +/*
>> + * Ignore altsettings, which can trigger duplicate
>> + * endpoint errors below. Revisit this when some
>> + * driver actually needs altsettings with differing
>> + * endpoint setups.
>> + */
> 
> How do you trigger this error ?
> 

Plug in a device with altsettings, like most sound cards or UVC devices.

- Hector



[PATCH 2/2] usb: xhci: Hook up timeouts

2023-10-29 Thread Hector Martin
Now that the USB core passes through timeout info to the host
controller, actually hook it up.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 32 
 drivers/usb/host/xhci.c  | 23 +--
 include/usb/xhci.h   | 14 ++
 3 files changed, 43 insertions(+), 26 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index dabe6cf86af2..14c0c60e8524 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -463,11 +463,16 @@ static int event_ready(struct xhci_ctrl *ctrl)
  * @param expected TRB type expected from Event TRB
  * Return: pointer to event trb
  */
-union xhci_trb *xhci_wait_for_event(struct xhci_ctrl *ctrl, trb_type expected)
+union xhci_trb *xhci_wait_for_event(struct xhci_ctrl *ctrl, trb_type expected,
+   int timeout)
 {
trb_type type;
unsigned long ts = get_timer(0);
 
+   /* Fallback in case someone passes in 0 */
+   if (!timeout)
+   timeout = XHCI_TIMEOUT;
+
do {
union xhci_trb *event = ctrl->event_ring->dequeue;
 
@@ -504,7 +509,7 @@ union xhci_trb *xhci_wait_for_event(struct xhci_ctrl *ctrl, 
trb_type expected)
le32_to_cpu(event->generic.field[3]));
 
xhci_acknowledge_event(ctrl);
-   } while (get_timer(ts) < XHCI_TIMEOUT);
+   } while (get_timer(ts) < timeout);
 
if (expected == TRB_TRANSFER)
return NULL;
@@ -528,7 +533,7 @@ static void reset_ep(struct usb_device *udev, int ep_index)
 
printf("Resetting EP %d...\n", ep_index);
xhci_queue_command(ctrl, 0, udev->slot_id, ep_index, TRB_RESET_EP);
-   event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   event = xhci_wait_for_event(ctrl, TRB_COMPLETION, XHCI_SYS_TIMEOUT);
if (!event)
return;
 
@@ -539,7 +544,7 @@ static void reset_ep(struct usb_device *udev, int ep_index)
addr = xhci_trb_virt_to_dma(ring->enq_seg,
(void *)((uintptr_t)ring->enqueue | ring->cycle_state));
xhci_queue_command(ctrl, addr, udev->slot_id, ep_index, TRB_SET_DEQ);
-   event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   event = xhci_wait_for_event(ctrl, TRB_COMPLETION, XHCI_SYS_TIMEOUT);
if (!event)
return;
 
@@ -569,7 +574,7 @@ static void abort_td(struct usb_device *udev, int ep_index)
 
xhci_queue_command(ctrl, 0, udev->slot_id, ep_index, TRB_STOP_RING);
 
-   event = xhci_wait_for_event(ctrl, TRB_NONE);
+   event = xhci_wait_for_event(ctrl, TRB_NONE, XHCI_SYS_TIMEOUT);
if (!event)
return;
 
@@ -582,7 +587,8 @@ static void abort_td(struct usb_device *udev, int ep_index)
!= COMP_STOP)));
xhci_acknowledge_event(ctrl);
 
-   event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   event = xhci_wait_for_event(ctrl, TRB_COMPLETION,
+   XHCI_SYS_TIMEOUT);
if (!event)
return;
type = TRB_FIELD_TO_TYPE(le32_to_cpu(event->event_cmd.flags));
@@ -601,7 +607,7 @@ static void abort_td(struct usb_device *udev, int ep_index)
addr = xhci_trb_virt_to_dma(ring->enq_seg,
(void *)((uintptr_t)ring->enqueue | ring->cycle_state));
xhci_queue_command(ctrl, addr, udev->slot_id, ep_index, TRB_SET_DEQ);
-   event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   event = xhci_wait_for_event(ctrl, TRB_COMPLETION, XHCI_SYS_TIMEOUT);
if (!event)
return;
 
@@ -657,10 +663,11 @@ static void record_transfer_result(struct usb_device 
*udev,
  * @param pipe contains the DIR_IN or OUT , devnum
  * @param length   length of the buffer
  * @param buffer   buffer to be read/written based on the request
+ * @param timeout  timeout in milliseconds
  * Return: returns 0 if successful else -1 on failure
  */
 int xhci_bulk_tx(struct usb_device *udev, unsigned long pipe,
-   int length, void *buffer)
+   int length, void *buffer, int timeout)
 {
int num_trbs = 0;
struct xhci_generic_trb *start_trb;
@@ -825,7 +832,7 @@ int xhci_bulk_tx(struct usb_device *udev, unsigned long 
pipe,
giveback_first_trb(udev, ep_index, start_cycle, start_trb);
 
 again:
-   event = xhci_wait_for_event(ctrl, TRB_TRANSFER);
+   event = xhci_wait_for_event(ctrl, TRB_TRANSFER, timeout);
if (!event) {
debug("XHCI bulk transfer timed out, aborting...\n");
abort_td(udev, ep_index);
@@ -862,11 +869,12 @@ again:
  * @param req  request type
  * @param length   length of the buffer
  * @param buffer   buffer to be read/written based on the request

[PATCH 1/2] usb: Pass through timeout to drivers

2023-10-29 Thread Hector Martin
The old USB code was interrupt-driven and just polled at the top level.
This has been obsolete since interrupts were removed, which means the
timeout support has been completely broken.

Rip out the top-level polling and just pass through the timeout
parameter to host controller drivers. Right now this is ignored in the
individual drivers.

Signed-off-by: Hector Martin 
---
 common/usb.c| 21 ++---
 drivers/usb/host/ehci-hcd.c |  5 +++--
 drivers/usb/host/ohci-hcd.c |  5 +++--
 drivers/usb/host/r8a66597-hcd.c |  5 +++--
 drivers/usb/host/usb-sandbox.c  |  6 --
 drivers/usb/host/usb-uclass.c   |  9 +
 drivers/usb/host/xhci.c |  5 +++--
 include/usb.h   | 10 ++
 8 files changed, 29 insertions(+), 37 deletions(-)

diff --git a/common/usb.c b/common/usb.c
index 836506dcd9e9..8d13c5899027 100644
--- a/common/usb.c
+++ b/common/usb.c
@@ -241,22 +241,10 @@ int usb_control_msg(struct usb_device *dev, unsigned int 
pipe,
  request, requesttype, value, index, size);
dev->status = USB_ST_NOT_PROC; /*not yet processed */
 
-   err = submit_control_msg(dev, pipe, data, size, setup_packet);
+   err = submit_control_msg(dev, pipe, data, size, setup_packet, timeout);
if (err < 0)
return err;
-   if (timeout == 0)
-   return (int)size;
 
-   /*
-* Wait for status to update until timeout expires, USB driver
-* interrupt handler may set the status when the USB operation has
-* been completed.
-*/
-   while (timeout--) {
-   if (!((volatile unsigned long)dev->status & USB_ST_NOT_PROC))
-   break;
-   mdelay(1);
-   }
if (dev->status)
return -1;
 
@@ -275,13 +263,8 @@ int usb_bulk_msg(struct usb_device *dev, unsigned int pipe,
if (len < 0)
return -EINVAL;
dev->status = USB_ST_NOT_PROC; /*not yet processed */
-   if (submit_bulk_msg(dev, pipe, data, len) < 0)
+   if (submit_bulk_msg(dev, pipe, data, len, timeout) < 0)
return -EIO;
-   while (timeout--) {
-   if (!((volatile unsigned long)dev->status & USB_ST_NOT_PROC))
-   break;
-   mdelay(1);
-   }
*actual_length = dev->act_len;
if (dev->status == 0)
return 0;
diff --git a/drivers/usb/host/ehci-hcd.c b/drivers/usb/host/ehci-hcd.c
index 9839aa17492d..ad0a1b9d24b1 100644
--- a/drivers/usb/host/ehci-hcd.c
+++ b/drivers/usb/host/ehci-hcd.c
@@ -1627,7 +1627,7 @@ int usb_lock_async(struct usb_device *dev, int lock)
 #if CONFIG_IS_ENABLED(DM_USB)
 static int ehci_submit_control_msg(struct udevice *dev, struct usb_device 
*udev,
   unsigned long pipe, void *buffer, int length,
-  struct devrequest *setup)
+  struct devrequest *setup, int timeout)
 {
debug("%s: dev='%s', udev=%p, udev->dev='%s', portnr=%d\n", __func__,
  dev->name, udev, udev->dev->name, udev->portnr);
@@ -1636,7 +1636,8 @@ static int ehci_submit_control_msg(struct udevice *dev, 
struct usb_device *udev,
 }
 
 static int ehci_submit_bulk_msg(struct udevice *dev, struct usb_device *udev,
-   unsigned long pipe, void *buffer, int length)
+   unsigned long pipe, void *buffer, int length,
+   int timeout)
 {
debug("%s: dev='%s', udev=%p\n", __func__, dev->name, udev);
return _ehci_submit_bulk_msg(udev, pipe, buffer, length);
diff --git a/drivers/usb/host/ohci-hcd.c b/drivers/usb/host/ohci-hcd.c
index 3f4418198ccd..8dc1f6660077 100644
--- a/drivers/usb/host/ohci-hcd.c
+++ b/drivers/usb/host/ohci-hcd.c
@@ -2047,7 +2047,7 @@ int submit_control_msg(struct usb_device *dev, unsigned 
long pipe,
 #if CONFIG_IS_ENABLED(DM_USB)
 static int ohci_submit_control_msg(struct udevice *dev, struct usb_device 
*udev,
   unsigned long pipe, void *buffer, int length,
-  struct devrequest *setup)
+  struct devrequest *setup, int timeout)
 {
ohci_t *ohci = dev_get_priv(usb_get_bus(dev));
 
@@ -2056,7 +2056,8 @@ static int ohci_submit_control_msg(struct udevice *dev, 
struct usb_device *udev,
 }
 
 static int ohci_submit_bulk_msg(struct udevice *dev, struct usb_device *udev,
-   unsigned long pipe, void *buffer, int length)
+   unsigned long pipe, void *buffer, int length,
+   int timeout)
 {
ohci_t *ohci = dev_get_priv(usb_get_bus(dev));
 
diff --git a/drivers/usb/host/r8a66597-hcd.c b/drivers/usb/host/r8a66597-hcd.c
index 3ccbc16da379..0ac853dc558b 100

[PATCH 0/2] USB fixes: (Re)implement timeouts

2023-10-29 Thread Hector Martin
A long time ago, the USB code was interrupt-driven and used top-level
timeout handling. This has long been obsolete, and that code is just
broken dead cruft. HC drivers instead hardcode timeouts today.

We need to be able to specify timeouts explicitly to handle cases like
USB hard disks spinning up, without having ridiculously long timeouts
across the board (which would cause endless waiting when things go
wrong anywhere else). So, it's time to rip out the old broken nonsense
and actually pass through timeouts to USB host controller drivers, so
they can be implemented properly.

This series adds the necessary top-level scaffolding for control/bulk
timeouts, and implements them in xHCI. I didn't bother with interrupt
transfers, since I figure those probably never need long timeouts
anyway.

The platform I deal with only has xHCI, so I'll leave implementing this
for EHCI/OHCI to someone else if anyone cares :)

This series needs to be applied after [1], since the xHCI changes depend
on changes made there.

[1] 
https://lore.kernel.org/u-boot/20231029-usb-fixes-1-v2-0-623533f63...@marcan.st/

Signed-off-by: Hector Martin 
---
Hector Martin (2):
  usb: Pass through timeout to drivers
  usb: xhci: Hook up timeouts

 common/usb.c| 21 ++---
 drivers/usb/host/ehci-hcd.c |  5 +++--
 drivers/usb/host/ohci-hcd.c |  5 +++--
 drivers/usb/host/r8a66597-hcd.c |  5 +++--
 drivers/usb/host/usb-sandbox.c  |  6 --
 drivers/usb/host/usb-uclass.c   |  9 +
 drivers/usb/host/xhci-ring.c| 32 
 drivers/usb/host/xhci.c | 28 
 include/usb.h   | 10 ++
 include/usb/xhci.h  | 14 ++
 10 files changed, 72 insertions(+), 63 deletions(-)
---
base-commit: 3d5d748e4d66b98109669c05d0c473fe67795801
change-id: 20231029-usb-fixes-5-ca87bbedb40c

Best regards,
-- 
Hector Martin 



[PATCH] usb: Ignore endpoints in non-zero altsettings

2023-10-29 Thread Hector Martin
We currently do not really handle altsettings properly, and no driver
uses them. Ignore the respective endpoint descriptors for secondary
altsettings, to avoid creating duplicate endpoint records in the
interface.

This will have to be revisited if/when we have a driver that needs
altsettings to work properly.

Signed-off-by: Hector Martin 
---
 common/usb.c | 9 +
 1 file changed, 9 insertions(+)

diff --git a/common/usb.c b/common/usb.c
index aad13fd9c557..90f72fda00bc 100644
--- a/common/usb.c
+++ b/common/usb.c
@@ -463,6 +463,15 @@ static int usb_parse_config(struct usb_device *dev,
puts("Endpoint descriptor out of order!\n");
break;
}
+   if (if_desc->num_altsetting > 1) {
+   /*
+* Ignore altsettings, which can trigger duplicate
+* endpoint errors below. Revisit this when some
+* driver actually needs altsettings with differing
+* endpoint setups.
+*/
+   break;
+   }
epno = dev->config.if_desc[ifno].no_of_ep;
if_desc = &dev->config.if_desc[ifno];
if (epno >= USB_MAXENDPOINTS) {

---
base-commit: 8ad1c9c26f7740806a162818b790d4a72f515b7e
change-id: 20231029-usb-fixes-4-ba6931acf217

Best regards,
-- 
Hector Martin 



[PATCH 4/4] usb: storage: Implement 64-bit LBA support

2023-10-29 Thread Hector Martin
This makes things work properly on devices with >= 2 TiB
capacity. If u-boot is built without CONFIG_SYS_64BIT_LBA,
the capacity will be clamped at 2^32 - 1 sectors.

Signed-off-by: Hector Martin 
---
 common/usb_storage.c | 132 ---
 1 file changed, 114 insertions(+), 18 deletions(-)

diff --git a/common/usb_storage.c b/common/usb_storage.c
index 95507ffbce48..3035f2ee9868 100644
--- a/common/usb_storage.c
+++ b/common/usb_storage.c
@@ -66,7 +66,7 @@
 static const unsigned char us_direction[256/8] = {
0x28, 0x81, 0x14, 0x14, 0x20, 0x01, 0x90, 0x77,
0x0C, 0x20, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00,
-   0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x01,
+   0x00, 0x01, 0x00, 0x40, 0x00, 0x01, 0x00, 0x01,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
 };
 #define US_DIRECTION(x) ((us_direction[x>>3] >> (x & 7)) & 1)
@@ -1073,6 +1073,27 @@ static int usb_read_capacity(struct scsi_cmd *srb, 
struct us_data *ss)
return -1;
 }
 
+#ifdef CONFIG_SYS_64BIT_LBA
+static int usb_read_capacity64(struct scsi_cmd *srb, struct us_data *ss)
+{
+   int retry;
+   /* XXX retries */
+   retry = 3;
+   do {
+   memset(&srb->cmd[0], 0, 16);
+   srb->cmd[0] = SCSI_SRV_ACTION_IN;
+   srb->cmd[1] = (srb->lun << 5) | SCSI_SAI_RD_CAPAC16;
+   srb->cmd[13] = 32; /* Allocation length */
+   srb->datalen = 32;
+   srb->cmdlen = 16;
+   if (ss->transport(srb, ss) == USB_STOR_TRANSPORT_GOOD)
+   return 0;
+   } while (retry--);
+
+   return -1;
+}
+#endif
+
 static int usb_read_10(struct scsi_cmd *srb, struct us_data *ss,
   unsigned long start, unsigned short blocks)
 {
@@ -1107,6 +1128,49 @@ static int usb_write_10(struct scsi_cmd *srb, struct 
us_data *ss,
return ss->transport(srb, ss);
 }
 
+#ifdef CONFIG_SYS_64BIT_LBA
+static int usb_read_16(struct scsi_cmd *srb, struct us_data *ss,
+  uint64_t start, unsigned short blocks)
+{
+   memset(&srb->cmd[0], 0, 16);
+   srb->cmd[0] = SCSI_READ16;
+   srb->cmd[1] = srb->lun << 5;
+   srb->cmd[2] = ((unsigned char) (start >> 56)) & 0xff;
+   srb->cmd[3] = ((unsigned char) (start >> 48)) & 0xff;
+   srb->cmd[4] = ((unsigned char) (start >> 40)) & 0xff;
+   srb->cmd[5] = ((unsigned char) (start >> 32)) & 0xff;
+   srb->cmd[6] = ((unsigned char) (start >> 24)) & 0xff;
+   srb->cmd[7] = ((unsigned char) (start >> 16)) & 0xff;
+   srb->cmd[8] = ((unsigned char) (start >> 8)) & 0xff;
+   srb->cmd[9] = ((unsigned char) (start)) & 0xff;
+   srb->cmd[12] = ((unsigned char) (blocks >> 8)) & 0xff;
+   srb->cmd[13] = (unsigned char) blocks & 0xff;
+   srb->cmdlen = 16;
+   debug("read16: start %llx blocks %x\n", (long long)start, blocks);
+   return ss->transport(srb, ss);
+}
+
+static int usb_write_16(struct scsi_cmd *srb, struct us_data *ss,
+   uint64_t start, unsigned short blocks)
+{
+   memset(&srb->cmd[0], 0, 16);
+   srb->cmd[0] = SCSI_WRITE16;
+   srb->cmd[1] = srb->lun << 5;
+   srb->cmd[2] = ((unsigned char) (start >> 56)) & 0xff;
+   srb->cmd[3] = ((unsigned char) (start >> 48)) & 0xff;
+   srb->cmd[4] = ((unsigned char) (start >> 40)) & 0xff;
+   srb->cmd[5] = ((unsigned char) (start >> 32)) & 0xff;
+   srb->cmd[6] = ((unsigned char) (start >> 24)) & 0xff;
+   srb->cmd[7] = ((unsigned char) (start >> 16)) & 0xff;
+   srb->cmd[8] = ((unsigned char) (start >> 8)) & 0xff;
+   srb->cmd[9] = ((unsigned char) (start)) & 0xff;
+   srb->cmd[12] = ((unsigned char) (blocks >> 8)) & 0xff;
+   srb->cmd[13] = (unsigned char) blocks & 0xff;
+   srb->cmdlen = 16;
+   debug("write16: start %llx blocks %x\n", (long long)start, blocks);
+   return ss->transport(srb, ss);
+}
+#endif
 
 #ifdef CONFIG_USB_BIN_FIXUP
 /*
@@ -1145,6 +1209,7 @@ static unsigned long usb_stor_read(struct blk_desc 
*block_dev, lbaint_t blknr,
struct usb_device *udev;
struct us_data *ss;
int retry;
+   int ret;
struct scsi_cmd *srb = &usb_ccb;
 #if CONFIG_IS_ENABLED(BLK)
struct blk_desc *block_dev;
@@ -1190,7 +1255,13 @@ retry_it:
usb_show_progress();
srb->datalen = block_dev->blksz * smallblks;
srb->pdata = (unsigned char *)buf_addr;
-   if (usb_read_10(srb, ss, start, smallblks)) {
+#ifdef CONFIG_SYS_64BIT_LBA
+   if (block_dev->lba > ((lbaint_t)0x1))
+  

[PATCH 3/4] usb: storage: Use the correct CBW lengths

2023-10-29 Thread Hector Martin
USB UFI uses fixed 12-byte commands (as does RBC, which is not
supported), but SCSI does not have this limitation. Use the correct
command block lengths depending on the subclass.
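The rule being applied is small enough to state in isolation. Below is a hypothetical helper (`cbw_cmdlen` is not a name from the patch) sketching the selection, assuming the caller knows whether the transport subclass mandates 12-byte command blocks:

```c
#include <stdbool.h>

/* Sketch of the length rule this patch applies: UFI (and RBC)
 * transports mandate a fixed 12-byte command block, while plain SCSI
 * transports use the CDB's natural length (6, 10, or 16 bytes). */
static inline int cbw_cmdlen(bool cmd12, int scsi_cmdlen)
{
	return cmd12 ? 12 : scsi_cmdlen;
}
```

This mirrors the `ss->cmd12 ? 12 : N` pattern used throughout the diff below.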

Signed-off-by: Hector Martin 
---
 common/usb_storage.c | 22 ++
 1 file changed, 14 insertions(+), 8 deletions(-)

diff --git a/common/usb_storage.c b/common/usb_storage.c
index 729ddbc75a48..95507ffbce48 100644
--- a/common/usb_storage.c
+++ b/common/usb_storage.c
@@ -107,6 +107,7 @@ struct us_data {
trans_reset transport_reset;/* reset routine */
trans_cmnd  transport;  /* transport routine */
unsigned short  max_xfer_blk;   /* maximum transfer blocks */
+   bool cmd12; /* use 12-byte commands (RBC/UFI) */
 };
 
 #if !CONFIG_IS_ENABLED(BLK)
@@ -349,7 +350,7 @@ static void usb_show_srb(struct scsi_cmd *pccb)
 {
int i;
printf("SRB: len %d datalen 0x%lX\n ", pccb->cmdlen, pccb->datalen);
-   for (i = 0; i < 12; i++)
+   for (i = 0; i < pccb->cmdlen; i++)
printf("%02X ", pccb->cmd[i]);
printf("\n");
 }
@@ -888,7 +889,7 @@ do_retry:
psrb->cmd[4] = 18;
psrb->datalen = 18;
psrb->pdata = &srb->sense_buf[0];
-   psrb->cmdlen = 12;
+   psrb->cmdlen = us->cmd12 ? 12 : 6;
/* issue the command */
result = usb_stor_CB_comdat(psrb, us);
debug("auto request returned %d\n", result);
@@ -989,7 +990,7 @@ static int usb_inquiry(struct scsi_cmd *srb, struct us_data 
*ss)
srb->cmd[1] = srb->lun << 5;
srb->cmd[4] = 36;
srb->datalen = 36;
-   srb->cmdlen = 12;
+   srb->cmdlen = ss->cmd12 ? 12 : 6;
i = ss->transport(srb, ss);
debug("inquiry returns %d\n", i);
if (i == 0)
@@ -1014,7 +1015,7 @@ static int usb_request_sense(struct scsi_cmd *srb, struct 
us_data *ss)
srb->cmd[4] = 18;
srb->datalen = 18;
srb->pdata = &srb->sense_buf[0];
-   srb->cmdlen = 12;
+   srb->cmdlen = ss->cmd12 ? 12 : 6;
ss->transport(srb, ss);
debug("Request Sense returned %02X %02X %02X\n",
  srb->sense_buf[2], srb->sense_buf[12],
@@ -1032,7 +1033,7 @@ static int usb_test_unit_ready(struct scsi_cmd *srb, 
struct us_data *ss)
srb->cmd[0] = SCSI_TST_U_RDY;
srb->cmd[1] = srb->lun << 5;
srb->datalen = 0;
-   srb->cmdlen = 12;
+   srb->cmdlen = ss->cmd12 ? 12 : 6;
if (ss->transport(srb, ss) == USB_STOR_TRANSPORT_GOOD) {
ss->flags |= USB_READY;
return 0;
@@ -1064,7 +1065,7 @@ static int usb_read_capacity(struct scsi_cmd *srb, struct 
us_data *ss)
srb->cmd[0] = SCSI_RD_CAPAC;
srb->cmd[1] = srb->lun << 5;
srb->datalen = 8;
-   srb->cmdlen = 12;
+   srb->cmdlen = ss->cmd12 ? 12 : 10;
if (ss->transport(srb, ss) == USB_STOR_TRANSPORT_GOOD)
return 0;
} while (retry--);
@@ -1084,7 +1085,7 @@ static int usb_read_10(struct scsi_cmd *srb, struct 
us_data *ss,
srb->cmd[5] = ((unsigned char) (start)) & 0xff;
srb->cmd[7] = ((unsigned char) (blocks >> 8)) & 0xff;
srb->cmd[8] = (unsigned char) blocks & 0xff;
-   srb->cmdlen = 12;
+   srb->cmdlen = ss->cmd12 ? 12 : 10;
debug("read10: start %lx blocks %x\n", start, blocks);
return ss->transport(srb, ss);
 }
@@ -1101,7 +1102,7 @@ static int usb_write_10(struct scsi_cmd *srb, struct 
us_data *ss,
srb->cmd[5] = ((unsigned char) (start)) & 0xff;
srb->cmd[7] = ((unsigned char) (blocks >> 8)) & 0xff;
srb->cmd[8] = (unsigned char) blocks & 0xff;
-   srb->cmdlen = 12;
+   srb->cmdlen = ss->cmd12 ? 12 : 10;
debug("write10: start %lx blocks %x\n", start, blocks);
return ss->transport(srb, ss);
 }
@@ -1407,6 +1408,11 @@ int usb_storage_probe(struct usb_device *dev, unsigned 
int ifnum,
printf("Sorry, protocol %d not yet supported.\n", ss->subclass);
return 0;
}
+
+   /* UFI uses 12-byte commands (like RBC, unlike SCSI) */
+   if (ss->subclass == US_SC_UFI)
+   ss->cmd12 = true;
+
if (ss->ep_int) {
/* we had found an interrupt endpoint, prepare irq pipe
 * set up the IRQ pipe and handler

-- 
2.41.0



[PATCH 1/4] scsi: Fix a bunch of SCSI definitions.

2023-10-29 Thread Hector Martin
0x9e isn't Read Capacity, it's a service action and the read capacity
command is a subcommand.

READ16 is not 0x48, it's 0x88. 0x48 is SANITIZE and that sounds like we
might have been destroying data instead of reading data. No bueno.
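To make the corrected encoding concrete, here is a hedged sketch (not code from the patch) of building a READ CAPACITY(16) CDB with the new definitions: opcode 0x9E is SERVICE ACTION IN(16), and the low 5 bits of byte 1 select the READ CAPACITY(16) service action (0x10):

```c
#include <stdint.h>
#include <string.h>

#define SCSI_SRV_ACTION_IN  0x9E /* Service Action In (16) */
#define SCSI_SAI_RD_CAPAC16 0x10 /* service action: Read Capacity (16) */

/* Sketch: fill a 16-byte CDB for READ CAPACITY(16). The opcode byte
 * is the service-action-in opcode; the actual command is selected by
 * the service action code in the low 5 bits of byte 1. Per SBC,
 * bytes 10-13 carry the big-endian allocation length. */
static void build_read_capacity16(uint8_t cdb[16], uint32_t alloc_len)
{
	memset(cdb, 0, 16);
	cdb[0] = SCSI_SRV_ACTION_IN;
	cdb[1] = SCSI_SAI_RD_CAPAC16;
	cdb[10] = (uint8_t)(alloc_len >> 24);
	cdb[11] = (uint8_t)(alloc_len >> 16);
	cdb[12] = (uint8_t)(alloc_len >> 8);
	cdb[13] = (uint8_t)alloc_len;
}
```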

Signed-off-by: Hector Martin 
---
 drivers/ata/ahci.c  | 9 ++---
 drivers/scsi/scsi.c | 4 ++--
 include/scsi.h  | 8 ++--
 3 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
index 2062197afcd3..b252e9e525db 100644
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -906,15 +906,18 @@ static int ahci_scsi_exec(struct udevice *dev, struct 
scsi_cmd *pccb)
case SCSI_RD_CAPAC10:
ret = ata_scsiop_read_capacity10(uc_priv, pccb);
break;
-   case SCSI_RD_CAPAC16:
-   ret = ata_scsiop_read_capacity16(uc_priv, pccb);
-   break;
case SCSI_TST_U_RDY:
ret = ata_scsiop_test_unit_ready(uc_priv, pccb);
break;
case SCSI_INQUIRY:
ret = ata_scsiop_inquiry(uc_priv, pccb);
break;
+   case SCSI_SRV_ACTION_IN:
+   if ((pccb->cmd[1] & 0x1f) == SCSI_SAI_RD_CAPAC16) {
+   ret = ata_scsiop_read_capacity16(uc_priv, pccb);
+   break;
+   }
+   /* Fallthrough */
default:
printf("Unsupport SCSI command 0x%02x\n", pccb->cmd[0]);
return -ENOTSUPP;
diff --git a/drivers/scsi/scsi.c b/drivers/scsi/scsi.c
index d7b33010e469..f2c828eb305e 100644
--- a/drivers/scsi/scsi.c
+++ b/drivers/scsi/scsi.c
@@ -380,8 +380,8 @@ static int scsi_read_capacity(struct udevice *dev, struct 
scsi_cmd *pccb,
 
/* Read capacity (10) was insufficient. Use read capacity (16). */
memset(pccb->cmd, '\0', sizeof(pccb->cmd));
-   pccb->cmd[0] = SCSI_RD_CAPAC16;
-   pccb->cmd[1] = 0x10;
+   pccb->cmd[0] = SCSI_SRV_ACTION_IN;
+   pccb->cmd[1] = SCSI_SAI_RD_CAPAC16;
pccb->cmdlen = 16;
pccb->msgout[0] = SCSI_IDENTIFY; /* NOT USED */
 
diff --git a/include/scsi.h b/include/scsi.h
index b47c7463c1d6..89e268586477 100644
--- a/include/scsi.h
+++ b/include/scsi.h
@@ -141,10 +141,9 @@ struct scsi_cmd {
 #define SCSI_MED_REMOVL 0x1E /* Prevent/Allow medium Removal (O) */
 #define SCSI_READ6 0x08 /* Read 6-byte (MANDATORY) */
 #define SCSI_READ10 0x28 /* Read 10-byte (MANDATORY) */
-#define SCSI_READ16 0x48
+#define SCSI_READ16 0x88 /* Read 16-byte */
 #define SCSI_RD_CAPAC 0x25 /* Read Capacity (MANDATORY) */
 #define SCSI_RD_CAPAC10 SCSI_RD_CAPAC /* Read Capacity (10) */
-#define SCSI_RD_CAPAC16 0x9e /* Read Capacity (16) */
 #define SCSI_RD_DEFECT 0x37 /* Read Defect Data (O) */
 #define SCSI_READ_LONG 0x3E /* Read Long (O) */
 #define SCSI_REASS_BLK 0x07 /* Reassign Blocks (O) */
@@ -158,15 +157,20 @@ struct scsi_cmd {
 #define SCSI_SEEK10 0x2B /* Seek 10-Byte (O) */
 #define SCSI_SEND_DIAG 0x1D /* Send Diagnostics (MANDATORY) */
 #define SCSI_SET_LIMIT 0x33 /* Set Limits (O) */
+#define SCSI_SRV_ACTION_IN 0x9E /* Service Action In */
+#define SCSI_SRV_ACTION_OUT 0x9F /* Service Action Out */
 #define SCSI_START_STP 0x1B /* Start/Stop Unit (O) */
 #define SCSI_SYNC_CACHE 0x35 /* Synchronize Cache (O) */
 #define SCSI_VERIFY 0x2F /* Verify (O) */
 #define SCSI_WRITE6 0x0A /* Write 6-Byte (MANDATORY) */
 #define SCSI_WRITE10 0x2A /* Write 10-Byte (MANDATORY) */
+#define SCSI_WRITE16 0x8A /* Write 16-byte */
 #define SCSI_WRT_VERIFY 0x2E /* Write and Verify (O) */
 #define SCSI_WRITE_LONG 0x3F /* Write Long (O) */
 #define SCSI_WRITE_SAME 0x41 /* Write Same (O) */
 
+#define SCSI_SAI_RD_CAPAC16 0x10 /* Service Action: Read Capacity (16) */
+
 /**
  * struct scsi_plat - stores information about SCSI controller
  *

-- 
2.41.0



[PATCH 2/4] usb: storage: Increase read/write timeout

2023-10-29 Thread Hector Martin
Some USB devices (like hard disks) can take a long time to initially
respond to read/write requests. Explicitly specify a much longer timeout
than normal.

Signed-off-by: Hector Martin 
---
 common/usb_storage.c | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/common/usb_storage.c b/common/usb_storage.c
index c9e2d7343ce2..729ddbc75a48 100644
--- a/common/usb_storage.c
+++ b/common/usb_storage.c
@@ -53,6 +53,12 @@
 #undef BBB_COMDAT_TRACE
 #undef BBB_XPORT_TRACE
 
+/*
+ * Timeout for read/write transfers. This needs to be able to handle very slow
+ * devices, such as hard disks that are spinning up.
+ */
+#define US_XFER_TIMEOUT 15000
+
 #include 
 /* direction table -- this indicates the direction of the data
  * transfer for each command code -- a 1 indicates input
@@ -394,7 +400,7 @@ static int us_one_transfer(struct us_data *us, int pipe, 
char *buf, int length)
  11 - maxtry);
result = usb_bulk_msg(us->pusb_dev, pipe, buf,
  this_xfer, &partial,
- USB_CNTL_TIMEOUT * 5);
+ US_XFER_TIMEOUT);
debug("bulk_msg returned %d xferred %d/%d\n",
  result, partial, this_xfer);
if (us->pusb_dev->status != 0) {
@@ -743,7 +749,7 @@ static int usb_stor_BBB_transport(struct scsi_cmd *srb, 
struct us_data *us)
pipe = pipeout;
 
result = usb_bulk_msg(us->pusb_dev, pipe, srb->pdata, srb->datalen,
- &data_actlen, USB_CNTL_TIMEOUT * 5);
+ &data_actlen, US_XFER_TIMEOUT);
/* special handling of STALL in DATA phase */
if ((result < 0) && (us->pusb_dev->status & USB_ST_STALLED)) {
debug("DATA:stall\n");

-- 
2.41.0



[PATCH 0/4] USB fixes: Mass Storage bugs & 64bit support

2023-10-29 Thread Hector Martin
This series fixes some bugs in the USBMS driver and adds 64-bit LBA
support. This is required to make USB HDDs >=4TB work.
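The CDB layout the series uses for 64-bit addressing (usb_read_16()/usb_write_16() in patch 4/4 — big-endian LBA in bytes 2-9, 16-bit block count in bytes 12-13) can be sanity-checked in isolation. This is a standalone sketch, not the patch code:

```c
#include <stdint.h>
#include <string.h>

/* Sketch of the byte layout the 64-bit read/write helpers emit:
 * a 64-bit LBA spread big-endian over CDB bytes 2..9 and a 16-bit
 * block count over bytes 12..13. 0x88 is the READ(16) opcode. */
static void encode_lba16(uint8_t cdb[16], uint64_t start, uint16_t blocks)
{
	memset(cdb, 0, 16);
	cdb[0] = 0x88; /* SCSI_READ16 */
	for (int i = 0; i < 8; i++)
		cdb[2 + i] = (uint8_t)(start >> (56 - 8 * i));
	cdb[12] = (uint8_t)(blocks >> 8);
	cdb[13] = (uint8_t)blocks;
}
```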

Note that the increased timeout won't actually work right now, due to
broken handling in the underlying USB infrastructure. That will be fixed
in a follow-up series, which depends on [1] being applied first. The
USBMS part is logically stand-alone and can be applied in parallel
before that.

[1] 
https://lore.kernel.org/u-boot/20231029-usb-fixes-1-v2-0-623533f63...@marcan.st/

Signed-off-by: Hector Martin 
---
Hector Martin (4):
  scsi: Fix a bunch of SCSI definitions.
  usb: storage: Increase read/write timeout
  usb: storage: Use the correct CBW lengths
  usb: storage: Implement 64-bit LBA support

 common/usb_storage.c | 164 ++-
 drivers/ata/ahci.c   |   9 ++-
 drivers/scsi/scsi.c  |   4 +-
 include/scsi.h   |   8 ++-
 4 files changed, 150 insertions(+), 35 deletions(-)
---
base-commit: 8ad1c9c26f7740806a162818b790d4a72f515b7e
change-id: 20231029-usb-fixes-3-c72f829ba61b

Best regards,
-- 
Hector Martin 



[PATCH 2/2] usb: hub: Add missing reset recovery delay

2023-10-29 Thread Hector Martin
Some devices like YubiKeys need more time before SET_ADDRESS. The spec
says we need to wait 10ms.

Signed-off-by: Hector Martin 
---
 common/usb_hub.c | 7 +++
 1 file changed, 7 insertions(+)

diff --git a/common/usb_hub.c b/common/usb_hub.c
index ba11a188ca64..858ada0f73be 100644
--- a/common/usb_hub.c
+++ b/common/usb_hub.c
@@ -391,6 +391,13 @@ int usb_hub_port_connect_change(struct usb_device *dev, 
int port)
break;
}
 
+   /*
+* USB 2.0 7.1.7.5: devices must be able to accept a SetAddress()
+* request (refer to Section 11.24.2 and Section 9.4 respectively)
+* after the reset recovery time 10 ms
+*/
+   mdelay(10);
+
 #if CONFIG_IS_ENABLED(DM_USB)
struct udevice *child;
 

-- 
2.41.0



[PATCH 1/2] usb: kbd: Ignore Yubikeys

2023-10-29 Thread Hector Martin
We currently only support one USB keyboard device, but some devices
emulate keyboards for other purposes. Most commonly, people run into
this with Yubikeys, so let's ignore those.

Even if we end up supporting multiple keyboards in the future, it's
safer to ignore known non-keyboard devices.
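The check itself is a linear scan over a vendor-ID table; a standalone sketch follows (`vid_blocked` is an illustrative name, not from the patch):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Standalone sketch of the blocklist check: a linear scan over known
 * keyboard-emulating vendor IDs. 0x1050 is Yubico, as in the patch. */
static const uint16_t vid_blocklist[] = { 0x1050 };

static bool vid_blocked(uint16_t vid)
{
	for (size_t i = 0; i < sizeof(vid_blocklist) / sizeof(vid_blocklist[0]); i++)
		if (vid == vid_blocklist[i])
			return true;
	return false;
}
```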

This is particularly important to avoid regressing some users, since
YubiKeys often *don't* work due to other bugs in the USB stack, but will
start to work once they are fixed.

Signed-off-by: Hector Martin 
---
 common/usb_kbd.c | 19 +++
 1 file changed, 19 insertions(+)

diff --git a/common/usb_kbd.c b/common/usb_kbd.c
index 352d86fb2ece..e8c102c567e4 100644
--- a/common/usb_kbd.c
+++ b/common/usb_kbd.c
@@ -120,6 +120,15 @@ struct usb_kbd_pdata {
 
 extern int __maybe_unused net_busy_flag;
 
+/*
+ * Since we only support one usbkbd device in the iomux,
+ * ignore common keyboard-emulating devices that aren't
+ * real keyboards.
+ */
+const uint16_t vid_blocklist[] = {
+   0x1050, /* Yubico */
+};
+
 /* The period of time between two calls of usb_kbd_testc(). */
 static unsigned long kbd_testc_tms;
 
@@ -465,6 +474,7 @@ static int usb_kbd_probe_dev(struct usb_device *dev, 
unsigned int ifnum)
struct usb_endpoint_descriptor *ep;
struct usb_kbd_pdata *data;
int epNum;
+   int i;
 
if (dev->descriptor.bNumConfigurations != 1)
return 0;
@@ -480,6 +490,15 @@ static int usb_kbd_probe_dev(struct usb_device *dev, 
unsigned int ifnum)
if (iface->desc.bInterfaceProtocol != USB_PROT_HID_KEYBOARD)
return 0;
 
+   for (i = 0; i < ARRAY_SIZE(vid_blocklist); i++) {
+   if (dev->descriptor.idVendor == vid_blocklist[i]) {
+   printf("Ignoring keyboard device 0x%x:0x%x\n",
+  dev->descriptor.idVendor,
+  dev->descriptor.idProduct);
+   return 0;
+   }
+   }
+
for (epNum = 0; epNum < iface->desc.bNumEndpoints; epNum++) {
ep = &iface->ep_desc[epNum];
 

-- 
2.41.0



[PATCH 0/2] USB fixes: Add missing timeout, ignore YubiKeys

2023-10-29 Thread Hector Martin
This mini series fixes one bug, but in the process makes YubiKeys work,
which then regresses people who have one *and* a USB keyboard, since we
only support a single keyboard device.

Therefore patch #1 makes U-Boot ignore YubiKeys, so #2 does not
regress things.

Signed-off-by: Hector Martin 
---
Hector Martin (2):
  usb: kbd: Ignore Yubikeys
  usb: hub: Add missing reset recovery delay

 common/usb_hub.c |  7 +++
 common/usb_kbd.c | 19 +++
 2 files changed, 26 insertions(+)
---
base-commit: 8ad1c9c26f7740806a162818b790d4a72f515b7e
change-id: 20231029-usb-fixes-2-976486d1603c

Best regards,
-- 
Hector Martin 



[PATCH v2 8/8] usb: xhci: Add more debugging

2023-10-29 Thread Hector Martin
A bunch of miscellaneous debug messages to aid in working out USB
issues.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 29 ++---
 1 file changed, 26 insertions(+), 3 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index b60661fe05e7..dabe6cf86af2 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -214,6 +214,9 @@ static dma_addr_t queue_trb(struct xhci_ctrl *ctrl, struct 
xhci_ring *ring,
 
addr = xhci_trb_virt_to_dma(ring->enq_seg, (union xhci_trb *)trb);
 
+   debug("trb @ %llx: %08x %08x %08x %08x\n", addr,
+ trb_fields[0], trb_fields[1], trb_fields[2], trb_fields[3]);
+
inc_enq(ctrl, ring, more_trbs_coming);
 
return addr;
@@ -296,6 +299,8 @@ void xhci_queue_command(struct xhci_ctrl *ctrl, dma_addr_t 
addr, u32 slot_id,
 {
u32 fields[4];
 
+   debug("CMD: %llx 0x%x 0x%x %d\n", addr, slot_id, ep_index, cmd);
+
BUG_ON(prepare_ring(ctrl, ctrl->cmd_ring, EP_STATE_RUNNING));
 
fields[0] = lower_32_bits(addr);
@@ -471,8 +476,14 @@ union xhci_trb *xhci_wait_for_event(struct xhci_ctrl 
*ctrl, trb_type expected)
 
type = TRB_FIELD_TO_TYPE(le32_to_cpu(event->event_cmd.flags));
if (type == expected ||
-   (expected == TRB_NONE && type != TRB_PORT_STATUS))
+   (expected == TRB_NONE && type != TRB_PORT_STATUS)) {
+   debug("Event: %08x %08x %08x %08x\n",
+ le32_to_cpu(event->generic.field[0]),
+ le32_to_cpu(event->generic.field[1]),
+ le32_to_cpu(event->generic.field[2]),
+ le32_to_cpu(event->generic.field[3]));
return event;
+   }
 
if (type == TRB_PORT_STATUS)
/* TODO: remove this once enumeration has been reworked */
@@ -484,8 +495,9 @@ union xhci_trb *xhci_wait_for_event(struct xhci_ctrl *ctrl, 
trb_type expected)
le32_to_cpu(event->generic.field[2])) !=
COMP_SUCCESS);
else
-   printf("Unexpected XHCI event TRB, skipping... "
+   printf("Unexpected XHCI event TRB, expected %d... "
"(%08x %08x %08x %08x)\n",
+   expected,
le32_to_cpu(event->generic.field[0]),
le32_to_cpu(event->generic.field[1]),
le32_to_cpu(event->generic.field[2]),
@@ -602,10 +614,13 @@ static void abort_td(struct usb_device *udev, int 
ep_index)
 static void record_transfer_result(struct usb_device *udev,
   union xhci_trb *event, int length)
 {
+   xhci_comp_code code = GET_COMP_CODE(
+   le32_to_cpu(event->trans_event.transfer_len));
+
udev->act_len = min(length, length -

(int)EVENT_TRB_LEN(le32_to_cpu(event->trans_event.transfer_len)));
 
-   switch (GET_COMP_CODE(le32_to_cpu(event->trans_event.transfer_len))) {
+   switch (code) {
case COMP_SUCCESS:
BUG_ON(udev->act_len != length);
/* fallthrough */
@@ -613,16 +628,23 @@ static void record_transfer_result(struct usb_device 
*udev,
udev->status = 0;
break;
case COMP_STALL:
+   debug("Xfer STALL\n");
udev->status = USB_ST_STALLED;
break;
case COMP_DB_ERR:
+   debug("Xfer DB_ERR\n");
+   udev->status = USB_ST_BUF_ERR;
+   break;
case COMP_TRB_ERR:
+   debug("Xfer TRB_ERR\n");
udev->status = USB_ST_BUF_ERR;
break;
case COMP_BABBLE:
+   debug("Xfer BABBLE\n");
udev->status = USB_ST_BABBLE_DET;
break;
default:
+   debug("Xfer error: %d\n", code);
udev->status = 0x80;  /* USB_ST_TOO_LAZY_TO_MAKE_A_NEW_MACRO */
}
 }
@@ -1016,6 +1038,7 @@ int xhci_ctrl_tx(struct usb_device *udev, unsigned long 
pipe,
record_transfer_result(udev, event, length);
xhci_acknowledge_event(ctrl);
if (udev->status == USB_ST_STALLED) {
+   debug("EP %d stalled\n", ep_index);
reset_ep(udev, ep_index);
return -EPIPE;
}

-- 
2.41.0



[PATCH v2 7/8] usb: xhci: Fix DMA address calculation in queue_trb

2023-10-29 Thread Hector Martin
We need to get the DMA address before incrementing the pointer, as that
might move us onto another segment.
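The ordering matters because the enqueue pointer can hop segments. A toy two-segment model (all names illustrative, not the xHCI code) demonstrates why the address must be captured before advancing:

```c
#include <stddef.h>

/* Toy model of a two-segment ring: advancing past a segment's last
 * slot hops to the next segment. If the "DMA address" were computed
 * after advancing, it could point into the wrong segment. */
struct seg  { int slots[2]; struct seg *next; };
struct ring { struct seg *enq_seg; int idx; };

static int *ring_advance(struct ring *r)
{
	int *cur = &r->enq_seg->slots[r->idx]; /* capture before moving */
	if (++r->idx == 2) {                   /* segment exhausted: hop */
		r->idx = 0;
		r->enq_seg = r->enq_seg->next;
	}
	return cur;
}
```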

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index ae0ab5744df0..b60661fe05e7 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -202,6 +202,7 @@ static dma_addr_t queue_trb(struct xhci_ctrl *ctrl, struct 
xhci_ring *ring,
bool more_trbs_coming, unsigned int *trb_fields)
 {
struct xhci_generic_trb *trb;
+   dma_addr_t addr;
int i;
 
trb = &ring->enqueue->generic;
@@ -211,9 +212,11 @@ static dma_addr_t queue_trb(struct xhci_ctrl *ctrl, struct 
xhci_ring *ring,
 
xhci_flush_cache((uintptr_t)trb, sizeof(struct xhci_generic_trb));
 
+   addr = xhci_trb_virt_to_dma(ring->enq_seg, (union xhci_trb *)trb);
+
inc_enq(ctrl, ring, more_trbs_coming);
 
-   return xhci_trb_virt_to_dma(ring->enq_seg, (union xhci_trb *)trb);
+   return addr;
 }
 
 /**

-- 
2.41.0



[PATCH v2 6/8] usb: xhci: Do not panic on event timeouts

2023-10-29 Thread Hector Martin
Now that we always check the return value, just return NULL on timeouts.
We can still log the error since this is a problem, but it's no reason
to panic.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index a969eafdc8ee..ae0ab5744df0 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -494,8 +494,9 @@ union xhci_trb *xhci_wait_for_event(struct xhci_ctrl *ctrl, 
trb_type expected)
if (expected == TRB_TRANSFER)
return NULL;
 
-   printf("XHCI timeout on event type %d... cannot recover.\n", expected);
-   BUG();
+   printf("XHCI timeout on event type %d...\n", expected);
+
+   return NULL;
 }
 
 /*

-- 
2.41.0



[PATCH v2 5/8] usb: xhci: Fail on attempt to queue TRBs to a halted endpoint

2023-10-29 Thread Hector Martin
This isn't going to work, don't pretend it will and then end up timing
out.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index db8b8f200250..a969eafdc8ee 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -243,7 +243,8 @@ static int prepare_ring(struct xhci_ctrl *ctrl, struct 
xhci_ring *ep_ring,
puts("WARN waiting for error on ep to be cleared\n");
return -EINVAL;
case EP_STATE_HALTED:
-   puts("WARN halted endpoint, queueing URB anyway.\n");
+   puts("WARN endpoint is halted\n");
+   return -EINVAL;
case EP_STATE_STOPPED:
case EP_STATE_RUNNING:
debug("EP STATE RUNNING.\n");

-- 
2.41.0



[PATCH v2 4/8] usb: xhci: Recover from halted bulk endpoints

2023-10-29 Thread Hector Martin
There is currently no codepath to recover from this case. In principle
we could require that the upper layer do this explicitly, but let's just
do it in xHCI when the next bulk transfer is started, since that
reasonably implies whatever caused the problem has been dealt with.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index e02a6e300c4f..db8b8f200250 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -671,6 +671,14 @@ int xhci_bulk_tx(struct usb_device *udev, unsigned long 
pipe,
 
ep_ctx = xhci_get_ep_ctx(ctrl, virt_dev->out_ctx, ep_index);
 
+   /*
+* If the endpoint was halted due to a prior error, resume it before
+* the next transfer. It is the responsibility of the upper layer to
+* have dealt with whatever caused the error.
+*/
+   if ((le32_to_cpu(ep_ctx->ep_info) & EP_STATE_MASK) == EP_STATE_HALTED)
+   reset_ep(udev, ep_index);
+
ring = virt_dev->eps[ep_index].ring;
/*
 * How much data is (potentially) left before the 64KB boundary?

-- 
2.41.0



[PATCH v2 3/8] usb: xhci: Allow context state errors when halting an endpoint

2023-10-29 Thread Hector Martin
There is a race where an endpoint may halt by itself while we are trying
to halt it, which results in a context state error. See xHCI 4.6.9 which
mentions this case.
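The effect of the change is just to widen the set of completion codes accepted when stopping an endpoint. As a standalone sketch (the numeric values are the usual xHCI completion codes, used here only for illustration):

```c
/* xHCI completion codes (illustrative values): 1 = Success,
 * 19 = Context State Error. A Context State Error while stopping an
 * endpoint can mean it halted on its own first (xHCI 4.6.9), so both
 * outcomes are treated as acceptable. */
enum { COMP_SUCCESS = 1, COMP_CTX_STATE = 19 };

static int stop_ep_comp_ok(int comp)
{
	return comp == COMP_SUCCESS || comp == COMP_CTX_STATE;
}
```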

This also avoids BUGging when we attempt to stop an endpoint which was
already stopped to begin with, which is probably a bug elsewhere but
not a good reason to crash.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index d21e76e0bdb6..e02a6e300c4f 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -545,6 +545,7 @@ static void abort_td(struct usb_device *udev, int ep_index)
struct xhci_ctrl *ctrl = xhci_get_ctrl(udev);
struct xhci_ring *ring =  ctrl->devs[udev->slot_id]->eps[ep_index].ring;
union xhci_trb *event;
+   xhci_comp_code comp;
trb_type type;
u64 addr;
u32 field;
@@ -573,10 +574,11 @@ static void abort_td(struct usb_device *udev, int 
ep_index)
printf("abort_td: Expected a TRB_TRANSFER TRB first\n");
}
 
+   comp = GET_COMP_CODE(le32_to_cpu(event->event_cmd.status));
BUG_ON(type != TRB_COMPLETION ||
TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
-   != udev->slot_id || GET_COMP_CODE(le32_to_cpu(
-   event->event_cmd.status)) != COMP_SUCCESS);
+   != udev->slot_id || (comp != COMP_SUCCESS && comp
+   != COMP_CTX_STATE));
xhci_acknowledge_event(ctrl);
 
addr = xhci_trb_virt_to_dma(ring->enq_seg,

-- 
2.41.0



[PATCH v2 2/8] usb: xhci: Better error handling in abort_td()

2023-10-29 Thread Hector Martin
If the xHC has a problem with our STOP ENDPOINT command, it is likely to
return a completion directly instead of first a transfer event for the
in-progress transfer. Handle that more gracefully.

We still BUG() on the error code, but at least we don't end up timing
out on the event and ending up with unexpected event errors.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 34 ++
 include/usb/xhci.h   |  2 ++
 2 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index d0960812a47b..d21e76e0bdb6 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -466,7 +466,8 @@ union xhci_trb *xhci_wait_for_event(struct xhci_ctrl *ctrl, 
trb_type expected)
continue;
 
type = TRB_FIELD_TO_TYPE(le32_to_cpu(event->event_cmd.flags));
-   if (type == expected)
+   if (type == expected ||
+   (expected == TRB_NONE && type != TRB_PORT_STATUS))
return event;
 
if (type == TRB_PORT_STATUS)
@@ -544,27 +545,36 @@ static void abort_td(struct usb_device *udev, int 
ep_index)
struct xhci_ctrl *ctrl = xhci_get_ctrl(udev);
struct xhci_ring *ring =  ctrl->devs[udev->slot_id]->eps[ep_index].ring;
union xhci_trb *event;
+   trb_type type;
u64 addr;
u32 field;
 
xhci_queue_command(ctrl, 0, udev->slot_id, ep_index, TRB_STOP_RING);
 
-   event = xhci_wait_for_event(ctrl, TRB_TRANSFER);
+   event = xhci_wait_for_event(ctrl, TRB_NONE);
if (!event)
return;
 
-   field = le32_to_cpu(event->trans_event.flags);
-   BUG_ON(TRB_TO_SLOT_ID(field) != udev->slot_id);
-   BUG_ON(TRB_TO_EP_INDEX(field) != ep_index);
-   BUG_ON(GET_COMP_CODE(le32_to_cpu(event->trans_event.transfer_len
-   != COMP_STOP)));
-   xhci_acknowledge_event(ctrl);
+   type = TRB_FIELD_TO_TYPE(le32_to_cpu(event->event_cmd.flags));
+   if (type == TRB_TRANSFER) {
+   field = le32_to_cpu(event->trans_event.flags);
+   BUG_ON(TRB_TO_SLOT_ID(field) != udev->slot_id);
+   BUG_ON(TRB_TO_EP_INDEX(field) != ep_index);
+   BUG_ON(GET_COMP_CODE(le32_to_cpu(event->trans_event.transfer_len
+   != COMP_STOP)));
+   xhci_acknowledge_event(ctrl);
 
-   event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
-   if (!event)
-   return;
+   event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return;
+   type = TRB_FIELD_TO_TYPE(le32_to_cpu(event->event_cmd.flags));
 
-   BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
+   } else {
+   printf("abort_td: Expected a TRB_TRANSFER TRB first\n");
+   }
+
+   BUG_ON(type != TRB_COMPLETION ||
+   TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
!= udev->slot_id || GET_COMP_CODE(le32_to_cpu(
event->event_cmd.status)) != COMP_SUCCESS);
xhci_acknowledge_event(ctrl);
diff --git a/include/usb/xhci.h b/include/usb/xhci.h
index 4a4ac10229ac..04d16a256bbd 100644
--- a/include/usb/xhci.h
+++ b/include/usb/xhci.h
@@ -901,6 +901,8 @@ union xhci_trb {
 
 /* TRB type IDs */
 typedef enum {
+   /* reserved, used as a software sentinel */
+   TRB_NONE = 0,
/* bulk, interrupt, isoc scatter/gather, and control data stage */
TRB_NORMAL = 1,
/* setup stage for control transfers */

-- 
2.41.0



[PATCH v2 1/8] usb: xhci: Guard all calls to xhci_wait_for_event

2023-10-29 Thread Hector Martin
xhci_wait_for_event returns NULL on timeout, so the caller always has to
check for that. This addresses immediate explosions in this part
of the code when timeouts happen, but not the root cause for the
timeout.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 15 +++
 drivers/usb/host/xhci.c  |  9 +
 2 files changed, 24 insertions(+)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index c8260cbdf94b..d0960812a47b 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -511,6 +511,9 @@ static void reset_ep(struct usb_device *udev, int ep_index)
printf("Resetting EP %d...\n", ep_index);
xhci_queue_command(ctrl, 0, udev->slot_id, ep_index, TRB_RESET_EP);
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return;
+
field = le32_to_cpu(event->trans_event.flags);
BUG_ON(TRB_TO_SLOT_ID(field) != udev->slot_id);
xhci_acknowledge_event(ctrl);
@@ -519,6 +522,9 @@ static void reset_ep(struct usb_device *udev, int ep_index)
(void *)((uintptr_t)ring->enqueue | ring->cycle_state));
xhci_queue_command(ctrl, addr, udev->slot_id, ep_index, TRB_SET_DEQ);
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return;
+
BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
!= udev->slot_id || GET_COMP_CODE(le32_to_cpu(
event->event_cmd.status)) != COMP_SUCCESS);
@@ -544,6 +550,9 @@ static void abort_td(struct usb_device *udev, int ep_index)
xhci_queue_command(ctrl, 0, udev->slot_id, ep_index, TRB_STOP_RING);
 
event = xhci_wait_for_event(ctrl, TRB_TRANSFER);
+   if (!event)
+   return;
+
field = le32_to_cpu(event->trans_event.flags);
BUG_ON(TRB_TO_SLOT_ID(field) != udev->slot_id);
BUG_ON(TRB_TO_EP_INDEX(field) != ep_index);
@@ -552,6 +561,9 @@ static void abort_td(struct usb_device *udev, int ep_index)
xhci_acknowledge_event(ctrl);
 
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return;
+
BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
!= udev->slot_id || GET_COMP_CODE(le32_to_cpu(
event->event_cmd.status)) != COMP_SUCCESS);
@@ -561,6 +573,9 @@ static void abort_td(struct usb_device *udev, int ep_index)
(void *)((uintptr_t)ring->enqueue | ring->cycle_state));
xhci_queue_command(ctrl, addr, udev->slot_id, ep_index, TRB_SET_DEQ);
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return;
+
BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
!= udev->slot_id || GET_COMP_CODE(le32_to_cpu(
event->event_cmd.status)) != COMP_SUCCESS);
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 5cacf0769ec7..d13cbff9b372 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -451,6 +451,9 @@ static int xhci_configure_endpoints(struct usb_device 
*udev, bool ctx_change)
xhci_queue_command(ctrl, in_ctx->dma, udev->slot_id, 0,
   ctx_change ? TRB_EVAL_CONTEXT : TRB_CONFIG_EP);
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return -ETIMEDOUT;
+
BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
!= udev->slot_id);
 
@@ -647,6 +650,9 @@ static int xhci_address_device(struct usb_device *udev, int 
root_portnr)
xhci_queue_command(ctrl, virt_dev->in_ctx->dma,
   slot_id, 0, TRB_ADDR_DEV);
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return -ETIMEDOUT;
+
BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags)) != slot_id);
 
switch (GET_COMP_CODE(le32_to_cpu(event->event_cmd.status))) {
@@ -722,6 +728,9 @@ static int _xhci_alloc_device(struct usb_device *udev)
 
xhci_queue_command(ctrl, 0, 0, 0, TRB_ENABLE_SLOT);
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return -ETIMEDOUT;
+
BUG_ON(GET_COMP_CODE(le32_to_cpu(event->event_cmd.status))
!= COMP_SUCCESS);
 

-- 
2.41.0



[PATCH v2 0/8] USB fixes: xHCI error handling

2023-10-29 Thread Hector Martin
This series is the first of a few bundles of USB fixes we have been
carrying downstream on the Asahi U-Boot branch for a few months.

Patches #1-#6 fix a series of related robustness issues. Certain
conditions related to endpoint stalls revealed a chain of bugs
throughout the stack that caused U-Boot to completely fail when
encountering some errors (and errors are a fact of life with USB).

Example scenario:
- The device stalls a bulk endpoint due to an error
- The upper layer driver tries to use the endpoint again
- There is no endpoint stall clear wired up in the API, so for starters
  this is doomed to fail (fix: #4)
- xHCI knows the endpoint is halted, but tries to queue the TRB anyway,
  which can't work (fix: #5)
- Since the endpoint is halted nothing happens, so the transfer times
  out.
- The timeout handling tries to abort the transfer
- abort_td() tries to stop the endpoint, but since it is already halted,
  this results in a context state error. As the transfer never started,
  there is no completion event, so xhci_wait_for_event() is waiting for
  the wrong event type, and logs an error and returns NULL. (fix: #2)
- abort_td() crashes due to failing to guard against the NULL event
  (fix: #1)
- Even after fixing all that, abort_td() would not handle the context
  state error properly and BUG() (fix: #3). This also fixes a race
  condition documented in the xHCI spec that could occur even in the
  absence of all the other bugs.

Patch #6 addresses a related robustness issue where
xhci_wait_for_event() panics on event timeouts other than for transfers.
While this is clearly an unexpected condition and indicates a bug
elsewhere, it's no reason to outright crash. USB is notoriously
impossible to get 100% right, and we can't afford to be breaking users'
systems at any sign of trouble. Error conditions should always be dealt
with as gracefully as possible (even if that results in a completely
broken USB controller, that is much preferable to aborting the boot
process entirely, especially on devices with non-USB storage and
keyboards where USB support is effectively optional for most users).
Since after patch #1 we now guard all callers to xhci_wait_for_event()
with at least trivial NULL checks, it's okay to return and continue in
case of timeouts.

Patch #7 addresses an unrelated bug I ran into while debugging all this,
and patch #8 adds extra debug logs to make finding future issues less
painful.

I believe this should fix this Fedora bug too, which is either an
instance of the above sequence of events, or (more likely, given the
difficulty reproducing) the race condition documented in xHCI 4.6.9:

https://bugzilla.redhat.com/show_bug.cgi?id=2244305

Signed-off-by: Hector Martin 
---
Changes in v2:
- Squashed in a trivial fix for patch #1
- Removed spurious blank line
- Added a longer description to the cover letter
- Link to v1: 
https://lore.kernel.org/r/20231027-usb-fixes-1-v1-0-1c879bbcd...@marcan.st

---
Hector Martin (8):
  usb: xhci: Guard all calls to xhci_wait_for_event
  usb: xhci: Better error handling in abort_td()
  usb: xhci: Allow context state errors when halting an endpoint
  usb: xhci: Recover from halted bulk endpoints
  usb: xhci: Fail on attempt to queue TRBs to a halted endpoint
  usb: xhci: Do not panic on event timeouts
  usb: xhci: Fix DMA address calculation in queue_trb
  usb: xhci: Add more debugging

 drivers/usb/host/xhci-ring.c | 99 
 drivers/usb/host/xhci.c  |  9 
 include/usb/xhci.h   |  2 +
 3 files changed, 92 insertions(+), 18 deletions(-)
---
base-commit: 7c0d668103abae3ec14cd96d882ba20134bb4529
change-id: 20231027-usb-fixes-1-83bfc7013012

Best regards,
-- 
Hector Martin 



Re: [dmarc-ietf] DMARCbis way forward: Do we need our session at IETF 118

2023-10-28 Thread Hector Santos
Fwiiw, Lurker opinion:

Ideally, I would vote to give DMARCbis Experimental status and begin to 
explore the "required" integration between envelope (5321-only) protocols 
and payload (5322) protocols. Specifically, work on a proper DKIM+SPF 
policy model with third-party signature support. 

But realistically, we should finish DMARCbis as is, as Levine desires.  
However, in my opinion, keeping it on the Standards Track will increase 
the ignorance.  I don't see any new gains for my current package 
implementation.  At best, the industry is acknowledging a big integration 
problem. Domains that can't get their mail past the DMARC processing of 
the large mail-hosting and managed domains are learning to relax their 
policies. 

PS: I don't plan any appeal.

—
HLS


> On Oct 28, 2023, at 12:49 PM, Murray S. Kucherawy  wrote:
> 
> On Sat, Oct 28, 2023 at 8:28 AM Richard Clayton wrote:
>> Paying attention to the (sometimes inferred) age of a signature is also
>> important for reducing the opportunity for replay, viz: it would be a
>> Good Thing for senders to set appropriately short expire times.
> 
> Why does it have to be inferred sometimes?  Have you found "t=" values to be 
> occasionally inaccurate?
> 
> The DKIM standard advises against using "x=" to combat replay attacks.  We 
> could always update that advice, but we might also want to review why it was 
> put there in the first place.  I remember the reason being a good one.
> 
> I think there's also been discussion around the reliability of "x=" across 
> implementations.  Since it's not mandatory to support, it doesn't seem to be 
> very common to produce without the expectation of consumers.
> 
> -MSK, participating

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


[KPipeWire] [Bug 475472] Spectacle fails to record a window with h264 in specific dimensions

2023-10-28 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=475472

--- Comment #5 from Hector Martin  ---
Yes, 4:2:0 should be the default since a lot of decoders choke on 4:4:4 too. If
offered, 4:4:4 should be an explicit opt-in for users that know their use case
will handle it.

-- 
You are receiving this mail because:
You are watching all bug changes.

[KPipeWire] [Bug 475472] Spectacle fails to record a window with h264 in specific dimensions

2023-10-28 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=475472

Hector Martin  changed:

   What|Removed |Added

Version|23.08.1 |unspecified
 CC||aleix...@kde.org
  Component|General |general
Product|Spectacle   |KPipeWire
   Assignee|noaha...@gmail.com  |plasma-b...@kde.org

--- Comment #3 from Hector Martin  ---
Reassigning to KPipeWire since I'm pretty sure that's where this should be
handled.


[Spectacle] [Bug 475472] Spectacle fails to record a window with h264 in specific dimensions

2023-10-28 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=475472

--- Comment #2 from Hector Martin  ---
* I meant also width there, not height (which is what the OP reported).


[KPipeWire] [Bug 476187] New: OpenH264 codec support

2023-10-28 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=476187

Bug ID: 476187
   Summary: OpenH264 codec support
Classification: Frameworks and Libraries
   Product: KPipeWire
   Version: unspecified
  Platform: Other
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: plasma-b...@kde.org
  Reporter: mar...@marcan.st
CC: aleix...@kde.org
  Target Milestone: ---

KPipeWire currently only supports software h.264 encoding via libx264. libx264
cannot be safely distributed by default in countries with restrictive software
patent legislation. OpenH264 is an open source h.264 encoder distributed by
Cisco, making use of their H.264 license (https://www.openh264.org/). It allows
distributions like Fedora to support h.264 without much fuss, and in particular
for the Fedora Asahi Remix we have hooked it up to be installed automatically,
so H.264 works out of the box in most apps.

Since KPipeWire uses libavcodec, and libavcodec has openh264 integration, it
should be pretty easy to generalize the code to not explicitly hardcode libx264
but rather allow openh264 as well.

The codec name in this case is `libopenh264`. Quality controls have to be
validated to make sure they work well on both codecs (see bug 476186 for the
story on libx264, I don't know off the top of my head what the appropriate
quality controls are for openh264 but I can investigate). Where libx264 is
available, it should be preferred, since x264 is widely considered to be the
best h.264 encoder in the world and particularly well optimized.

In the future there will be more h.264 encoder options, e.g. for Fedora Asahi
we plan to expose the internal hardware encoder as a h264_v4l2m2m
implementation. This is also supported by ffmpeg. So it might be worth setting
things up so that, at the very least as a fallback, "any codec that can encode
h.264" is selected. I believe this should be possible with ffmpeg by requesting
the `h264` codec and letting it pick an appropriate encoder. That will make
h.264 encoding at least work (if perhaps without ideal quality control) on any
machine that has a usable codec, without explicitly handling them all in
KPipeWire.
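A sketch of what such a preference-ordered fallback could look like (the `pick_h264_encoder` helper and its callback are hypothetical; in real code the availability check would be `avcodec_find_encoder_by_name()`):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of "pick the best available h.264 encoder".
 * The is_available callback stands in for probing libavcodec; the
 * preference order below (x264 first, then openh264, then whatever
 * ffmpeg picks for plain "h264") follows the reasoning in this
 * report and is an assumption, not KPipeWire code. */
static const char *pick_h264_encoder(int (*is_available)(const char *))
{
	static const char *const preferred[] = {
		"libx264",     /* best quality, patent-encumbered */
		"libopenh264", /* Cisco's freely distributable encoder */
		"h264",        /* let libavcodec choose any h.264 encoder */
	};
	size_t i;

	for (i = 0; i < sizeof(preferred) / sizeof(preferred[0]); i++)
		if (is_available(preferred[i]))
			return preferred[i];
	return NULL;
}

/* Example availability callbacks modelling different systems: */
static int avail_all(const char *name) { (void)name; return 1; }
static int avail_no_x264(const char *name) { return strcmp(name, "libx264") != 0; }
static int avail_none(const char *name) { (void)name; return 0; }
```

On a system with everything installed this picks libx264; on a patent-restricted distribution it falls back to libopenh264, and the generic "h264" entry covers future hardware encoders.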


[KPipeWire] [Bug 476186] New: Screen recording quality is terrible

2023-10-28 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=476186

Bug ID: 476186
   Summary: Screen recording quality is terrible
Classification: Frameworks and Libraries
   Product: KPipeWire
   Version: unspecified
  Platform: Other
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: plasma-b...@kde.org
  Reporter: mar...@marcan.st
CC: aleix...@kde.org
  Target Milestone: ---

Screen recording quality in Spectacle is bad to the point of being unusable for
anything more than casual use.

Selecting quality is a codec-specific issue, but for x264 in particular, the
current code is very questionable:

https://invent.kde.org/plasma/kpipewire/-/blob/master/src/libx264encoder.cpp

This sets `m_avCodecContext->global_quality` from some weird formula, but it's
unclear how that maps to encoder settings in the end within ffmpeg. What I see
in the resulting files is that Average Bitrate mode is being used (rc=abr),
with bitrate somehow varying based on dimensions. This is a very bad choice for
x264.

The correct "just give me a given quality please" mode in x264 is CRF (constant
rate factor) mode, which does not force any given bitrate but rather targets a
specific visual quality. If the quality settings are not configurable (or not
configurable beyond a simple quality slider), then that mode should be the
default to give decent output without more fuss. CRF mode is
resolution-independent, and will automatically scale bitrate depending on the
requirements (video size, motion complexity, etc.). It's the best option to
default to for most users.
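For x264 via libavcodec, one plausible shape for this (purely illustrative, not KPipeWire's code) is to map the UI quality slider onto a CRF value and pass it to the encoder with `av_opt_set(ctx->priv_data, "crf", value, 0)`. The mapping below and its 18–35 range are assumptions chosen for screen content:

```c
/* Hypothetical mapping from a 0-100 "quality" slider to an x264 CRF
 * value. In x264, CRF 0 is lossless and 51 is worst; values around
 * 18-28 are typical for good-looking output. The 18-35 range here is
 * an assumption for illustration, not a KPipeWire default. */
static int quality_to_crf(int quality)
{
	const int crf_best = 18, crf_worst = 35;

	if (quality < 0)
		quality = 0;
	if (quality > 100)
		quality = 100;

	/* Linear interpolation: quality 0 -> CRF 35, quality 100 -> CRF 18. */
	return crf_worst - (quality * (crf_worst - crf_best)) / 100;
}
```

Because CRF targets perceptual quality rather than a bitrate, the same slider value then behaves consistently regardless of resolution or motion complexity.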


[Spectacle] [Bug 475472] Spectacle fails to record a window with h264 in specific dimensions

2023-10-28 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=475472

Hector Martin  changed:

   What|Removed |Added

 Ever confirmed|0   |1
 CC||mar...@marcan.st
 Status|REPORTED|CONFIRMED

--- Comment #1 from Hector Martin  ---
It also fails if the height is not divisible by 2. This is a KPipeWire bug. It
needs to pad dimensions to an even value. This is required by most codecs due
to color subsampling (unless you use 4:4:4 mode, which should probably be
offered for screen recording anyway).
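A sketch of the fix (hypothetical helper, not KPipeWire code): round each dimension up to the next even value before handing it to the encoder.

```c
/* Round a dimension up to the nearest even value. 4:2:0 and 4:2:2
 * chroma subsampling store color at reduced resolution, so most
 * codecs require width/height divisible by 2; an odd-sized capture
 * must be padded by one pixel (or cropped) before encoding. */
static int pad_to_even(int dim)
{
	return (dim + 1) & ~1;
}
```

For example, an 853-pixel-wide window would be encoded at 854 pixels, with the extra column filled or cropped as appropriate.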


Re: [PATCH] fixup! usb: xhci: Guard all calls to xhci_wait_for_event

2023-10-27 Thread Hector Martin
On 27/10/2023 09.36, Marek Vasut wrote:
> On 10/27/23 01:26, Hector Martin wrote:
>> Gah, I should've paid more attention to that rebase. Here's a dumb
>> fixup for this patch. I'll squash it into a v2 if needed alongside
>> any other changes, or if it looks good feel free to apply/squash
>> it in directly.
>>
>> ---
>>   drivers/usb/host/xhci-ring.c | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
>> index e2bd2e999a8e..7f2507be0cf0 100644
>> --- a/drivers/usb/host/xhci-ring.c
>> +++ b/drivers/usb/host/xhci-ring.c
>> @@ -532,6 +532,7 @@ static void reset_ep(struct usb_device *udev, int ep_index)
>>  event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
>>  if (!event)
>>  return;
>> +field = le32_to_cpu(event->trans_event.flags);
>>  BUG_ON(TRB_TO_SLOT_ID(field) != udev->slot_id);
>>  xhci_acknowledge_event(ctrl);
> 
> Please squash, and add
> 
> Reviewed-by: Marek Vasut 
> 
> Also, +CC Minda,
> 
> there has been a similar fix to this one but with much more information 
> about the failure, see:
> 
> [PATCH v1] usb: xhci: Check return value of wait for TRB_TRANSFER event
> 
> I think it would be useful to somehow include that information, so it 
> wouldn't be lost.

The root cause for *that* failure is what I fix in patch #5. From that
thread:

> scanning bus xhci_pci for devices... WARN halted endpoint, queueing URB anyway.

Without #5 and without #1, that situation then leads to fireworks.

A bunch of things are broken, which is why this is a series and not a
single patch. I have more fixes to timeout handling, mass storage, etc.
coming up after this. I fixed most of this in one long day of trying
random USB devices, so these aren't subtle, highly specific problems I
could document individual crashes for, but wide-ranging problems in the
u-boot USB stack. None of this is particularly hard to repro if you just
try a bunch of normal consumer USB devices, including things like USB
HDDs which take time to spin up, leading to timeouts etc. The crash dumps
aren't particularly useful; it's the subtleties of the xHCI interactions
that are, but for that you need to add and enable a lot more debugging
(patch #8).

I think the main reason all this stuff is broken and we're finding out
now is that u-boot has traditionally been used in embedded devices with
fairly narrow use cases for USB, and now we're running it on
workstation-grade laptops and desktops and people are plugging in all
kinds of normal USB devices and expecting their bootloader not to freak
out with them (even though most of the time it doesn't need to talk to
them). It's really easy to run into this stuff when that's your use case.

- Hector



[PATCH] fixup! usb: xhci: Guard all calls to xhci_wait_for_event

2023-10-26 Thread Hector Martin
Gah, I should've paid more attention to that rebase. Here's a dumb
fixup for this patch. I'll squash it into a v2 if needed alongside
any other changes, or if it looks good feel free to apply/squash
it in directly.

---
 drivers/usb/host/xhci-ring.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index e2bd2e999a8e..7f2507be0cf0 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -532,6 +532,7 @@ static void reset_ep(struct usb_device *udev, int ep_index)
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
if (!event)
return;
+   field = le32_to_cpu(event->trans_event.flags);
BUG_ON(TRB_TO_SLOT_ID(field) != udev->slot_id);
xhci_acknowledge_event(ctrl);
 
-- 
2.41.0



[PATCH 8/8] usb: xhci: Add more debugging

2023-10-26 Thread Hector Martin
A bunch of miscellaneous debug messages to aid in working out USB
issues.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 29 ++---
 1 file changed, 26 insertions(+), 3 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index b9518e4a6743..e2bd2e999a8e 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -215,6 +215,9 @@ static dma_addr_t queue_trb(struct xhci_ctrl *ctrl, struct xhci_ring *ring,
 
addr = xhci_trb_virt_to_dma(ring->enq_seg, (union xhci_trb *)trb);
 
+   debug("trb @ %llx: %08x %08x %08x %08x\n", addr,
+ trb_fields[0], trb_fields[1], trb_fields[2], trb_fields[3]);
+
inc_enq(ctrl, ring, more_trbs_coming);
 
return addr;
@@ -297,6 +300,8 @@ void xhci_queue_command(struct xhci_ctrl *ctrl, dma_addr_t addr, u32 slot_id,
 {
u32 fields[4];
 
+   debug("CMD: %llx 0x%x 0x%x %d\n", addr, slot_id, ep_index, cmd);
+
BUG_ON(prepare_ring(ctrl, ctrl->cmd_ring, EP_STATE_RUNNING));
 
fields[0] = lower_32_bits(addr);
@@ -472,8 +477,14 @@ union xhci_trb *xhci_wait_for_event(struct xhci_ctrl *ctrl, trb_type expected)
 
type = TRB_FIELD_TO_TYPE(le32_to_cpu(event->event_cmd.flags));
if (type == expected ||
-   (expected == TRB_NONE && type != TRB_PORT_STATUS))
+   (expected == TRB_NONE && type != TRB_PORT_STATUS)) {
+   debug("Event: %08x %08x %08x %08x\n",
+ le32_to_cpu(event->generic.field[0]),
+ le32_to_cpu(event->generic.field[1]),
+ le32_to_cpu(event->generic.field[2]),
+ le32_to_cpu(event->generic.field[3]));
return event;
+   }
 
if (type == TRB_PORT_STATUS)
/* TODO: remove this once enumeration has been reworked */
@@ -485,8 +496,9 @@ union xhci_trb *xhci_wait_for_event(struct xhci_ctrl *ctrl, trb_type expected)
le32_to_cpu(event->generic.field[2])) !=
COMP_SUCCESS);
else
-   printf("Unexpected XHCI event TRB, skipping... "
+   printf("Unexpected XHCI event TRB, expected %d... "
"(%08x %08x %08x %08x)\n",
+   expected,
le32_to_cpu(event->generic.field[0]),
le32_to_cpu(event->generic.field[1]),
le32_to_cpu(event->generic.field[2]),
@@ -601,10 +613,13 @@ static void abort_td(struct usb_device *udev, int ep_index)
 static void record_transfer_result(struct usb_device *udev,
   union xhci_trb *event, int length)
 {
+   xhci_comp_code code = GET_COMP_CODE(
+   le32_to_cpu(event->trans_event.transfer_len));
+
udev->act_len = min(length, length -
		(int)EVENT_TRB_LEN(le32_to_cpu(event->trans_event.transfer_len)));
 
-   switch (GET_COMP_CODE(le32_to_cpu(event->trans_event.transfer_len))) {
+   switch (code) {
case COMP_SUCCESS:
BUG_ON(udev->act_len != length);
/* fallthrough */
@@ -612,16 +627,23 @@ static void record_transfer_result(struct usb_device *udev,
udev->status = 0;
break;
case COMP_STALL:
+   debug("Xfer STALL\n");
udev->status = USB_ST_STALLED;
break;
case COMP_DB_ERR:
+   debug("Xfer DB_ERR\n");
+   udev->status = USB_ST_BUF_ERR;
+   break;
case COMP_TRB_ERR:
+   debug("Xfer TRB_ERR\n");
udev->status = USB_ST_BUF_ERR;
break;
case COMP_BABBLE:
+   debug("Xfer BABBLE\n");
udev->status = USB_ST_BABBLE_DET;
break;
default:
+   debug("Xfer error: %d\n", code);
udev->status = 0x80;  /* USB_ST_TOO_LAZY_TO_MAKE_A_NEW_MACRO */
}
 }
@@ -1015,6 +1037,7 @@ int xhci_ctrl_tx(struct usb_device *udev, unsigned long pipe,
record_transfer_result(udev, event, length);
xhci_acknowledge_event(ctrl);
if (udev->status == USB_ST_STALLED) {
+   debug("EP %d stalled\n", ep_index);
reset_ep(udev, ep_index);
return -EPIPE;
}

-- 
2.41.0



[PATCH 6/8] usb: xhci: Do not panic on event timeouts

2023-10-26 Thread Hector Martin
Now that we always check the return value, just return NULL on timeouts.
We can still log the error since this is a problem, but it's not reason
to panic.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 3ef8c2f14ccc..b95f20642943 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -494,8 +494,9 @@ union xhci_trb *xhci_wait_for_event(struct xhci_ctrl *ctrl, trb_type expected)
if (expected == TRB_TRANSFER)
return NULL;
 
-   printf("XHCI timeout on event type %d... cannot recover.\n", expected);
-   BUG();
+   printf("XHCI timeout on event type %d...\n", expected);
+
+   return NULL;
 }
 
 /*

-- 
2.41.0



[PATCH 7/8] usb: xhci: Fix DMA address calculation in queue_trb

2023-10-26 Thread Hector Martin
We need to get the DMA address before incrementing the pointer, as that
might move us onto another segment.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index b95f20642943..b9518e4a6743 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -202,18 +202,22 @@ static dma_addr_t queue_trb(struct xhci_ctrl *ctrl, struct xhci_ring *ring,
bool more_trbs_coming, unsigned int *trb_fields)
 {
struct xhci_generic_trb *trb;
+   dma_addr_t addr;
int i;
 
trb = &ring->enqueue->generic;
 
+
for (i = 0; i < 4; i++)
trb->field[i] = cpu_to_le32(trb_fields[i]);
 
xhci_flush_cache((uintptr_t)trb, sizeof(struct xhci_generic_trb));
 
+   addr = xhci_trb_virt_to_dma(ring->enq_seg, (union xhci_trb *)trb);
+
inc_enq(ctrl, ring, more_trbs_coming);
 
-   return xhci_trb_virt_to_dma(ring->enq_seg, (union xhci_trb *)trb);
+   return addr;
 }
 
 /**

-- 
2.41.0



[PATCH 4/8] usb: xhci: Recover from halted non-control endpoints

2023-10-26 Thread Hector Martin
There is currently no codepath to recover from this case. In principle
we could require that the upper layer do this explicitly, but let's just
do it in xHCI when the next bulk transfer is started, since that
reasonably implies whatever caused the problem has been dealt with.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 8 
 1 file changed, 8 insertions(+)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 5c2d16b58589..60f2cf72dffa 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -669,6 +669,14 @@ int xhci_bulk_tx(struct usb_device *udev, unsigned long pipe,
 
ep_ctx = xhci_get_ep_ctx(ctrl, virt_dev->out_ctx, ep_index);
 
+   /*
+* If the endpoint was halted due to a prior error, resume it before
+* the next transfer. It is the responsibility of the upper layer to
+* have dealt with whatever caused the error.
+*/
+   if ((le32_to_cpu(ep_ctx->ep_info) & EP_STATE_MASK) == EP_STATE_HALTED)
+   reset_ep(udev, ep_index);
+
ring = virt_dev->eps[ep_index].ring;
/*
 * How much data is (potentially) left before the 64KB boundary?

-- 
2.41.0



[PATCH 5/8] usb: xhci: Fail on attempt to queue TRBs to a halted endpoint

2023-10-26 Thread Hector Martin
This isn't going to work; don't pretend it will and then end up timing
out.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 60f2cf72dffa..3ef8c2f14ccc 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -243,7 +243,8 @@ static int prepare_ring(struct xhci_ctrl *ctrl, struct xhci_ring *ep_ring,
puts("WARN waiting for error on ep to be cleared\n");
return -EINVAL;
case EP_STATE_HALTED:
-   puts("WARN halted endpoint, queueing URB anyway.\n");
+   puts("WARN endpoint is halted\n");
+   return -EINVAL;
case EP_STATE_STOPPED:
case EP_STATE_RUNNING:
debug("EP STATE RUNNING.\n");

-- 
2.41.0



[PATCH 3/8] usb: xhci: Allow context state errors when halting an endpoint

2023-10-26 Thread Hector Martin
There is a race where an endpoint may halt by itself while we are trying
to halt it, which results in a context state error. See xHCI 4.6.9 which
mentions this case.

This also avoids BUGging when we attempt to stop an endpoint which was
already stopped to begin with, which is probably a bug elsewhere but
not a good reason to crash.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index d08bb8e2bfba..5c2d16b58589 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -543,6 +543,7 @@ static void abort_td(struct usb_device *udev, int ep_index)
struct xhci_ctrl *ctrl = xhci_get_ctrl(udev);
struct xhci_ring *ring =  ctrl->devs[udev->slot_id]->eps[ep_index].ring;
union xhci_trb *event;
+   xhci_comp_code comp;
trb_type type;
u64 addr;
u32 field;
@@ -571,10 +572,11 @@ static void abort_td(struct usb_device *udev, int ep_index)
printf("abort_td: Expected a TRB_TRANSFER TRB first\n");
}
 
+   comp = GET_COMP_CODE(le32_to_cpu(event->event_cmd.status));
BUG_ON(type != TRB_COMPLETION ||
TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
-   != udev->slot_id || GET_COMP_CODE(le32_to_cpu(
-   event->event_cmd.status)) != COMP_SUCCESS);
+   != udev->slot_id || (comp != COMP_SUCCESS && comp
+   != COMP_CTX_STATE));
xhci_acknowledge_event(ctrl);
 
addr = xhci_trb_virt_to_dma(ring->enq_seg,

-- 
2.41.0



[PATCH 2/8] usb: xhci: Better error handling in abort_td()

2023-10-26 Thread Hector Martin
If the xHC has a problem with our STOP ENDPOINT command, it is likely to
return a completion directly instead of first a transfer event for the
in-progress transfer. Handle that more gracefully.

Right now we still BUG() on the error code, but at least we don't end up
timing out on the event and ending up with unexpected event errors.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 34 ++
 include/usb/xhci.h   |  2 ++
 2 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index aaf128ff9317..d08bb8e2bfba 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -466,7 +466,8 @@ union xhci_trb *xhci_wait_for_event(struct xhci_ctrl *ctrl, trb_type expected)
continue;
 
type = TRB_FIELD_TO_TYPE(le32_to_cpu(event->event_cmd.flags));
-   if (type == expected)
+   if (type == expected ||
+   (expected == TRB_NONE && type != TRB_PORT_STATUS))
return event;
 
if (type == TRB_PORT_STATUS)
@@ -542,27 +543,36 @@ static void abort_td(struct usb_device *udev, int ep_index)
struct xhci_ctrl *ctrl = xhci_get_ctrl(udev);
struct xhci_ring *ring =  ctrl->devs[udev->slot_id]->eps[ep_index].ring;
union xhci_trb *event;
+   trb_type type;
u64 addr;
u32 field;
 
xhci_queue_command(ctrl, 0, udev->slot_id, ep_index, TRB_STOP_RING);
 
-   event = xhci_wait_for_event(ctrl, TRB_TRANSFER);
+   event = xhci_wait_for_event(ctrl, TRB_NONE);
if (!event)
return;
 
-   field = le32_to_cpu(event->trans_event.flags);
-   BUG_ON(TRB_TO_SLOT_ID(field) != udev->slot_id);
-   BUG_ON(TRB_TO_EP_INDEX(field) != ep_index);
-   BUG_ON(GET_COMP_CODE(le32_to_cpu(event->trans_event.transfer_len
-   != COMP_STOP)));
-   xhci_acknowledge_event(ctrl);
+   type = TRB_FIELD_TO_TYPE(le32_to_cpu(event->event_cmd.flags));
+   if (type == TRB_TRANSFER) {
+   field = le32_to_cpu(event->trans_event.flags);
+   BUG_ON(TRB_TO_SLOT_ID(field) != udev->slot_id);
+   BUG_ON(TRB_TO_EP_INDEX(field) != ep_index);
+   BUG_ON(GET_COMP_CODE(le32_to_cpu(event->trans_event.transfer_len
+   != COMP_STOP)));
+   xhci_acknowledge_event(ctrl);
 
-   event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
-   if (!event)
-   return;
+   event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return;
+   type = TRB_FIELD_TO_TYPE(le32_to_cpu(event->event_cmd.flags));
 
-   BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
+   } else {
+   printf("abort_td: Expected a TRB_TRANSFER TRB first\n");
+   }
+
+   BUG_ON(type != TRB_COMPLETION ||
+   TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
!= udev->slot_id || GET_COMP_CODE(le32_to_cpu(
event->event_cmd.status)) != COMP_SUCCESS);
xhci_acknowledge_event(ctrl);
diff --git a/include/usb/xhci.h b/include/usb/xhci.h
index 4a4ac10229ac..04d16a256bbd 100644
--- a/include/usb/xhci.h
+++ b/include/usb/xhci.h
@@ -901,6 +901,8 @@ union xhci_trb {
 
 /* TRB type IDs */
 typedef enum {
+   /* reserved, used as a software sentinel */
+   TRB_NONE = 0,
/* bulk, interrupt, isoc scatter/gather, and control data stage */
TRB_NORMAL = 1,
/* setup stage for control transfers */

-- 
2.41.0



[PATCH 1/8] usb: xhci: Guard all calls to xhci_wait_for_event

2023-10-26 Thread Hector Martin
xhci_wait_for_event returns NULL on timeout, so the caller always has to
check for that. This addresses the immediate explosions in this part
of the code, but not the original cause.

Signed-off-by: Hector Martin 
---
 drivers/usb/host/xhci-ring.c | 15 ++-
 drivers/usb/host/xhci.c  |  9 +
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index c8260cbdf94b..aaf128ff9317 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -511,7 +511,8 @@ static void reset_ep(struct usb_device *udev, int ep_index)
printf("Resetting EP %d...\n", ep_index);
xhci_queue_command(ctrl, 0, udev->slot_id, ep_index, TRB_RESET_EP);
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
-   field = le32_to_cpu(event->trans_event.flags);
+   if (!event)
+   return;
BUG_ON(TRB_TO_SLOT_ID(field) != udev->slot_id);
xhci_acknowledge_event(ctrl);
 
@@ -519,6 +520,9 @@ static void reset_ep(struct usb_device *udev, int ep_index)
(void *)((uintptr_t)ring->enqueue | ring->cycle_state));
xhci_queue_command(ctrl, addr, udev->slot_id, ep_index, TRB_SET_DEQ);
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return;
+
BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
!= udev->slot_id || GET_COMP_CODE(le32_to_cpu(
event->event_cmd.status)) != COMP_SUCCESS);
@@ -544,6 +548,9 @@ static void abort_td(struct usb_device *udev, int ep_index)
xhci_queue_command(ctrl, 0, udev->slot_id, ep_index, TRB_STOP_RING);
 
event = xhci_wait_for_event(ctrl, TRB_TRANSFER);
+   if (!event)
+   return;
+
field = le32_to_cpu(event->trans_event.flags);
BUG_ON(TRB_TO_SLOT_ID(field) != udev->slot_id);
BUG_ON(TRB_TO_EP_INDEX(field) != ep_index);
@@ -552,6 +559,9 @@ static void abort_td(struct usb_device *udev, int ep_index)
xhci_acknowledge_event(ctrl);
 
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return;
+
BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
!= udev->slot_id || GET_COMP_CODE(le32_to_cpu(
event->event_cmd.status)) != COMP_SUCCESS);
@@ -561,6 +571,9 @@ static void abort_td(struct usb_device *udev, int ep_index)
(void *)((uintptr_t)ring->enqueue | ring->cycle_state));
xhci_queue_command(ctrl, addr, udev->slot_id, ep_index, TRB_SET_DEQ);
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return;
+
BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
!= udev->slot_id || GET_COMP_CODE(le32_to_cpu(
event->event_cmd.status)) != COMP_SUCCESS);
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 5cacf0769ec7..d13cbff9b372 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -451,6 +451,9 @@ static int xhci_configure_endpoints(struct usb_device *udev, bool ctx_change)
xhci_queue_command(ctrl, in_ctx->dma, udev->slot_id, 0,
   ctx_change ? TRB_EVAL_CONTEXT : TRB_CONFIG_EP);
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return -ETIMEDOUT;
+
BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
!= udev->slot_id);
 
@@ -647,6 +650,9 @@ static int xhci_address_device(struct usb_device *udev, int root_portnr)
xhci_queue_command(ctrl, virt_dev->in_ctx->dma,
   slot_id, 0, TRB_ADDR_DEV);
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return -ETIMEDOUT;
+
BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags)) != slot_id);
 
switch (GET_COMP_CODE(le32_to_cpu(event->event_cmd.status))) {
@@ -722,6 +728,9 @@ static int _xhci_alloc_device(struct usb_device *udev)
 
xhci_queue_command(ctrl, 0, 0, 0, TRB_ENABLE_SLOT);
event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
+   if (!event)
+   return -ETIMEDOUT;
+
BUG_ON(GET_COMP_CODE(le32_to_cpu(event->event_cmd.status))
!= COMP_SUCCESS);
 

-- 
2.41.0



[PATCH 0/8] USB fixes: xHCI error handling

2023-10-26 Thread Hector Martin
This series is the first of a few bundles of USB fixes we have been
carrying downstream on the Asahi U-Boot branch for a few months.

Most importantly, this related set of patches makes xHCI error/stall
recovery more robust (or work at all in some cases). There are also a
couple patches fixing other xHCI bugs and adding better debug logs.

I believe this should fix this Fedora bug too:

https://bugzilla.redhat.com/show_bug.cgi?id=2244305

Signed-off-by: Hector Martin 
---
Hector Martin (8):
  usb: xhci: Guard all calls to xhci_wait_for_event
  usb: xhci: Better error handling in abort_td()
  usb: xhci: Allow context state errors when halting an endpoint
  usb: xhci: Recover from halted non-control endpoints
  usb: xhci: Fail on attempt to queue TRBs to a halted endpoint
  usb: xhci: Do not panic on event timeouts
  usb: xhci: Fix DMA address calculation in queue_trb
  usb: xhci: Add more debugging

 drivers/usb/host/xhci-ring.c | 100 +++
 drivers/usb/host/xhci.c  |   9 
 include/usb/xhci.h   |   2 +
 3 files changed, 92 insertions(+), 19 deletions(-)
---
base-commit: fb428b61819444b9337075f49c72f326f5d12085
change-id: 20231027-usb-fixes-1-83bfc7013012

Best regards,
-- 
Hector Martin 



Re: [dmarc-ietf] DMARCbis way forward: Do we need our session at IETF 118

2023-10-24 Thread Hector Santos

On 10/24/2023 2:15 PM, Barry Leiba wrote:

Now that we have a consensus call on the main issue that has remained open:

1. Do we need to retain our session at IETF 118 and discuss this (or
something else) further?

...or...

2. Do we have what we need to finish up the DMARCbis document, and
should the chairs cancel the session at 118?


I think #2 is best, imo. DMARC and DMARCbis will remain processing 
overhead for logging, but with no honoring of policies. I have yet to see 
any consistency. No faith in this protocol. It cannot be considered a 
deterministic protocol. With all the debate on lookup, I still don't 
understand what is expected. It would be nice to see some simple pseudo 
code for the new logic. But why? Nothing deterministic about it to say 
- REJECT with confidence.
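
A rough sketch of what that pseudo code might look like, based on the RFC 7489 evaluation flow (identifier alignment, then the p= disposition). The helper names and the relaxed-alignment shortcut below are simplifications for illustration, not anything taken from DMARCbis:

```python
def aligned(from_domain: str, auth_domain: str, mode: str = "relaxed") -> bool:
    # Strict alignment: exact match. Relaxed: same organizational domain,
    # crudely approximated here by comparing the last two labels.
    if mode == "strict":
        return from_domain.lower() == auth_domain.lower()
    org = lambda d: ".".join(d.lower().split(".")[-2:])
    return org(from_domain) == org(auth_domain)

def dmarc_disposition(from_domain, spf_pass_domain, dkim_pass_domains, policy):
    """Return 'pass', or the requested disposition ('none'/'quarantine'/'reject')."""
    spf_ok = spf_pass_domain is not None and aligned(from_domain, spf_pass_domain)
    dkim_ok = any(aligned(from_domain, d) for d in dkim_pass_domains)
    if spf_ok or dkim_ok:   # DMARC passes if either authenticated identifier aligns
        return "pass"
    return policy           # the p= requested handling; receivers may still override

print(dmarc_disposition("isdg.net", "mail.isdg.net", [], "reject"))  # pass
```

Even written out this baldly, the "receivers may still override" line is exactly where the non-determinism being complained about creeps in.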


SPF is still king here, though.


Oh, and...

3. Is there something else (such as the reporting documents) that we
should use the time at 118 to discuss?  Or can we continue with those
on the mailing list for now?  My sense is that aggregate reporting, at
least, is just about ready to go and doesn't need the face-to-face
time.


Primary technical problem is inconsistency in reading the report formats.

I want to know the following in a report:

Which domain? Who tried to use it? What were the return path, the IP, and 
the principal DKIM identities, if you got that far?
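
Pulling exactly those fields out of an aggregate (rua) report record is mechanical. A sketch against the RFC 7489 aggregate-report XML shape, with an invented sample record inlined for illustration:

```python
import xml.etree.ElementTree as ET

SAMPLE = """<record>
  <row>
    <source_ip>192.0.2.1</source_ip>
    <policy_evaluated><disposition>none</disposition></policy_evaluated>
  </row>
  <identifiers><header_from>isdg.net</header_from></identifiers>
  <auth_results>
    <spf><domain>mail.isdg.net</domain><result>pass</result></spf>
    <dkim><domain>isdg.net</domain><result>pass</result></dkim>
  </auth_results>
</record>"""

def summarize(record_xml: str) -> dict:
    r = ET.fromstring(record_xml)
    return {
        "from_domain": r.findtext("identifiers/header_from"),  # which domain
        "source_ip": r.findtext("row/source_ip"),              # who tried to use it
        "spf_domain": r.findtext("auth_results/spf/domain"),   # return-path domain
        "dkim_domain": r.findtext("auth_results/dkim/domain"), # principal DKIM identity
    }

print(summarize(SAMPLE))
```

Whether receivers populate those fields consistently is, of course, the complaint above.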


I still won't know what I will gain, but I do hope the receivers honor 
my policies, especially for SPF, because I am honoring SPF rejects on 
the receiver side.


SPF remains the only protocol I honor 100%, and according to my 
business site stats, this month's total rejects are 34% SPF!


If anything, I get DMARC reports but I learn nothing from them.

--
HLS

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: Default DNS lookup command?

2023-10-22 Thread Richard Hector

On 22/10/23 04:56, Greg Wooledge wrote:

On Sat, Oct 21, 2023 at 05:35:21PM +0200, Reiner Buehl wrote:

is there a DNS lookup command that is installed by default on any Debian


getent hosts NAME
getent ahostsv4 NAME

That said, you get much finer control from dedicated tools.



That is a useful tool I should remember.

But not strictly a DNS lookup tool:

richard@zircon:~$ getent hosts zircon
127.0.1.1   zircon.lan.walnut.gen.nz zircon

That's from my /etc/hosts file, and overrides DNS. I didn't see an 
option in the manpage to ignore /etc/hosts.


I haven't found a way to get just DNS results without pulling in extra 
software.
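
One stdlib-only workaround is to speak the DNS wire format directly over UDP, which bypasses /etc/hosts and nsswitch entirely. This is a sketch, not production code: it handles only A records, assumes a local recursive resolver (127.0.0.53 is the systemd-resolved stub; substitute the nameserver from /etc/resolv.conf), and ignores truncation:

```python
import socket
import struct

def build_query(name: str, qid: int = 0x1234) -> bytes:
    # 12-byte header: id, flags (RD=1), QDCOUNT=1; then QNAME + QTYPE=A + QCLASS=IN.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", 1, 1)

def skip_name(msg: bytes, i: int) -> int:
    # Advance past a (possibly compressed) domain name in the message.
    while True:
        length = msg[i]
        if length & 0xC0:        # compression pointer: 2 bytes, ends the name
            return i + 2
        i += 1 + length
        if length == 0:
            return i

def parse_a_records(msg: bytes) -> list:
    ancount = struct.unpack(">H", msg[6:8])[0]
    i = skip_name(msg, 12) + 4   # skip question name + QTYPE/QCLASS
    ips = []
    for _ in range(ancount):
        i = skip_name(msg, i)
        rtype, _, _, rdlen = struct.unpack(">HHIH", msg[i:i + 10])
        i += 10
        if rtype == 1 and rdlen == 4:        # A record: 4-byte IPv4 address
            ips.append(".".join(str(b) for b in msg[i:i + 4]))
        i += rdlen
    return ips

def resolve(name: str, server: str = "127.0.0.53") -> list:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(3)
        s.sendto(build_query(name), (server, 53))
        return parse_a_records(s.recv(512))
```

For anything serious, dig (e.g. from the bind9-dnsutils package) or dnspython remain the right tools.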


Richard



Re: [DISCUSS] KIP-987: Connect Static Assignments

2023-10-20 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hi,

I think different algorithms might work for different workloads/scenarios. I 
have some thoughts that are somewhat tangential to this KIP: it might be a good 
idea to elevate the ConnectAssignor to the category of plugin, so users can 
provide their own implementation. 

The fact that there's a public o.a.k.c.r.distributed.ConnectAssignor interface 
is brilliant (I actually wanted the same thing on the Kafka client side, alas  
https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator).
 I think it should play well with the future Connect's counterpart of KIP-848 
(new consumer rebalance protocol).

I don't want to hijack this thread, but will definitely raise a KIP and start a 
discussion around this idea.

From: dev@kafka.apache.org At: 10/20/23 07:21:11 UTC-4:00 To: dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-987: Connect Static Assignments

Hi Greg,

Thanks for the reply.

I still find the proposed mechanism limited and I'm not sure it really
addresses the pain points I've experienced with Connect.
As you said, different tasks from a connector may have different
workloads. Connectors may also change the assignment of tasks at
runtime, so for example if task-2 is really busy (because it's assigned
a partition with high throughput), this may not be true in 10 minutes
once that partition is assigned to task-1. So having to specify which
tasks can run on each worker does not really help in this case.

I think the "hints" where to place a connector/tasks should come from
the connector configuration as it's the engineers building a pipeline
that knows best the requirements (in terms of isolation, resources) of
their workload. This is basically point 3) in my initial email. The
mechanism you propose puts this burden on the cluster administrators
who may well not know the workloads and also have to guess/know in
advance to properly configure workers.

I've not looked into the feasibility but I wonder if a simplified
taint/selector approach could give us enough flexibility to make
Connect behave better in containerized environments. I understand it's
an alternative you rejected but I think could have some benefits. Here
is my green field thinking:
Add 2 new fields in the connector config: placement and tags.
Placement defines the degree of isolation a task requires; it accepts 3
values: any (can be placed anywhere like today, the default),
colocated (can run on a worker with other tasks from this connector),
isolated (requires a dedicated worker). I think these degrees of
isolation should cover most use cases. Tags accepts a collection of
key=value pairs. These can have arbitrary values and are meant to mean
something to the management system (for example Strimzi). The accepted
values could be configured on the workers by the administrators as
they also operate the management system.

When a connector is created, the runtime tries to place tasks on the
available workers by matching the placement and tags. If no suitable
workers are found, the tasks stay in an unassigned state and the runtime
waits for the management system to create the necessary workers.

We could even envisage to start with only the placement field as in my
opinion this is what brings the most value to users.
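
For illustration, the placement/tags matching described above could be as small as this sketch. The worker tags, the assignments map, and the any/colocated/isolated semantics are the hypothetical ones from this email, not anything in the KIP:

```python
def eligible(worker: dict, task: dict, assignments: dict) -> bool:
    """assignments maps worker id -> list of tasks already placed there."""
    # Tags: every key=value the connector asks for must be offered by the worker.
    if not all(worker["tags"].get(k) == v for k, v in task["tags"].items()):
        return False
    assigned = assignments.get(worker["id"], [])
    if task["placement"] == "isolated":      # requires a dedicated worker
        return not assigned
    if task["placement"] == "colocated":     # only tasks from the same connector
        return all(t["connector"] == task["connector"] for t in assigned)
    return True                              # "any": behaves like today

worker = {"id": "w1", "tags": {"zone": "eu"}}
task = {"connector": "c1", "placement": "isolated", "tags": {"zone": "eu"}}
print(eligible(worker, task, {}))            # True: empty worker, tags match
```

The runtime would loop this over the available workers and leave the task unassigned when no worker qualifies, which is the hand-off point to the management system.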

Thanks,
Mickael

On Wed, Oct 18, 2023 at 8:12 PM Greg Harris
 wrote:
>
> Hey Sagar,
>
> Thanks for the questions. I hope you find the answers satisfying:
>
> 1. This is detailed in the KIP two sentences earlier: "If the
> connect.protocol is set to static, each worker will send it's
> static.connectors and static.tasks to the coordinator during
> rebalances."
>
> 2. If you have a static worker and a wildcard worker, the static
> worker will be assigned the work preferentially. If the static worker
> goes offline, the wildcard worker will be used as a backup.
>
> 3. I don't think that new Connect users will make use of this feature,
> but I've added that clarification.
>
> 4. Users can implement the strategy you're describing by leaving the
> static.connectors field unset. I think that Connect should include
> static.connectors for users that do want to control the placement of
> connectors.
>
> 5. Yes. Arbitrary here just means that the assignment is not
> influenced by the static assignment.
>
> 6. Yes. There are no guardrails that ensure that the balance of the
> static assignments is better than the builtin algorithm because we
> have no method to compare them.
>
> 7. If the whole cluster uses static assignments with each job only
> specified on one worker, the assignments are completely sticky. If a
> worker goes offline, those tasks will be offline until that worker
> comes back.
> If there are multiple workers for a single job, that is specified as
> "arbitrary". We could choose to wait for the delay to elapse or
> immediately reassign it, the KIP as written could be implemented by
> either.
> If the assignment would land on a wildcard worker, that should use
> cooperative rules, so we 

[jira] [Commented] (KAFKA-14132) Remaining PowerMock to Mockito tests

2023-10-17 Thread Hector Geraldino (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17776375#comment-17776375
 ] 

Hector Geraldino commented on KAFKA-14132:
--

I think we can set the target to KAFKA-14683 as well. Now that KAFKA-14684 has 
been merged, my plan is to tackle this last one next.

> Remaining PowerMock to Mockito tests
> 
>
> Key: KAFKA-14132
> URL: https://issues.apache.org/jira/browse/KAFKA-14132
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Christo Lolov
>Priority: Major
> Fix For: 3.7.0
>
>
> {color:#de350b}Some of the tests below use EasyMock as well. For those 
> migrate both PowerMock and EasyMock to Mockito.{color}
> Unless stated in brackets the tests are in the connect module.
> A list of tests which still require to be moved from PowerMock to Mockito as 
> of 2nd of August 2022 which do not have a Jira issue and do not have pull 
> requests I am aware of which are opened:
> {color:#ff8b00}InReview{color}
> {color:#00875a}Merged{color}
>  # {color:#00875a}ErrorHandlingTaskTest{color} (owner: [~shekharrajak])
>  # {color:#00875a}SourceTaskOffsetCommiterTest{color} (owner: Christo)
>  # {color:#00875a}WorkerMetricsGroupTest{color} (owner: Divij)
>  # {color:#00875a}WorkerTaskTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}ErrorReporterTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}RetryWithToleranceOperatorTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}WorkerErrantRecordReporterTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}ConnectorsResourceTest{color} (owner: [~mdedetrich-aiven])
>  # {color:#ff8b00}StandaloneHerderTest{color} (owner: [~mdedetrich-aiven]) 
> ([https://github.com/apache/kafka/pull/12728])
>  # KafkaConfigBackingStoreTest (UNOWNED)
>  # {color:#00875a}KafkaOffsetBackingStoreTest{color} (owner: Christo) 
> ([https://github.com/apache/kafka/pull/12418])
>  # {color:#00875a}KafkaBasedLogTest{color} (owner: [~bachmanity])
>  # {color:#00875a}RetryUtilTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}RepartitionTopicTest{color} (streams) (owner: Christo)
>  # {color:#00875a}StateManagerUtilTest{color} (streams) (owner: Christo)
> *The coverage report for the above tests after the change should be >= to 
> what the coverage is now.*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[systemsettings] [Bug 475435] New: default system keyboard model is not correctly set on Wayland

2023-10-10 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=475435

Bug ID: 475435
   Summary: default system keyboard model is not correctly set on
Wayland
Classification: Applications
   Product: systemsettings
   Version: 5.27.8
  Platform: Fedora RPMs
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: kcm_keyboard
  Assignee: plasma-b...@kde.org
  Reporter: mar...@marcan.st
CC: butir...@gmail.com
  Target Milestone: ---

On a fresh user account, the keyboard layout is set by default to the value
configured in `localectl`. However, the keyboard model is not, and ends up at
"Generic 101-key PC".

This matters particularly for Apple machines (on Asahi Linux), where we strive
to set the default keyboard model systemwide properly since the Apple models
have subtle but important changes vs the standard layouts.

STEPS TO REPRODUCE
1. localectl set-x11-keymap jp applealu_jis
2. Create a fresh user account and log in
3. Go into kcm_keyboard

OBSERVED RESULT

Keyboard model is listed as "Generic 101-key PC" and behaves as such.

EXPECTED RESULT

Keyboard model should be "Apple Aluminum (JIS)".

SOFTWARE/OS VERSIONS

Operating System: Fedora Linux Asahi Remix 39
KDE Plasma Version: 5.27.8
KDE Frameworks Version: 5.110.0
Qt Version: 5.15.10
Kernel Version: 6.5.6-400.asahi.fc39.aarch64+16k (64-bit)
Graphics Platform: Wayland
Processors: 8 × Apple Firestorm (M1 Pro), 2 × Apple Icestorm (M1 Pro)
Memory: 30.6 GiB of RAM
Graphics Processor: Apple M1 Pro
Product Name: Apple MacBook Pro (14-inch, M1 Pro, 2021)
U-Boot Version: 2023.07
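
For reference, the systemwide default that localectl writes typically lands in /etc/X11/xorg.conf.d/00-keyboard.conf. A sketch of reading the model back out of that file; the contents below mirror the usual systemd layout and are illustrative, not captured from the affected machine:

```python
import re

SAMPLE_CONF = '''
Section "InputClass"
        Identifier "system-keyboard"
        MatchIsKeyboard "on"
        Option "XkbLayout" "jp"
        Option "XkbModel" "applealu_jis"
EndSection
'''

def xkb_option(conf: str, name: str):
    # Option lines look like: Option "XkbModel" "applealu_jis"
    m = re.search(r'Option\s+"%s"\s+"([^"]*)"' % re.escape(name), conf)
    return m.group(1) if m else None

print(xkb_option(SAMPLE_CONF, "XkbModel"))   # applealu_jis
```

The bug amounts to the KCM picking up XkbLayout from this source on first login but falling back to a generic default for XkbModel.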

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: [VOTE] KIP-976: Cluster-wide dynamic log adjustment for Kafka Connect

2023-10-09 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Good stuff, +1 (non-binding) from me as well

From: dev@kafka.apache.org At: 10/09/23 05:16:06 UTC-4:00 To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-976: Cluster-wide dynamic log adjustment for Kafka 
Connect

Hi Chris,

+1 (non binding)

Thanks
Fede

On Sun, Oct 8, 2023 at 10:11 AM Yash Mayya  wrote:
>
> Hi Chris,
>
> Thanks for the KIP!
> +1 (binding)
>
> Yash
>
> On Fri, Oct 6, 2023 at 9:54 PM Greg Harris 
> wrote:
>
> > Hey Chris,
> >
> > Thanks for the KIP!
> > I think that preserving the ephemeral nature of the logging change is
> > the right choice here, and using the config topic for intra-cluster
> > broadcast is better than REST forwarding.
> >
> > +1 (binding)
> >
> > Thanks,
> > Greg
> >
> > On Fri, Oct 6, 2023 at 9:05 AM Chris Egerton 
> > wrote:
> > >
> > > Hi all,
> > >
> > > I'd like to call for a vote on KIP-976, which augments the existing
> > dynamic
> > > logger adjustment REST API for Kafka Connect to apply changes
> > cluster-wide
> > > instead on a per-worker basis.
> > >
> > > The KIP:
> > >
> > 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-976:+Cluster-wide+dynamic+
log+adjustment+for+Kafka+Connect
> > >
> > > The discussion thread:
> > > https://lists.apache.org/thread/w3x3f3jmyd1vfjxho06y8xgt6mhhzpl5
> > >
> > > Cheers,
> > >
> > > Chris
> >




[Desktop-packages] [Bug 1816497] Re: [snap] vaapi chromium no video hardware decoding

2023-10-06 Thread Hector CAO
No, it is not a bug; hardware decoding for video conferencing on Google
Meet only works for very few specific use cases.

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to chromium-browser in Ubuntu.
https://bugs.launchpad.net/bugs/1816497

Title:
  [snap] vaapi chromium no video hardware decoding

Status in chromium-browser package in Ubuntu:
  In Progress

Bug description:
  To test the snap with VA-API changes,

  1. Install the Chromium snap from either beta or edge channels,

     sudo snap install --beta chromium

  2. Start Chromium,

     snap run chromium

  3. Open a video, e.g. one from https://github.com/chthomos/video-media-samples.

  4. Check with intel_gpu_top (from the intel-gpu-tools package) that the
  video engine bars are non-zero during playback.

  --Original Bug report -

  Libva is no longer working for snap installed chromium 72.0.3626.109
  (Official Build) snap (64-bit)

  I followed this instruction
  sudo snap install --channel=candidate/vaapi chromium

  My amdgpu can use libva

  `vainfo: Driver version: Mesa Gallium driver 18.3.3 for AMD STONEY (DRM 
3.27.0, 4.20.0-10.1-liquorix-amd64, LLVM 7.0.1)
  vainfo: Supported profile and entrypoints
    VAProfileMPEG2Simple:   VAEntrypointVLD
    VAProfileMPEG2Main  :   VAEntrypointVLD
    VAProfileVC1Simple  :   VAEntrypointVLD
    VAProfileVC1Main:   VAEntrypointVLD
    VAProfileVC1Advanced:   VAEntrypointVLD
    VAProfileH264ConstrainedBaseline:   VAEntrypointVLD
    VAProfileH264ConstrainedBaseline:   VAEntrypointEncSlice
    VAProfileH264Main   :   VAEntrypointVLD
    VAProfileH264Main   :   VAEntrypointEncSlice
    VAProfileH264High   :   VAEntrypointVLD
    VAProfileH264High   :   VAEntrypointEncSlice
    VAProfileHEVCMain   :   VAEntrypointVLD
    VAProfileHEVCMain10 :   VAEntrypointVLD
    VAProfileJPEGBaseline   :   VAEntrypointVLD
    VAProfileNone   :   VAEntrypointVideoProc`

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/chromium-browser/+bug/1816497/+subscriptions


-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


Re: [ANNOUNCE] New committer: Yash Mayya

2023-09-21 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Congrats! Well deserved

From: dev@kafka.apache.org At: 09/21/23 17:05:01 UTC-4:00 To: dev@kafka.apache.org
Cc:  r...@confluent.io.invalid
Subject: Re: [ANNOUNCE] New committer: Yash Mayya

Congratulations, Yash!

On Thu 21. Sep 2023 at 21.57, Randall Hauch  wrote:

> Congratulations, Yash!
>
> On Thu, Sep 21, 2023 at 12:31 PM Satish Duggana 
> wrote:
>
> > Congratulations Yash!!
> >
> > On Thu, 21 Sept 2023 at 22:57, Viktor Somogyi-Vass
> >  wrote:
> > >
> > > Congrats Yash!
> > >
> > > On Thu, Sep 21, 2023 at 7:04 PM Josep Prat  >
> > > wrote:
> > >
> > > > Congrats Yash!
> > > >
> > > > ———
> > > > Josep Prat
> > > >
> > > > Aiven Deutschland GmbH
> > > >
> > > > Alexanderufer 3-7, 10117 Berlin
> > > >
> > > > Amtsgericht Charlottenburg, HRB 209739 B
> > > >
> > > > Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> > > >
> > > > m: +491715557497
> > > >
> > > > w: aiven.io
> > > >
> > > > e: josep.p...@aiven.io
> > > >
> > > > On Thu, Sep 21, 2023, 18:55 Raymond Ng 
> > wrote:
> > > >
> > > > > Congrats Yash! Well-deserved!
> > > > >
> > > > > /Ray
> > > > >
> > > > > On Thu, Sep 21, 2023 at 9:40 AM Kamal Chandraprakash <
> > > > > kamal.chandraprak...@gmail.com> wrote:
> > > > >
> > > > > > Congratulations Yash!
> > > > > >
> > > > > > On Thu, Sep 21, 2023, 22:03 Bill Bejeck 
> > wrote:
> > > > > >
> > > > > > > Congrats Yash!
> > > > > > >
> > > > > > > On Thu, Sep 21, 2023 at 12:26 PM Divij Vaidya <
> > > > divijvaidy...@gmail.com
> > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Congratulations Yash!
> > > > > > > >
> > > > > > > > Divij Vaidya
> > > > > > > >
> > > > > > > >
> > > > > > > > On Thu, Sep 21, 2023 at 6:18 PM Sagar <
> > sagarmeansoc...@gmail.com>
> > > > > > wrote:
> > > > > > > > >
> > > > > > > > > Congrats Yash !
> > > > > > > > > On Thu, 21 Sep 2023 at 9:38 PM, Ashwin
> > > > >  > > > > > >
> > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Awesome ! Congratulations Yash !!
> > > > > > > > > >
> > > > > > > > > > On Thu, Sep 21, 2023 at 9:25 PM Edoardo Comar <
> > > > > > edoardli...@gmail.com
> > > > > > > >
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > Congratulations Yash
> > > > > > > > > > >
> > > > > > > > > > > On Thu, 21 Sept 2023 at 16:28, Bruno Cadonna <
> > > > > cado...@apache.org
> > > > > > >
> > > > > > > > wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > Hi all,
> > > > > > > > > > > >
> > > > > > > > > > > > The PMC of Apache Kafka is pleased to announce a new
> > Kafka
> > > > > > > > committer
> > > > > > > > > > > > Yash Mayya.
> > > > > > > > > > > >
> > > > > > > > > > > > Yash's major contributions are around Connect.
> > > > > > > > > > > >
> > > > > > > > > > > > Yash authored the following KIPs:
> > > > > > > > > > > >
> > > > > > > > > > > > KIP-793: Allow sink connectors to be used with
> > > > topic-mutating
> > > > > > > SMTs
> > > > > > > > > > > > KIP-882: Kafka Connect REST API configuration
> > validation
> > > > > > timeout
> > > > > > > > > > > > improvements
> > > > > > > > > > > > KIP-970: Deprecate and remove Connect's redundant
> task
> > > > > > > > configurations
> > > > > > > > > > > > endpoint
> > > > > > > > > > > > KIP-980: Allow creating connectors in a stopped state
> > > > > > > > > > > >
> > > > > > > > > > > > Overall, Yash is known for insightful and friendly
> > input to
> > > > > > > > discussions
> > > > > > > > > > > > and his high quality contributions.
> > > > > > > > > > > >
> > > > > > > > > > > > Congratulations, Yash!
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks,
> > > > > > > > > > > >
> > > > > > > > > > > > Bruno (on behalf of the Apache Kafka PMC)
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> >
>




Re: [dmarc-ietf] Some Gmail comments on DMARCbis version 28

2023-09-18 Thread Hector Santos
I have been “militantly” against Authorship destruction.  But fast forward to 
today, I am willing to support it IFF it can be officially sanctioned by the 
IETF using a well-defined Rewrite protocol for List Systems.

Overall, I believe if the middleware performs a rewrite due to an author’s 
restrictive policy, we should consider these technical concepts:

1) Applicable to p=reject or p=quarantine only,
2) A domain rewrite SHOULD match the original domain security.

For example,  for this list,  the IETF list manager rewrites my address:

hsantos@isdg.net

to

hsantos=40isdg@dmarc.ietf.org

I believe any domain transformation should retain the p=reject isdg.net 
policy security level. In this case, p=reject was weakened to 
p=none with the domain change.

So I can support rewrites iff the domain security can be retained.
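
The rewrite itself looks mechanical: quote the '@' as '=40' and append the list domain. A sketch of the transformation and its inverse, with the caveat that the scheme is inferred from the addresses visible above rather than taken from any normative spec:

```python
def rewrite_from(addr: str, list_domain: str = "dmarc.ietf.org") -> str:
    # Quote the author's '@' as '=40' and move the address under the list domain.
    local, domain = addr.rsplit("@", 1)
    return f"{local}=40{domain}@{list_domain}"

def unrewrite_from(addr: str) -> str:
    # Invert the quoting: restore the first '=40' back to '@'.
    local, _ = addr.rsplit("@", 1)
    return local.replace("=40", "@", 1)

print(rewrite_from("hsantos@isdg.net"))   # hsantos=40isdg.net@dmarc.ietf.org
```

The transformation is reversible, which is what makes "retain the original domain's policy security level" at least conceivable for a sanctioned rewrite protocol.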

All the best,
Hector Santos



> On Sep 14, 2023, at 5:27 PM, Wei Chuang  
> wrote:
> 
> 
> 
> On Sun, Sep 10, 2023 at 11:34 AM Scott Kitterman  <mailto:skl...@kitterman.com>> wrote:
>> On Thursday, September 7, 2023 12:28:59 PM EDT Wei Chuang wrote:
>> > We had an opportunity to further review the DMARCbis changes more broadly
>> > within Gmail.  While we don't see any blockers in the language in DMARCbis
>> > version 28
>> > <https://datatracker.ietf.org/doc/html/draft-ietf-dmarc-dmarcbis-28> and
>> > can live with what is there, we wanted to briefly raise some concerns
>> > around some of the changes.  Two points.
>> > 
>> > Regarding the languages in section 8.6 "It is therefore critical that
>> > domains that host users who might post messages to mailing lists SHOULD NOT
>> > publish p=reject.  Domains that choose to publish p=reject SHOULD implement
>> > policies that their users not post to Internet mailing lists", we wanted to
>> > point out that this is impossible to implement.  Many enterprises already
>> > have "p=reject" policies.  Presumably those domains were subject to some
>> > sort of spoofing which is why they went to such a strict policy.  It would
>> > be unreasonable to tell them to stop posting to mailing lists as many
>> > likely already use mailing list services and will want to continue to use
>> > them.  The one thing that makes this tractable is the SHOULD language as we
>> > may choose not to not follow this aspect of the specification.  Our
>> > suggestion is that there is not a lot of value in including this language
>> > in the bis document if the likely outcome is that it will be ignored, and
>> > rather more effort should be placed with a technical solution for interop
>> > with mailing lists.
>> 
>> It might be helpful if you could describe this technical solution from your 
>> perspective.
>> 
>> If there were a reasonable technical solution available, I think this would 
>> be 
>> a much easier change to support (in my opinion, and a believe a substantial 
>> number of others, rewriting From is not a reasonable technical solution).
>> 
>> Scott K
> 
> Apologies for the delay in getting back to this. 
> 
> So yes I believe there are two possible technical approaches broadly speaking 
> 1) Support rewriting From and being able to reverse it along with message 
> modifications to recover the original DKIM message hash to validate the 
> original DKIM signature.  2) Create a new message authentication method that 
> is tolerant of message modifications and message forwarding, and supported by 
> DMARC.  From header rewriting would not be necessary in this scenario.  
> Beyond the complexity of supporting either method, another tricky thing in 
> both cases is supporting an ecosystem with diverse adoption of said 
> technique.  More concrete proposals for 1) and 2) are 1) 
> draft-chuang-mailing-list-modifications 
> <https://datatracker.ietf.org/doc/draft-chuang-mailing-list-modifications/> 
> and 2) draft-chuang-replay-resistant-arc 
> <http://draft-chuang-replay-resistant-arc/>.  And there are other I-Ds out 
> there particularly for the first approach.
> 
> -Wei
> 
> ___
> dmarc mailing list
> dmarc@ietf.org <mailto:dmarc@ietf.org>
> https://www.ietf.org/mailman/listinfo/dmarc

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] Some Gmail comments on DMARCbis version 28

2023-09-18 Thread Hector Santos

Hello esteemed colleagues,

I'm sure we're on the cusp of a future where only "Authenticated Mail of the 
Fifth Kind" will reign supreme—much like the exclusive club of Submission 
Protocol requiring ESMTP AUTH on the ultra-special port 587. And surely, the 
ever-trusted port 25 will forever stand as a beacon of hope, right? Because how 
could we possibly survive without unsolicited, unauthenticated messages? 

Perhaps, just perhaps, in this utopian future, every sender will be upstanding 
citizens with impeccable SPF and DKIM policies. But of course, if any domain 
dares to relax these stringent policies, the noble and always compliant 
receivers will swoop in to defend the realm. And heaven forbid the receivers 
falter, for the all-seeing MAEA "Mail Authentication Enforcement Agency" is 
ever watchful, ready to unleash a fine.

And maybe, just maybe, we'll live in a world where the almighty MAEA gets to 
play tax collector, ensuring every sender pays their dues. Ah, but not the 
Fidonet and QWK loyalists! They'll be cruising without a care, exempt from the 
MAEA's grasp.

All in jest, of course, but it's food for thought!

Best regards,
Hector Santos


> On Sep 16, 2023, at 11:18 PM, Barry Leiba  wrote:
> 
> I believe, with you, that there's likely to be a time when
> unauthenticated mail simply will not be delivered by most receiving
> domains.  I similarly believe (as the owner of a Tesla) that there
> will be a time when cars will be self-driving in the vast majority,
> and that that will make the roads both safer and more efficient.
> 
> Neither of those situations is here yet, though, and neither is likely
> to arrive very soon.  Some day, yes.  Not yet, and not soon.
> 
> While we can certainly discuss the former -- particularly with a focus
> on what needs to happen before that situation can be realised -- we
> need to first make sure that we resolve the few remaining issues in
> the DMARCbis and reporting documents and get them published.
> 
> Barry
> 
> On Sat, Sep 16, 2023 at 6:56 AM Douglas Foster
>  wrote:
>> 
>> Yes, I suspected awhile back that I was the only one in the world 
>> implementing mandatory authentication.   This group has confirmed it.
>> 
>> But I hold out hope thst others will see the opportunity that it provides.  
>> Perhaps someone will read my Best Practices draft and sponsor it as an 
>> individual contribution or experimental draft.
>> 
>> Doug
>> 
>> 
>> On Fri, Sep 15, 2023, 9:26 AM Barry Leiba  wrote:
>>> 
>>>> Content filtering creates a need for whitelisting
>>>> Any domain may require whitelisting, regardless of sender policy.
>>>> Whitelisting is only safe if it is coupled with an authentication 
>>>> mechanism which prevents impersonation.
>>>> Therefore, sender authentication, by a combination of local policy and 
>>>> sender policy, needs to be defined for all possible messages.
>>> 
>>> The last statement there is where things go off the rails.  No,
>>> nothing has to work perfectly here, and mechanisms are useful and can
>>> well be standardized even when they don't work in "all possible"
>>> situations.
>>> 
>>> It's really important that we stop insisting on perfection or nothing,
>>> as that's a false dichotomy.  What we have now is demonstrably useful
>>> as *part of* an overall system.  We need to move forward with
>>> finishing the document.
>>> 
>>> Barry
>> 
>> ___
>> dmarc mailing list
>> dmarc@ietf.org
>> https://www.ietf.org/mailman/listinfo/dmarc
> 
> ___
> dmarc mailing list
> dmarc@ietf.org
> https://www.ietf.org/mailman/listinfo/dmarc



___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: Fresh install, Bookworm, XFCE keeps recreating directories

2023-09-15 Thread Richard Hector

On 16/09/23 12:19, Curt Howland wrote:


Good evening. Did a fresh install of Bookworm, installing desktop with
XFCE.

I'm not interested in having directories like "Public" and "Videos",
but every time I delete them something recreates those directories.

I can't find where these are set to be created, and re-re-re created.

Is there a way to turn this off?


Have a look at the output of "apt show xdg-user-dirs" - looks like you 
need to edit .config/user-dirs.dirs
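
Specifically, xdg-user-dirs-update re-creates those directories at each login from ~/.config/user-dirs.dirs; pointing an entry at "$HOME/" is the documented way to disable it (you can also set enabled=False in ~/.config/user-dirs.conf). A small sketch of the edit as a pure function; the paths and directory names are the xdg-user-dirs defaults:

```python
UNWANTED = ("PUBLICSHARE", "VIDEOS")

def disable_user_dirs(conf_text: str, unwanted=UNWANTED) -> str:
    """Point unwanted XDG dirs at $HOME/, which xdg-user-dirs treats as disabled."""
    out = []
    for line in conf_text.splitlines():
        key = line.split("=", 1)[0].strip()
        # XDG_VIDEOS_DIR -> VIDEOS, then check against the unwanted set.
        if key.removeprefix("XDG_").removesuffix("_DIR") in unwanted:
            out.append(f'{key}="$HOME/"')   # nothing gets (re)created for this entry
        else:
            out.append(line)
    return "\n".join(out) + "\n"

# Typical use (back up the file first):
#   p = Path.home() / ".config" / "user-dirs.dirs"
#   p.write_text(disable_user_dirs(p.read_text()))
```

After the edit, run xdg-user-dirs-update once (or log out and in) and delete the directories; they should stay gone.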


Cheers,
Richard



Re: [dmarc-ietf] Some Gmail comments on DMARCbis version 28

2023-09-14 Thread Hector Santos
> On Sep 14, 2023, at 10:39 AM, Murray S. Kucherawy  wrote:
> 
> On Wed, Sep 13, 2023 at 6:01 PM Douglas Foster 
>  <mailto:dougfoster.emailstanda...@gmail.com>> wrote:
>> 
>> The coverage problem is aggravated if we assume rational attackers.   With a 
>> plethora of domains available for impersonation, attackers are least likely 
>> to use domains that are protected with p=reject.  Therefore the reference 
>> model implementation protects an evaluator where attacks are least likely, 
>> and fails to protect an evaluator where attacks are most likely.
> 
> So you're saying DMARC fails to protect domains that don't set "p=reject"?  
> That claim has the appearance of a tautology.
> 


First, I agree with your thoughts here.

I have always considered that for these new DNS-based apps offering 
policies, the highest payoff is the most restrictive policy. The partial 
policies, like SPF soft fail, unknown policies, or DMARC p=none, are 
technically overhead and redundancy: if every query always answers "well, 
I don't know, do what you wish," it is just DNS and processing overhead.

The highest payoff for SPF is -all, and the highest payoff for DMARC is 
p=reject, despite its faulty authorization or restrictive algorithm.

All the best,
Hector Santos

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] Some Gmail comments on DMARCbis version 28

2023-09-14 Thread Hector Santos
> On Sep 14, 2023, at 7:36 AM, Dotzero  wrote:
> 
> On Wed, Sep 13, 2023 at 9:21 PM Hector Santos  <mailto:hsan...@isdg.net>> wrote:
>> 
>>> On Sep 13, 2023, at 8:51 PM, Dotzero >> <mailto:dotz...@gmail.com>> wrote:
>>> 
>>> DMARC does one thing and one thing only. It mitigates against direct domain 
>>> abuse in a deterministic manner, nothing else. It doesn't stop spam and it 
>>> doesn't depend on or involve reputation. It is but one tool among a number 
>>> of tools that various parties can choose from. A message passing DMARC 
>>> validation does not mean the message is "good". There is no question of 
>>> fault. Perhaps you should recommend changes to incorporate a blame game if 
>>> your goal is to determine fault. 
>> 
>> Deterministic means there is no question -  you follow the protocol. Your 
>> (speaking in general) opinions don’t matter. 
> 
> It means that the output of the algorithm is deterministic. It does not mean 
> that the receiver blindly act on that output. As has been stated many times 
> by many people, a policy assertion is a request by the sending domain 
> administrator/owner, not a mandate. That is why local policy on the part of 
> the receiver overrides a sender policy assertion.
> 


Over the years, as a supporter of SPF and DKIM Policy, and as DSAP's 
author, I've witnessed how deterministic protocols like SSP, DSAP, ADSP, and 
DMARC pave the way for policy-driven rejections. They operate without 
subjectivity. But the inclusion of local policies can lead to diverse behaviors 
among platforms: while Site A might conform strictly to a policy, Site B might 
diverge.

The introduction of RFC 5016, Section 5.3, Item 10 underlines the primacy of 
local policies. This was especially pertinent for Mailing List systems, which 
often tampered with the original DKIM author's signature integrity. These 
systems then re-signed the altered message for list distribution as a 3rd 
party. At that time, a gap existed as we lacked a deterministic policy catering 
to these 3rd parties, which could work alongside SSP,  ADSP and now DMARC's 1st 
party only signer algorithm.

DMARC has amplified the significance of local policies, given the high 
likelihood of false positives. The introduction of local policies has somewhat 
diluted the effectiveness of deterministic protocols. We're still navigating 
these nuances, even after 15+ years.

All the best,
Hector Santos



___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] Some Gmail comments on DMARCbis version 28

2023-09-13 Thread Hector Santos

All the best,
Hector Santos



> On Sep 13, 2023, at 8:51 PM, Dotzero  wrote:
> 
> 
> 
> On Wed, Sep 13, 2023 at 5:28 PM Hector Santos  <mailto:hsan...@isdg.net>> wrote:
>>> On Sep 11, 2023, at 9:24 AM, Dotzero >> <mailto:dotz...@gmail.com>> chastised Douglas Foster
>>> 
>>> Absolutely incorrect. DMARC is a deterministic pass|fail approach based on 
>>> validation through DKIM or SPF pass (or if both pass). It says nothing 
>>> about the acceptability/goodness/badness of a source. 
>> 
>> So why are we here?
> 
> Because you care? 

I do. 

>> 
>> Correct or incorrect, a published p=reject has to mean something to the 
>> verifier who is doing the domain a favor by a) implementing the protocol and 
>> b) the goal of eliminating junk.   If there are false negatives, whose fault 
>> is that?  The Domain, The Verifier or the Protocol?
> 
> DMARC does one thing and one thing only. It mitigates against direct domain 
> abuse in a deterministic manner, nothing else. It doesn't stop spam and it 
> doesn't depend on or involve reputation. It is but one tool among a number of 
> tools that various parties can choose from. A message passing DMARC 
> validation does not mean the message is "good". There is no question of 
> fault. Perhaps you should recommend changes to incorporate a blame game if 
> your goal is to determine fault. 

Deterministic means there is no question: you follow the protocol. Your 
(speaking in general) opinions don't matter. 

>> 
>> Please try to be more civil with people’s views or position with this 
>> problematic protocol.
> 
> Thank you for sharing your opinion. I'm truly and deeply sorrowful if I have 
> offended your sensibilities. I will consider including trigger warnings on 
> future posts. 

Share that with Douglas Foster.

>> 
>> Thanks
> 
> You are welcome.

My Pleasure.

—
HLS



Re: [dmarc-ietf] Some Gmail comments on DMARCbis version 28

2023-09-13 Thread Hector Santos
> On Sep 11, 2023, at 9:24 AM, Dotzero  chastised Douglas 
> Foster
> 
> Absolutely incorrect. DMARC is a deterministic pass|fail approach based on 
> validation through DKIM or SPF pass (or if both pass). It says nothing about 
> the acceptability/goodness/badness of a source. 

So why are we here?

Correct or incorrect, a published p=reject has to mean something to the 
verifier, who is doing the domain a favor by a) implementing the protocol and 
b) pursuing the goal of eliminating junk. If there are false negatives, whose 
fault is that? The domain, the verifier, or the protocol?

I think it's the protocol, but that's my opinion as one of the early DKIM 
policy adopters with an advanced and costly implementation. If policy does not 
help protect a domain, and also help the receiver with failure hints (or, 
better said, negative classification of a source per the domain policy), then 
what is the point of the work here, or the lack thereof?

Same is true with SPF.

Please try to be more civil with people’s views or position with this 
problematic protocol.

Thanks

All the best,
Hector Santos




[OAUTH-WG] (no subject)

2023-09-06 Thread Hector Zepeda
Downloaded and installed.
___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


[kwin] [Bug 465891] Square artifacts following cursor on some UI elements

2023-09-04 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=465891

Hector Martin  changed:

   What|Removed |Added

 CC||vlad.zahorod...@kde.org

--- Comment #5 from Hector Martin  ---
Adding Vlad to Cc since he might know more about what's going on, given his
involvement with Bug 465158 and Bug 455526

-- 
You are receiving this mail because:
You are watching all bug changes.

[kwin] [Bug 465891] Square artifacts following cursor on some UI elements

2023-09-04 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=465891

Hector Martin  changed:

   What|Removed |Added

 CC||mar...@marcan.st

--- Comment #4 from Hector Martin  ---
Interestingly, it looks like this one isn't specifically about blur (which got
fixed recently). This happens with blur disabled too, if you kill plasmashell.

Steps to reproduce:
- Set scale to 150% (or anything non integer)
- Open some apps (e.g. konsole, systemsettings)
- killall plasmashell
- (optional) `swaybg -o '*' -i /usr/share/backgrounds/default.png` (or whatever
image) to put up a background (makes the problem more obvious)
- Move the mouse and windows around (force software cursors to make it more
obvious if your driver supports hardware cursors)

You'll see black single pixel trails around the cursor and window edges on the
swaybg wallpaper and some window contents, but not all. E.g. the glitches seem
to appear on most of the Konsole window, but on System Settings only the window
decorations are affected. This happens with the blur effect completely
disabled.

AIUI the blur glitch was about the blurring itself being computed wrong. This
one looks like a different problem, related to dirty rectangle/redraw tracking.

KDE Plasma Version: 5.27.7 (wayland)


[jira] [Updated] (FLINK-33010) NPE when using GREATEST() in Flink SQL

2023-08-31 Thread Hector Rios (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Rios updated FLINK-33010:

Description: 
Hi,

I see NPEs in flink 1.14 and flink 1.16 when running queries with GREATEST() 
and timestamps. Below is an example to help in reproducing the issue.
{code:java}
CREATE TEMPORARY VIEW Positions AS
SELECT
SecurityId,
ccy1,
CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestamp
FROM (VALUES
(1, 'USD', '2022-01-01'),
(2, 'GBP', '2022-02-02'),
(3, 'GBX', '2022-03-03'),
(4, 'GBX', '2022-04-4'))
AS ccy(SecurityId, ccy1, publishTimestamp);

CREATE TEMPORARY VIEW Benchmarks AS
SELECT
SecurityId,
ccy1,
CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestamp
FROM (VALUES
(3, 'USD', '2023-01-01'),
(4, 'GBP', '2023-02-02'),
(5, 'GBX', '2023-03-03'),
(6, 'GBX', '2023-04-4'))
AS ccy(SecurityId, ccy1, publishTimestamp);

SELECT *,
GREATEST(
IFNULL(Positions.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3))),
IFNULL(Benchmarks.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3)))
)
FROM Positions
FULL JOIN Benchmarks ON Positions.SecurityId = Benchmarks.SecurityId {code}
 

Using "IF" is a workaround at the moment instead of using "GREATEST"
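For illustration, the IF-based workaround mentioned above might look like the sketch below. The ticket does not show the exact expression used, so treat this as a hypothetical shape that repeats the IFNULL-guarded operands explicitly instead of calling GREATEST:
{code:java}
SELECT *,
IF(
  IFNULL(Positions.publishTimestamp, CAST('1970-1-1' AS TIMESTAMP(3)))
    >= IFNULL(Benchmarks.publishTimestamp, CAST('1970-1-1' AS TIMESTAMP(3))),
  IFNULL(Positions.publishTimestamp, CAST('1970-1-1' AS TIMESTAMP(3))),
  IFNULL(Benchmarks.publishTimestamp, CAST('1970-1-1' AS TIMESTAMP(3)))
) AS latestPublishTimestamp
FROM Positions
FULL JOIN Benchmarks ON Positions.SecurityId = Benchmarks.SecurityId {code}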

  

  was:
Hi,

I see NPEs in flink 1.14 and flink 1.16 when running queries with GREATEST() 
and timestamps. Below is an example to help in reproducing the issue.
{code:java}
CREATE TEMPORARY VIEW Positions AS
SELECT
SecurityId,
ccy1,
CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestampFROM (VALUES
(1, 'USD', '2022-01-01'),
(2, 'GBP', '2022-02-02'),
(3, 'GBX', '2022-03-03'),
(4, 'GBX', '2022-04-4'))
AS ccy(SecurityId, ccy1, publishTimestamp);

CREATE TEMPORARY VIEW Benchmarks AS
SELECT
SecurityId,
ccy1,
CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestampFROM (VALUES
(3, 'USD', '2023-01-01'),
(4, 'GBP', '2023-02-02'),
(5, 'GBX', '2023-03-03'),
(6, 'GBX', '2023-04-4'))
AS ccy(SecurityId, ccy1, publishTimestamp);

SELECT *,
GREATEST(
IFNULL(Positions.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3))),
IFNULL(Benchmarks.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3)))
)
FROM Positions
FULL JOIN Benchmarks ON Positions.SecurityId = Benchmarks.SecurityId {code}
 

Using "IF" is a workaround at the moment instead of using "GREATEST"

  


> NPE when using GREATEST() in Flink SQL
> --
>
> Key: FLINK-33010
> URL: https://issues.apache.org/jira/browse/FLINK-33010
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Planner
>Affects Versions: 1.16.1, 1.16.2
>Reporter: Hector Rios
>Priority: Minor
>
> Hi,
> I see NPEs in flink 1.14 and flink 1.16 when running queries with GREATEST() 
> and timestamps. Below is an example to help in reproducing the issue.
> {code:java}
> CREATE TEMPORARY VIEW Positions AS
> SELECT
> SecurityId,
> ccy1,
> CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestamp
> FROM (VALUES
> (1, 'USD', '2022-01-01'),
> (2, 'GBP', '2022-02-02'),
> (3, 'GBX', '2022-03-03'),
> (4, 'GBX', '2022-04-4'))
> AS ccy(SecurityId, ccy1, publishTimestamp);
> CREATE TEMPORARY VIEW Benchmarks AS
> SELECT
> SecurityId,
> ccy1,
> CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestamp
> FROM (VALUES
> (3, 'USD', '2023-01-01'),
> (4, 'GBP', '2023-02-02'),
> (5, 'GBX', '2023-03-03'),
> (6, 'GBX', '2023-04-4'))
> AS ccy(SecurityId, ccy1, publishTimestamp);
> SELECT *,
> GREATEST(
> IFNULL(Positions.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3))),
> IFNULL(Benchmarks.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3)))
> )
> FROM Positions
> FULL JOIN Benchmarks ON Positions.SecurityId = Benchmarks.SecurityId {code}
>  
> Using "IF" is a workaround at the moment instead of using "GREATEST"
>   



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33010) NPE when using GREATEST() in Flink SQL

2023-08-31 Thread Hector Rios (Jira)
Hector Rios created FLINK-33010:
---

 Summary: NPE when using GREATEST() in Flink SQL
 Key: FLINK-33010
 URL: https://issues.apache.org/jira/browse/FLINK-33010
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API, Table SQL / Planner
Affects Versions: 1.16.2, 1.16.1
Reporter: Hector Rios


Hi,

I see NPEs in flink 1.14 and flink 1.16 when running queries with GREATEST() 
and timestamps. Below is an example to help in reproducing the issue.
{code:java}
CREATE TEMPORARY VIEW Positions AS
SELECT
SecurityId,
ccy1,
CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestamp
FROM (VALUES
(1, 'USD', '2022-01-01'),
(2, 'GBP', '2022-02-02'),
(3, 'GBX', '2022-03-03'),
(4, 'GBX', '2022-04-4'))
AS ccy(SecurityId, ccy1, publishTimestamp);

CREATE TEMPORARY VIEW Benchmarks AS
SELECT
SecurityId,
ccy1,
CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestamp
FROM (VALUES
(3, 'USD', '2023-01-01'),
(4, 'GBP', '2023-02-02'),
(5, 'GBX', '2023-03-03'),
(6, 'GBX', '2023-04-4'))
AS ccy(SecurityId, ccy1, publishTimestamp);

SELECT *,
GREATEST(
IFNULL(Positions.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3))),
IFNULL(Benchmarks.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3)))
)
FROM Positions
FULL JOIN Benchmarks ON Positions.SecurityId = Benchmarks.SecurityId {code}
 

Using "IF" is a workaround at the moment instead of using "GREATEST"

  






Re:[VOTE] KIP-970: Deprecate and remove Connect's redundant task configurations endpoint

2023-08-30 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
This makes sense to me, +1 (non-binding)

From: dev@kafka.apache.org At: 08/30/23 02:58:59 UTC-4:00 To: dev@kafka.apache.org
Subject: [VOTE] KIP-970: Deprecate and remove Connect's redundant task 
configurations endpoint

Hi all,

This is the vote thread for KIP-970 which proposes deprecating (in the
Apache Kafka 3.7 release) and eventually removing (in the next major Apache
Kafka release - 4.0) Connect's redundant task configurations endpoint.

KIP -
https://cwiki.apache.org/confluence/display/KAFKA/KIP-970%3A+Deprecate+and+remove+Connect%27s+redundant+task+configurations+endpoint

Discussion thread -
https://lists.apache.org/thread/997qg9oz58kho3c19mdrjodv0n98plvj

Thanks,
Yash




RV: FALLO SVN Y DIRECTORIO SVN TECMEC 1 MES

2023-08-29 Thread Abajo Maestre, Hector Daniel via users
Hello,
We use Subversion and we are having a problem that has stopped us completely.

Suddenly, when we try to run "SVN Update" from Explorer, it fails due to 
corruption of a file in a certain folder, as shown in the image, and then we 
cannot update. How can we proceed? :( Only two folders contain something 
corrupted, the two shown in the figures. We would like to eliminate the 
affected revision, or anything similar. We have the latest version of this 
content locally and could upload it again, but we cannot run SVN Update 
normally, so it is not possible to use the software. Can you help us, please?


[cid:image001.png@01D9D9AE.384CE280]
[cid:image002.png@01D9D9AE.384CE280]

From: Abajo Maestre, Hector Daniel
Sent: Monday, August 28, 2023 11:43
To: Gonzalez Calvo, Jon Ander (External) <jonander.gonza...@itpaero.com>; 
Nalda Aliende, Eva <eva.na...@itpaero.com>; de la Calzada Mazeres, Pedro 
<pedro.delacalz...@itpaero.com>; Gallastegui Irure, Joseba 
<joseba.gallaste...@itpaero.com>; Garcia Moya, Borja 
<borja.gar...@itpaero.com>; Valdez Berriozabal, Francisco 
<francisco.val...@itpaero.com>
Subject: FALLO SVN Y DIRECTORIO SVN TECMEC 1 MES

Hello,

We have been unable to download our work from the SVN TECMEC directory for a 
month.

The JIRA people tell us they don't handle this. Who does, then? They point us 
to PLM Servers, but who are they? We would like to contact them directly.

Having something stuck in limbo like this is the worst; we cannot work. I 
think the IT team should manage this normally.

Right now we are at a near standstill.

Regards!



==

DISCLAIMER: This email message and any document attached here to may contain 
confidential and/or proprietary information, under copyright and thus protected 
by virtue of the law. In any event it is intended solely for the recipient and 
any unauthorized disclosure, distribution, or other use thereof is expressly 
prohibited and could be deemed illegal. If you received this e-mail in error, 
please notify the sender and destroy all copies of the original message. ITP 
Aero informs you that your personal data enclosed in our information systems 
related to CONTACTS has the purpose of maintaining business relationships and 
communications of any kind linked to the organization. You can exercise the 
right of access (and if needed) modification, correction, cancelation and 
opposition throughout by writing form to datapriv...@itpaero.com attaching a 
proof of your identity or identification document. For more information, access 
the privacy policy of our website www.itpaero.com.

==


[OAUTH-WG] (no subject)

2023-08-29 Thread Hector Zepeda
Need this downloaded and installed, please.


[OAUTH-WG] (no subject)

2023-08-29 Thread Hector Zepeda
Need this downloaded and installed.


[OAUTH-WG] (no subject)

2023-08-29 Thread Hector Zepeda
Need this downloaded


[OAUTH-WG] (no subject)

2023-08-29 Thread Hector Zepeda
Need this downloaded


Re: [OAUTH-WG] OAuth Digest, Vol 178, Issue 76

2023-08-29 Thread Hector Zepeda
Need this downloaded.

On Mon, Aug 28, 2023, 1:35 PM  wrote:

> Send OAuth mailing list submissions to
> oauth@ietf.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://www.ietf.org/mailman/listinfo/oauth
> or, via email, send a message with subject or body 'help' to
> oauth-requ...@ietf.org
>
> You can reach the person managing the list at
> oauth-ow...@ietf.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of OAuth digest..."
>
>
> Today's Topics:
>
>1. Fwd: New Version Notification for
>   draft-gilman-wimse-use-cases-00.txt (Justin Richer)
>2. Re: Fwd: New Version Notification for
>   draft-gilman-wimse-use-cases-00.txt (Dick Hardt)
>
>
> --
>
> Message: 1
> Date: Mon, 28 Aug 2023 18:11:42 +
> From: "Justin Richer" 
> To: oauth 
> Subject: [OAUTH-WG] Fwd: New Version Notification for
> draft-gilman-wimse-use-cases-00.txt
> Message-ID: 
> Content-Type: text/plain; charset="utf-8"
>
> Hi all,
>
> Back at IETF116 in Yokohama, Evan Gilman presented information about
> SPIFFE, a workload security platform. At IETF 117 in SF, we presented a set
> of questions and possible new work, to lots of positive feedback. Now we?ve
> set up the Workload Identity in Multi System Environments (WIMSE) mailing
> list for discussing things, wi...@ietf.org ? and
> we?ve just published the following -00 use cases document. If this topic
> area interests you, please take a look through the use cases (it?s pretty
> short right now) and join the conversation on the WIMSE mailing list.
>
> Thanks,
>  ? Justin
>
> Begin forwarded message:
>
> From: internet-dra...@ietf.org
> Subject: New Version Notification for draft-gilman-wimse-use-cases-00.txt
> Date: August 28, 2023 at 1:53:01 PM EDT
> To: "Evan Gilman" , "Joseph Salowey" ,
> "Justin Richer" , "Pieter Kasselman" <
> pieter.kassel...@microsoft.com>
>
> A new version of Internet-Draft draft-gilman-wimse-use-cases-00.txt has
> been
> successfully submitted by Justin Richer and posted to the
> IETF repository.
>
> Name: draft-gilman-wimse-use-cases
> Revision: 00
> Title:Workload Identity Use Cases
> Date: 2023-08-28
> Group:Individual Submission
> Pages:7
> URL:
> https://www.ietf.org/archive/id/draft-gilman-wimse-use-cases-00.txt
> Status:   https://datatracker.ietf.org/doc/draft-gilman-wimse-use-cases/
> HTML:
> https://www.ietf.org/archive/id/draft-gilman-wimse-use-cases-00.html
> HTMLized:
> https://datatracker.ietf.org/doc/html/draft-gilman-wimse-use-cases
>
>
> Abstract:
>
>   Workload identity systems like SPIFFE provide a unique set of
>   security challenges, constraints, and possibilities that affect the
>   larger systems they are a part of.  This document seeks to collect
>   use cases within that space, with a specific look at both the OAuth
>   and SPIFFE technologies.
>
> Discussion Venues
>
>   This note is to be removed before publishing as an RFC.
>
>   Source for this draft and an issue tracker can be found at
>   https://github.com/bspk/draft-gilman-wimse-use-cases.
>
>
>
> The IETF Secretariat
>
>
>
> -- next part --
> An HTML attachment was scrubbed...
> URL: <
> https://mailarchive.ietf.org/arch/browse/oauth/attachments/20230828/4cc29390/attachment.htm
> >
>
> --
>
> Message: 2
> Date: Mon, 28 Aug 2023 11:34:19 -0700
> From: Dick Hardt 
> To: Justin Richer 
> Cc: oauth 
> Subject: Re: [OAUTH-WG] Fwd: New Version Notification for
> draft-gilman-wimse-use-cases-00.txt
> Message-ID:
> <
> cad9ie-vmu+l+c31mgb4iltqbmz919pu9d-kbbg8o+jm0o2r...@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Link for WIMSE list https://www.ietf.org/mailman/listinfo/wimse
>
> On Mon, Aug 28, 2023 at 11:12?AM Justin Richer  wrote:
>
> > Hi all,
> >
> > Back at IETF116 in Yokohama, Evan Gilman presented information about
> > SPIFFE, a workload security platform. At IETF 117 in SF, we presented a
> set
> > of questions and possible new work, to lots of positive feedback. Now
> we?ve
> > set up the Workload Identity in Multi System Environments (WIMSE) mailing
> > list for discussing things, wi...@ietf.org ? and we?ve just published
> the
> > following -00 use cases document. If this topic area interests you,
> please
> > take a look through the use cases (it?s pretty short right now) and join
> > the conversation on the WIMSE mailing list.
> >
> > Thanks,
> >  ? Justin
> >
> > Begin forwarded message:
> >
> > *From: *internet-dra...@ietf.org
> > *Subject: **New Version Notification for
> > draft-gilman-wimse-use-cases-00.txt*
> > *Date: *August 28, 2023 at 1:53:01 PM EDT
> > *To: *"Evan Gilman" , "Joseph Salowey"  >,
> > "Justin Richer" , "Pieter Kasselman" <
> > pieter.kassel...@microsoft.com>
> >
> > A new version of Internet-Draft 

[OAUTH-WG] Download and install please

2023-08-25 Thread Hector Zepeda


Re: [OAUTH-WG] OAuth Digest, Vol 178, Issue 51

2023-08-25 Thread Hector Zepeda
Download and install please

On Thu, Aug 24, 2023 at 6:50 PM  wrote:

> Send OAuth mailing list submissions to
> oauth@ietf.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://www.ietf.org/mailman/listinfo/oauth
> or, via email, send a message with subject or body 'help' to
> oauth-requ...@ietf.org
>
> You can reach the person managing the list at
> oauth-ow...@ietf.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of OAuth digest..."
>
>
> Today's Topics:
>
>1. Re: SD-JWT does not meet standard security definitions
>   (Watson Ladd)
>2. Re: SD-JWT does not meet standard security definitions
>   (Kristina Yasuda)
>3. Re: SD-JWT does not meet standard security definitions
>   (Watson Ladd)
>
>
> --
>
> Message: 1
> Date: Thu, 24 Aug 2023 13:07:39 -0700
> From: Watson Ladd 
> To: Daniel Fett 
> Cc: Hannes Tschofenig , oauth@ietf.org,
> draft-ietf-oauth-selective-disclosure-jwt@ietf.org
> Subject: Re: [OAUTH-WG] SD-JWT does not meet standard security
> definitions
> Message-ID:
>  y...@mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> On Thu, Aug 24, 2023 at 3:44?AM Daniel Fett  wrote:
> >
> > Thanks, Hannes.
> >
> > The fact that technologies like AnonCreds are based on such old
> principles, yet they are not uniformly standardized, often times limited to
> a few implementations that may or may not be secure, are full of security
> footguns, lack hardware support, and are just extremely hard or impossible
> to deploy speaks for itself.
> >
> > That's why things like SD-JWT exist and gain traction.
> >
> > Yes, you have to jump through hoops to get unlinkability, but it is not
> impossible, and it seems to be a good tradeoff for many.
>
> Is there a document describing this that we can compare to the BBS
> version? Because it's a lot harder than you think: you need a blind
> signature and cut and choose for the credential openings (or
> rerandomization via structure preserving signatures, hello pairings),
> you need to deal with exhaustion of the supply of tokens, your
> issuance process has to be repeatable at low cost, so that's also
> getting messy, and then the hardware binding has its own special
> problems. None of that is in this draft, and I think it would be a lot
> better if we spelled it out here or someplace else to get a better
> sense of the tradeoffs.
>
> I would also like to point out that if end users don't like the
> privacy aspects, they simply won't use this technology. That's a very
> serious deployment issue.
>
> Sincerely,
> Watson Ladd
>
> --
> Astra mortemque praestare gradatim
>
>
>
> --
>
> Message: 2
> Date: Thu, 24 Aug 2023 20:32:02 +
> From: Kristina Yasuda 
> To: Watson Ladd , Daniel Fett
> 
> Cc: Hannes Tschofenig , "oauth@ietf.org"
> ,
> "draft-ietf-oauth-selective-disclosure-jwt@ietf.org"
> 
> Subject: Re: [OAUTH-WG] SD-JWT does not meet standard security
> definitions
> Message-ID:
> <
> sa1pr00mb13101b25440011fd872fd7eee5...@sa1pr00mb1310.namprd00.prod.outlook.com
> >
>
> Content-Type: text/plain; charset="utf-8"
>
> First of all, BBS and SD-JWT are not comparable apples to apples. BBS is a
> signature scheme, and it needs to be combined with a few other things like JWP
> or the BBS data integrity proof type (https://www.w3.org/TR/vc-di-bbs/) with a
> JSON-LD payload, while SD-JWT is a mechanism that can be used with any
> crypto suite.
>
> Second, how to do batch issuance of a credential (honestly, of any
> credential format: not just SD-JWT VCs but also mdocs and JWT-VCs) and
> whether it can be done at low cost is out of scope of the credential format
> (or any of its components) specification itself. By the way, when using
> OpenID4VCI (an extension of OAuth), batch issuing SD-JWTs does not need a
> blind signature, and I do not know what you mean by exhaustion of the supply
> of tokens; there are only an access token and a refresh token involved, in
> the usual manner.
>
> Best,
> Kristina
>
> Get Outlook for iOS
> 
> From: Watson Ladd 
> Sent: Thursday, August 24, 2023 9:08 PM
> To: Daniel Fett 
> Cc: Hannes Tschofenig ; oauth@ietf.org <
> oauth@ietf.org>; draft-ietf-oauth-selective-disclosure-jwt@ietf.org <
> draft-ietf-oauth-selective-disclosure-jwt@ietf.org>
> Subject: Re: SD-JWT does not meet standard security definitions
>
> [You don't often get email from watsonbl...@gmail.com. Learn why this is
> important at https://aka.ms/LearnAboutSenderIdentification ]
>
> On Thu, Aug 24, 2023 at 3:44?AM Daniel Fett  wrote:
> >
> > Thanks, Hannes.
> >
> > The fact that technologies like AnonCreds are based on such old
> principles, yet they are not uniformly standardized, often times limited to
> a few implementations that 

Re: [dmarc-ietf] p=interoperability p=compliance p=orgname:policyname

2023-08-23 Thread Hector Santos
We have many considerations, and if we “are [nearly] finished” then please 
publish a new draft so we can see where we are. With so many unknowns, it 
breeds uncertainty, “desperate questions”, and ignored suggestions and 
proposals.

I believe an update is warranted.

All the best,
Hector Santos



> On Aug 23, 2023, at 4:10 PM, Barry Leiba  wrote:
> 
> Apart from "never finish", I would contend that changes of that nature
> violate the "preserve interoperability with the installed base of
> DMARC systems" clause of our charter.  We *can* make changes such as
> this if we have a reason that's compelling enough, but as we talk
> about changing the strings that we use for "p=", the arguments are
> more cosmetic than truly functional, and I certainly don't see them as
> compelling.
> 
> Barry
> 
> On Wed, Aug 23, 2023 at 12:11 PM John Levine  wrote:
>> 
>> It appears that Jesse Thompson   said:
>>> I'm beginning to think that a solution to this problem is "other channels"
>>> 
>>> Let's discuss p=interoperability, p=compliance, or p=orgname:policyname
>> 
>> Please, no.  This WG has already run a year past its sell-by date.  Stuff
>> like this will just tell the world that we'll never finish.
>> 
>> R's,
>> John
>> 
>> ___
>> dmarc mailing list
>> dmarc@ietf.org
>> https://www.ietf.org/mailman/listinfo/dmarc
> 
> ___
> dmarc mailing list
> dmarc@ietf.org
> https://www.ietf.org/mailman/listinfo/dmarc



TortoiseSVN error, pristine file can not be opened

2023-08-23 Thread Hector Daniel Orozco mediante TortoiseSVN
Good morning, 
I'm working with a local project already versioned in SVN.
When I execute the following command line:
C:\Users\svgchi_bgrfvt>svn status "C:\DepthCam Cal"

I'm getting this error message:

svn: E720002: Can't open file 'C:\DepthCam 
Cal\.svn\pristine\93\931129534ff4d198a85adcc0a985e405f47a05d7.svn-base': 
The system cannot find the file specified.


I already tried several options:
- Import my project to SVN and then perform SVN Checkout

But the problem continues.

Is there a way to correct it?

I'm using the latest version of TortoiseSVN, 1.14.5, Build 29465

-- 
You received this message because you are subscribed to the Google Groups 
"TortoiseSVN" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to tortoisesvn+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/tortoisesvn/72039669-6147-4774-a50a-2921e50388e0n%40googlegroups.com.


Re: MYSQL cygwin database connection requests root password

2023-08-18 Thread HECTOR MENDEZ via Cygwin
 
Hi there,

I tried with an empty password and with the word "root" as the password, but 
no luck so far.
Thank you.

On Wednesday, August 16, 2023, 22:56:12 GMT-6, rapp...@dds.nl wrote:
 
 > Hi everyone I saw that in order to connect MYSQL database on cygwin, this 
 > statement must be executed:
> mysql -u root -p -h 127.0.0.1
> However, as far as I know, there's no root user on cygwin.
> How can I get that requested password?

Isn't "root" a MySQL username? If I recall correctly its default
password is either empty or "root".
  

-- 
Problem reports:  https://cygwin.com/problems.html
FAQ:  https://cygwin.com/faq/
Documentation:https://cygwin.com/docs.html
Unsubscribe info: https://cygwin.com/ml/#unsubscribe-simple


MYSQL cygwin database connection requests root password

2023-08-16 Thread HECTOR MENDEZ via Cygwin
Hi everyone, I saw that in order to connect to a MySQL database on Cygwin, 
this statement must be executed:
mysql -u root -p -h 127.0.0.1
However, as far as I know, there's no root user on Cygwin.
How can I get the requested password?
Thank you in advance.
Regards



[jira] [Commented] (HDFS-17128) RBF: SQLDelegationTokenSecretManager should use version of tokens updated by other routers

2023-08-16 Thread Hector Sandoval Chaverri (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17755198#comment-17755198
 ] 

Hector Sandoval Chaverri commented on HDFS-17128:
-

[~slfan1989] gentle ping on the request above :) The patch I provided should 
apply cleanly on branch-3.3. Thanks for all your help!

> RBF: SQLDelegationTokenSecretManager should use version of tokens updated by 
> other routers
> --
>
> Key: HDFS-17128
> URL: https://issues.apache.org/jira/browse/HDFS-17128
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-17128-branch-3.3.patch
>
>
> The SQLDelegationTokenSecretManager keeps tokens that it has interacted with 
> in a memory cache. This prevents routers from connecting to the SQL server 
> for each token operation, improving performance.
> We've noticed issues with some tokens being loaded in one router's cache and 
> later renewed on a different one. If clients try to use the token in the 
> outdated router, it will throw an "Auth failed" error when the cached token's 
> expiration has passed.
> This can also affect cancelation scenarios since a token can be removed from 
> one router's cache and still exist in another one.
> A possible solution is already implemented on the 
> ZKDelegationTokenSecretManager, which consists of having an executor 
> refreshing each router's cache on a periodic basis. We should evaluate 
> whether this will work with the volume of tokens expected to be handled by 
> the SQLDelegationTokenSecretManager.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
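
The staleness problem HDFS-17128 describes can be reduced to a few lines: a router-local cache that, when its copy looks expired, re-reads the token from the shared store before failing. This is only a sketch; `TokenStore` and `RouterCache` are invented names, not the actual Hadoop classes.

```python
class TokenStore:
    """Stand-in for the shared SQL store (illustrative)."""
    def __init__(self):
        self.tokens = {}  # token_id -> expiration timestamp

    def renew(self, token_id, new_expiration):
        self.tokens[token_id] = new_expiration

class RouterCache:
    """Router-local cache that falls back to the store when its
    cached copy looks expired; another router may have renewed it."""
    def __init__(self, store):
        self.store = store
        self.cache = {}

    def is_valid(self, token_id, now):
        exp = self.cache.get(token_id)
        if exp is None or exp <= now:
            # Refresh from the authoritative store instead of
            # failing with "Auth failed" on a stale entry.
            exp = self.store.tokens.get(token_id)
            if exp is not None:
                self.cache[token_id] = exp
        return exp is not None and exp > now

store = TokenStore()
store.renew("t1", 100)
router_a = RouterCache(store)
assert router_a.is_valid("t1", now=50)    # served from cache
store.renew("t1", 300)                    # renewed via another router
assert router_a.is_valid("t1", now=200)   # refreshed, not rejected
```

The periodic-refresh executor mentioned in the ticket would amortize these store reads instead of doing them on the request path.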



Re: [VOTE] KIP-953: partition method to be overloaded to accept headers as well.

2023-08-16 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
+1 (non-binding) 

Thanks for your KIP!

From: dev@kafka.apache.org At: 08/16/23 04:48:13 UTC-4:00 To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-953: partition method to be overloaded to accept 
headers as well.

Thanks Sagar and Chris for your votes. I will add the details Chris has
asked for to the KIP.

Hey everyone,

Please consider this as a gentle reminder.


On Sat, Aug 12, 2023 at 10:48 AM Chris Egerton 
wrote:

> Hi Jack,
>
> +1 (binding)
>
> Some friendly, non-blocking suggestions:
>
> - IMO it's still worth specifying that the headers will be read-only; this
> clarifies the intended API contract both for reviewers of the KIP who
> haven't read the GitHub PR yet, and for developers who may leverage this
> new method
> - May be worth mentioning in the compatibility section that any
> partitioners that only implement the new interface will be incompatible
> with older Kafka clients versions (this is less likely to be a serious
> problem in the clients world, but it's a much hairier problem with Connect,
> where cross-compatibility between newer/older versions of connectors and
> the Kafka Connect runtime is a serious concern)
>
> Again, these are not blockers and I'm in favor of the KIP with or without
> them since I believe both can be addressed at least partially during PR
> review and don't have to be tackled at this stage.
>
> Cheers,
>
> Chris
>
> On Sat, Aug 12, 2023 at 12:43 AM Sagar  wrote:
>
> > Hey jack ,
> >
> > +1 (non binding)
> >
> > Sagar.
> >
> > On Sat, 12 Aug 2023 at 8:04 AM, Jack Tomy  wrote:
> >
> > > Hey everyone,
> > >
> > > Please consider this as a gentle reminder.
> > >
> > > On Mon, Aug 7, 2023 at 5:55 PM Jack Tomy 
> wrote:
> > >
> > > > Hey everyone.
> > > >
> > > > I would like to call for a vote on KIP-953: partition method to be
> > > > overloaded to accept headers as well.
> > > >
> > > > KIP :
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=263424937
> > > > Discussion thread :
> > > > https://lists.apache.org/thread/0f20kvfqkmhdqrwcb8vqgqn80szcrcdd
> > > >
> > > > Thanks
> > > > --
> > > > Best Regards
> > > > *Jack*
> > > >
> > >
> > >
> > > --
> > > Best Regards
> > > *Jack*
> > >
> >
>


-- 
Best Regards
*Jack*
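
For readers skimming the vote thread: the KIP lets a custom partitioner consult record headers when choosing a partition. A rough Python sketch of the idea follows; the names and the "route-to" header are invented for illustration, and the real interface is the Java clients' `Partitioner.partition(...)` (which uses murmur2, not crc32).

```python
import zlib

def partition(key, headers, num_partitions):
    """Route on a header when present, falling back to hashing the
    key. Headers are treated as read-only, as suggested in the
    vote thread."""
    routing = dict(headers).get("route-to")
    if routing is not None:
        return int(routing) % num_partitions
    # Stable hash of the key (illustrative; Kafka uses murmur2).
    return zlib.crc32(key) % num_partitions

print(partition(b"user-42", [("route-to", "3")], 6))  # -> 3
```

The compatibility concern raised above maps directly onto this sketch: a partitioner implementing only the header-aware signature cannot be called by an older runtime that knows only the key-based one.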




Re: [Issue] Repeatedly receiving same message from Kafka

2023-08-15 Thread Hector Rios
Hi there

It would be helpful if you could include the code for your pipeline. One
suggestion: can you disable the "EXACTLY_ONCE" semantics on the producer?
Using EXACTLY_ONCE will leverage Kafka transactions and thus add overhead.
I would disable it to see if you still get the same situation.

Also, can you look in the Flink UI for this job and see if checkpoints are
in fact being taken?

Hope that helps
-Hector

On Tue, Aug 15, 2023 at 11:36 AM Dennis Jung  wrote:

> Sorry, I forgot to put a title, so I'm sending again.
>
> On Tuesday, August 15, 2023 at 6:27 PM, Dennis Jung wrote:
>
>> (this is issue from Flink 1.14)
>>
>> Hello,
>>
>> I've set up following logic to consume messages from kafka, and produce
>> them to another kafka broker. For producer, I've configured
>> `Semantics.EXACTLY_ONCE` to send messages exactly once. (also setup
>> 'StreamExecutionEnvironment::enableCheckpointing' as
>> 'CheckpointingMode.EXACTLY_ONCE')
>>
>>
>> 
>> kafka A -> FlinkKafkaConsumer -> ... -> FlinkKafkaProducer -> kafka B
>>
>> 
>>
>> But though I've just produced only 1 message to 'kafka A', consumer
>> consumes the same message repeatedly.
>>
>> When I remove `FlinkKafkaProducer` part and make it 'read only', it does
>> not happen.
>>
>> Could someone suggest a way to debug or fix this?
>>
>> Thank you.
>>
>
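
The symptom above is consistent with EXACTLY_ONCE transactions never committing because checkpoints never complete: on restart (or transaction abort) the source rewinds to the last committed offset and re-reads the same records. A toy simulation of that interaction, with no real Flink/Kafka APIs and heavily simplified semantics:

```python
class Source:
    """Minimal model of a checkpointed source."""
    def __init__(self, records):
        self.records = records
        self.committed_offset = 0  # only advanced on checkpoint
        self.position = 0

    def poll(self):
        if self.position < len(self.records):
            rec = self.records[self.position]
            self.position += 1
            return rec
        return None

    def checkpoint(self):
        self.committed_offset = self.position

    def restart(self):
        # Without a completed checkpoint we rewind to the last
        # committed offset and re-consume the same records.
        self.position = self.committed_offset

src = Source(["m1"])
assert src.poll() == "m1"
src.restart()              # checkpoint never completed
assert src.poll() == "m1"  # the "same message repeatedly" symptom

src2 = Source(["m1"])
src2.poll()
src2.checkpoint()          # checkpoints actually being taken
src2.restart()
assert src2.poll() is None  # no duplicate
```

This is why checking the Flink UI for completed checkpoints, as suggested above, is the first diagnostic step.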


[jira] [Commented] (HDFS-17128) RBF: SQLDelegationTokenSecretManager should use version of tokens updated by other routers

2023-08-14 Thread Hector Sandoval Chaverri (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-17128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17754175#comment-17754175
 ] 

Hector Sandoval Chaverri commented on HDFS-17128:
-

[~slfan1989] could you help commit the attached patch to branch-3.3? 
[^HDFS-17128-branch-3.3.patch]

 

> RBF: SQLDelegationTokenSecretManager should use version of tokens updated by 
> other routers
> --
>
> Key: HDFS-17128
> URL: https://issues.apache.org/jira/browse/HDFS-17128
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-17128-branch-3.3.patch
>
>
> The SQLDelegationTokenSecretManager keeps tokens that it has interacted with 
> in a memory cache. This prevents routers from connecting to the SQL server 
> for each token operation, improving performance.
> We've noticed issues with some tokens being loaded in one router's cache and 
> later renewed on a different one. If clients try to use the token in the 
> outdated router, it will throw an "Auth failed" error when the cached token's 
> expiration has passed.
> This can also affect cancelation scenarios since a token can be removed from 
> one router's cache and still exist in another one.
> A possible solution is already implemented on the 
> ZKDelegationTokenSecretManager, which consists of having an executor 
> refreshing each router's cache on a periodic basis. We should evaluate 
> whether this will work with the volume of tokens expected to be handled by 
> the SQLDelegationTokenSecretManager.






[jira] [Updated] (HDFS-17128) RBF: SQLDelegationTokenSecretManager should use version of tokens updated by other routers

2023-08-14 Thread Hector Sandoval Chaverri (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-17128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Sandoval Chaverri updated HDFS-17128:

Attachment: HDFS-17128-branch-3.3.patch

> RBF: SQLDelegationTokenSecretManager should use version of tokens updated by 
> other routers
> --
>
> Key: HDFS-17128
> URL: https://issues.apache.org/jira/browse/HDFS-17128
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: rbf
>Reporter: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS-17128-branch-3.3.patch
>
>
> The SQLDelegationTokenSecretManager keeps tokens that it has interacted with 
> in a memory cache. This prevents routers from connecting to the SQL server 
> for each token operation, improving performance.
> We've noticed issues with some tokens being loaded in one router's cache and 
> later renewed on a different one. If clients try to use the token in the 
> outdated router, it will throw an "Auth failed" error when the cached token's 
> expiration has passed.
> This can also affect cancelation scenarios since a token can be removed from 
> one router's cache and still exist in another one.
> A possible solution is already implemented on the 
> ZKDelegationTokenSecretManager, which consists of having an executor 
> refreshing each router's cache on a periodic basis. We should evaluate 
> whether this will work with the volume of tokens expected to be handled by 
> the SQLDelegationTokenSecretManager.






[kwin] [Bug 455526] Blur glitches started to appear in wayland again

2023-08-11 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=455526

--- Comment #37 from Hector Martin  ---
Hmm, it looks like whatever was done to 5.27.7 to fix the non-integer scale
redraw artifacts (which was another major issue) also fixed or significantly
improved blur? I can't reproduce the kind of horrible glitching 5.27.6 had any
more.

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: [VOTE] KIP-959 Add BooleanConverter to Kafka Connect

2023-08-10 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Thanks Mickael!

The KIP has passed with 3 binding votes (Chris Egerton, Greg Harris, Mickael 
Maison) and 3 non-binding votes (Andrew Schofield, Yash Mayya, Kamal 
Chandraprakash).

I'll update the KIP status, meanwhile the PR is still pending: 
https://github.com/apache/kafka/pull/14093

From: dev@kafka.apache.org At: 08/08/23 08:33:21 UTC-4:00 To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-959 Add BooleanConverter to Kafka Connect

Hi,

+1 (binding)

Thanks for the KIP!

On Mon, Aug 7, 2023 at 3:15 PM Hector Geraldino (BLOOMBERG/ 919 3RD A)
 wrote:
>
> Hello,
>
> I still need help from a committer to review/approve this (small) KIP, which 
adds a new BooleanConverter to the list of converters in Kafka Connect.
>
> The KIP has a companion PR implementing the feature as well.
>
> Thanks again!
> Sent from Bloomberg Professional for iPhone
>
> - Original Message -
> From: Hector Geraldino 
> To: dev@kafka.apache.org
> At: 08/01/23 11:48:23 UTC-04:00
>
>
> Hi,
>
> Still missing one binding vote for this (very small) KIP to pass :)
>
> From: dev@kafka.apache.org At: 07/28/23 09:37:45 UTC-4:00 To: dev@kafka.apache.org
> Subject: Re: [VOTE] KIP-959 Add BooleanConverter to Kafka Connect
>
> Hi everyone,
>
> Thanks everyone who has reviewed and voted for this KIP.
>
> So far it has received 3 non-binding votes (Andrew Schofield, Yash Mayya, 
Kamal
> Chandraprakash) and 2 binding votes (Chris Egerton, Greg Harris)- still shy of
> one binding vote to pass.
>
> Can we get help from a committer to push it through?
>
> Thank you!
> Hector
>
> Sent from Bloomberg Professional for iPhone
>
> - Original Message -
> From: Greg Harris 
> To: dev@kafka.apache.org
> At: 07/26/23 12:23:20 UTC-04:00
>
>
> Hey Hector,
>
> Thanks for the straightforward and clear KIP!
> +1 (binding)
>
> Thanks,
> Greg
>
> On Wed, Jul 26, 2023 at 5:16 AM Chris Egerton  wrote:
> >
> > +1 (binding)
> >
> > Thanks Hector!
> >
> > On Wed, Jul 26, 2023 at 3:18 AM Kamal Chandraprakash <
> > kamal.chandraprak...@gmail.com> wrote:
> >
> > > +1 (non-binding). Thanks for the KIP!
> > >
> > > On Tue, Jul 25, 2023 at 11:12 PM Yash Mayya  wrote:
> > >
> > > > Hi Hector,
> > > >
> > > > Thanks for the KIP!
> > > >
> > > > +1 (non-binding)
> > > >
> > > > Thanks,
> > > > Yash
> > > >
> > > > On Tue, Jul 25, 2023 at 11:01 PM Andrew Schofield <
> > > > andrew_schofield_j...@outlook.com> wrote:
> > > >
> > > > > Thanks for the KIP. As you say, not that controversial.
> > > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > Thanks,
> > > > > Andrew
> > > > >
> > > > > > On 25 Jul 2023, at 18:22, Hector Geraldino (BLOOMBERG/ 919 3RD A) <
> > > > > hgerald...@bloomberg.net> wrote:
> > > > > >
> > > > > > Hi everyone,
> > > > > >
> > > > > > The changes proposed by KIP-959 (Add BooleanConverter to Kafka
> > > Connect)
> > > > > have a limited scope and shouldn't be controversial. I'm opening a
> > > voting
> > > > > thread with the hope that it can be included in the next upcoming 3.6
> > > > > release.
> > > > > >
> > > > > > Here are some links:
> > > > > >
> > > > > > KIP:
> > > > >
> > > >
> > >
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-959%3A+Add+BooleanConverter+to+Kafka+Connect
> > > > > > JIRA: https://issues.apache.org/jira/browse/KAFKA-15248
> > > > > > Discussion thread:
> > > > > https://lists.apache.org/thread/15c2t0kl9bozmzjxmkl5n57kv4l4o1dt
> > > > > > Pull Request: https://github.com/apache/kafka/pull/14093
> > > > > >
> > > > > > Thanks!
> > > > >
> > > > >
> > > > >
> > > >
> > >




[dmarc-ietf] Current Status of DMARCBis

2023-08-09 Thread Hector Santos
I am interested in understanding the current consensus on the key changes in 
DMARCbis. Anxious to begin exploratory coding, I am personally focused on 
integration algorithms to apply dynamically processed results for SPF, DMARC, 
Alignment and the "relaxer" auth= tag.

spf=pass
spf=hardfail
spf=softfail
spf=neutral
spf=unknown

dmarc=Pass
dmarc=Reject 
dmarc=Quarantine
dmarc=None

alignment=pass
alignment=fail

auth=spf
auth=dkim
auth=spf,dkim

With no judgement.

Of course, the issue has been that there are too many false negatives with 
p=reject and p=quarantine applications.

Are we considering results for ARC?  I don’t know the ARC state conditions to 
state here, but I presume it provides a "trusted” or “self-signed” solution to 
correct broken 1st party signatures?

I would also like to see an updated DMARCBis protocol lookup procedure or 
algorithm when considering the proposed optional process parameter “auth=“ 
value.

An updated draft would be the ideal for the most current consensus.

Thanks

—
HLS

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc
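
For readers less familiar with the terms in the lists above: the alignment=pass/fail result comes down to comparing the From: domain with the SPF- or DKIM-authenticated domain, where relaxed alignment only requires the organizational domains to match. A minimal sketch follows; note the crude "last two labels" rule is an assumption for illustration, since real implementations must consult the Public Suffix List (e.g. for co.uk).

```python
def org_domain(domain):
    """Crude organizational domain: last two labels. Real code
    must use the Public Suffix List; this is a simplification."""
    return ".".join(domain.lower().split(".")[-2:])

def aligned(from_domain, auth_domain, mode="relaxed"):
    """DMARC identifier alignment check (relaxed or strict)."""
    if mode == "strict":
        return from_domain.lower() == auth_domain.lower()
    return org_domain(from_domain) == org_domain(auth_domain)

assert aligned("news.example.com", "example.com")            # relaxed: pass
assert not aligned("news.example.com", "example.com", "strict")
```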


[jira] [Created] (HDFS-17148) RBF: SQLDelegationTokenSecretManager must cleanup expired tokens in SQL

2023-08-08 Thread Hector Sandoval Chaverri (Jira)
Hector Sandoval Chaverri created HDFS-17148:
---

 Summary: RBF: SQLDelegationTokenSecretManager must cleanup expired 
tokens in SQL
 Key: HDFS-17148
 URL: https://issues.apache.org/jira/browse/HDFS-17148
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: rbf
Reporter: Hector Sandoval Chaverri


The SQLDelegationTokenSecretManager fetches tokens from SQL and stores them 
temporarily in a memory cache with a short TTL. The ExpiredTokenRemover in 
AbstractDelegationTokenSecretManager runs periodically to clean up any expired 
tokens from the cache, but most tokens have been evicted automatically per the 
TTL configuration. This leads to many expired tokens in the SQL database that 
should be cleaned up.

The SQLDelegationTokenSecretManager should find expired tokens in SQL instead 
of in the memory cache when running the periodic cleanup.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
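
The proposed cleanup amounts to querying the database for expired rows instead of scanning the (mostly already evicted) memory cache. A sketch using sqlite3 as a stand-in; the table and column names are invented for illustration, not the actual Hadoop schema.

```python
import sqlite3

def cleanup_expired_tokens(conn, now):
    """Delete tokens whose renew date has passed, directly in SQL."""
    cur = conn.execute("DELETE FROM tokens WHERE renew_date < ?", (now,))
    conn.commit()
    return cur.rowcount  # number of rows removed

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tokens (token_id TEXT, renew_date INTEGER)")
conn.executemany("INSERT INTO tokens VALUES (?, ?)",
                 [("t1", 100), ("t2", 500), ("t3", 50)])
print(cleanup_expired_tokens(conn, now=200))  # -> 2
```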






Re: [VOTE] KIP-959 Add BooleanConverter to Kafka Connect

2023-08-07 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hello,

I still need help from a committer to review/approve this (small) KIP, which 
adds a new BooleanConverter to the list of converters in Kafka Connect.

The KIP has a companion PR implementing the feature as well. 

Thanks again!
Sent from Bloomberg Professional for iPhone

- Original Message -
From: Hector Geraldino 
To: dev@kafka.apache.org
At: 08/01/23 11:48:23 UTC-04:00


Hi,

Still missing one binding vote for this (very small) KIP to pass :)

From: dev@kafka.apache.org At: 07/28/23 09:37:45 UTC-4:00 To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-959 Add BooleanConverter to Kafka Connect

Hi everyone,

Thanks everyone who has reviewed and voted for this KIP.

So far it has received 3 non-binding votes (Andrew Schofield, Yash Mayya, Kamal
Chandraprakash) and 2 binding votes (Chris Egerton, Greg Harris)- still shy of
one binding vote to pass.

Can we get help from a committer to push it through?

Thank you!
Hector

Sent from Bloomberg Professional for iPhone

- Original Message -
From: Greg Harris 
To: dev@kafka.apache.org
At: 07/26/23 12:23:20 UTC-04:00


Hey Hector,

Thanks for the straightforward and clear KIP!
+1 (binding)

Thanks,
Greg

On Wed, Jul 26, 2023 at 5:16 AM Chris Egerton  wrote:
>
> +1 (binding)
>
> Thanks Hector!
>
> On Wed, Jul 26, 2023 at 3:18 AM Kamal Chandraprakash <
> kamal.chandraprak...@gmail.com> wrote:
>
> > +1 (non-binding). Thanks for the KIP!
> >
> > On Tue, Jul 25, 2023 at 11:12 PM Yash Mayya  wrote:
> >
> > > Hi Hector,
> > >
> > > Thanks for the KIP!
> > >
> > > +1 (non-binding)
> > >
> > > Thanks,
> > > Yash
> > >
> > > On Tue, Jul 25, 2023 at 11:01 PM Andrew Schofield <
> > > andrew_schofield_j...@outlook.com> wrote:
> > >
> > > > Thanks for the KIP. As you say, not that controversial.
> > > >
> > > > +1 (non-binding)
> > > >
> > > > Thanks,
> > > > Andrew
> > > >
> > > > > On 25 Jul 2023, at 18:22, Hector Geraldino (BLOOMBERG/ 919 3RD A) <
> > > > hgerald...@bloomberg.net> wrote:
> > > > >
> > > > > Hi everyone,
> > > > >
> > > > > The changes proposed by KIP-959 (Add BooleanConverter to Kafka
> > Connect)
> > > > have a limited scope and shouldn't be controversial. I'm opening a
> > voting
> > > > thread with the hope that it can be included in the next upcoming 3.6
> > > > release.
> > > > >
> > > > > Here are some links:
> > > > >
> > > > > KIP:
> > > >
> > >
> >
https://cwiki.apache.org/confluence/display/KAFKA/KIP-959%3A+Add+BooleanConverter+to+Kafka+Connect
> > > > > JIRA: https://issues.apache.org/jira/browse/KAFKA-15248
> > > > > Discussion thread:
> > > > https://lists.apache.org/thread/15c2t0kl9bozmzjxmkl5n57kv4l4o1dt
> > > > > Pull Request: https://github.com/apache/kafka/pull/14093
> > > > >
> > > > > Thanks!
> > > >
> > > >
> > > >
> > >
> >


[kwin] [Bug 455526] Blur glitches started to appear in wayland again

2023-08-06 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=455526

Hector Martin  changed:

   What|Removed |Added

 CC||mar...@marcan.st

--- Comment #32 from Hector Martin  ---
Can we disable blur in KWin by default until this is fixed? Having no blur by
default is a lot better than having out-of-the-box Plasma installs glitch like
crazy on the task bar and other contexts on some systems. We already disable
blur by default in Asahi Linux for this reason, and we're likely going to get
that default pushed into the Fedora KDE configs too.


Re: [HCDX] Yoruba Nation Shortwave Radio

2023-08-06 Thread Hector (Luigi) Perez via Hard-Core-DX
 Nothing heard on R. Yoruba down here in Puerto Rico. Using a dipole antenna and 
a JRC JRD-525.
Hector (Luigi) Perez KPR260-SWL

On Sunday, August 6, 2023 at 03:17:23 p.m. GMT-4, Lúcio Bobrowiec via 
Hard-Core-DX wrote:
 
 4940kHz, 4940, Colombia/ Venezuela; 05/08/2023, 0909 – 1014 male: “Estación 
4940”, religious content by male: “la palavra revelada…la palavra és la 
verdad…capitulo 4 versículo 16,,,perdio su vida el epirito…los muertos del 
espiirito, no teniam ninguna comunhao con Dios…Genesis capitulo 8…Apocalipse 2 
– 9…” At 0949 female announcements, male ID: “…por la onda corta, 4940kHz en 60 
metros”, music slections, at 1000 male unreadable talks, religious music. 
Exceptionally clear, good signal. At 0941 first signs of weakening of signal; 
at the end of the listening it was already well degraded (LOB-B) 
.https://on.soundcloud.com/qNjMd

17735kHz Yoruba Nation Shortwave Radio via Woofferton, UK; 05/08/2023, 1900 – 
1925 at tune in hi life music, at 1906 male in unknow language talks (some 
English words heard). Maybe by the deficiency of my rx, YNSR suffered most of 
the time strong QRM from R. Voz  Missionária spurious, but had sometimes 
without this problem; when there was no such QRM, the signal looked fair and 
clear but with some het (LOB-B).
https://soundcloud.com/user-463139565/ynsr17735khzkhz1900utc05082023?ref=clipboard=a=1=ad4a681a42ae4d95ab0592bf047fcee5_source=clipboard_medium=text_campaign=social_sharing

5952kHz, R. Pio XII, Bolivia, 05/08/2023 some checks at 2300 - 2330, noticed 
that usual het wich must be this station after 2 weeks silent; no audio, 
(LOB-B).

Tecsun 310et 
Wire 14m,  dipole 18m 
Embu SP Brasil

Enviado do Yahoo Mail no Android
_
Hard-Core-DX mailing list
Hard-Core-DX@hard-core-dx.com
http://montreal.kotalampi.com/mailman/listinfo/hard-core-dx
http://www.hard-core-dx.com/
___

THE INFORMATION IN THIS ARTICLE IS FREE. It may be copied, distributed
and/or modified under the conditions set down in the Design Science License
published by Michael Stutz at
http://www.gnu.org/licenses/dsl.html
  


Re: [dmarc-ietf] Proposal for additional Security Considerations for SPF Upgrade in draft-ietf-dmarc-dmarcbis

2023-08-06 Thread Hector Santos


> On Aug 5, 2023, at 5:37 PM, Scott Kitterman  wrote:
> 
> On Saturday, August 5, 2023 3:59:02 PM EDT John Levine wrote:
>> It appears that Scott Kitterman   said:
 When receivers apply the "MUST NOT reject" in Section 8.6 to accept
 unauthenticated messages as quarantined messages, receivers SHOULD
 carefully review how they forward mail traffic to prevent additional
 security risk.  That is, this downgrade can enable spoofed messages that
 are SPF DMARC authenticated with a fraudulent From identity despite having
 an associated strong DMARC policy of "p=reject". ...
>> 
>> We all realize that SPF has problems, but I really do not want to fill
>> up the DMARC document with text that says "you can authenticate with
>> SPF, hahaha no just kidding."
>> 
>> The way to fix Microsoft's forwarding SPF problem is for Microsoft to put
>> the forwarding user's bounce address on the message, not for us to tell
>> the entire world to use kludgy workarounds.
> 
> I agree.  We need to be careful to solve protocol problems in the protocol 
> and 
> leave fixing implementation problems to implementers.  We aren't going to 
> protocol our way out of bad implementation decisions.

Taking the well-intentioned, protocol-compliant implementations, which ones do 
we add as "Implementation Notes"? Or rather, what "Current Practice" behavior 
can we note?

—
HLS






Re: [dmarc-ietf] Idle Musings - Why Is It DMARC and not DMARD?

2023-08-05 Thread Hector Santos
On Aug 5, 2023, at 12:57 PM, Benny Pedersen  wrote:
> 
> Dave Crocker skrev den 2023-08-05 18:49:
> 
>>> Governance seems like the best word to me, since Governance is what 
>>> Reporting has provided to ADs in Monitoring Mode, but I do not want to say 
>>> DMARG out loud either :-)
>> Here, too, the domain owner does not govern the platform receiver.
> 
> good news for paypal phishers, sadly
> 
> the recipient should newer recieve mail that is with credit card info by 
> dmarc is unaligned to the dmarc policy, when policy is basicly ignored we 
> have the underlaying problem dmarc should solve, but as is does not
> 

As a receiver,  I don’t wish to be inundated with spam or spoofs. I will honor 
incoming mail domain policies with deterministic rules.  As a sender, I want 
other receivers to also honor and protect my domains as well.  It’s a win-win. 

SPF -ALL has proved to help, with an average of ~5% rejects since its 
introduction. The growth was slow, and it has come with its small but 
irritating set of well-known forwarding problems. With DMARC, we just have not 
enabled p=reject failures yet. We need more persistent, deterministic DMARC 
"rules" before flipping this switch.

SPF and DKIM Policy models since SSP has been about informing receivers about 
Domain Mail Operational expectations.  This has been good. Receiver Local 
Policy always prevails but a “hint” can help decide things especially when it 
comes to failures.   

—
HLS





Re: [dmarc-ietf] Reflections on IETF 117 Conference and DMARC Meeting

2023-08-04 Thread Hector Santos
Overall, DMARCbis has a “SPF comes before DMARC” conflict where SPF can 
“preempt” DMARC.  

The implementation suggestion is leveraging an existing ESMTP extension 
capability to obtain the DMARC policy at SMTP for one reason - to help DMARC 
fit better with SMTP-level SPF processing.  Otherwise DMARC has an 
implementation design presumption that SPF+DMARC will always be processed 
together and this is not always true.   

The SUBMITTER/PRA was a patented (donated to the IETF Trust) optimizer for the 
payload version of SPF called SenderID, passing the extracted PRA "Purported 
Responsible Address" with the reverse-path. I know this. Enable the ESMTP 
extension and your receiver will see many transactions come in with submitter 
information. So it is still being used.

Does having both data points (Reverse-Path, PRA) available at the SMTP MAIL 
FROM state help?

I was not thinking of using it for SenderID (SPF is sufficient, long decided) 
but of using it for DMARC purposes.

Currently, my mailer has SPF reject-before-data logic out of the box. To 
support DMARC, do I delay SPF rejection? One way for me to support both 
existing operations and DMARCbis would be to get the DMARC policy, if any, to 
see if there is an overriding "auth=dkim" tag or maybe a "p=none", thus 
overriding the SPF reject at SMTP and continuing with the payload transfer 
overhead so DKIM can be evaluated.

That is basically it.  

DMARC has an implicit design: "To be compliant with DMARC, receivers SHOULD 
NOT reject with SPF before DMARC can be evaluated." It is predictably ignored 
by many receivers, in particular by systems that have long existed with SPF 
and have not put all their marbles in DMARC yet.

Fortunately, DMARCbis already mentions the possibility for DMARC domains to be 
aware of - SPF can be processed first and preempt any DMARC processing.  I 
believe it is sufficient and there is no real need to go further with a 
possible implementation note for adventurous explorers.

Thanks

—
HLS



> On Aug 4, 2023, at 5:27 AM, Alessandro Vesely  wrote:
> 
> On Thu 03/Aug/2023 21:15:57 + Murray S. Kucherawy wrote:
>> On Thu, Aug 3, 2023 at 10:39 AM Hector Santos <hsan...@isdg.net> wrote:
>>> [...]
>>> 
>>> However, at present, the most plausible use-case appears to be the addition 
>>> of delayed SPF rejection scenarios through DMARC evaluation. Essentially, 
>>> SUBMITTER/PRA serves as an optimizer and a mechanism to soften the impact 
>>> of SPF -ALL policies.
> 
> 
> I agree with Mike about SUBMITTER/PRA.  However, while I disagree with Dave's 
> proposal to use Sender: instead of From:, some kind of advice could still be 
> derived therefrom.
> 
> 
>>> The approach might work as follows:
>>> 
>>> - If SPF fails and the Submitter indicates p=reject, then reject (comes 
>>> with its acceptable problems)
>>> 
>>> - If SPF fails, the Submitter specifies p=reject and auth=dkim, then
>>> proceed to transfer the payload and evaluate DKIM
>>> 
>>> - If SPF fails and the Submitter signifies p=none, then continue with
>>> payload transfer
> 
> 
> That seems gratuitous (assuming Submitter=Sender:'s domain).  If I publish 
> p=none but SPF -all it still means reject (up to whitelists) unless the 
> receiver follows DMARC advice of disregarding SPF policy, but then that's 
> based on From:, not Sender:.
> 
> 
>>> - If SPF fails and the Submitter designates p=quarantine, then proceed
>>> with payload transfer
>>> 
>>> SUBMITTER may help align SPF with its original DMARC purpose—combining
>>> SPF+DKIM results while keeping with some level of optimization.
>> Ah, interesting.
>> I don't think this should go in the base document, since the research we
>> did for RFC 6686 suggests that deployment of SUBMITTER, at least as of that
>> document's publication, wasn't very broad.  However, if the size of the
>> problem is substantial, and this solution turns out to be potentially
>> effective, this might be fodder for the applicability statement or usage
>> guidelines document that this WG has discussed producing in the past as a
>> possible enhancement.
>> Collecting some data and doing some experimentation would be really helpful
>> toward determining the right path here, if any.
> 
> 
> Evaluating Sender: doesn't help whitelisting rejection before DATA.
> 
> The message I'm replying to has:
> 
> Return-Path: <dmarc-boun...@ietf.org>
> Sender: "dmarc" <dmarc-boun...@ietf.org>
> 
> To find the added value of Sender:, we'd be looking for a class of 
> forwarders, not mailing lists, where these identifiers differ.
> 
> 
> Best
> Ale
> -- 
> 
> 
> 
> 
> 
> 
> 
> ___
> dmarc mailing list
> dmarc@ietf.org <mailto:dmarc@ietf.org>
> https://www.ietf.org/mailman/listinfo/dmarc

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc
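
Hector's decision table reduces to a small function: given the SPF result at MAIL FROM and a DMARC policy fetched early (e.g. via SUBMITTER/PRA), decide whether to reject immediately or keep the session open so DKIM can be evaluated. The sketch below adopts the thread's assumptions, including the proposed, non-standard auth= tag; none of this is specified in DMARCbis.

```python
def mail_from_action(spf_result, dmarc_policy, auth_methods=("spf",)):
    """Return "reject" or "continue" at the SMTP MAIL FROM stage.
    dmarc_policy is the p= value ("reject", "quarantine", "none")
    or None when no DMARC record is found; auth_methods models the
    proposed auth= tag (default: SPF-only, the classic behavior)."""
    if spf_result != "fail":
        return "continue"
    if dmarc_policy == "reject" and "dkim" not in auth_methods:
        return "reject"  # classic SPF reject-before-data
    # p=none, p=quarantine, or auth=dkim: take the payload so DKIM
    # (and the full DMARC evaluation) can run.
    return "continue"

assert mail_from_action("fail", "reject") == "reject"
assert mail_from_action("fail", "reject", ("spf", "dkim")) == "continue"
assert mail_from_action("fail", "none") == "continue"
```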


Re: [dmarc-ietf] Reflections on IETF 117 Conference and DMARC Meeting

2023-08-03 Thread Hector Santos

On 8/3/2023 2:07 AM, Murray S. Kucherawy wrote:
On Mon, Jul 31, 2023 at 9:47 AM Hector Santos <40isdg@dmarc.ietf.org> wrote:


   - I mentioned using the deprecated SUBMITTER/PRA
(RFC4405/RFC4407)
protocols as an implementation detail to access the author's DMARC
policy at the SMTP "MAIL FROM" stage. Wei expressed interest in
this
idea. This could also enhance the "auth=" idea to help manage local
policy SPF -ALL handling. Should SMTP immediately reject? The
PRA at
SMTP could aid this decision for SPF -ALL policies. Based on many
years of implementation, it's evident that many mailers are either
identical or are using the same software that supports
SUBMITTER/PRA,
possibly due to ongoing support for the deprecated SenderID
(RFC4406)
protocol.  [...]


Can you or Wei spell this out a little more?  What could a list 
subscriber do with this algorithm that we don't have today?


The issue we're facing in a DMARC world isn't determining who the 
original sender is, but rather that with broken signatures, we can't 
prove it to DMARC's satisfaction.  I'm not clear on how your idea 
fixes that.




Utilizing the SUBMITTER/PRA protocols can help manage situations where 
SPF fails before any DKIM information is available. This strategy gives 
SPF processors preliminary DMARC policy data, potentially mitigating the 
impact of SPF hard-fail situations. The advantage of this approach is 
clearest when SPF fails but DMARC evaluation can temper the initial 
rejection.


Through the SUBMITTER/PRA, it's possible to ascertain the presence of 
a DMARC p=none or an auth=dkim, giving operators the choice to delay 
immediate rejection and verify DKIM instead.
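
As a minimal illustration, the policy-discovery step might look like the 
sketch below. Note that auth= is the tag proposed in this thread, not a 
registered DMARC tag (RFC 7489), and the function name is hypothetical.

```python
def parse_policy_tags(record: str) -> dict:
    """Split a DMARC-style TXT record (e.g. from _dmarc.<domain>)
    into its tag=value pairs. 'auth' here is the hypothetical tag
    from this discussion, not part of the DMARC specification."""
    tags = {}
    for field in record.split(";"):
        if "=" in field:
            key, value = field.strip().split("=", 1)
            tags[key.strip()] = value.strip()
    return tags

# A record advertising the proposed auth=dkim preference:
tags = parse_policy_tags("v=DMARC1; p=reject; auth=dkim")
# tags["p"] == "reject", tags["auth"] == "dkim"
```

A receiver that fetched this record via the SUBMITTER/PRA identity at 
MAIL FROM time could then see p= and auth= before any DKIM data arrives.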


In the context of a mailing list, using a SUBMITTER in the 
distribution can prove useful. For instance, a list might not need to 
rewrite the From: header if it identifies an auth=spf for the domain, 
allowing it to function as a resigner even though the original domain's 
integrity was broken. There might also be an auth=arc tag, indicating 
that ARC is needed to pass along a broken first-party signature.


I know this is out of scope, but legacy scenarios may include checking 
for the 'atps=y' tag and looking up an ATPS DNS authorization record 
for the resigner domain. There are still many domains that don't use 
DMARC but instead have an ADSP dkim=all record to expose a mail policy: 
"Expect all my mail to be signed."


However, at present, the most plausible use-case appears to be the 
addition of delayed SPF rejection scenarios through DMARC evaluation. 
Essentially, SUBMITTER/PRA serves as an optimizer and a mechanism to 
soften the impact of SPF -ALL policies.


The approach might work as follows:

- If SPF fails and the Submitter indicates p=reject, then reject (this 
comes with its known, accepted problems)


- If SPF fails, the Submitter specifies p=reject and auth=dkim, then 
proceed to transfer the payload and evaluate DKIM


- If SPF fails and the Submitter signifies p=none, then continue with 
payload transfer


- If SPF fails and the Submitter designates p=quarantine, then proceed 
with payload transfer


SUBMITTER may help align SPF with DMARC's original purpose of combining 
SPF and DKIM results, while retaining some level of optimization.
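
The decision steps listed above might be sketched as follows. All names 
are illustrative, and auth= is the tag proposed in this thread rather 
than anything standardized; this is a sketch of the idea, not an 
implementation of any existing MTA's behavior.

```python
def mail_from_decision(spf_result: str, policy: dict) -> str:
    """Hypothetical MAIL FROM-time decision that combines the SPF
    result with DMARC tags looked up via the SUBMITTER/PRA identity.
    `policy` holds tag=value pairs such as p= and the proposed auth=."""
    if spf_result != "fail":
        return "continue"          # no SPF hard fail: proceed normally
    p = policy.get("p", "none")
    if p == "reject":
        if policy.get("auth") == "dkim":
            return "continue"      # defer: transfer payload, evaluate DKIM
        return "reject"            # immediate reject at MAIL FROM
    return "continue"              # p=none or p=quarantine: transfer payload
```

For example, an SPF -ALL failure against a domain publishing p=reject 
with auth=dkim would continue to DATA so DKIM can be verified, instead 
of being rejected outright.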


--
Hector Santos,
https://santronics.com
https://winserver.com




