[sumo-user] background map resolution

2024-03-21 Thread Hector A Martinez via sumo-user
Dear sumo community,

I am using the tileGet.py script to pull a map from an ArcGIS service, but I 
would like to avoid the bad stitching of tiles that looks less than 
professional when I give presentations on large screens.  I tried pulling 
only one tile, but the resolution is very blurry, even though I don't get the 
bad stitching.

Would you recommend changing the code in tileGet.py to get a better 
resolution of the background map when I zoom in to look at vehicles in the 
network?
If so, where would you recommend I do it to avoid breaking the script?
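
For context, my understanding is that each tile from these services is a 
fixed-size image (commonly 256x256 pixels), so sharper output comes from 
requesting more tiles at a higher zoom level rather than from one bigger 
tile. Here is a quick sketch of the standard Web Mercator tile math I used 
to estimate tile counts (illustrative only, not taken from tileGet.py):

import math

def deg2num(lat_deg, lon_deg, zoom):
    """Convert WGS84 coordinates to slippy-map tile indices at a zoom level."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return xtile, ytile

def tiles_needed(bbox, zoom):
    """Count the tile grid covering bbox = (west, south, east, north)."""
    west, south, east, north = bbox
    x0, y0 = deg2num(north, west, zoom)   # top-left tile
    x1, y1 = deg2num(south, east, zoom)   # bottom-right tile
    return (x1 - x0 + 1), (y1 - y0 + 1)

bbox = (-82.47, 27.93, -82.43, 27.97)     # hypothetical Tampa-area extent
for zoom in (13, 15, 17):
    cols, rows = tiles_needed(bbox, zoom)
    print(f"zoom {zoom}: {cols} x {rows} tiles "
          f"(~{cols * 256} x {rows * 256} px at 256 px tiles)")

So the same bounding box at a deeper zoom yields many more pixels, at the 
cost of more tiles to stitch.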

Thanks,

Hector A. Martinez, P.E.
Group Leader, N253 - CENTCOM, SOUTHCOM and Joint Staff Ops
MITRE | National Security Engineering Center
<https://www.mitre.org/centers/national-security-and-engineering-center/who-we-are>
813.207.5365

___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user


Re: [dmarc-ietf] DMARC exceptions

2024-03-15 Thread Hector Santos
Doug, since the dawn of electronic messaging, a system's local policy has 
always prevailed. When implementing the newer SMTP filters such as SPF, the 
more powerful policy was one of detecting failure. A PASS meant nothing, 
since it may not pre-empt any other checking.  For us, wcSPF was the 
exception in the wcSMTP suite of filters out of the box:

- Low Code Reject/Access rules
- DNS-RBL 
- SPF
- CBV

SPF would pre-empt the final CBV check for a matching source (pass).  An SPF 
hard fail is an immediate 550 rejection response.  An unknown result 
continues with the CBV check.
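
As a rough sketch of that ordering (illustrative only; the real wcSMTP/wcSPF 
code is proprietary, so every name below is invented):

def accept_mail_from(client_ip, helo, sender, spf_check, cbv_check):
    result = spf_check(client_ip, helo, sender)
    if result == "fail":          # SPF hard fail: immediate 550 rejection
        return "550 rejected (SPF hard fail)"
    if result == "pass":          # matching source: pre-empts the CBV check
        return "250 ok"
    return cbv_check(sender)      # unknown/neutral: continue with CBV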

When it comes to DKIM, we validate all of the signatures.

When it comes to a DKIM Policy Model - well, we have DMARC.  We apply the 
protocol rules without exceptions.  If a local system does not honor the 
results for any reason, that is OK.  But I don't think it can define a 
consistent local policy override and expect other systems to use the same 
local overrides.

Again, my philosophy has been failure detection as the key filtering strength 
for all the DKIM policies explored.  A PASS, or worse an unknown/neutral 
result, means nothing, because both good and bad actors can obtain one.  
Another "trust" layer is considered at the local level.


All the best,
Hector Santos



> On Mar 15, 2024, at 1:46 AM, Douglas Foster wrote:
> 
> DMARC is an imperfect tool, as evidenced by the mailing list problem, among 
> others.  DMARCbis has failed to integrate RFC7489 with RFC 7960, because it 
> provides no discussion of the circumstances where an evaluator should 
> override the DMARC result.  I believe DMARCbis needs a discussion about the 
> appropriate scope and characteristics of local policy.  I have developed an 
> initial draft of proposed language for that section, which appears below
> 
> Doug Foster
> 
> 
> x. Exceptions / Local Policy
> 
> A DMARC disposition policy communicates the domain owner’s recommendation for 
> handling of messages which fail to authenticate. By definition, this 
> recommendation cannot take into consideration the local interest of specific 
> receivers, or the specific flow path of any specific message.   As a result, 
> evaluators should anticipate the need to implement local policy exceptions 
> that override the DMARC recommended disposition when appropriate.   These 
> exceptions can be considered in two groups:   policy overrides and 
> authentication overrides.   This section discusses some expected override 
> scenarios, without intending to be comprehensive, so that product 
> implementers can create appropriate exception structures for these and 
> similar possible situations.
> 
> x.1 Policy Overrides
> 
> x.1.1 Override p=none
> 
> A disposition policy of “none” indicates that the domain owner suspects that 
> some evaluators may receive some legitimate and wanted messages which lack 
> authentication when received.   The evaluator may reasonably conclude that 
> its risk of allowing a message which maliciously impersonates the domain is 
> much greater than the risk of hindering a legitimate-but-unauthenticated 
> message from the domain.   In such cases, the local policy will override 
> p=none and handle the domain with p=quarantine or p=reject.
> 
> x.1.2 Override missing psd=y
> 
> Some PSDs have implemented DMARC policies in accordance with RFC 9091, 
> without a psd tag, because that RFC assumed that organizational domain 
> determination would be provided by the PSL.  Particularly during the early 
> rollout of this specification, evaluators should use the PSL to identify 
> DMARC policies which are intended to be treated as psd=y even though the 
> PSD's policy has not yet been updated to include the psd=y tag.
> 
> x.1.3 Override strict alignment
> 
> A domain may publish aspf=s or adkim=s incorrectly, which the evaluator will 
> detect when legitimate and wanted messages produce a DMARC Fail result, even 
> though they would produce Pass using relaxed alignment.   In this case, the 
> evaluator overrides the strict alignment rules in the published policy and 
> applies a local policy of relaxed alignment.
> 
> x.2 Authentication Overrides
> 
> An Authentication Override provides alternate authentication when a message 
> is acceptable but the DMARC algorithm produces a result of Fail.  To ensure 
> that the exception does not create a vulnerability, the rule should include 
> at least one verified identifier with a value that indicates the trusted 
> message source, often coupled with unverified identifiers with specific 
> values that further narrow the scope of the rule.
> 
> x.2.1 Mailing List messages
> 
> Mailing Lists typically add content to the Subject or Body, and replace the 
> Mail From address, while forwarding a message.   As a result, the 
> RFC5322.From address of the author can no 

Re: [dmarc-ietf] DMARCbis WGLC Issue 132 - 5.5.1 and 5.5.2 SHOULD vs MUST (was Another point for SPF advice)

2024-03-14 Thread Hector Santos

> On Mar 14, 2024, at 4:02 PM, Todd Herr wrote:
> 
> On Thu, Mar 14, 2024 at 3:25 PM Hector Santos wrote:
>>> On Mar 14, 2024, at 10:09 AM, Todd Herr wrote:
>>> To configure SPF for DMARC, the Domain Owner MUST choose a domain to use as 
>>> the RFC5321.MailFrom domain (i.e., the Return-Path domain) for its mail 
>>> that aligns with the Author Domain, and then publish an SPF policy in DNS 
>>> for that domain. The SPF record MUST be constructed at a minimum to ensure 
>>> an SPF pass verdict for all known sources of mail for the RFC5321.MailFrom 
>>> domain.
>> 
>> A major consideration, Todd, is that receivers will process SPF for SPF 
>> without DMARC (payload) considerations.  IOW, if SPF is a hard fail, we 
>> have SMTP processors who will not continue to transmit a payload (DATA).
>> 
>> DMARCbis is making a major design presumption that receivers will only use 
>> SPF as a data point for a final DMARC evaluation, where a potentially 
>> high-overhead payload was transmitted only to be rejected anyway.
> 
> I don't necessarily think your assertion is true here, or at least I'd submit 
> that DMARCbis and RFC 7489 aren't approaching this subject any differently.
> 
> Section 10.1 from RFC 7489, titled "Issues Specific to SPF" had two 
> paragraphs, the second of which reads:
> 
>Some receiver architectures might implement SPF in advance of any
>DMARC operations.  This means that a "-" prefix on a sender's SPF
>mechanism, such as "-all", could cause that rejection to go into
>effect early in handling, causing message rejection before any DMARC
>processing takes place.  Operators choosing to use "-all" should be
>aware of this.

Yes, I agree.  I am only reminding the community that SPF can preempt DMARC 
with a restrictive hard-fail policy.  Does DMARCbis integrate a tag to delay 
SPF failures?
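
To make that concrete, here is a minimal sketch of an SMTP-time check using 
the pyspf package (addresses and domains are illustrative): if SPF 
hard-fails at MAIL FROM, DATA is never accepted and DMARC never runs.

import spf

def at_mail_from(client_ip, helo, mail_from):
    result, explanation = spf.check2(i=client_ip, s=mail_from, h=helo)
    if result == "fail":
        # reject before DATA; DMARC never sees this message
        return f"550 {explanation}"
    return "250 OK"  # continue; DMARC can use the SPF result later

print(at_mail_from("192.0.2.1", "mail.example.org", "user@example.org"))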


> 
> DMARCbis contains the same two paragraphs with no change to the text, other 
> than the section is now numbered 8.1.
> 
>> 
>>> In the ticket, I propose the following new text:
>>> 
>>> ==
>>> To configure DKIM for DMARC, the Domain Owner MUST choose a DKIM-Signing 
>>> domain (i.e., the d= domain in the DKIM-Signature header) that aligns with 
>>> the Author Domain.
>>> ==
>> 
>> In order to maximize security, the Domain Owner is REQUIRED to choose a ….. 
>> 
>> Is REQUIRED the same as MUST?   I think SHOULD or MUST is fine as long as we 
>> specify the reason it is required,
> 
> I'm not understanding the comment you're making here, as I don't see the word 
> "REQUIRED" in anything I wrote.

For any protocol, there are "Protocol Requirements."  A MUST or SHOULD is a 
"Requirement" for proper support, so I just wanted to note that it can be 
stated another way.  Developers need a Requirements section that allows us to 
code the logic.

It's getting pretty confusing for implementors.

—
HLS

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [dmarc-ietf] DMARCbis WGLC Issue 132 - 5.5.1 and 5.5.2 SHOULD vs MUST (was Another point for SPF advice)

2024-03-14 Thread Hector Santos
> On Mar 14, 2024, at 10:09 AM, Todd Herr wrote:
> 
> 
> In the ticket, I propose the following replacement text:
> 
> ==
> Because DMARC relies on SPF [RFC7208] and DKIM [RFC6376], in order to take 
> full advantage of DMARC, a Domain Owner MUST first ensure that either SPF or 
> DKIM authentication is properly configured, and SHOULD ensure that both are.

+1

> 
> To configure SPF for DMARC, the Domain Owner MUST choose a domain to use as 
> the RFC5321.MailFrom domain (i.e., the Return-Path domain) for its mail that 
> aligns with the Author Domain, and then publish an SPF policy in DNS for that 
> domain. The SPF record MUST be constructed at a minimum to ensure an SPF pass 
> verdict for all known sources of mail for the RFC5321.MailFrom domain.

A major consideration, Todd, is that receivers will process SPF for SPF 
without DMARC (payload) considerations.  IOW, if SPF is a hard fail, we have 
SMTP processors who will not continue to transmit a payload (DATA).

DMARCbis is making a major design presumption that receivers will only use 
SPF as a data point for a final DMARC evaluation, where a potentially 
high-overhead payload was transmitted only to be rejected anyway.

> In the ticket, I propose the following new text:
> 
> ==
> To configure DKIM for DMARC, the Domain Owner MUST choose a DKIM-Signing 
> domain (i.e., the d= domain in the DKIM-Signature header) that aligns with 
> the Author Domain.
> ==

In order to maximize security, the Domain Owner is REQUIRED to choose a ….. 

Is REQUIRED the same as MUST?  I think SHOULD or MUST is fine as long as we 
specify the reason it is required.

—
HLS


Re: [dmarc-ietf] A.5 Issues with ADSP in Operation

2024-03-14 Thread Hector Santos

> On Mar 9, 2024, at 10:05 AM, Alessandro Vesely  wrote:
> 
> Hi,
> 
> as ADSP is historical, perhaps we can strike A5 entirely.  If not, we should 
> at least eliminate bullet 5:
> 
>   5.  ADSP has no support for a slow rollout, i.e., no way to configure
>   a percentage of email on which the Mail Receiver should apply the
>   policy.  This is important for large-volume senders.
> 
> (Alternatively, we could think back about pct=...?)
> 
> 

If anything, DMARCBis should assist (provide guidance) with ADSP to DMARC 
migration considerations.

There are still folks who don't believe in DMARC and continue to publish an 
ADSP record.  ADSP has two basic policies: ALL and DISCARDABLE.

ALL means always signed, by anyone.  DISCARDABLE means always signed by the 
Author Domain.

DMARCbis continues to use the term "Super ADSP" in section A5.  We may be 
beyond justifications of why DMARC replaced ADSP.  Help with migration would 
be useful.

While an ADSP DISCARDABLE policy may translate to a DMARC p=reject, an ADSP 
ALL policy may not have any DMARC equivalent, unless non-alignment were a 
defined policy (in DMARC) - I don't think there is one.
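
As a rough migration aid reflecting that mapping (my own sketch; nothing 
here comes from DMARCbis):

# Sketch: suggested DMARC starting points for existing ADSP policies.
# "dkim=all" has no exact DMARC equivalent because DMARC requires
# alignment, so it maps to the most conservative starting point.
ADSP_TO_DMARC = {
    "discardable": "v=DMARC1; p=reject",   # author-domain signing required
    "all":         "v=DMARC1; p=none",     # closest safe starting point
    "unknown":     "v=DMARC1; p=none",
}

def suggest_dmarc(adsp_policy: str) -> str:
    return ADSP_TO_DMARC.get(adsp_policy.lower(), "v=DMARC1; p=none")

print(suggest_dmarc("DISCARDABLE"))  # -> v=DMARC1; p=reject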

All the best,
Hector Santos



Re: [dmarc-ietf] Inconsistencies in DMARC Aggregate Report XML Schema

2024-03-12 Thread Hector Santos
> On Mar 11, 2024, at 10:33 PM, Neil Anuskiewicz wrote:
> 
> Wow, the stat on how many domain operators move to enforcing reject policy 
> sans aggregate reports shocked me. Trust the force, Luke.

It should not be a surprise.  The client/server protocol concept of "email 
reporting" always was, and I still believe is, considered taboo, as it can 
be a form of abuse when not negotiated, requested, or solicited.  Providing 
a reporting address is a 100% optional feature.

With DMARC policy publishing, for exploratory reasons only, I have reports 
sent to an email-to-newsgroup forum where I can review them privately, and 
so far it has not provided any benefit beyond what is expected.  I have 
advocated for a straight text report, but I would probably have to write a 
translator; the current format expects JSON/XML readers.
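
A minimal sketch of such a translator, assuming the standard RFC 7489 
aggregate-report XML schema (the file name is hypothetical):

import xml.etree.ElementTree as ET

def aggregate_to_text(path):
    root = ET.parse(path).getroot()
    lines = []
    for rec in root.iter("record"):
        row = rec.find("row")
        pol = row.find("policy_evaluated")
        lines.append("{ip} count={count} disposition={disp} "
                     "dkim={dkim} spf={spf}".format(
                         ip=row.findtext("source_ip"),
                         count=row.findtext("count"),
                         disp=pol.findtext("disposition"),
                         dkim=pol.findtext("dkim"),
                         spf=pol.findtext("spf")))
    return "\n".join(lines)

print(aggregate_to_text("report.xml"))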

Our wcDMARC verification processor does not have built-in support for 
reporting.


All the best,
Hector Santos




[jira] [Reopened] (KAFKA-16223) Replace EasyMock and PowerMock with Mockito for KafkaConfigBackingStoreTest

2024-03-09 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino reopened KAFKA-16223:
--

> Replace EasyMock and PowerMock with Mockito for KafkaConfigBackingStoreTest
> ---
>
> Key: KAFKA-16223
> URL: https://issues.apache.org/jira/browse/KAFKA-16223
> Project: Kafka
>  Issue Type: Sub-task
>  Components: connect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
> Fix For: 3.8.0
>
>






[jira] [Resolved] (KAFKA-14683) Replace EasyMock and PowerMock with Mockito in WorkerSinkTaskTest

2024-03-09 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino resolved KAFKA-14683.
--
  Reviewer: Greg Harris
Resolution: Fixed

> Replace EasyMock and PowerMock with Mockito in WorkerSinkTaskTest
> -
>
> Key: KAFKA-14683
> URL: https://issues.apache.org/jira/browse/KAFKA-14683
> Project: Kafka
>  Issue Type: Sub-task
>  Components: connect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
> Fix For: 3.8.0
>
>






[jira] [Resolved] (KAFKA-16223) Replace EasyMock and PowerMock with Mockito for KafkaConfigBackingStoreTest

2024-03-09 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino resolved KAFKA-16223.
--
Fix Version/s: 3.8.0
 Reviewer: Greg Harris
   Resolution: Fixed

> Replace EasyMock and PowerMock with Mockito for KafkaConfigBackingStoreTest
> ---
>
> Key: KAFKA-16223
> URL: https://issues.apache.org/jira/browse/KAFKA-16223
> Project: Kafka
>  Issue Type: Sub-task
>  Components: connect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
> Fix For: 3.8.0
>
>






Re: Insert with Jsonb column hangs

2024-03-09 Thread hector vass
On Sat, Mar 9, 2024 at 4:10 PM Adrian Klaver wrote:

> On 3/9/24 08:00, kuldeep singh wrote:
> > Copy may not work in our scenario since we need to join data from
> > multiple tables & then  convert it to json using  row_to_json . This
> > json data eventually  needs to be stored in a target table .
>
> Per:
>
> https://www.postgresql.org/docs/current/sql-copy.html
>
> "
> COPY { table_name [ ( column_name [, ...] ) ] | ( query ) }
>
> <...>
>
> query
>
>  A SELECT, VALUES, INSERT, UPDATE, or DELETE command whose results
> are to be copied. Note that parentheses are required around the query.
>
>  For INSERT, UPDATE and DELETE queries a RETURNING clause must be
> provided, and the target relation must not have a conditional rule, nor
> an ALSO rule, nor an INSTEAD rule that expands to multiple statements.
> "
>
> >
> > Will it be better if we break the process into batches of like 10,000
> > rows & insert the data in its individual transactions? Or any other
> > better solution available ?
> >
> > On Sat, Mar 9, 2024 at 9:01 PM hector vass wrote:
> >
> >
> >
> > On Sat, Mar 9, 2024 at 3:02 PM kuldeep singh wrote:
> >
> > Hi,
> >
> > We are inserting data close to 1M record & having a single Jsonb
> > column but query is getting stuck.
> >
> > We are using insert into select * .. , so all the operations are
> > within the DB.
> >
> > If we are running select query individually then it is returning
> > the data in 40 sec for all rows but with insert it is getting
> stuck.
> >
> > PG Version - 15.
> >
> > What could be the problem here ?
> >
> > Regards,
> > KD
> >
> >
> > insert 1M rows especially JSON that can be large, variable in size
> > and stored as blobs and indexed is not perhaps the correct way to do
> > this
> > insert performance will also depend on your tuning.  Supporting
> > transactions, users or bulk processing are 3x sides of a compromise.
> > you should perhaps consider that insert is for inserting a few rows
> > into live tables ... you might be better using copy or \copy,
> > pg_dump if you are just trying to replicate a large table
> >
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com


What Adrian Klaver said ^
I discovered even this works (it streams the rows through psql into the 
target table without an intermediate file):

create view myview as select row_to_json(t) from mytable t;

create table newtable as select * from myview where 1=0;

copy myview to program 'psql mydb postgres -c ''copy newtable from stdin'' ';


Re: Insert with Jsonb column hangs

2024-03-09 Thread hector vass
copy syntax can include any valid select statement

  COPY (any valid select statement joining tables and converting rows with
row_to_json) TO 'some_dump_file';

or it can copy a view

  CREATE VIEW myview AS (any valid select statement joining tables and
converting rows with row_to_json);
  COPY myview TO 'some_dump_file';


Regards
Hector Vass
07773 352559


On Sat, Mar 9, 2024 at 4:01 PM kuldeep singh wrote:

> Copy may not work in our scenario since we need to join data from multiple
> tables & then  convert it to json using  row_to_json . This json data
> eventually  needs to be stored in a target table .
>
> Will it be better if we break the process into batches of like 10,000 rows
> & insert the data in its individual transactions? Or any other better
> solution available ?
>
> On Sat, Mar 9, 2024 at 9:01 PM hector vass  wrote:
>
>>
>>
>> On Sat, Mar 9, 2024 at 3:02 PM kuldeep singh wrote:
>>
>>> Hi,
>>>
>>> We are inserting data close to 1M record & having a single Jsonb column
>>> but query is getting stuck.
>>>
>>> We are using insert into select * .. , so all the operations are within
>>> the DB.
>>>
>>> If we are running select query individually then it is returning the
>>> data in 40 sec for all rows but with insert it is getting stuck.
>>>
>>> PG Version - 15.
>>>
>>> What could be the problem here ?
>>>
>>> Regards,
>>> KD
>>>
>>
>> insert 1M rows especially JSON that can be large, variable in size and
>> stored as blobs and indexed is not perhaps the correct way to do this
>> insert performance will also depend on your tuning.  Supporting
>> transactions, users or bulk processing are 3x sides of a compromise.
>> you should perhaps consider that insert is for inserting a few rows into
>> live tables ... you might be better using copy or \copy, pg_dump if you are
>> just trying to replicate a large table
>>
>>


Re: Insert with Jsonb column hangs

2024-03-09 Thread hector vass
On Sat, Mar 9, 2024 at 3:02 PM kuldeep singh wrote:

> Hi,
>
> We are inserting data close to 1M record & having a single Jsonb column
> but query is getting stuck.
>
> We are using insert into select * .. , so all the operations are within
> the DB.
>
> If we are running select query individually then it is returning the data
> in 40 sec for all rows but with insert it is getting stuck.
>
> PG Version - 15.
>
> What could be the problem here ?
>
> Regards,
> KD
>

Inserting 1M rows, especially JSON that can be large, variable in size, 
stored as blobs, and indexed, is perhaps not the correct way to do this.
Insert performance will also depend on your tuning.  Supporting 
transactions, users, or bulk processing are three sides of a compromise.
You should perhaps consider that INSERT is for inserting a few rows into 
live tables ... you might be better off using COPY or \copy, or pg_dump if 
you are just trying to replicate a large table.
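
If batching is still preferred over COPY, a keyset-batched INSERT ... SELECT 
keeps each transaction small; here is a sketch with psycopg2 (the DSN, 
table, and column names are all hypothetical stand-ins for the real join):

import psycopg2

conn = psycopg2.connect("dbname=mydb")        # hypothetical DSN
last_id, batch = 0, 10_000
while True:
    with conn, conn.cursor() as cur:          # one transaction per batch
        cur.execute("""
            WITH moved AS (
                SELECT s.id, row_to_json(s) AS doc
                FROM   source_table s  -- stand-in for the multi-table join
                WHERE  s.id > %s
                ORDER  BY s.id
                LIMIT  %s
            )
            INSERT INTO target_table (id, doc)
            SELECT id, doc FROM moved
            RETURNING id
        """, (last_id, batch))
        rows = cur.fetchall()
    if not rows:
        break
    last_id = max(r[0] for r in rows)
conn.close()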


Fwd: Getting error while upgrading

2024-03-09 Thread hector vass
On Sat, Mar 9, 2024 at 12:18 PM omkar narkar wrote:

> Hello Team,
>
> I am trying to upgrade my edb 10.5 community version to postgres 15.6
> version and while doing this i am getting error regarding OIDS are not
> stable across Postgresql version (sys.callback_queue_table.user_data).
> Kindly help me to get the solution of this issue.
>
> Thanks and regards,
> Omkar Narkar
>

You usually get this error if there are composite data types, or data types 
that cannot be translated between 10.5 and 15.6.
The clue may be in the error message just before it says 'OIDS are not
stable across Postgresql version'.
You state edb 10.5 community; I'm guessing you are using pg_upgrade and 
going from Windows to Linux?  I am impressed if you can do that; would you 
not end up with collation issues?
If you are using pg_upgrade, what does pg_upgrade --check say?
I would dump the schema to a sql file:
pg_dump -s > dumped.sql
Then run the sql one command at a time to track down where you are going to
have a problem.


Re: creating a subset DB efficiently ?

2024-03-09 Thread hector vass
On Fri, Mar 8, 2024 at 4:22 PM David Gauthier  wrote:

> Here's the situation
>
> - The DB contains data for several projects.
> - The tables of the DB contain data for all projects (data is not
> partitioned on project name or anything like that)
> - The "project" identifier (table column) exists in a few "parent" tables
> with many child... grandchild,... tables under them connected with foreign
> keys defined with "on delete cascade".  So if a record in one of the parent
> table records is deleted, all of its underlying, dependent records get
> deleted too.
> - New projects come in, and old ones need to be removed and "archived" in
> DBs of their own.  So there's a DB called "active_projects" and there's a
> DB called "project_a_archive" (identical metadata).
> - The idea is to copy the data for project "a" that's in "active_projects"
> to the "project_a_archive" DB AND delete the project a data out of
> "active_projects".
> - Leave "project_a_archive" up and running if someone needs to attach to
> that and get some old/archived data.
>
> The brute-force method I've been using is...
> 1)  pg_dump "active_projects" to a (huge) file then populate
> "project_a_archive" using that (I don't have the privs to create database,
> IT creates an empty one for me, so this is how I do it).
> 2) go into the "project_a_archive" DB and run... "delete from par_tbl_1
> where project <> 'a' ", "delete from par_tbl_2 where project <> 'a' ",
> etc... leaving only project "a" data in the DB.
> 3) go into the "active_projects" DB and "delete from par_tbl_1 where
> project = 'a' ", etc... removing project "a" from the "active_projects DB.
>
> Ya, not very elegant, it takes a long time and it takes a lot of
> resources.  So I'm looking for ideas on how to do this better.
>
> Related question...
> The "delete from par_tbl_a where project <> 'a' " is taking forever.  I
> fear it's because it's trying to journal everything in case I want to
> rollback.  But this is just in the archive DB and I don't mind taking the
> risk if I can speed this up outside of a transaction.  How can I run a
> delete command like this without the rollback recovery overhead ?
>


> (I don't have the privs to create database, IT creates an empty one for
> me, so this is how I do it).

That's a shame.  You can do something similar with tablespaces:
  Template your existing schema to create a new schema for the project
(pg_dump -s).
  Create a tablespace for this new project and schema.

You can then move the physical tablespace to cheaper disk and use symbolic
links, or archive and/or back it up at the schema level with pg_dump -n.

...as long as you don't put anything in the public schema, all you are
really sharing is roles; otherwise it is a bit like a separate database.

[jira] [Updated] (KAFKA-16358) Update Connect Transformation documentation

2024-03-08 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino updated KAFKA-16358:
-
Description: 
When reading the [Kafka Connect 
docs|https://kafka.apache.org/documentation/#connect_included_transformation] 
for transformations, there are a few gaps that should be covered:
 * The Flatten, Cast and TimestampConverter transformations are not listed
 * HeadersFrom should be HeaderFrom
 * -InsertHeader is not documented-

  was:
When reading the [Kafka Connect 
docs|https://kafka.apache.org/documentation/#connect_included_transformation] 
for transformations, there are a few gaps that should be covered:
 * The Flatten, Cast and TimestampConverter transformations are not listed
 * HeadersFrom should be HeaderFrom
 * InsertHeader is not documented

Should be relatively easy to fix


> Update Connect Transformation documentation
> ---
>
> Key: KAFKA-16358
> URL: https://issues.apache.org/jira/browse/KAFKA-16358
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
>
> When reading the [Kafka Connect 
> docs|https://kafka.apache.org/documentation/#connect_included_transformation] 
> for transformations, there are a few gaps that should be covered:
>  * The Flatten, Cast and TimestampConverter transformations are not listed
>  * HeadersFrom should be HeaderFrom
>  * -InsertHeader is not documented-





[jira] [Assigned] (KAFKA-16358) Update Connect Transformation documentation

2024-03-08 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino reassigned KAFKA-16358:


Assignee: Hector Geraldino

> Update Connect Transformation documentation
> ---
>
> Key: KAFKA-16358
> URL: https://issues.apache.org/jira/browse/KAFKA-16358
> Project: Kafka
>  Issue Type: Bug
>  Components: connect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
>
> When reading the [Kafka Connect 
> docs|https://kafka.apache.org/documentation/#connect_included_transformation] 
> for transformations, there are a few gaps that should be covered:
>  * The Flatten, Cast and TimestampConverter transformations are not listed
>  * HeadersFrom should be HeaderFrom
>  * InsertHeader is not documented
> Should be relatively easy to fix





[jira] [Created] (KAFKA-16358) Update Connect Transformation documentation

2024-03-08 Thread Hector Geraldino (Jira)
Hector Geraldino created KAFKA-16358:


 Summary: Update Connect Transformation documentation
 Key: KAFKA-16358
 URL: https://issues.apache.org/jira/browse/KAFKA-16358
 Project: Kafka
  Issue Type: Bug
  Components: connect
Reporter: Hector Geraldino


When reading the [Kafka Connect 
docs|https://kafka.apache.org/documentation/#connect_included_transformation] 
for transformations, there are a few gaps that should be covered:
 * The Flatten, Cast and TimestampConverter transformations are not listed
 * HeadersFrom should be HeaderFrom
 * InsertHeader is not documented

Should be relatively easy to fix





Re: [dmarc-ietf] Another point for SPF advice

2024-03-08 Thread Hector Santos
I believe it is correct; we SHOULD strive to pass only trusted, known 
sources.  The final mechanism SHOULD be one of (hard) failure.  This is what 
we (ideally) strive for.  I believe anything weaker is a waste of 
computational resources and causes confusion, using neutral or even soft 
fails, especially with repeated transactions.
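
For example (illustrative network only), the kind of record I mean lists the 
known sources and ends with a hard fail:

  v=spf1 a mx ip4:192.0.2.0/24 -all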

All the best,
Hector Santos



> On Mar 5, 2024, at 9:29 AM, Alessandro Vesely  wrote:
> 
> Hi,
> 
> in section 5.5.1, Publish an SPF Policy for an Aligned Domain, the last 
> sentence says:
> 
>   The SPF record SHOULD be constructed
>   at a minimum to ensure an SPF pass verdict for all known sources of
>   mail for the RFC5321.MailFrom domain.
> 
> As we learnt, an SPF pass verdict has to be granted to /trusted/ sources 
> only.  An additional phrase about using the neutral qualifier ("?") for 
> public sources might also be added.
> 
> 
> Best
> Ale
> --
> 
> ___
> dmarc mailing list
> dmarc@ietf.org
> https://www.ietf.org/mailman/listinfo/dmarc



Re: [dmarc-ietf] The sad state of SPF: research just presented at NDSS

2024-03-04 Thread Hector Santos
> On Feb 28, 2024, at 6:33 PM, Barry Leiba  wrote:
> 
> A paper was presented this morning at NDSS about the state of SPF, which is 
> worth a read by this group:
> 
> https://www.ndss-symposium.org/ndss-paper/breakspf-how-shared-infrastructures-magnify-spf-vulnerabilities-across-the-internet/
> 


Barry, Interesting.  Appreciate the security note.

Per the document, 2.39% of domains are the problem, with CDN, HTTP proxy, 
and SMTP threat entry points.  Not an SPF issue.  If anything, add more SMTP 
command override support for immediate disconnect on GET, POST, and other 
erroneous SMTP commands:

// Script: Smtpfilter-GET.wcc
// add code to block GetCallerID()
Print "550 "
HangUp()
End

// Script: Smtpfilter-POST.wcc
// add code to block GetCallerID()
Print "550 "
HangUp()
End


All the best,
Hector Santos



Re: [dmarc-ietf] DMARCbis WGLC Significant(ish) Issue - Section 7.6

2024-03-04 Thread Hector Santos
No rehashing; in my technical opinion it is clearly about the semantics, but 
both lead to:

"You SHOULD|MUST consider the documented conflicts before using the 
restrictive policy p=reject"

Question: Is p=quarantine OK to use?  Or do we presume p=reject implies 
p=quarantine?
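
For reference, the two restrictive dispositions as they would be published 
(illustrative domain and report address):

  _dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:reports@example.com"
  _dmarc.example.com. IN TXT "v=DMARC1; p=reject; rua=mailto:reports@example.com"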



All the best,
Hector Santos



> On Feb 29, 2024, at 2:53 PM, Seth Blank wrote:
> 
> I thought we landed on SHOULD NOT, there was strong resistance to MUST NOT
> 
>> On Thu, Feb 29, 2024 at 2:48 PM Scott Kitterman wrote:
>> Okay.  I think 8.6 is the one in error.  You see how this is going to go, 
>> right?
>> 
>> Scott K
>> 
>> On February 29, 2024 7:45:15 PM UTC, Todd Herr wrote:
>> >It is not my intent here to relitigate any issues.
>> >
>> >Rather, I believe that the text in 7.6 is wrong, likely due to an oversight
>> >on my part when the new text in 8.6 was published, and I just want to
>> >confirm that 7.6 is indeed wrong.
>> >
>> >On Thu, Feb 29, 2024 at 2:10 PM Scott Kitterman wrote:
>> >
>> >> In what way is this a new issue that has not already been argued to death
>> >> in the WG?  I think for WGLC, we've already done this. We will, no doubt
>> >> get to have this conversation during the IETF last call, but for the
>> >> working group, this strikes me as exactly the type of relitigation of
>> >> issues we've been counseled to avoid.
>> >>
>> >> Scott K
>> >>
>> >> On February 29, 2024 6:54:57 PM UTC, Todd Herr wrote:
>> >> >Colleagues,
>> >> >
>> >> >I've been reading DMARCbic rev -30 today with a plan to collect the first
>> >> >set of minor edits and I came across a sentence that I believe goes 
>> >> >beyond
>> >> >minor, so wanted to get a sanity check.
>> >> >
>> >> >Section 7.6, Domain Owner Actions, ends with the following sentence:
>> >> >
>> >> >In particular, this document makes explicit that domains for
>> >> >general-purpose email MUST NOT deploy a DMARC policy of p=reject.
>> >> >
>> >> >
>> >> >I don't believe this to be true, however. Rather, Section 8.6,
>> >> >Interoperability Considerations, says SHOULD NOT on the topic (e.g., "It
>> >> is
>> >> >therefore critical that domains that host users who might post messages 
>> >> >to
>> >> >mailing lists SHOULD NOT publish p=reject")
>> >> >
>> >> >Section 7.6 therefore should be updated to read "domains for
>> >> >general-purpose email SHOULD NOT deploy a DMARC policy of p=reject", yes?
>> >> >
>> >>
>> >> ___
>> >> dmarc mailing list
>> >> dmarc@ietf.org
>> >> https://www.ietf.org/mailman/listinfo/dmarc
>> >>
>> >
>> >
>> 
>> ___
>> dmarc mailing list
>> dmarc@ietf.org
>> https://www.ietf.org/mailman/listinfo/dmarc
> 
> 
> --
> Seth Blank  | Chief Technology Officer
> e: s...@valimail.com
> p:
> 
> This email and all data transmitted with it contains confidential and/or 
> proprietary information intended solely for the use of individual(s) 
> authorized to receive it. If you are not an intended and authorized recipient 
> you are hereby notified of any use, disclosure, copying or distribution of 
> the information included in this transmission is prohibited and may be 
> unlawful. Please immediately notify the sender by replying to this email and 
> then delete it from your system.
> ___
> dmarc mailing list
> dmarc@ietf.org
> https://www.ietf.org/mailman/listinfo/dmarc



[sumo-user] Trucks not stopping at the ContainerStop using traci

2024-03-04 Thread Hector A Martinez via sumo-user
Dear sumo community,

I am having trouble getting my trucks to stop at the containerStop when I 
use traci.vehicle.insertStop.

Here is my route creation:


Here is how I create the truck to use the route:

def create_sumo_truck(self, truck):
    """
    Inject a new truck vehicle with a valid route ID into the SUMO model.
    """
    self.log(f"creating traci truck {truck.name}")
    traci.vehicle.add(truck.name,
                      truck.journey[0],  # routeID must exist in the rou.xml file
                      'truck',           # vType (truck_spec['vtype'])
                      personCapacity=1)  # single-unit truck
    traci.vehicle.setSpeed(truck.name, 60)  # m/s, fast for testing

    traci.vehicle.insertStop(truck.name, 0, 'D-Truck-Stop-NW',
                             duration=500,
                             flags=traci.constants.STOP_CONTAINER_STOP)

My truck slows down when it gets to the stop but keeps going, and it does 
the 500-second stop at the end of the edge rather than at the containerStop 
as I would like it to.

Any advice would be greatly appreciated.  Thanks,

Hector A. Martinez, P.E.
Transportation Researcher, Resilient Transportation and Logistics LTM
MITRE | National Security Engineering Center
<https://www.mitre.org/centers/national-security-and-engineering-center/who-we-are>
813.207.5365



[Bug 2055694] Re: dh_missing complains about not installed file (qat.service)

2024-03-01 Thread Hector CAO
debdiff

** Attachment added: "qatlib_24.02.0-0ubuntu1_24.02.0-0ubuntu1.1+ppa1.diff.gz"
   
https://bugs.launchpad.net/ubuntu/+source/qatlib/+bug/2055694/+attachment/5751181/+files/qatlib_24.02.0-0ubuntu1_24.02.0-0ubuntu1.1+ppa1.diff.gz

** Tags added: pe-sponsoring-request

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2055694

Title:
  dh_missing complains about not installed file (qat.service)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qatlib/+bug/2055694/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 2055694] [NEW] dh_missing complains about not installed file (qat.service)

2024-03-01 Thread Hector CAO
Public bug reported:

qatlib version : 24.02.0-0ubuntu1

Reproduction
---

On noble (24.04):

$ apt source qatlib
$ cd qatlib-24.02.0
$ sudo apt build-dep ./
$ debuild  -us -uc -b

dh_missing: warning: usr/lib/systemd/system/qat.service exists in debian/tmp 
but is not installed to anywhere (related file: 
"quickassist/utilities/service/qat.service")
dh_missing: error: missing files, aborting

** Affects: qatlib (Ubuntu)
 Importance: Undecided
 Assignee: Hector CAO (hectorcao)
 Status: In Progress

** Changed in: qatlib (Ubuntu)
 Assignee: (unassigned) => Hector CAO (hectorcao)

** Changed in: qatlib (Ubuntu)
   Status: New => In Progress


[Bug 2055694] Re: dh_missing complains about not installed file (qat.service)

2024-03-01 Thread Hector CAO
The fix has been uploaded to :
https://launchpad.net/~hectorcao/+archive/ubuntu/lp-2055694/


[Bug 2055694] Re: dh_missing complains about not installed file (qat.service)

2024-03-01 Thread Hector CAO
If qatlib is built on a host that has systemd active (not in schroot, ...), 
qatlib will install the qat.service file in /usr/lib/systemd/system.  We do 
not use (install) this file for the package, which is why dh_missing 
complains.  The solution I propose is to just list this file in 
not-installed, as sketched below.

[Bug 2048917] Re: [needs-packaging] Intel Integrated Performance Primitives - ipp-crypto

2024-03-01 Thread Hector CAO
** Changed in: ubuntu
   Status: Fix Committed => Confirmed

** Changed in: ubuntu
   Status: Confirmed => Fix Released


[Bug 2048938] Re: [needs-packaging] Intel quickassist (QAT) : openssl QAT engine

2024-03-01 Thread Hector CAO
** Changed in: ubuntu
   Status: Fix Committed => Fix Released


[jira] [Resolved] (KAFKA-16311) [Debezium Informix Connector Unable to Commit Processed Log Position]

2024-02-28 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino resolved KAFKA-16311.
--
Resolution: Invalid

Please try reporting this to 
[Debezium|https://github.com/debezium/debezium-connector-informix]

https://issues.redhat.com/browse/DBZ

> [Debezium Informix Connector Unable to Commit Processed Log Position]
> -
>
> Key: KAFKA-16311
> URL: https://issues.apache.org/jira/browse/KAFKA-16311
> Project: Kafka
>  Issue Type: Bug
>Reporter: Maaheen Yasin
>Priority: Blocker
> Attachments: connect logs.out
>
>
> I am using Debezium Informix Source connector and JDBC Sink connector and the 
> below versions of Informix database and KAFKA Connect.
> Informix Dynamic Server: 14.10.FC10W1X2
> Informix JDBC Driver for Informix Dynamic Server: 4.50.JC10
> KAFKA Version: 7.4.1-ce
>  
> *Expected Behavior:*
> All tasks of the Informix source connector are running, and all messages are 
> being published in the topic. During the DDL Execution, the informix database 
> is put under single user mode and the DDL on the table on which CDC was 
> previously enabled was executed. After the database exits from the single 
> user mode, then the connector should be able to reconnect with the source 
> database and be able to publish messages in the topic for each new event. 
> *Actual Behavior:*
> All tasks of the Informix source connector are running, and all messages are 
> being published in the topic. During the DDL Execution, the database is put 
> under single user mode and the DDL on the table on which CDC was previously 
> enabled was executed. After the database exits from the single user mode, the 
> source connector is able to reconnect with the database, however, no messages 
> are being published in the topic and the below error is being printed in the 
> KAFKA Connect Logs. 
>  
> *[2024-02-22 15:54:34,913] WARN [kafka_devs|task-0|offsets] Couldn't commit 
> processed log positions with the source database due to a concurrent 
> connector shutdown or restart 
> (io.debezium.connector.common.BaseSourceTask:349)*
>  
> The complete KAFKA Connect logs has been attached. Kindly comment on why this 
> issue is occurring and what steps should be followed to avoid this issue or 
> to resolve this issue. 
>  
> Thanks. 





[Bug 2048950] Re: [needs-packaging] Intel quickassist (QAT) : zip compression library and application utility

2024-02-27 Thread Hector CAO
Hi @lucasz, thanks for the review and the great feedback; the package has 
been uploaded with the requested fixes, please take a look again.


[jira] [Commented] (KAFKA-16223) Replace EasyMock and PowerMock with Mockito for KafkaConfigBackingStoreTest

2024-02-20 Thread Hector Geraldino (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818856#comment-17818856
 ] 

Hector Geraldino commented on KAFKA-16223:
--

One thing that worked for me when migrating the WorkerSinkTaskTest was 
creating a separate *WorkerSinkTaskMockTest* test class and splitting the 
migration into smaller batches.

Is that something you'd consider?

> Replace EasyMock and PowerMock with Mockito for KafkaConfigBackingStoreTest
> ---
>
> Key: KAFKA-16223
> URL: https://issues.apache.org/jira/browse/KAFKA-16223
> Project: Kafka
>  Issue Type: Sub-task
>  Components: connect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
>






[jira] [Commented] (KAFKA-16223) Replace EasyMock and PowerMock with Mockito for KafkaConfigBackingStoreTest

2024-02-15 Thread Hector Geraldino (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17817778#comment-17817778
 ] 

Hector Geraldino commented on KAFKA-16223:
--

Hey [~cmukka20], I haven't started on this one just yet, as I'm still working 
on getting https://issues.apache.org/jira/browse/KAFKA-14683 to the finish line.

I'm OK with you working on this, but we can also divide and conquer to see 
if we can land this before/as part of the 3.8.0 release. wdyt?

> Replace EasyMock and PowerMock with Mockito for KafkaConfigBackingStoreTest
> ---
>
> Key: KAFKA-16223
> URL: https://issues.apache.org/jira/browse/KAFKA-16223
> Project: Kafka
>  Issue Type: Sub-task
>  Components: connect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
>






Re: [dmarc-ietf] dmarc-dmarcbis: add "req=dkim"

2024-02-10 Thread Hector Santos
+1

With RFC 5617 there was the dkim=all policy - anyone can sign.  It offered 
no authorization protection.

dkim=discardable offers first-party signing protection - just like DMARC 
offers.

Both failed at validating the third-party signer.


All the best,
Hector Santos



> On Feb 8, 2024, at 11:26 AM, Jim Fenton  wrote:
> 
> On 6 Feb 2024, at 14:47, Murray S. Kucherawy wrote:
> 
>> On Tue, Feb 6, 2024 at 2:33 AM Jeroen Massar wrote:
>> 
>>> `req=dkim`: requires DKIM, messages not properly signed are then to be
>>> rejected/quarantined based on 'p' policy.
>>> 
>> 
>> This sounds like what RFC 5617 tried to do, minus the constraint that the
>> signing domain be equal to the author domain, which is one of the key
>> pieces of DMARC.  Isn't this a pretty big scope expansion?
> 
> For the record, RFC 5617 did constrain the signing domain to be the author 
> domain. From Sec. 2.7:
> 
>> An "Author Domain Signature" is a Valid Signature in which the domain name 
>> of the DKIM signing entity, i.e., the d= tag in the DKIM-Signature header 
>> field, is the same as the domain name in the Author Address.
> 
> -Jim
> 
> ___
> dmarc mailing list
> dmarc@ietf.org
> https://www.ietf.org/mailman/listinfo/dmarc



[sumo-user] troubleshoot warnings from sumo sim

2024-02-07 Thread Hector A Martinez via sumo-user
Good evening sumo community,

I am getting a lot of warnings, and I want to determine whether they are 
significant enough to fix, or whether there is a way to silence them 
altogether if they are not.  I am looking to run scenarios that cover more 
than one day of rail and trucking traffic for logistics studies.  I need a 
way to automatically batch-fix these, or some other efficient remedy.

Here are examples (I get a lot of these):
Warning: Unequal lengths of bidi lane ':1037018510_3_0' and lane 
':1037018510_0_0' (21.89 != 21.86).
Warning: Unequal lengths of bidi lane ':cluster_1036691204_1036691275_0_0' and 
lane ':cluster_1036691204_1036691275_4_0' (13.31 != 13.28).
Warning: At actuated tlLogic '110353342', linkIndex 2 has no controlling 
detector.
Warning: At actuated tlLogic 'cluster_110338816_8144618003_8389622682', 
linkIndex 5 has no controlling detector.
Warning: At actuated tlLogic 'joinedS_110522579_cluster_110387067_5907230855', 
linkIndex 21 has no controlling detector.
Warning: Vehicle 'truck_A' performs emergency stop at the end of lane 
'623957716_1' because of a red traffic light (decel=-16.76, offset=14.98), 
time=869.00.
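
So far the closest thing I have found is suppressing them at launch; a 
minimal sketch with traci (the scenario file name is hypothetical, and I 
have only spot-checked these options against the sumo documentation):

import traci

traci.start([
    "sumo", "-c", "scenario.sumocfg",
    "--no-warnings", "true",            # silence all warnings
    # "--message-log", "messages.log",  # alternative: keep but redirect
])
while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
traci.close()

But I would rather fix the root causes if they actually matter.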

Thanks in advance,

Hector A. Martinez, P.E.
Transportation Researcher, Resilient Transportation and Logistics LTM
MITRE | National Security Engineering Center
<https://www.mitre.org/centers/national-security-and-engineering-center/who-we-are>
813.207.5365



Re: [Ietf-dkim] Headers that should not be automatically oversigned in a DKIM signature?

2024-02-06 Thread Hector Santos
Whoa!  I wondered about that!!  The lock icon was gone, and in its place is 
a nondescript "options" icon rather than a clear indication.

You know, a good bit of the time it is a programmer getting excited about a 
new UI API and switching to it!!  Imo, there was no need for that change.  
Everything in the options was 100% security- and privacy-related.  So what 
if laymen didn't know what it was; the default is set for unsecured 
communications.  Focus on that, on the secured defaults.  The 10-20% of 
experts who did know this intuitively from the lock icon did not have an 
issue.  Sometimes the first inclination is the best.  Go with your guts.  
It always works in the long term.


All the best,
Hector Santos



> On Feb 5, 2024, at 8:50 PM, Dave Crocker wrote:
> On 2/5/2024 2:08 PM, Jim Fenton wrote:
>> On 5 Feb 2024, at 14:02, Dave Crocker wrote:
>>> On 2/5/2024 1:56 PM, Jim Fenton wrote:
>>>> And you will also provide citations to refereed research about what you 
>>>> just asserted as well, yes?
>>> Ahh, you want me to prove the negative. That's not exactly how these things 
>>> go.
>> You said that the URL lock symbol failed. Asking for research to back that 
>> up is not asking for you to prove the negative. 
> 
> Ahh.  Defending by attacking.  Nice.
> 
> But actually, given what I said, yes it is being asked to prove the negative. 
>  
> 
> I said it's been a failure. Failure means that after many years, it has not 
> been a success.  Were the symbol successful, we'd see reductions in user 
> understanding, awareness and resistance abuse.  
> 
> Do we have serious data that it has been?  If so, where is it?  Do we even 
> have an anecdotal sense of widespread utility?  I think not.
> 
> But wait.  There's more...
> 
> All of the following are strong indicators of failure:
> 
> "In our study, we asked a cross-section 
> <https://techxplore.com/tags/cross+section/> of 528 web users 
> <https://techxplore.com/tags/web+users/>, aged between 18 and 86 years of 
> age, a number of questions about the internet. Some 53% of them held a 
> bachelor's degree or above and 22% had a college certificate, while the 
> remainder had no further education.
> 
> One of our questions was, "On the Google Chrome browser bar, do you know what 
> the padlock icon represents/means?"
> 
> Of the 463 who responded, 63% stated they knew, or thought they knew, what 
> the padlock symbol on their web browser meant, but only 7% gave the correct 
> meaning."
> 
> https://techxplore.com/news/2023-11-idea-padlock-icon-internet-browser.html
> 
> https://www.nextgov.com/cybersecurity/2019/06/fbi-warning-lock-icon-doesnt-mean-website-safe/157629/
> 
> 'In an alert published Monday <https://www.ic3.gov/media/2019/190610.aspx>, 
> the bureau’s Internet Crime Complaint Center, or IC3, warned that scammers 
> are using the public’s trust in website certificates as part of phishing 
> campaigns.
> 
> “The presence of ‘https’ and the lock icon are supposed to indicate the web 
> traffic is encrypted and that visitors can share data safely,” the bureau 
> wrote in the alert. “Unfortunately, cyber criminals are banking on the 
> public’s trust of ‘https’ and the lock icon.” '
> 
> https://theconversation.com/the-vast-majority-of-us-have-no-idea-what-the-padlock-icon-on-our-internet-browser-is-and-its-putting-us-at-risk-216581
> 
> https://www.sciencealert.com/theres-a-tiny-icon-on-your-screen-but-almost-nobody-knows-why
> 
> https://www.theverge.com/2023/5/3/23709498/google-chrome-lock-icon-web-browser-https-security-update-redesign
> 
> https://www.howtogeek.com/890033/google-chrome-is-ditching-the-lock-icon-for-websites/
> 
> 
> 
> d/
> 
> 
> -- 
> Dave Crocker
> Brandenburg InternetWorking
> bbiw.net
> mast:@dcrocker@mastodon.social 
> <mailto:mast:@dcrocker@mastodon.social>___
> Ietf-dkim mailing list
> Ietf-dkim@ietf.org
> https://www.ietf.org/mailman/listinfo/ietf-dkim

___
Ietf-dkim mailing list
Ietf-dkim@ietf.org
https://www.ietf.org/mailman/listinfo/ietf-dkim


Re: [Ietf-dkim] Security indicators, not Headers that should not be automatically oversigned

2024-02-06 Thread Hector Santos

The "Report as Spam” button is always there.  They have normalized the practice 
for users to expect legitimate mail in spam boxes, thus causing more eyeballs 
around the junk. That is all spammers want.


https://www.wsj.com/articles/google-and-yahoo-are-cracking-down-on-inbox-spam-dont-expect-less-email-marketing-dd124c19
Google and Yahoo Are Cracking Down on Inbox Spam. Don’t Expect Less Email 
Marketing.
wsj.com


All the best,
Hector Santos



> On Feb 6, 2024, at 1:43 PM, John Levine  wrote:
> 
> It appears that Jim Fenton   said:
>> On 5 Feb 2024, at 14:02, Dave Crocker wrote:
>> 
>>> On 2/5/2024 1:56 PM, Jim Fenton wrote:
>>>> And you will also provide citations to refereed research about what you 
>>>> just asserted as well, yes?
>>> 
>>> 
>>> Ahh, you want me to prove the negative. That's not exactly how these things 
>>> go.
>> 
>> You said that the URL lock symbol failed. Asking for research to back that 
>> up is not asking for you to
>> prove the negative. I suspect there is research out there that backs up that 
>> statement, and I’m just
>> asking for the same amount of rigor that you are asking for.
> 
> In this case, Dave's right.  Here's a conference paper from Google saying 
> that only 11% of users
> understood what the lock meant.
> 
> https://research.google/pubs/it-builds-trust-with-the-customers-exploring-user-perceptions-of-the-padlock-icon-in-browser-ui/
> 
> The annual Usenix SOUPS conferences are full of papers about failed security 
> UI.  Here's this year's.
> Don't miss the one saying that Gmail's message origin indicator doesn't work, 
> 
> https://www.usenix.org/conference/soups2023/technical-sessions
> 
> R's,
> John
> 
> ___
> Ietf-dkim mailing list
> Ietf-dkim@ietf.org
> https://www.ietf.org/mailman/listinfo/ietf-dkim

___
Ietf-dkim mailing list
Ietf-dkim@ietf.org
https://www.ietf.org/mailman/listinfo/ietf-dkim


[jira] [Commented] (KAFKA-14132) Remaining PowerMock to Mockito tests

2024-02-05 Thread Hector Geraldino (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814575#comment-17814575
 ] 

Hector Geraldino commented on KAFKA-14132:
--

[~christo_lolov] I opened a separate Jira ticket to track the migration of 
KafkaConfigBackingStoreTest. Maybe this ticket can be marked as done?

> Remaining PowerMock to Mockito tests
> 
>
> Key: KAFKA-14132
> URL: https://issues.apache.org/jira/browse/KAFKA-14132
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Christo Lolov
>Priority: Major
> Fix For: 3.8.0
>
>
> {color:#de350b}Some of the tests below use EasyMock as well. For those 
> migrate both PowerMock and EasyMock to Mockito.{color}
> Unless stated in brackets the tests are in the connect module.
> A list of tests which still require to be moved from PowerMock to Mockito as 
> of 2nd of August 2022 which do not have a Jira issue and do not have pull 
> requests I am aware of which are opened:
> {color:#ff8b00}InReview{color}
> {color:#00875a}Merged{color}
>  # {color:#00875a}ErrorHandlingTaskTest{color} (owner: [~shekharrajak])
>  # {color:#00875a}SourceTaskOffsetCommiterTest{color} (owner: Christo)
>  # {color:#00875a}WorkerMetricsGroupTest{color} (owner: Divij)
>  # {color:#00875a}WorkerTaskTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}ErrorReporterTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}RetryWithToleranceOperatorTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}WorkerErrantRecordReporterTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}ConnectorsResourceTest{color} (owner: [~mdedetrich-aiven])
>  # {color:#00875a}StandaloneHerderTest{color} (owner: [~mdedetrich-aiven])
>  # -KafkaConfigBackingStoreTest-  KAFKA-16223
>  # {color:#00875a}KafkaOffsetBackingStoreTest{color} (owner: Christo) 
> ([https://github.com/apache/kafka/pull/12418])
>  # {color:#00875a}KafkaBasedLogTest{color} (owner: @bachmanity ])
>  # {color:#00875a}RetryUtilTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}RepartitionTopicTest{color} (streams) (owner: Christo)
>  # {color:#00875a}StateManagerUtilTest{color} (streams) (owner: Christo)
> *The coverage report for the above tests after the change should be >= to 
> what the coverage is now.*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [Ietf-dkim] Question about lone CR / LF

2024-02-05 Thread Hector Santos

On 2/5/2024 11:50 AM, Dave Crocker wrote:


(*) Long ago, Knuth visited UCLA when I was there, and 'structured 
programming' was a hot topic.  He did a presentation to test a 
perspective that he later wrote up.  He observed that fully 
structured programs, without gotos, could sometimes make code 
/worse/.  He showed some code without any gotos that was correct but 
extremely difficult to read and understand.  Then he showed a 
version, with two loops -- one after the other -- and inside each 
was a goto into the other.  OMG.  But this code was clear, concise 
and easy to understand.


I recall an old corporate project SE coding guideline: usage of a GOTO 
LABEL was allowed if the LABEL was within the reader's page view, i.e. 
25 lines (using 25x80 terminal standards).
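
A contrived C sketch of the kind of short-range jump that guideline 
tolerated (function and data are hypothetical):

    /* Contrived sketch: the label sits a few lines below the goto,
       well within one 25x80 screen. */
    static int find_first_negative(const int *v, int n)
    {
        int i;
        for (i = 0; i < n; i++) {
            if (v[i] < 0)
                goto found;       /* target is in view below */
        }
        return -1;                /* no negative entry */
    found:
        return i;                 /* index of the first negative entry */
    }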


--
Hector Santos,
https://santronics.com
https://winserver.com



___
Ietf-dkim mailing list
Ietf-dkim@ietf.org
https://www.ietf.org/mailman/listinfo/ietf-dkim


Re: [Ietf-dkim] Headers that should not be automatically oversigned in a DKIM signature?

2024-02-05 Thread Hector Santos
> On Feb 3, 2024, at 8:23 AM, Alessandro Vesely  wrote:
> 
> On Fri 02/Feb/2024 14:34:22 +0100 Hector Santos wrote:
>> Of course, the MUA is another issue.  What read order should be expected for 
>> Oversign headers?  Each MUA can be different although I would think streamed 
>> in data are naturally read sequentially and the first display headers found 
>> are used in the UI.
> 
> 
> Yeah, which is the opposite of DKIM specified order.


>>   Only To: is allowed to be a list.
> 
> 
> RFC 5322 specifies lists for From:, To:, Cc:, Bcc:, Reply-To:, Resent-From:, 
> Resent-To:, Resent-Cc: and Resent-Bcc:.


My comment was regarding the MUA and the order in which data is read. I wonder which 
MUAs will display a list for the display fields From: and Resent-*, if any.  Are 
all of these OverSign targets?  

If we go down this road, the recommendation might be to always sign all 
headers, including the missing ones, including ARC and trace headers, and before 
signing, reorder specific headers to DKIM-ready MUA read-order standards, if 
any exist.

Are MUAs now doing verifications and filtering failures?  Or is it the backend, 
the host, the MDA, that is still generally responsible for doing the 
verification and mail filtering before passing it on to users?


All the best,
Hector Santos

___
Ietf-dkim mailing list
Ietf-dkim@ietf.org
https://www.ietf.org/mailman/listinfo/ietf-dkim


[jira] [Created] (KAFKA-16223) Replace EasyMock and PowerMock with Mockito for KafkaConfigBackingStoreTest

2024-02-05 Thread Hector Geraldino (Jira)
Hector Geraldino created KAFKA-16223:


 Summary: Replace EasyMock and PowerMock with Mockito for 
KafkaConfigBackingStoreTest
 Key: KAFKA-16223
 URL: https://issues.apache.org/jira/browse/KAFKA-16223
 Project: Kafka
  Issue Type: Sub-task
  Components: connect
Reporter: Hector Geraldino






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14132) Remaining PowerMock to Mockito tests

2024-02-05 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino updated KAFKA-14132:
-
Description: 
{color:#de350b}Some of the tests below use EasyMock as well. For those migrate 
both PowerMock and EasyMock to Mockito.{color}

Unless stated in brackets the tests are in the connect module.

A list of tests which still require to be moved from PowerMock to Mockito as of 
2nd of August 2022 which do not have a Jira issue and do not have pull requests 
I am aware of which are opened:

{color:#ff8b00}InReview{color}
{color:#00875a}Merged{color}
 # {color:#00875a}ErrorHandlingTaskTest{color} (owner: [~shekharrajak])
 # {color:#00875a}SourceTaskOffsetCommiterTest{color} (owner: Christo)
 # {color:#00875a}WorkerMetricsGroupTest{color} (owner: Divij)
 # {color:#00875a}WorkerTaskTest{color} (owner: [~yash.mayya])
 # {color:#00875a}ErrorReporterTest{color} (owner: [~yash.mayya])
 # {color:#00875a}RetryWithToleranceOperatorTest{color} (owner: [~yash.mayya])
 # {color:#00875a}WorkerErrantRecordReporterTest{color} (owner: [~yash.mayya])
 # {color:#00875a}ConnectorsResourceTest{color} (owner: [~mdedetrich-aiven])
 # {color:#00875a}StandaloneHerderTest{color} (owner: [~mdedetrich-aiven])
 # -KafkaConfigBackingStoreTest-  KAFKA-16223
 # {color:#00875a}KafkaOffsetBackingStoreTest{color} (owner: Christo) 
([https://github.com/apache/kafka/pull/12418])
 # {color:#00875a}KafkaBasedLogTest{color} (owner: @bachmanity ])
 # {color:#00875a}RetryUtilTest{color} (owner: [~yash.mayya])
 # {color:#00875a}RepartitionTopicTest{color} (streams) (owner: Christo)
 # {color:#00875a}StateManagerUtilTest{color} (streams) (owner: Christo)

*The coverage report for the above tests after the change should be >= to what 
the coverage is now.*

  was:
{color:#de350b}Some of the tests below use EasyMock as well. For those migrate 
both PowerMock and EasyMock to Mockito.{color}

Unless stated in brackets the tests are in the connect module.

A list of tests which still require to be moved from PowerMock to Mockito as of 
2nd of August 2022 which do not have a Jira issue and do not have pull requests 
I am aware of which are opened:

{color:#ff8b00}InReview{color}
{color:#00875a}Merged{color}
 # {color:#00875a}ErrorHandlingTaskTest{color} (owner: [~shekharrajak])
 # {color:#00875a}SourceTaskOffsetCommiterTest{color} (owner: Christo)
 # {color:#00875a}WorkerMetricsGroupTest{color} (owner: Divij)
 # {color:#00875a}WorkerTaskTest{color} (owner: [~yash.mayya])
 # {color:#00875a}ErrorReporterTest{color} (owner: [~yash.mayya])
 # {color:#00875a}RetryWithToleranceOperatorTest{color} (owner: [~yash.mayya])
 # {color:#00875a}WorkerErrantRecordReporterTest{color} (owner: [~yash.mayya])
 # {color:#00875a}ConnectorsResourceTest{color} (owner: [~mdedetrich-aiven])
 # {color:#00875a}StandaloneHerderTest{color} (owner: [~mdedetrich-aiven])
 # KafkaConfigBackingStoreTest 
 # {color:#00875a}KafkaOffsetBackingStoreTest{color} (owner: Christo) 
([https://github.com/apache/kafka/pull/12418])
 # {color:#00875a}KafkaBasedLogTest{color} (owner: @bachmanity ])
 # {color:#00875a}RetryUtilTest{color} (owner: [~yash.mayya])
 # {color:#00875a}RepartitionTopicTest{color} (streams) (owner: Christo)
 # {color:#00875a}StateManagerUtilTest{color} (streams) (owner: Christo)

*The coverage report for the above tests after the change should be >= to what 
the coverage is now.*


> Remaining PowerMock to Mockito tests
> 
>
> Key: KAFKA-14132
> URL: https://issues.apache.org/jira/browse/KAFKA-14132
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Christo Lolov
>Priority: Major
> Fix For: 3.8.0
>
>
> {color:#de350b}Some of the tests below use EasyMock as well. For those 
> migrate both PowerMock and EasyMock to Mockito.{color}
> Unless stated in brackets the tests are in the connect module.
> A list of tests which still require to be moved from PowerMock to Mockito as 
> of 2nd of August 2022 which do not have a Jira issue and do not have pull 
> requests I am aware of which are opened:
> {color:#ff8b00}InReview{color}
> {color:#00875a}Merged{color}
>  # {color:#00875a}ErrorHandlingTaskTest{color} (owner: [~shekharrajak])
>  # {color:#00875a}SourceTaskOffsetCommiterTest{color} (owner: Christo)
>  # {color:#00875a}WorkerMetricsGroupTest{color} (owner: Divij)
>  # {color:#00875a}WorkerTaskTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}ErrorReporterTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}RetryWithToleranceOperatorTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}WorkerErrantRecordReporterTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}ConnectorsResourceTest{color} (owner: [~


[jira] [Assigned] (KAFKA-16223) Replace EasyMock and PowerMock with Mockito for KafkaConfigBackingStoreTest

2024-02-05 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino reassigned KAFKA-16223:


Assignee: Hector Geraldino

> Replace EasyMock and PowerMock with Mockito for KafkaConfigBackingStoreTest
> ---
>
> Key: KAFKA-16223
> URL: https://issues.apache.org/jira/browse/KAFKA-16223
> Project: Kafka
>  Issue Type: Sub-task
>  Components: connect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14132) Remaining PowerMock to Mockito tests

2024-02-04 Thread Hector Geraldino (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17814145#comment-17814145
 ] 

Hector Geraldino commented on KAFKA-14132:
--

Hey [~bachmanity1] - as I'm starting to wrap up the migration of the 
{_}WorkerSinkTaskTest{_}, *KafkaConfigBackingStoreTest* will be the only 
remaining test pending migration.

Are you currently working on it? If not, I'm happy to pick this up. If you are, 
I can lend a hand and we can divide & conquer. It'd be nice if we can have this 
(and KAFKA-12199) ready for 3.8

> Remaining PowerMock to Mockito tests
> 
>
> Key: KAFKA-14132
> URL: https://issues.apache.org/jira/browse/KAFKA-14132
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Christo Lolov
>Priority: Major
> Fix For: 3.8.0
>
>
> {color:#de350b}Some of the tests below use EasyMock as well. For those 
> migrate both PowerMock and EasyMock to Mockito.{color}
> Unless stated in brackets the tests are in the connect module.
> A list of tests which still require to be moved from PowerMock to Mockito as 
> of 2nd of August 2022 which do not have a Jira issue and do not have pull 
> requests I am aware of which are opened:
> {color:#ff8b00}InReview{color}
> {color:#00875a}Merged{color}
>  # {color:#00875a}ErrorHandlingTaskTest{color} (owner: [~shekharrajak])
>  # {color:#00875a}SourceTaskOffsetCommiterTest{color} (owner: Christo)
>  # {color:#00875a}WorkerMetricsGroupTest{color} (owner: Divij)
>  # {color:#00875a}WorkerTaskTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}ErrorReporterTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}RetryWithToleranceOperatorTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}WorkerErrantRecordReporterTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}ConnectorsResourceTest{color} (owner: [~mdedetrich-aiven])
>  # {color:#00875a}StandaloneHerderTest{color} (owner: [~mdedetrich-aiven])
>  # KafkaConfigBackingStoreTest (owner: [~bachmanity1])
>  # {color:#00875a}KafkaOffsetBackingStoreTest{color} (owner: Christo) 
> ([https://github.com/apache/kafka/pull/12418])
>  # {color:#00875a}KafkaBasedLogTest{color} (owner: @bachmanity ])
>  # {color:#00875a}RetryUtilTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}RepartitionTopicTest{color} (streams) (owner: Christo)
>  # {color:#00875a}StateManagerUtilTest{color} (streams) (owner: Christo)
> *The coverage report for the above tests after the change should be >= to 
> what the coverage is now.*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: USA 2024 Elections Thread

2024-02-03 Thread hector llanquin
oh so u keep with the tractatus. are u the tool or are u the brand



On Sat, Feb 3, 2024 at 8:12 AM grarpamp  wrote:

> "Border isn't about "amnesty" "migration" or any other term... it's
> about one thing and one thing only CORRUPT COMMIE GLOBALIST
> DEMOCRATS FUCKING OVER AMERICA."
>
> https://twitter.com/TexasLindsay_/status/1753698685223845981
> Michael Shellenberger with Russell Brand explains how the CIA aligned
> with the left and gave them talking points from the
> military/intelligence agencies to help launch a censorship scheme and
> a political agenda on a global scale. “This is military, this is
> strategic. CTIL PsyOP.”
>
> "The Deep State formed an alliance with the Democrats to take over
> America. They import millions of illegals and turn kids into LGBTQ
> because both groups overwhelmingly vote for Dems. They destroy America
> to rule America, forever. The civil war has begun and the Dems started
> it."
>
> @elonmusk
> "Biden’s strategy is very simple: 1. Get as many illegals in the
> country as possible. 2. Legalize them to create a permanent majority –
> a one-party state. That is why they are encouraging so much illegal
> immigration. Simple, yet effective."
>
> "Biden Dems can't admit this, and can't get away with
> any other explanation either, that's why the White House
> and the Democrats have never explained why they're
> just letting 10 Million invade the USA for them."
>
> "When the average American wakes up to this, the problems
> it is already causing, the perpetual $Dollar cost and bankruptcy,
> the voting cards they're already giving them, they will revolt."
>
> "They never learned from watching all the Muslims invade Europe.
> The American's awakening will be harsh indeed."
>
> "Whatever it takes to deport them all. Mobilize as though it’s a war,
> because it is a war."
>
> "People have no idea the all-in cost and degradation of supporting
> these worthless voters they're importing for decades and generations,
> it's in the tens of $Trillions. It's already way over $175B/yr.
> And that's just for food and housing. And doesn't include the 25M
> more worthless illegals on top of the new 10M. The breeding and forever
> generations of welfare. The commie k-12 indoctrination camps they
> call schools skipped over that math lesson... now you know why...
> keep em all dumb, they'll never catch on, and will clamor for more
> socialism fail."
>
> "US Border is worst on Earth, now they've got trainloads of Africans
> Muslims Chinese and unwanted thugs and terrorists from all over
> the planet streaming across. Yeah, that'll work out well for them,
> dumbfucks."
>
> "Can we send the whole Biden family back to Ireland..? "
>
> "A country with no borders is no country."
>


[Powerdevil] [Bug 434486] Powerdevil bug killing my bluetooth?

2024-02-02 Thread hector acosta
https://bugs.kde.org/show_bug.cgi?id=434486

hector acosta  changed:

   What|Removed |Added

 Status|RESOLVED|REPORTED
 Resolution|WORKSFORME  |---
 CC||hector.aco...@gmail.com

--- Comment #2 from hector acosta  ---
(In reply to Nate Graham from comment #1)
> Is this still happening in Plasma 5.27 with a newer kernel?

I can confirm this is still happening in plasma 5.27.9 and kernel 6.5.13

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: [Ietf-dkim] Question about lone CR / LF

2024-02-02 Thread Hector Santos

On 2/2/2024 12:03 AM, Murray S. Kucherawy wrote:

On Thu, Feb 1, 2024 at 10:03 AM John Levine  wrote:

It appears that Murray S. Kucherawy   said:
>-=-=-=-=-=-
>
>On Wed, Jan 31, 2024 at 5:44 PM Steffen Nurpmeso
 wrote:
>
>> But i cannot read this from RFC 6376.
>
>Sections 2.8 and 3.4.4 don't answer this?

Not really.  They say what to do with CRLF but not with a lone
CR or lone LF.


Ah, I misunderstood the question.

I agree that by the time you're talking to a DKIM (or any) filter, I 
expect that this has been handled somehow. CRLF ends a line, 
anything before that is part of the line, and WSP is just a space or 
a tab.  Past that, garbage in, garbage out.




+1.   5322/5321 EOL is CRLF



--
Hector Santos,
https://santronics.com
https://winserver.com

___
Ietf-dkim mailing list
Ietf-dkim@ietf.org
https://www.ietf.org/mailman/listinfo/ietf-dkim


Re: [Ietf-dkim] Headers that should not be automatically oversigned in a DKIM signature?

2024-02-02 Thread Hector Santos

On 2/1/2024 6:38 AM, Alessandro Vesely wrote:

On Wed 31/Jan/2024 18:34:46 +0100 Hector Santos wrote:

If I add this feature to wcDKIM, it can be introduced as:

[X] Enable DKIM Replay Protection


That'd be deceptive, as DKIM replay in Dave's sense won't be 
blocked, while there can be other effects on signature robustness.



First, thanks to your and Murray's input.

I need to review Dave's "DKIM Replay" concerns.  Legacy systems have 
many entry points: creation, import/export methods, transformation, 
filling of missing fields, etc.  Overall, I understood the potential 
"Replay" concern to be about taking an existing signed message (from a 
purported "trusted signer") where MUA display fields, namely To: and 
Subject:, are missing or unsigned.  These can potentially be replayed 
with tampered To: and Subject: fields and exported.  The multiple 
5322.From headers MUA concern was highlighted many moons ago; it is easily 
addressed with incoming SMTP filters rejecting multi-From messages.




A better sentence could be:

[X] Prevent further additions of this field


"This" meaning there is a header selection to monitor?    See below


Note that some packages allow choices such as

[ ] Sign and oversign only if present in the message
[ ] Oversign only if already present in the h= list
[ ] Oversign anyway 


Given how our package offer the signing defaults:

UseRequiredHeadersOnly = 1    # optional, 1 - use RequiredHeaders
RequiredHeaders        = From:To:Date:Message-Id:Organization:Subject:Received*:List-ID
SkipHeaders            = X-*:Authentication-Results:DKIM-Signature:DomainKey-Signature:Return-Path
StripHeaders           =      # optional, headers stripped by resigners


Basically, as the headers of the message to be signed are read in, each is 
checked against the RequiredHeaders list (when enabled).  If missing, the 
header is not signed.  The exception is From:, which is always 
signed.   Signed headers are added to the "h=" field.


So how about this, if I follow this, new namespace fields:

OversignHeader.To = # default blank
OversignHeader.Subject =  # default blank
.
.
OversignHeader.Field-Name=   # future oversign header

This allows an oversigned header to be signed even if missing.  If this is 
correct, it would be easy to update the code.


Of course, the MUA is another issue.  What read order should be 
expected for Oversign headers?  Each MUA can be different although I 
would think streamed in data are naturally read sequentially and the 
first display headers found are used in the UI.  Only To: is allowed 
to be a list.



--
Hector Santos,
https://santronics.com
https://winserver.com



___
Ietf-dkim mailing list
Ietf-dkim@ietf.org
https://www.ietf.org/mailman/listinfo/ietf-dkim


Re: [Ietf-dkim] Headers that should not be automatically oversigned in a DKIM signature?

2024-01-31 Thread Hector Santos


> On Jan 19, 2024, at 8:41 PM, John R Levine  wrote:
> 
> Manfred said:
>> (Seems like "seal"ing would be a better term than "oversign"ing.)
> 
> We've called it oversigning for a decade now.
> 

Interesting.  

First time I have come across the term (“oversign”), and it appears to be a 
feature of several products in the market. But checking the RFC, unless I 
missed it, it’s not an RFC-defined term.  Replay is the term used.

To me, the term connotes “redundant signing” beyond what is necessary or 
desired for a particular signing rule.   If I add this feature to wcDKIM, it 
can be introduced as:

[X] Enable DKIM Replay Protection

The F1 help will indicate the addition of headers, i.e. To:, Subject:, etc., where 
empty field values are used to enforce the hash binding of these potentially 
missing headers to the signature. If enabled, then these specific headers 
MUST be included in the list of headers to be signed and the headers MUST 
exist.  If they do not, the headers with empty values will be hash-bound to the 
signature.
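
For illustration, a minimal sketch of the resulting signature tag list 
(domain, selector, and hash values hypothetical; other tags omitted): each 
display header is listed once more than it occurs, so any instance added 
later breaks verification:

    DKIM-Signature: v=1; a=rsa-sha256; d=example.com; s=sel1;
            h=from:from:to:to:subject:subject:date; bh=...; b=...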

Is that “Oversigning?”

Perhaps. Imo, it is redundant header(s) signing when it may not be required for 
certain DKIM signing routes.  

What is most important is what it is supposed to help address - DKIM Replay 
attacks.

All the best,
Hector Santos




___
Ietf-dkim mailing list
Ietf-dkim@ietf.org
https://www.ietf.org/mailman/listinfo/ietf-dkim


Re: [sumo-user] [EXT] AW: SUMO-gui background options

2024-01-31 Thread Hector A Martinez via sumo-user
Thank you Robert!

I am trying to follow your advice here but when I run Generate on the osmWizard 
I am getting this error that I haven’t seen before:
Error with SSL certificate, try 'pip install -U certifi'.

Then when I run it I get this:
Requirement already satisfied: certifi in 
c:\path-to\packages\pythonsoftwarefoundation.python.3.11_qbz5n2kfra8p0\localcache\local-packages\python311\site-packages
 (2023.11.17)

So not sure what is happening here.

Any help will be greatly appreciated.  Thanks,

--H

From: sumo-user  On Behalf Of Robert.Hilbrich--- 
via sumo-user
Sent: Tuesday, January 30, 2024 12:36 PM
To: sumo-user@eclipse.org
Cc: robert.hilbr...@dlr.de
Subject: Re: [sumo-user] [EXT] AW: SUMO-gui background options


Hi Hector,
I just wanted to let you know, that we did some work regarding the use of osm 
as a background image. The discussion and current situation is described [1]. 
Solution 4) should work for your scenario – but you may need to regenerate the 
trips/routes since the reprojection to web mercator may result in the removal 
of edges. It is not perfect yet and we are working on 5), but for the moment, 
that is the best we can do.
Best regards,
Robert

[1]: https://github.com/eclipse-sumo/sumo/issues/14241#issuecomment-1911819715

From: sumo-user 
mailto:sumo-user-boun...@eclipse.org>> On Behalf 
Of Hector A Martinez via sumo-user
Sent: Monday, January 22, 2024 5:27 PM
To: Sumo project User discussions 
mailto:sumo-user@eclipse.org>>
Cc: Hector A Martinez mailto:hmarti...@mitre.org>>
Subject: Re: [sumo-user] [EXT] AW: SUMO-gui background options

Thank you for your response but when I try this, here is the error I get from 
openstreetmap.org

[cid:image001.png@01DA543D.3DF5FB40]

Any guidance related to the OSM website you are referring to below, and any 
instructions on how to stitch the map gracefully and open it in 
SUMO-gui, will be greatly appreciated. Thanks again,

--H

From: sumo-user 
mailto:sumo-user-boun...@eclipse.org>> On Behalf 
Of The div via sumo-user
Sent: Friday, January 19, 2024 5:49 PM
To: Sumo project User discussions 
mailto:sumo-user@eclipse.org>>
Cc: The div mailto:d...@thoda.uk>>
Subject: Re: [sumo-user] [EXT] AW: SUMO-gui background options

To get an OSM map - use the share button and you should see a download option

  *   the download size/map scale are coupled so you need to experiment or 
alternatively pull tiles for a detailed large area and load each separately in 
sumo-gui - easy enough to get teh overlap exact with the UI

___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user


Re: [sumo-user] [EXT] AW: SUMO-gui background options

2024-01-22 Thread Hector A Martinez via sumo-user
Thank you for your response but when I try this, here is the error I get from 
openstreetmap.org

[cid:image001.png@01DA4D25.61CACBA0]

Any guidance related to the OSM website you are referring to below, and any 
instructions on how to stitch the map gracefully and open it in 
SUMO-gui, will be greatly appreciated. Thanks again,

--H

From: sumo-user  On Behalf Of The div via 
sumo-user
Sent: Friday, January 19, 2024 5:49 PM
To: Sumo project User discussions 
Cc: The div 
Subject: Re: [sumo-user] [EXT] AW: SUMO-gui background options

To get an OSM map - use the share button and you should see a download option

  *   the download size/map scale are coupled so you need to experiment or 
alternatively pull tiles for a detailed large area and load each separately in 
sumo-gui - easy enough to get teh overlap exact with the UI

___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user


[ceph-users] Re: Degraded PGs on EC pool when marking an OSD out

2024-01-22 Thread Hector Martin
On 2024/01/22 19:06, Frank Schilder wrote:
> You seem to have a problem with your crush rule(s):
> 
> 14.3d ... [18,17,16,3,1,0,NONE,NONE,12]
> 
> If you really just took out 1 OSD, having 2xNONE in the acting set indicates 
> that your crush rule can't find valid mappings. You might need to tune crush 
> tunables: 
> https://docs.ceph.com/en/reef/rados/troubleshooting/troubleshooting-pg/?highlight=crush%20gives%20up#troubleshooting-pgs

Look closely: that's the *acting* (second column) OSD set, not the *up*
(first column) OSD set. It's supposed to be the *previous* set of OSDs
assigned to that PG, but inexplicably some OSDs just "fall off" when the
PGs get remapped around.

Simply waiting lets the data recover. At no point are any of my PGs
actually missing OSDs according to the current cluster state, and CRUSH
always finds a valid mapping. Rather the problem is that the *previous*
set of OSDs just loses some entries for some reason.

The same problem happens when I *add* an OSD to the cluster. For
example, right now, osd.15 is out. This is the state of one pg:

14.3d   1044   0 0  00
157307567310   0  1630 0  1630
active+clean  2024-01-22T20:15:46.684066+0900 15550'1630
15550:16184  [18,17,16,3,1,0,11,14,12]  18
[18,17,16,3,1,0,11,14,12]  18 15550'1629
2024-01-22T20:15:46.683491+0900  0'0
2024-01-08T15:18:21.654679+0900  02
periodic scrub scheduled @ 2024-01-31T07:34:27.297723+0900
10430

Note the OSD list ([18,17,16,3,1,0,11,14,12])

Then I bring osd.15 in and:

14.3d   1044   0  1077  00
157307567310   0  1630 0  1630
active+recovery_wait+undersized+degraded+remapped
2024-01-22T22:52:22.700096+0900 15550'1630 15554:16163
[15,17,16,3,1,0,11,14,12]  15[NONE,17,16,3,1,0,11,14,12]
 17 15550'1629  2024-01-22T20:15:46.683491+0900
0'0  2024-01-08T15:18:21.654679+0900  02
 periodic scrub scheduled @ 2024-01-31T02:31:53.342289+0900
 10430

So somehow osd.18 "vanished" from the acting list
([NONE,17,16,3,1,0,11,14,12]) as it is being replaced by 15 in the new
up list ([15,17,16,3,1,0,11,14,12]). The data is in osd.18, but somehow
Ceph forgot.

> 
> It is possible that your low OSD count causes the "crush gives up too soon" 
> issue. You might also consider to use a crush rule that places exactly 3 
> shards per host (examples were in posts just last week). Otherwise, it is not 
> guaranteed that "... data remains available if a whole host goes down ..." 
> because you might have 4 chunks on one of the hosts and fall below min_size 
> (the failure domain of your crush rule for the EC profiles is OSD).

That should be what my CRUSH rule does. It picks 3 hosts then picks 3
OSDs per host (IIUC). And oddly enough everything works for the other EC
pool even though it shares the same CRUSH rule (just ignoring one OSD
from it).
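
For reference, a minimal sketch of a rule with that shape (rule name and id 
hypothetical; this is how I read my rule, not a verified dump):

rule ec_9_by_host {
    id 14
    type erasure
    step set_chooseleaf_tries 5
    step take default class hdd
    step choose indep 3 type host
    step chooseleaf indep 3 type osd
    step emit
}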

> To test if your crush rules can generate valid mappings, you can pull the 
> osdmap of your cluster and use osdmaptool to experiment with it without risk 
> of destroying anything. It allows you to try different crush rules and 
> failure scenarios on off-line but real cluster meta-data.

CRUSH steady state isn't the issue here, it's the dynamic state when
moving data that is the problem :)

> 
> Best regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
> 
> 
> From: Hector Martin 
> Sent: Friday, January 19, 2024 10:12 AM
> To: ceph-users@ceph.io
> Subject: [ceph-users] Degraded PGs on EC pool when marking an OSD out
> 
> I'm having a bit of a weird issue with cluster rebalances with a new EC
> pool. I have a 3-machine cluster, each machine with 4 HDD OSDs (+1 SSD).
> Until now I've been using an erasure coded k=5 m=3 pool for most of my
> data. I've recently started to migrate to a k=5 m=4 pool, so I can
> configure the CRUSH rule to guarantee that data remains available if a
> whole host goes down (3 chunks per host, 9 total). I also moved the 5,3
> pool to this setup, although by nature I know its PGs will become
> inactive if a host goes down (need at least k+1 OSDs to be up).
> 
> I've only just started migrating data to the 5,4 pool, but I've noticed
> that any time I trigger any kind of backfilling (e.g. take one OSD out),
> a bunch of PGs in the 5,4 pool become degraded (instead of just
> misplaced/backfilling). This always seems to happen on that pool only,
> and the object count is a significant fraction of the total pool object
> count (it's not just "a few recently written objects while PGs were
>

Re: [dmarc-ietf] DMARC with multi-valued RFC5322.From

2024-01-19 Thread Hector Santos

> On Jan 19, 2024, at 10:19 AM, Todd Herr 
>  wrote:
> 
> 
> Perhaps the way forward for DMARC is to look for a Sender header when there 
> is more than one RFC5322.From domain and use that for DMARC processing, with 
> the stipulation that messages that don't contain such a Sender header are 
> invalid and should be rejected? 

Todd,  +1

I like this idea.  The 5322.Sender is required for a 2+ address Mailbox-list.

https://www.ietf.org/archive/id/draft-ietf-emailcore-rfc5322bis-09.html#section-3.6.2

This also gives an RFC5322 validator a new rule: make sure Sender exists for 
a 2+ address mailbox-list. It also opens the door to using Sender for DMARC 
purposes; if you could, reference RFC5322 section 3.6.2.
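
For illustration (addresses hypothetical), a conforming two-author message 
would carry:

    From: Alice <alice@a.example>, Bob <bob@b.example>
    Sender: Alice <alice@a.example>

Under this idea, DMARC processing would then key on a.example.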

In the name of integration and codification of layered protocols, since 
RFC5322bis is still active, perhaps it can revisit the 5322.From ABNF and/or 
say something stronger about the 2+ address mailbox-list. 
Perhaps it should be deprecated.  That would better match the current DMARCBis 
semantics and the security-related concerns on 5322.From with multiple addresses.


All the best,
Hector Santos


___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [sumo-user] [EXT] AW: SUMO-gui background options

2024-01-19 Thread Hector A Martinez via sumo-user
Good morning Mirko,

I used this script in the link below:
python tools/tileGet.py -n test.net.xml -t 10

And I got this error message:

Traceback (most recent call last):
  File "C:\Program Files (x86)\Eclipse\Sumo\tools\tileGet.py", line 227, in <module>
    get()
  File "C:\Program Files (x86)\Eclipse\Sumo\tools\tileGet.py", line 189, in get
    west, south = net.convertXY2LonLat(*bboxNet[0])
  File "C:\Program Files (x86)\Eclipse\Sumo\tools\sumolib\net\__init__.py", line 502, in convertXY2LonLat
    return self.getGeoProj()(x, y, inverse=True)
  File "C:\Program Files (x86)\Eclipse\Sumo\tools\sumolib\net\__init__.py", line 471, in getGeoProj
    import pyproj
ModuleNotFoundError: No module named 'pyproj'
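
(I am guessing the module just needs to be installed into the same Python 3.11 
that runs the SUMO tools, e.g.:

python -m pip install pyproj

but please correct me if there is a recommended way.)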
Thanks in advance for any guidance,

--Hector

From: Mirko Barthauer 
Sent: Thursday, January 4, 2024 9:52 AM
To: Hector A Martinez 
Cc: SUMO-User mailing list, . 
Subject: [EXT] AW: SUMO-gui background options


Hi Hector,



you can download satellite image tiles ready to import with our helper script 
tileGet.py (see docs<https://sumo.dlr.de/docs/Tools/Misc.html#tilegetpy>). 
You'll need a Google Maps key though. The script supports other background 
sources like MapQuest and ArcGIS as well.



Best regards

Mirko









-Original-Nachricht-

Betreff: SUMO-gui background options

Datum: 2024-01-04T15:43:47+0100

Von: "Hector A Martinez" mailto:hmarti...@mitre.org>>

An: "Mirko Barthauer" mailto:m.bartha...@t-online.de>>






Good morning Mirko,

Is there a way to make the sumo-gui background be for example the Google Map 
layer?

If so, where can I find the instructions on how to do it?

Thanks,

Hector A. Martinez, P.E.
Transportation Researcher, Resilient Transportation and Logistics LTM
MITRE | National Security Engineering 
Center<https://www.mitre.org/centers/national-security-and-engineering-center/who-we-are>
813.207.5365





___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user


[ceph-users] Degraded PGs on EC pool when marking an OSD out

2024-01-19 Thread Hector Martin
   5630

The first PG when I put the OSD back in:

14.3c812   0 0  00
119250277580   0  1088 0  1088
active+clean  2024-01-19T18:07:18.079295+0900 15440'1088
15489:10792  [18,17,16,1,3,2,11,14,12]  18
[18,17,16,1,3,2,11,14,12]  18  14537'432
2024-01-12T11:25:54.168048+0900  0'0
2024-01-08T15:18:21.654679+0900  02
periodic scrub scheduled @ 2024-01-21T09:41:43.026836+0900
 2410

As far as I know PGs are not supposed to actually become *degraded* when
merely moving data around without any OSDs going down. Am I doing
something wrong here? Any idea why this is affecting one pool and not
both, even though they are almost identical in setup? It's as if, for
this one pool, marking an OSD out has the effect of making its data
unavailable entirely, instead of merely backfill to other OSDs (the OSD
shows up as NONE in the above dump).

OSD tree:

ID   CLASS  WEIGHTTYPE NAME  STATUS  REWEIGHT  PRI-AFF
 -1 89.13765  root default
-13 29.76414  host flamingo
 11hdd   7.27739  osd.11 up   1.0  1.0
 12hdd   7.27739  osd.12 up   1.0  1.0
 13hdd   7.27739  osd.13 up   1.0  1.0
 14hdd   7.2  osd.14 up   1.0  1.0
  8ssd   0.73198  osd.8  up   1.0  1.0
-10 29.84154  host heart
  0hdd   7.27739  osd.0  up   1.0  1.0
  1hdd   7.27739  osd.1  up   1.0  1.0
  2hdd   7.27739  osd.2  up   1.0  1.0
  3hdd   7.27739  osd.3  up   1.0  1.0
  9ssd   0.73198  osd.9  up   1.0  1.0
 -30  host hub
 -7 29.53197  host soleil
 15hdd   7.2  osd.15 up 0  1.0
 16hdd   7.2  osd.16 up   1.0  1.0
 17hdd   7.2  osd.17 up   1.0  1.0
 18hdd   7.2  osd.18 up   1.0  1.0
 10ssd   0.73198  osd.10 up   1.0  1.0

(I'm in the middle of doing some reprovisioning so 15 is out, this
happens any time I take any OSD out)

# ceph --version
ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)

- Hector
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


Re: [dmarc-ietf] DMARC with multi-valued RFC5322.From

2024-01-18 Thread Hector Santos
Hi, 

As a long time implementer and integrator of IETF protocols, my mail 
engineering view ….

The thing is RFC 822, 2822 and 5322 allows for a single 5322.From header to 
have multiple addresses:

from = "From:" mailbox-list CRLF
mailbox-list = (mailbox *("," mailbox)) / obs-mbox-list

So is it intentional? Obviously, there was a time and a mail “group-ware” 
application scenario where it applied, so it was engineered and written into the 
822 specification. 

But were there any client MUAs that supported it?  I (Santronics Software) never 
added it in any of my MUAs, which were USER- and SYSOP-based.  Even if not, it 
is still technically possible to create a legally formatted RFC822, 2822, 5322 
message and send it via SMTP.

Now comes DKIM and its DKIM Policy Modeling add-ons…..

DKIM signing requires the hash binding of the entire content of the 5322.From 
header.  No modifications are expected before signing.  Note:  While 
Rewrite is a kludge solution to a domain redirection problem, it is not the 
same thing, but I can see where it can fit here.

ALL DKIM Policy Models (the add-ons over DKIM-BASE) starting with SSP, DSAP, 
ADSP and now DMARC provided guidelines to support 1st party signature. 
Unfortunately, they failed on the authorization of a 3rd party signer scenario. 

So it means at least one of the author domains should match/align with the 
signer domain per DMARC logic.

This sounds logical to me, albeit with more complexity in the code that reads and 
processes the headers.  We don’t have any MUAs or bots that have a need or 
support for multiple authors.  That need is called a Mailing List.  But for DKIM 
Policy models, it should be allowed as long as there is an aligned/matching 
signer domain in the From header mailbox-list.
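
For illustration (domains, selector, and hash values hypothetical), a 
multi-author message that would still align under that logic:

    From: alice@a.example, bob@b.example
    DKIM-Signature: v=1; a=rsa-sha256; d=a.example; s=sel1;
            h=from:to:subject:date; bh=...; b=...

Here the signer domain a.example aligns with the first author domain, even 
though b.example does not.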

However, if I have been following this thread, DMARCBis was updated to ignore 
these multi-from messages for DMARC purposes because they (erroneously) 
presumed they should be rejected, i.e. never make it to a signer or verifier.

I am not sure that is correct.


All the best,
Hector Santos


> On Jan 18, 2024, at 10:59 AM, Emil Gustafsson 
>  wrote:
> 
> I have a data point.
> When we (Google) did an experiment/analysis of this a couple of years ago the 
> conclusion was
> a) multi-value From are relatively rare and mostly look like abuse or 
> mistakes rather than intentional.
> b) Users generally don't care about those messages if they end up in spam.
> 
> So...
> Is the volume measurable? -  yes but very small
> Are there legitimate emails? - yes but users don't seem to care about these 
> messages
> 
> Based on the data I have, I would be in favor of an update that essentially 
> makes multivalued From Invalid rather than a corner case that needs to be 
> handled.
> 
> /E
> 
> On Thu, Jan 18, 2024 at 12:41 AM Steven M Jones  wrote:
> On 1/17/24 2:56 AM, Alessandro Vesely wrote:
> > [ Discussion of  what to do with multi-valued From: in messages ]
> >
> > However, since DMARC bears the blame of banning multi-valued From:, it 
> > is appropriate for it to say something about the consequences and 
> > possible workarounds.
> 
> DMARC doesn't ban multi-valued From:, but the language of section 6.6.1 
> is confusing because we were documenting the practice of implementers up 
> to that time as much as being prescriptive. If anything, it highlights 
> the need for the clearer language that Todd quoted earlier in this thread.
> 
> Has a measurable volume of legitimate messages with multi-valued From: 
> headers been reported in the wild? Is there a real-world problem that 
> needs to be solved?
> 
> --Steve.
> 
> 
> ___
> dmarc mailing list
> dmarc@ietf.org
> https://www.ietf.org/mailman/listinfo/dmarc
> ___
> dmarc mailing list
> dmarc@ietf.org
> https://www.ietf.org/mailman/listinfo/dmarc

___
dmarc mailing list
dmarc@ietf.org
https://www.ietf.org/mailman/listinfo/dmarc


Re: [sumo-user] [EXT] AW: Changing rail to be bidirectional using netconvert

2024-01-18 Thread Hector A Martinez via sumo-user
Mirko,

Never mind this question below.

I identified the problem with my script.  I was running netconvert from its bin 
location and of course it wasn’t letting the system save the new file in that 
location.  I fixed it and I am up and running again.  Will let you know if I 
bump into any issues with the new file.

Thanks,

--H

From: sumo-user  On Behalf Of Hector A Martinez 
via sumo-user
Sent: Thursday, January 18, 2024 11:42 AM
To: Mirko Barthauer ; Sumo project User discussions 

Cc: Hector A Martinez 
Subject: Re: [sumo-user] [EXT] AW: Changing rail to be bidirectional using 
netconvert

Thank you Mirko!

I think this might do the trick but I am getting this error when I run it on 
the Windows command line:

Error: Could not build output file 'net.net.xml' (Permission denied).
Quitting (on error).

How do I point the output file location to a user location so that it doesn’t 
need elevated permissions?

Thanks,

--Hector

From: Mirko Barthauer mailto:m.bartha...@t-online.de>>
Sent: Thursday, January 18, 2024 3:20 AM
To: Sumo project User discussions 
mailto:sumo-user@eclipse.org>>
Cc: Hector A Martinez mailto:hmarti...@mitre.org>>
Subject: [EXT] AW: [sumo-user] Changing rail to be bidirectional using 
netconvert


Dear Hector,



maybe it is not explained clearly enough in the documentation. The option 
--railway.topology.all-bidi.input-file can be used to restrict the set of rail 
edges you want to make bidirectional. This means the required file is a 
selection file you can generate with netedit (select edges and then choose 
"Save" from the select frame/mode).



if you want to make all rail edges bidirectional, just call



netconvert --railway.topology.all-bidi true -s 
C:\pathtofile\test_osm_in.net.xml.gz

If you want only a subset of edges to be bidirectional, call



netconvert --railway.topology.all-bidi.input-file selection.txt  -s 
C:\pathtofile\test_osm_in.net.xml.gz



Best regards

Mirko







-Original-Nachricht-

Betreff: [sumo-user] Changing rail to be bidirectional using netconvert

Datum: 2024-01-17T20:50:48+0100

Von: "Hector A Martinez via sumo-user" 
mailto:sumo-user@eclipse.org>>

An: "sumo-user@eclipse.org<mailto:sumo-user@eclipse.org>" 
mailto:sumo-user@eclipse.org>>






Sumo team,

I am trying to convert all of my rail network to be bidirectional using 
netconvert without affecting my roadway network. This is for 
commodity/container movement, not people movement. I am using a file I 
generated using OSM wizard.

I used this script:
netconvert --railway.topology.all-bidi.input-file 
C:\pathtofile\test_osm_in.net.xml.gz

This is the error I get.

Error: No nodes loaded.
Quitting (on error).

I recognize that the network file is a .gz file, but both netedit and sumo open 
the network file as is, and it has everything, including the nodes. Netedit 
crashes on me all the time while I am making changes to the file, so I need to 
make these rail changes more quickly using netconvert. I welcome any advice that will 
point me in the right direction.  Thanks,

--Hector


From: Mirko Barthauer mailto:m.bartha...@t-online.de>>
Sent: Friday, January 12, 2024 3:50 AM
To: Hector A Martinez mailto:hmarti...@mitre.org>>
Subject: AW: [EXT] AW: Adding Containers using TraCI


Hi Hector,



you can try to process your network with netconvert using the PlainXML 
format<https://sumo.dlr.de/docs/Networks/PlainXML.html>:

  *   convert<https://sumo.dlr.de/docs/Networks/Export.html#plain> your network 
to PlainXML
  *   analyse your network with a script and write the missing edges in a new 
file in PlainXML format
  *   convert back to the normal SUMO format with netconvert by supplying the 
PlainXML files using the respective input options (-n,-e,-x,-i)

Please write to the mailing list next time, so that everybody can answer the 
question (or at least learn from it).



Best regards

Mirko







___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user


Re: [sumo-user] Changing rail to be bidirectional using netconvert

2024-01-18 Thread Hector A Martinez via sumo-user
Thanks for the answer, Pablo!

Mirko sent me a new script to use that seems to be working.  My pathtofile 
doesn’t have any of those special characters.

Now I think I just have to figure out why I am getting the Permission Denied 
for the output file resulting from the conversion.  Thanks again,

--H

From: pablo.alvarezlo...@dlr.de 
Sent: Wednesday, January 17, 2024 7:08 PM
To: sumo-user@eclipse.org
Cc: Hector A Martinez 
Subject: [EXT] AW: Changing rail to be bidirectional using netconvert

Hi Hector, has the "pathtofile" an space or a strange character like ñ, ä or 
similar? Regards Von: sumo-user  im Auftrag 
von Hector A Martinez via sumo-user  Gesendet: 


Hi Hector,



has the "pathtofile" an space or a strange character like ñ, ä or similar?



Regards


Von: sumo-user 
mailto:sumo-user-boun...@eclipse.org>> im 
Auftrag von Hector A Martinez via sumo-user 
mailto:sumo-user@eclipse.org>>
Gesendet: Mittwoch, 17. Januar 2024 20:50:24
An: sumo-user@eclipse.org<mailto:sumo-user@eclipse.org>
Cc: Hector A Martinez
Betreff: [sumo-user] Changing rail to be bidirectional using netconvert

Sumo team,

I am trying to convert all of my rail network to be bidirectional using 
netconvert without affecting my roadway network. This is for 
commodity/container movement, not people movement. I am using a file I 
generated using OSM wizard.

I used this script:
netconvert --railway.topology.all-bidi.input-file 
C:\pathtofile\test_osm_in.net.xml.gz

This is the error I get.

Error: No nodes loaded.
Quitting (on error).

I recognize that the network file is a .gz file, but both netedit and sumo open 
the network file as is, and it has everything, including the nodes. Netedit 
crashes on me all the time while I am making changes to the file, so I need to 
make these rail changes more quickly using netconvert. I welcome any advice that will 
point me in the right direction.  Thanks,

--Hector


From: Mirko Barthauer mailto:m.bartha...@t-online.de>>
Sent: Friday, January 12, 2024 3:50 AM
To: Hector A Martinez mailto:hmarti...@mitre.org>>
Subject: AW: [EXT] AW: Adding Containers using TraCI


Hi Hector,



you can try to process your network with netconvert using the PlainXML 
format<https://sumo.dlr.de/docs/Networks/PlainXML.html>:

  *   convert<https://sumo.dlr.de/docs/Networks/Export.html#plain> your network 
to PlainXML
  *   analyse your network with a script and write the missing edges in a new 
file in PlainXML format
  *   convert back to the normal SUMO format with netconvert by supplying the 
PlainXML files using the respective input options (-n,-e,-x,-i)

Please write to the mailing list next time, so that everybody can answer the 
question (or at least learn from it).



Best regards

Mirko






___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user


Re: [sumo-user] [EXT] AW: Changing rail to be bidirectional using netconvert

2024-01-18 Thread Hector A Martinez via sumo-user
Thank you Mirko!

I think this might do the trick but I am getting this error when I run it on 
the Windows command line:

Error: Could not build output file 'net.net.xml' (Permission denied).
Quitting (on error).

How do I point the output file location to a user location so that it doesn’t 
need elevated permissions?
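
(Maybe pointing the output at a writable folder with -o / --output-file would 
do it, e.g.:

netconvert --railway.topology.all-bidi true -s C:\pathtofile\test_osm_in.net.xml.gz -o C:\Users\me\net.net.xml

with the output path hypothetical, but I am not sure if that is the 
recommended way.)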

Thanks,

--Hector

From: Mirko Barthauer 
Sent: Thursday, January 18, 2024 3:20 AM
To: Sumo project User discussions 
Cc: Hector A Martinez 
Subject: [EXT] AW: [sumo-user] Changing rail to be bidirectional using 
netconvert


Dear Hector,



maybe it is not explained clearly enough in the documentation. The option 
--railway.topology.all-bidi.input-file can be used to restrict the set of rail 
edges you want to make bidirectional. This means the required file is a 
selection file you can generate with netedit (select edges and then choose 
"Save" from the select frame/mode).



if you want to make all rail edges bidirectional, just call



netconvert --railway.topology.all-bidi true -s 
C:\pathtofile\test_osm_in.net.xml.gz

If you want only a subset of edges to be bidirectional, call



netconvert --railway.topology.all-bidi.input-file selection.txt  -s 
C:\pathtofile\test_osm_in.net.xml.gz



Best regards

Mirko







-Original-Nachricht-

Betreff: [sumo-user] Changing rail to be bidirectional using netconvert

Datum: 2024-01-17T20:50:48+0100

Von: "Hector A Martinez via sumo-user" 
mailto:sumo-user@eclipse.org>>

An: "sumo-user@eclipse.org<mailto:sumo-user@eclipse.org>" 
mailto:sumo-user@eclipse.org>>






Sumo team,

I am trying to convert all of my rail network to be bidirectional using 
netconvert without affecting my roadway network. This is for 
commodity/container movement, not people movement. I am using a file I 
generated using OSM wizard.

I used this script:
netconvert --railway.topology.all-bidi.input-file 
C:\pathtofile\test_osm_in.net.xml.gz

This is the error I get.

Error: No nodes loaded.
Quitting (on error).

I recognize that the network file is a .gz file, but both netedit and sumo open 
the network file as is, and it has everything, including the nodes. Netedit 
crashes on me all the time while I am making changes to the file, so I need to 
make these rail changes more quickly using netconvert. I welcome any advice that will 
point me in the right direction.  Thanks,

--Hector


From: Mirko Barthauer mailto:m.bartha...@t-online.de>>
Sent: Friday, January 12, 2024 3:50 AM
To: Hector A Martinez mailto:hmarti...@mitre.org>>
Subject: AW: [EXT] AW: Adding Containers using TraCI


Hi Hector,



you can try to process your network with netconvert using the PlainXML 
format<https://sumo.dlr.de/docs/Networks/PlainXML.html>:

  *   convert<https://sumo.dlr.de/docs/Networks/Export.html#plain> your network 
to PlainXML
  *   analyse your network with a script and write the missing edges in a new 
file in PlainXML format
  *   convert back to the normal SUMO format with netconvert by supplying the 
PlainXML files using the respective input options (-n,-e,-x,-i)
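
For instance, the round trip looks roughly like this (a sketch; the file names
are assumptions):

netconvert -s myNet.net.xml --plain-output-prefix plain
(add the missing rail edges to plain.edg.xml with your script)
netconvert -n plain.nod.xml -e plain.edg.xml -x plain.con.xml -i plain.tll.xml -o myNet_fixed.net.xml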

Please write to the mailing list next time, so that everybody can answer the 
question (or at least learn from it).



Best regards

Mirko







___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user




Re: Mimic ALIAS in Postgresql?

2024-01-16 Thread hector vass
On Tue, 16 Jan 2024, 17:21 Ron Johnson,  wrote:

> Some RDBMSs have CREATE ALIAS, which allows you to refer to a table by a
> different name (while also referring to it by the original name).
>
> We have an application running on DB2/UDB which (for reasons wholly
> unknown to me, and probably also to the current developer) extensively uses
> this with two schemas: MTUSER and MTQRY.  For example, sometimes refer to
> MTUSER.sometable and other times refer to it as MTQRY.sometable.
>
> My goal is to present a way to migrate from UDB to PG with as few
> application changes as possible.  Thus, the need to mimic aliases.
>
> Maybe updatable views?
> CREATE VIEW mtqry.sometable AS SELECT * FROM mtuser.sometable;
>


I think views will work.  An alternative might be to interpose a proxy that
rewrites the SQL.  https://www.galliumdata.com/ gives you an idea of what this
might look like, although you could do a lite version yourself.
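
Since simple single-table views are auto-updatable in PostgreSQL 9.3+, one
pass-through view per table may be all that is needed; a sketch that generates
them, assuming the two schema names from your post:

DO $$
DECLARE
  t text;
BEGIN
  FOR t IN SELECT tablename FROM pg_tables WHERE schemaname = 'mtuser' LOOP
    -- one updatable alias view mtqry.<table> over mtuser.<table>
    EXECUTE format('CREATE VIEW mtqry.%I AS SELECT * FROM mtuser.%I', t, t);
  END LOOP;
END $$;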





Re: Seeking a Terminal Emulator on Debian for "Passthrough" Printing

2024-01-13 Thread Richard Hector

On 14/01/24 03:59, Greg Wooledge wrote:

I have dealt with terminals with passthrough printers before, but it
was three decades ago, and I've certainly never heard of a printer
communicating *back* to the host over this channel


I've also set up passthrough printers on terminals - which were hanging 
off muxes ... it's a serial connection, so bidirectional communication 
should be fine, and more recent printers would make use of that.


And in fact, when we ran out of mux ports, we even hung an extra 
terminal off the passthrough port, so bidirectional worked :-)


These were physical serial terminals, of course - I don't remember 
having to get a terminal emulator to do this. It also wasn't on Linux - 
some were on SCO, and the others might have been on some kind of 
mainframe - a government department. We weren't involved in that side of it.


Richard



Re: find question

2024-01-13 Thread Richard Hector

On 30/12/23 01:27, Greg Wooledge wrote:

On Fri, Dec 29, 2023 at 10:56:52PM +1300, Richard Hector wrote:

find $dir -mtime +7 -delete


"$dir" should be quoted.


Got it, thanks.


Will that fail to delete higher directories, because the deletion of files
updated the mtime?

Or does it get all the mtimes first, and use those?


It doesn't delete directories recursively.

unicorn:~$ mkdir -p /tmp/foo/bar
unicorn:~$ touch /tmp/foo/bar/file
unicorn:~$ find /tmp/foo -name bar -delete
find: cannot delete ‘/tmp/foo/bar’: Directory not empty


Understood.


But I suppose you're asking "What if it deletes both the file and the
directory, because they both qualify?"

In that case, you should use the -depth option, so that it deletes
the deepest items first.

unicorn:~$ find /tmp/foo -depth -delete
unicorn:~$ ls /tmp/foo
ls: cannot access '/tmp/foo': No such file or directory

Without -depth, it would try to delete the directory first, and that
would fail because the directory's not empty.

-depth must appear AFTER the pathnames, but BEFORE any other arguments
such as -mtime or -name.


Except that from the man page, -delete implies -depth. Maybe that's a 
GNUism; I don't know.



And how precise are those times? If I'm running a cron job that deletes
7-day-old directories then creates a new one less than a second later, will
that reliably get the stuff that's just turned 7 days old?


The POSIX documentation describes it pretty well:

-mtime n  The primary shall evaluate as true if the  file  modification
  time  subtracted  from  the  initialization  time, divided by
  86400 (with any remainder discarded), is n.

To qualify for -mtime +7, a file's age as calculated above must be at
least 8 days.  (+7 means more than 7.  It does not mean 7 or more.)
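
A quick way to convince yourself, in a scratch directory (a sketch with GNU
touch; 'young' is exactly 7 days old, so +7 skips it):

unicorn:~$ touch -d '7 days ago' young; touch -d '8 days ago' old
unicorn:~$ find . -maxdepth 1 -mtime +7
./old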


So 7 days and one second doesn't count as "more than 7 days"? It 
truncates the value to integer days before comparing?


Ah, yes, I see that now under -atime. Confusing. Thanks for pushing me 
to investigate :-)



It's not uncommon for the POSIX documentation of a command to be superior
to the GNU documentation of that same command, especially a GNU man page.
GNU info pages are often better, but GNU man pages tend to be lacking.


Understood, thanks. Though it might be less correct where GNUisms exist.

That leaves the question: When using -delete (and -depth), does the 
deletion of files within a directory update the mtime of that directory, 
thereby rendering the directory ineligible for deletion when it would 
have been before? Or is the mtime of that directory recorded before the 
contents are processed?


I just did a quick test (using -mmin -1 instead), and it did delete the 
whole lot.


So I'm still unclear why sometimes the top-level directory (or a 
directory within it) gets left behind. I've just noticed that one of the 
directories (not the one in $dir) contains a '@' symbol; I don't know if 
that affects it?


I'm tempted to avoid the problem by only using find for the top-level 
directory, and exec'ing "rm -r(f)" on it. I'm sure you'll tell me there 
are problems with that, too :-)
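
What I have in mind is roughly this (a sketch, assuming GNU find; -mindepth 1
keeps "$dir" itself off the list, and rm -rf doesn't care about directory
mtimes):

find "$dir" -mindepth 1 -maxdepth 1 -mtime +7 -exec rm -rf -- {} +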


Apologies for the slow response - sometimes the depression kicks in and 
I don't get back to a problem for a while :-(


Cheers,
Richard



[sumo-user] netedit crashes

2024-01-12 Thread Hector A Martinez via sumo-user
Dear sumo community,

I am having a problem with Netedit. It crashes when I try to edit a rail 
network I created using the OSMWizard tool.  It is for a small region of maybe 
15-20 miles square.

When I try to make changes to the rail edges, Netedit crashes.

Any advice on how to avoid this?

Thanks,

--Hector
___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user


Re: [VOTE] KIP-1004: Enforce tasks.max property in Kafka Connect

2024-01-02 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
+1 (non-binding)

Thanks Chris!

From: dev@kafka.apache.org At: 01/02/24 11:49:18 UTC-5:00 To: dev@kafka.apache.org
Subject: Re: [VOTE] KIP-1004: Enforce tasks.max property in Kafka Connect

Hi all,

Happy New Year! Wanted to give this a bump now that the holidays are over
for a lot of us. Looking forward to people's thoughts!

Cheers,

Chris

On Mon, Dec 4, 2023 at 10:36 AM Chris Egerton  wrote:

> Hi all,
>
> I'd like to call for a vote on KIP-1004, which adds enforcement for the
> tasks.max connector property in Kafka Connect.
>
> The KIP:
> 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-1004%3A+Enforce+tasks.max+property+in+Kafka+Connect
>
> The discussion thread:
> https://lists.apache.org/thread/scx75cjwm19jyt19wxky41q9smf5nx6d
>
> Cheers,
>
> Chris
>




Re: Wokeism is Doomed

2023-12-30 Thread hector llanquin
Y'all are such a bunch of babies, complaining about shit only your grandpa would 
bother with. This whole mailing list is a deception. Your quarrels are just as old 
as your biology, and it is just a shame for your tendons to keep typing these 
boomerandums. It's been years of scrolling through these lines and all I can 
find is propaganda-induced paranoia. I guess y'all are fed by the same spoon, right?


> 
> On Oct 18, 2023, at 11:37 PM, grarpamp  wrote:
> 
> In the future, Crypto at the Speed of Light, will pull out
> of and bankrupt these stupid companies in less than 24hrs.
> Some wannabe UN WEF'r Pol maggot says stupid shit,
> same thing... click, poof, gone... no more paycheck for you.
> 
> Turns out, nobody but the Leftist Screechers (hired tools)
> wants gay parades, trannies in schools, Antifa, BLM, etc.
> 
> What fun all this U-Turn, Hypocrisy, Blowback, and
> Failed ideologies. Keynesians and Pols are next!
> 
> 
> Victoria's Secret Ditches 'Wokeness'; Wants To Make Sexy Great Again
> 
> Victoria's Secret's shift towards "woke" trends in fashion in recent
> years, which included showcasing trans and plus-size models instead of
> their traditional sexy models, sparked a wave of discontent among
> customers, resulting in a significant revenue drop, crashing stock
> price, and Victoria's Secret Fashion Show ratings hitting rock bottom.
> 
> The American lingerie chain spent the last several years
> 'Bud-Light-ing' itself with transgender models like Valentina Sampaio
> and super woke soccer player Megan Rapinoe while abandoning its iconic
> sexy models.
> 
> Trans model Valentina Sampaio
> 
> Soccer player Megan Rapinoe
> 
> What changed in the last two decades?
> 
> One can only guess who might have pushed 'wokeness' on the brand...
> 
> As consumers ditched Victoria's Secret, investors panic-dumped shares
> to record low prices.
> 
> ... because sales plunged a stunning $1.8 billion since 2018 (revenue
> for its last full fiscal year fell 6.5%, with net income down nearly
> half). Consumers went elsewhere - maybe there was a conservative brand
> that took market share.
> 
> Earlier this year, CEO Amy Hauk was booted out of the company amid
> 'woke' controversies. Now CNN reports the brand has given up on
> out-woking others and wants to 'Make "Sexiness" Great Again.'
> 
>Victoria's Secret: The Tour '23, an attempt to revive the runway
> show format that launched last month fell somewhere in between the
> personification of male lust of the brand's aughts-era heyday and the
> inclusive utopia promoted by its many disruptors.
> 
>But in a presentation to investors in New York last week, it was
> clear which version of the brand Victoria's Secret executives see as
> its future.
> 
>"Sexiness can be inclusive," said Greg Unis, brand president of
> Victoria's Secret and Pink, the company's sub-brand targeting younger
> consumers. "Sexiness can celebrate the diverse experiences of our
> customers and that's what we're focused on."
> 
> Chief executive Martin Waters also disclosed that woke initiatives
> were not profitable for the company, stating, "Despite everyone's best
> endeavours, it's not been enough to carry the day."
> 
> This comes as the ESG bubble is imploding. And Larry Fink's BlackRock
> has ditched the term ESG.
> 
> BlackRock has faced intense criticism from Republican lawmakers who
> accuse the firm of violating its fiduciary duty by putting wokeness
> ahead of investment returns.
> 
> BlackRock, Vanguard, and State Street hold about 15 to 20% of the
> outstanding shares of S&P 500 companies and can have enormous direct
> power in corporate decision-making.
> 
> Some in corporate America are beginning to realize the challenges ESG
> poses for business sustainability.
> 
> However, if these companies want to go woke, and then go broke - so be
> it. There is a parallel economy that is exploding.


find question

2023-12-29 Thread Richard Hector

Hi all,

When using:

find $dir -mtime +7 -delete

Will that fail to delete higher directories, because the deletion of 
files updated the mtime?


Or does it get all the mtimes first, and use those?

And how precise are those times? If I'm running a cron job that deletes 
7-day-old directories then creates a new one less than a second later, 
will that reliably get the stuff that's just turned 7 days old? Or will 
there be a race condition depending on how quickly cron starts the 
script, which could be different each time?


Is there a better way to do this?

Cheers,
Richard



Re: lists

2023-12-20 Thread Richard Hector

On 21/12/23 11:55, Pocket wrote:


On 12/20/23 17:37, gene heskett wrote:

On 12/20/23 12:05, Pocket wrote:


On 12/20/23 11:51, gene heskett wrote:

On 12/20/23 08:30, Pocket wrote:
If I get one bounce email I am banned, I will never get to even 10% 
as 2% and I am gone.
That may be a side effect that your provider should address, or as 
suggested by others, change providers.



Actually I can not change as the ISP has exclusive rights to the high 
speed internet in the area I reside in.


No other providers are allowed.


You could use an email provider that is not your ISP.

Richard



Re: [sumo-user] [EXT] AW: [new user] - my train is not picking up the container at the container stop

2023-12-15 Thread Hector A Martinez via sumo-user
Thank you Mirko for your message!

I reviewed the documentation and I still don’t know why my Train does not pick 
up my container at the stop.

Would you be so kind as to look at this code and let me know what I am doing wrong 
please? I am using containerFlow and the train refuses to pick up the container.

[The routes-file XML (a <routes> definition referencing
http://sumo.dlr.de/xsd/routes_file.xsd, with the vType, train stop and
containerFlow elements in question) was stripped by the plain-text archive.]

Thanks,

--H

From: Mirko Barthauer 
Sent: Tuesday, December 12, 2023 11:30 AM
To: Sumo project User discussions 
Cc: Hector A Martinez 
Subject: [EXT] AW: [sumo-user] [new user] - my train is not picking up the 
container at the container stop



Dear Hector,



you can find the documentation about containers 
here<https://sumo.dlr.de/docs/Specification/Containers.html>. There are some 
examples using containerFlow definitions in our test suite to download, e.g. 
this 
one<https://sumo.dlr.de/extractTest.php?path=sumo/basic/containerFlow/transport>.
 It is important that you comply with the rules for 
transports<https://sumo.dlr.de/docs/Specification/Containers.html#transports> 
given in the documentation.
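
A minimal pattern that follows those rules looks roughly like this (an
untested sketch: the edge and stop ids are invented, the transport's lines
value must match the vehicle's line, and the train must stop at the
containerStop on the container's starting edge):

<routes>
    <vType id="cargoTrain" vClass="rail"/>
    <vehicle id="train0" type="cargoTrain" depart="0" line="cargo">
        <route edges="edgeA edgeB edgeC"/>
        <stop containerStop="cs_origin" duration="60"/>
        <stop containerStop="cs_dest" duration="60"/>
    </vehicle>
    <containerFlow id="c0" begin="0" number="5">
        <transport from="edgeA" containerStop="cs_dest" lines="cargo"/>
    </containerFlow>
</routes>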



Best regards

Mirko









-Original-Nachricht-

Betreff: [sumo-user] [new user] - my train is not picking up the container at 
the container stop

Datum: 2023-12-12T15:01:00+0100

Von: "Hector A Martinez via sumo-user" 
mailto:sumo-user@eclipse.org>>

An: "sumo-user@eclipse.org<mailto:sumo-user@eclipse.org>" 
mailto:sumo-user@eclipse.org>>






Dear SUMO experts,

I am having problems with my containerFlow code in my simulation and I can't 
figure out what is wrong from the online documentation. I am testing logistics 
scenario where the train picks up Containers from a Stop and drops it at the 
destination stop.

My rail is not picking up the container from my containerStop. Can you please 
point me to the document/blog that may help me fix this?

Thanks in advance,

Hector A. Martinez, P.E.
Transportation Researcher, Resilient Transportation and Logistics LTM
MITRE | National Security Engineering 
Center<https://www.mitre.org/centers/national-security-and-engineering-center/who-we-are>
813.207.5365




___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user


[sumo-user] Newbie question - activate edge visualization options

2023-12-14 Thread Hector A Martinez via sumo-user
SUMO NetEdit experts,

I am having problems finding this feature in NetEdit: “activate edge 
visualization option spread bidirectional railways/roads”

I am struggling to get my rail network to route from origin to destination 
using trips. After I added bidirectional tracks on my entire network, they all 
overlap, and I can't figure out how to make sure all the connections are 
correct without separating the tracks that run in opposite directions.  Any 
guidance will be greatly appreciated.  Thanks,

Hector A. Martinez, P.E.
Transportation Researcher, Resilient Transportation and Logistics LTM
MITRE | National Security Engineering 
Center<https://www.mitre.org/centers/national-security-and-engineering-center/who-we-are>
813.207.5365


___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user


Re: [sumo-user] [EXT] AW: [new user] - my train is not picking up the container at the container stop

2023-12-14 Thread Hector A Martinez via sumo-user
Thank you Mirko for your help.

I continue to have problems. I can't get the train to pick up my container at the 
container stop.

Here is my code:
[The routes-file XML (a <routes> definition referencing
http://sumo.dlr.de/xsd/routes_file.xsd) was stripped by the plain-text
archive.]

Any advice?

Thanks,

--H

From: Mirko Barthauer 
Sent: Tuesday, December 12, 2023 11:30 AM
To: Sumo project User discussions 
Cc: Hector A Martinez 
Subject: [EXT] AW: [sumo-user] [new user] - my train is not picking up the 
container at the container stop



Dear Hector,



you can find the documentation about containers 
here<https://sumo.dlr.de/docs/Specification/Containers.html>. There are some 
examples using containerFlow definitions in our test suite to download, e.g. 
this 
one<https://sumo.dlr.de/extractTest.php?path=sumo/basic/containerFlow/transport>.
 It is important that you comply with the rules for 
transports<https://sumo.dlr.de/docs/Specification/Containers.html#transports> 
given in the documentation.



Best regards

Mirko









-Original-Nachricht-

Betreff: [sumo-user] [new user] - my train is not picking up the container at 
the container stop

Datum: 2023-12-12T15:01:00+0100

Von: "Hector A Martinez via sumo-user" 
mailto:sumo-user@eclipse.org>>

An: "sumo-user@eclipse.org<mailto:sumo-user@eclipse.org>" 
mailto:sumo-user@eclipse.org>>






Dear SUMO experts,

I am having problems with my containerFlow code in my simulation and I can't 
figure out what is wrong from the online documentation. I am testing logistics 
scenario where the train picks up Containers from a Stop and drops it at the 
destination stop.

My rail is not picking up the container from my containerStop. Can you please 
point me to the document/blog that may help me fix this?

Thanks in advance,

Hector A. Martinez, P.E.
Transportation Researcher, Resilient Transportation and Logistics LTM
MITRE | National Security Engineering 
Center<https://www.mitre.org/centers/national-security-and-engineering-center/who-we-are>
813.207.5365




___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user




[nysbirds-l] Mountain Bluebird on Cedar Beach, Suffolk County

2023-12-08 Thread Hector Cordero
Hi all,

Francisco Rodriguez saw and photographed a Mountain Bluebird on Cedar
Beach (Suffolk County) by the overlook today. He posted as a drab
Eastern Bluebird on Twitter (X) and based on his photo I identified
the bird as a rufous-type Mountain Bluebird. He will add many photos
to eBird soon.

40.635118,-73.333543

Good luck,

Hector.

Hector Cordero
Biologist, Birding Guide and Conservation Photographer
www.corderonature.com

--

(copy & paste any URL below, then modify any text "_DOT_" to a period ".")

NYSbirds-L List Info:
NortheastBirding_DOT_com/NYSbirdsWELCOME_DOT_htm
NortheastBirding_DOT_com/NYSbirdsRULES_DOT_htm
NortheastBirding_DOT_com/NYSbirdsSubscribeConfigurationLeave_DOT_htm

ARCHIVES:
1) mail-archive_DOT_com/nysbirds-l@cornell_DOT_edu/maillist_DOT_html
2) surfbirds_DOT_com/birdingmail/Group/NYSBirds-L
3) birding_DOT_aba_DOT_org/maillist/NY01

Please submit your observations to eBird:
ebird_DOT_org/content/ebird/

--


Ricardo Ribalda Delgado: Advocate

2023-12-07 Thread Hector Oron (via nm.debian.org)
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

For nm.debian.org, at 2023-12-07:
I support Ricardo Ribalda Delgado's request to become a Debian Developer,
uploading.
I have known Ricardo for many years now; we met at a Linux conference more than 
13 years ago. Ricardo has been a Debian user for years as well, and it was about 
5 years ago when I first sponsored some packages (yavta and xc3prog) Ricardo was 
using for his daily work. Many updates later, Ricardo became a DM, maintaining 
many more packages, such as ugrep, virtme and virtme-ng; he also started to help 
me out releasing b4 and is now working on GNU GDB stuff. I consider Ricardo as 
having sufficient technical competence.

I have personally worked with Ricardo Ribalda Delgado 
(key 9EC3BB66E2FC129A6F90B39556A0D81F9F782DA9) for 5 years, and I know Ricardo 
Ribalda Delgado
can be trusted to be a full member of Debian, and have unsupervised, 
unrestricted upload rights, right now.
-BEGIN PGP SIGNATURE-

iQIzBAEBCgAdFiEE6Q8IiVReeMgqnedOryKDqnbirHsFAmVxqfEACgkQryKDqnbi
rHsgqhAAsWn7nJKCpZeGXhRy78Xwhn0AupyDzt9Jkn6MWvRoU3ajXZ2Bnwb/Expu
GvJgq5kgKVq24yAazFqRWhBDSp14SYicTD1zurNsxF7vu7/sCVFkuzHtbLmZ3jRE
yT+9BoOTaBvEZx3xUAhO8urqTh0B8bJEAVvGpncerMy2MPQoj2kMsXKTq3WzopBa
stso/qlIh23WQiLIVmuMW2IxcH0UzXgRmAiO7QyyiQJX3RZjOAzFxgnWs0QniB0N
AGsI33LsGfw8tWlD8FUBSO4LB7TUjXTZVPaXUEIM71SJfRvjubPw9zfUoa2APQ8X
i80P/FQBNw6uyNiNhKVRDHjxyWTPwlWpNRGaKCJ+oAhq4FVRm5EuWWLE0gSljegZ
mVDG+F2WygHUCgE8fCD/njmq3jyEv2KUgL87mf01gcYaBZX+AyUHdXbJpEhnTnrn
t4404vUA7GkIHAHOSa5QaW2464qFbhjnYVB0kE/3+jeWH3vPD0U7xiSQhj4omiPB
NEg0F6STNbj3MWpBTGcw/hjCb1yXUsej3QKLFQZ87CYAQjjTS1QmyeKumM0BMs6I
zokmCdHvw/16D/Jznkb+iabdkUk/w3Bh+hy38lN91LYDTRjH2uX/+w5Dj6gpUvIw
RX75W8Vc/9jGNLl3cpd3B2yT2vTF2QejhVbPyjeDzOcenRBwSqA=
=LhK4
-END PGP SIGNATURE-

Hector Oron (via nm.debian.org)

For details and to comment, visit https://nm.debian.org/process/1236/
-- 
https://nm.debian.org/process/1236/



Re: sid

2023-11-29 Thread Richard Hector

On 28/11/23 04:52, Michael Thompson wrote:

[lots of stuff]

Quick question - are you subscribed to the list? I notice you've replied 
a couple of times to your own emails, but not to any of the people 
who've offered suggestions. It's probably a good idea to subscribe, or 
at least check the archives:


https://lists.debian.org/debian-user/recent

Secondly, you say:

"I sent a big email a couple of days ago, which covered how you might 
work around that, but so far, it has not been fixed.

By my reckoning, it's been 6 days now."

Filing a bug may well be useful, but it should be done through the 
proper channels, not via a post on debian-user.


https://www.debian.org/Bugs/Reporting

Cheers,
Richard



[KPipeWire] [Bug 476187] OpenH264 codec support

2023-11-28 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=476187

--- Comment #4 from Hector Martin  ---
Considering it also only lists CBP for *decoding* and yet it can obviously
decode higher profiles (otherwise it would be useless for the web), I think
that feature list is clearly not exhaustive.

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: Get back the number of columns of a result-set prior to JSON aggregation

2023-11-28 Thread hector vass
I think you are just trying to get the number of columns in the underlying
table; there is no real cost to reading the metadata.


select count(id), (select count(attrelid) from pg_attribute where attrelid=
't1'::regclass and attnum>0) , json_agg(t) from t1 t;

select count(id), (select count(attrelid) from pg_attribute where attrelid=
't2'::regclass and attnum>0) , json_agg(t) from t2 t;
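
If the tables have ever had columns dropped you may want to skip those too; a
sketch:

select count(id),
       (select count(*) from pg_attribute
        where attrelid = 't1'::regclass and attnum > 0 and not attisdropped),
       json_agg(t)
from t1 t;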



Regards
Hector Vass
07773 352559


On Tue, Nov 28, 2023 at 12:12 PM Dominique Devienne 
wrote:

> Hi. I've got a nice little POC using PostgreSQL to implement a REST API
> server.
> This uses json_agg(t) to generate the JSON of tables (or subqueries in
> general),
> which means I always get back a single row (and column, before I added the
> count(t.*)).
>
> But I'd like to get statistics on the number of rows aggregated (easy,
> count(*)),
> but also the number of columns of those rows! And I'm stuck for the
> latter...
>
> Is there a (hopefully efficient) way to get back the cardinality of a
> select-clause basically?
> Obviously programmatically I can get the row and column count from the
> result-set,
> but I see the result of json_agg() myself, while I want the value prior to
> json_agg().
>
> Is there a way to achieve this?
>
> Thanks, --DD
>
> PS: In the example below, would return 1 for the 1st query, and 2 for the
> 2nd.
>
> ```
> migrated=> create table t1 (id integer);
> CREATE TABLE
> migrated=> insert into t1 values (1), (2);
> INSERT 0 2
> migrated=> create table t2 (id integer, name text);
> CREATE TABLE
> migrated=> insert into t2 values (1, 'one'), (2, 'two');
> INSERT 0 2
> migrated=> select count(t.*), json_agg(t) from t1 t;
>  count |  json_agg
> ---+-
>  2 | [{"id":1}, +
>|  {"id":2}]
> (1 row)
>
>
> migrated=> select count(t.*), json_agg(t) from t2 t;
>  count | json_agg
> ---+--
>  2 | [{"id":1,"name":"one"}, +
>|  {"id":2,"name":"two"}]
> (1 row)
> ```
>


Re: How to eliminate extra "NOT EXISTS"-query here?

2023-11-28 Thread hector vass
Not equivalent to the use of NOT ARRAY, and it is entirely possible I have
misunderstood the requirement... do you have some more test cases that the
non-array solution does not work for?
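
For reference, the containment semantics are easy to probe directly (a quick
sketch):

select array['x','y'] <@ array['x','y','z'];  -- true: every left element appears on the right
select array['x','q'] <@ array['x','y','z'];  -- false: 'q' has no match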

Regards
Hector Vass
07773 352559


On Mon, Nov 27, 2023 at 9:29 AM Dominique Devienne 
wrote:

> On Sat, Nov 25, 2023 at 5:53 PM hector vass  wrote:
>
>> Not sure you need to use an array; why not simple table joins? A table with
>> your criteria x y z t joined to stuff gives you the candidates that do match,
>> then a left join with coalesce adds the 'd'.
>>
>> select
>>   --a.id, b.test_id,
>>   coalesce(a.id, b.test_id) as finalresult
>> from test a
>> left join (
>>   select test_id
>>   from stuff a
>>   inner join (values ('x'),('y'),('z'),('t')) b (v) using (v)
>>   group by 1
>> ) b on (a.id = b.test_id);
>>
>
> Hi Hector. Hopefully this is not a stupid question...
>
> How is that equivalent from the `NOT ARRAY ... <@ ...` though?
> The inner-join-distinct above will return test_id's on any match, but you
> can't know if all array values are matches. Which is different from
>
> > Is the first array contained by the second
>
> from the <@ operator, no?
> I'm unfamiliar with these operators, so am I missing something?
> Just trying to understand the logic here. Thanks, --DD
>


Re: [PATCH] usb: xhci: Replace terrible formatting with different terrible formatting

2023-11-26 Thread Hector Martin
On 2023/11/23 8:50, Marek Vasut wrote:
> Replace one type of terrible code formatting with a different
> type of terrible code formatting. No functional change.
> 
> Signed-off-by: Marek Vasut 
> ---
> Cc: Bin Meng 
> Cc: Hector Martin 
> ---
>  drivers/usb/host/xhci-ring.c | 18 +++---
>  1 file changed, 7 insertions(+), 11 deletions(-)
> 

Lol.

Reviewed-by: Hector Martin 

> diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
> index dabe6cf86af..be3e35102d6 100644
> --- a/drivers/usb/host/xhci-ring.c
> +++ b/drivers/usb/host/xhci-ring.c
> @@ -543,9 +543,8 @@ static void reset_ep(struct usb_device *udev, int 
> ep_index)
>   if (!event)
>   return;
>  
> - BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
> - != udev->slot_id || GET_COMP_CODE(le32_to_cpu(
> - event->event_cmd.status)) != COMP_SUCCESS);
> + BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags)) != 
> udev->slot_id ||
> +GET_COMP_CODE(le32_to_cpu(event->event_cmd.status)) != 
> COMP_SUCCESS);
>   xhci_acknowledge_event(ctrl);
>  }
>  
> @@ -578,8 +577,7 @@ static void abort_td(struct usb_device *udev, int 
> ep_index)
>   field = le32_to_cpu(event->trans_event.flags);
>   BUG_ON(TRB_TO_SLOT_ID(field) != udev->slot_id);
>   BUG_ON(TRB_TO_EP_INDEX(field) != ep_index);
> - BUG_ON(GET_COMP_CODE(le32_to_cpu(event->trans_event.transfer_len
> - != COMP_STOP)));
> + 
> BUG_ON(GET_COMP_CODE(le32_to_cpu(event->trans_event.transfer_len != 
> COMP_STOP)));
>   xhci_acknowledge_event(ctrl);
>  
>   event = xhci_wait_for_event(ctrl, TRB_COMPLETION);
> @@ -593,9 +591,8 @@ static void abort_td(struct usb_device *udev, int 
> ep_index)
>  
>   comp = GET_COMP_CODE(le32_to_cpu(event->event_cmd.status));
>   BUG_ON(type != TRB_COMPLETION ||
> - TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
> - != udev->slot_id || (comp != COMP_SUCCESS && comp
> - != COMP_CTX_STATE));
> + TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags)) != 
> udev->slot_id ||
> + (comp != COMP_SUCCESS && comp != COMP_CTX_STATE));
>   xhci_acknowledge_event(ctrl);
>  
>   addr = xhci_trb_virt_to_dma(ring->enq_seg,
> @@ -605,9 +602,8 @@ static void abort_td(struct usb_device *udev, int 
> ep_index)
>   if (!event)
>   return;
>  
> - BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags))
> - != udev->slot_id || GET_COMP_CODE(le32_to_cpu(
> - event->event_cmd.status)) != COMP_SUCCESS);
> + BUG_ON(TRB_TO_SLOT_ID(le32_to_cpu(event->event_cmd.flags)) != 
> udev->slot_id ||
> +GET_COMP_CODE(le32_to_cpu(event->event_cmd.status)) != 
> COMP_SUCCESS);
>   xhci_acknowledge_event(ctrl);
>  }
>  

- Hector


Re: How to eliminate extra "NOT EXISTS"-query here?

2023-11-25 Thread hector vass
Not sure you need to use an array; why not simple table joins? A table with
your criteria x y z t joined to stuff gives you the candidates that do match,
then a left join with coalesce adds the 'd'.

select
  --a.id, b.test_id,
  coalesce(a.id, b.test_id) as finalresult
from test a
left join (
  select test_id
  from stuff a
  inner join (values ('x'),('y'),('z'),('t')) b (v) using (v)
  group by 1
) b on (a.id = b.test_id);


Regards
Hector Vass



On Sat, Nov 25, 2023 at 4:08 PM Tom Lane  wrote:

> Andreas Joseph Krogh  writes:
> > -- This works, but I'd rather not do the extra EXISTS
> > select * from test t
> > WHERE (NOT ARRAY ['x', 'y', 'z', 't']::varchar[] <@
> > (select array_agg(s.v) from stuffs s WHERE s.test_id = t.id)
> > OR NOT EXISTS (
> > select * from stuff s where s.test_id = t.id
> > )
> >  )
> > ;
>
> > So, I want to return all entries in test not having any of ARRAY ['x',
> 'y',
> > 'z', 't'] referenced in the table stuff, and I'd like to have test.id="d"
>
> > returned as well, but in order to do that I need to execute the “or not
> > exists”-query. Is it possible to avoid that?
>
> Probably not directly, but perhaps you could improve the performance of
> this query by converting the sub-selects into a left join:
>
> select * from test t
>   left join
> (select s.test_id, array_agg(s.v) as arr from stuffs group by
> s.test_id) ss
>   on ss.test_id = t.id
> WHERE (NOT ARRAY ['x', 'y', 'z', 't']::varchar[] <@ ss.arr)
>   OR ss.test_id IS NULL;
>
> Another possibility is
>
> ...
> WHERE (ARRAY ['x', 'y', 'z', 't']::varchar[] <@ ss.arr) IS NOT TRUE
>
> but I don't think that's more readable really, and it will save little.
>
> In either case, this would result in computing array_agg once for
> each group of test_id values in "stuffs", while your original computes
> a similar aggregate for each row in "test".  So whether this is better
> depends on the relative sizes of the tables, although my proposal
> avoids random access to "stuffs" so it will have some advantage.
>
> regards, tom lane
>
>
>


Re: [PATCH v2 06/17] iommu: Add iommu_fwspec_alloc/dealloc()

2023-11-23 Thread Hector Martin
On 2023/11/22 1:00, Jason Gunthorpe wrote:
> On Tue, Nov 21, 2023 at 03:47:48PM +0900, Hector Martin wrote:
>>> Which is sensitive only to !NULL fwspec, and if EPROBE_DEFER is
>>> returned fwspec will be freed and dev->iommu->fwspec will be NULL
>>> here.
>>>
>>> In the NULL case it does a 'bus probe' with a NULL fwspec and all the
>>> fwspec drivers return immediately from their probe functions.
>>>
>>> Did I miss something?
>>
>> apple_dart is not a fwspec driver and doesn't do that :-)
> 
> It implements of_xlate that makes it a driver using the fwspec probe
> path.
> 
> The issue is in apple-dart. Its logic for avoiding bus probe vs
> fwspec probe is not correct.
> 
> It does:
> 
> static int apple_dart_of_xlate(struct device *dev, struct of_phandle_args 
> *args)
> {
>  [..]
>   dev_iommu_priv_set(dev, cfg);
> 
> 
> Then:
> 
> static struct iommu_device *apple_dart_probe_device(struct device *dev)
> {
>   struct apple_dart_master_cfg *cfg = dev_iommu_priv_get(dev);
>   struct apple_dart_stream_map *stream_map;
>   int i;
> 
>   if (!cfg)
>   return ERR_PTR(-ENODEV);
> 
> Which leaks the cfg memory on rare error cases and wrongly allows the
> driver to probe without a fwspec, which I think is what you are
> hitting.
> 
> It should be
> 
>if (!dev_iommu_fwspec_get(dev) || !cfg)
>   return ERR_PTR(-ENODEV);
> 
> To ensure the driver never probes on the bus path.
> 
> Clearing the dev->iommu in the core code has the side effect of
> clearing (and leaking) the cfg which would hide this issue.

Aha! Yes, that makes it work with only the first change. I'll throw the
apple-dart fix into our tree (and submit it once I get to clearing out
the branch; the affected consumer driver isn't upstream yet so this
isn't particularly urgent).

- Hector

___
linux-snps-arc mailing list
linux-snps-arc@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-snps-arc



Re: [PATCH v2 06/17] iommu: Add iommu_fwspec_alloc/dealloc()

2023-11-20 Thread Hector Martin



On 2023/11/19 23:13, Jason Gunthorpe wrote:
> On Sun, Nov 19, 2023 at 06:19:43PM +0900, Hector Martin wrote:
>>>> +static int iommu_fwspec_assign_iommu(struct iommu_fwspec *fwspec,
>>>> +   struct device *dev,
>>>> +   struct fwnode_handle *iommu_fwnode)
>>>> +{
>>>> +  const struct iommu_ops *ops;
>>>> +
>>>> +  if (fwspec->iommu_fwnode) {
>>>> +  /*
>>>> +   * fwspec->iommu_fwnode is the first iommu's fwnode. In the rare
>>>> +   * case of multiple iommus for one device they must point to the
>>>> +   * same driver, checked via same ops.
>>>> +   */
>>>> +  ops = iommu_ops_from_fwnode(iommu_fwnode);
>>>
>>> This carries over a related bug from the original code: If a device has
>>> two IOMMUs and the first one probes but the second one defers, ops will
>>> be NULL here and the check will fail with EINVAL.
>>>
>>> Adding a check for that case here fixes it:
>>>
>>> if (!ops)
>>> return driver_deferred_probe_check_state(dev);
> 
> Yes!
> 
>>> With that, for the whole series:
>>>
>>> Tested-by: Hector Martin 
>>>
>>> I can't specifically test for the probe races the series intends to fix
>>> though, since that bug we only hit extremely rarely. I'm just testing
>>> that nothing breaks.
>>
>> Actually no, this fix is not sufficient. If the first IOMMU is ready
>> then the xlate path allocates dev->iommu, which then
>> __iommu_probe_device takes as a sign that all IOMMUs are ready and does
>> the device init.
> 
> It doesn't.. The code there is:
> 
>   if (!fwspec && dev->iommu)
>   fwspec = dev->iommu->fwspec;
>   if (fwspec)
>   ops = fwspec->ops;
>   else
>   ops = dev->bus->iommu_ops;
>   if (!ops) {
>   ret = -ENODEV;
>   goto out_unlock;
>   }
> 
> Which is sensitive only to !NULL fwspec, and if EPROBE_DEFER is
> returned fwspec will be freed and dev->iommu->fwspec will be NULL
> here.
> 
> In the NULL case it does a 'bus probe' with a NULL fwspec and all the
> fwspec drivers return immediately from their probe functions.
> 
> Did I miss something?

apple_dart is not a fwspec driver and doesn't do that :-)

> 
>> Then when the xlate comes along again after suceeding
>> with the second IOMMU, __iommu_probe_device sees the device is already
>> in a group and never initializes the second IOMMU, leaving the device
>> with only one IOMMU.
> 
> This should be fixed by the first hunk to check every iommu and fail?
> 
> BTW, do you have a systems with same device attached to multiple
> iommus?

Yes, Apple ARM64 machines all have multiple ganged IOMMUs for certain
devices (USB and ISP). We also attach all display IOMMUs to the global
virtual display-subsystem device to handle framebuffer mappings, instead
of trying to dynamically map them to a bunch of individual display
controllers (which is a lot more painful). That last one is what
reliably reproduces this problem, display breaks without both previous
patches ever since we started supporting more than one display output.
The first one is not enough.

> I've noticed another bug here, many drivers don't actually support
> differing iommu instances and nothing seems to check it..

apple-dart does (as long as all the IOMMUs are using that driver).

> 
> Thanks,
> Jason
> 
> 

- Hector

___
linux-snps-arc mailing list
linux-snps-arc@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-snps-arc



Re: [PATCH 0/8] USB fixes: xHCI error handling

2023-11-20 Thread Hector Martin



On 2023/11/20 21:15, Marek Vasut wrote:
> On 11/20/23 11:45, Hector Martin wrote:
>>
>>
>> On 2023/11/20 11:09, Marek Vasut wrote:
>>> On 11/20/23 00:17, Shantur Rathore wrote:
>>>> On Sun, Nov 19, 2023 at 8:08 PM Marek Vasut  wrote:
>>>>>
>>>>> On 10/27/23 01:16, Hector Martin wrote:
>>>>>> This series is the first of a few bundles of USB fixes we have been
>>>>>> carrying downstream on the Asahi U-Boot branch for a few months.
>>>>>>
>>>>>> Most importantly, this related set of patches makes xHCI error/stall
>>>>>> recovery more robust (or work at all in some cases). There are also a
>>>>>> couple patches fixing other xHCI bugs and adding better debug logs.
>>>>>>
>>>>>> I believe this should fix this Fedora bug too:
>>>>>>
>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2244305
>>>>>
>>>>> Was there ever a V2 of these patches I might've missed ?
>>>>
>>>> Is it this one?
>>>> https://patchwork.ozlabs.org/project/uboot/list/?series=379807
>>>
>>> I think so, thanks.
>>>
>>> And uh, my question therefore it, is there a V3 which addresses the 3/8
>>> and 8/8 comment ?
>>
>> Not yet, no. Sorry, I probably won't have time to work on this in a
>> while, currently busy with other stuff.
> 
> I can probably fix the patches up myself if that is fine with you, I'd 
> really like to get these fixes into the release soon. Would that be OK 
> with you ?
> 

Of course, I would appreciate that :)

- Hector


[systemsettings] [Bug 477283] System settings marks all "*.utf8" locales as invalid/unsupported, and glibc does not expose "*.UTF-8" variants

2023-11-20 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=477283

Hector Martin  changed:

   What|Removed |Added

 Status|NEEDSINFO   |REPORTED
 Resolution|WAITINGFORINFO  |---
 CC||mar...@marcan.st

--- Comment #2 from Hector Martin  ---
Yes, UTF-8 works (not utf8, not utf-8, not UTF8).

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: [DISCUSS] KIP-1004: Enforce tasks.max property in Kafka Connect

2023-11-20 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Thanks for the KIP Chris, adding this check makes total sense.

I do have one question. The second paragraph in the Public Interfaces section 
states:

"If the connector generated excessive tasks after being reconfigured, then any 
existing tasks for the connector will be allowed to continue running, unless 
that existing set of tasks also exceeds the tasks.max property."

Would not failing the connector land us in the second scenario of 'Rejected 
Alternatives'?

From: dev@kafka.apache.org At: 11/11/23 00:27:44 UTC-5:00 To: dev@kafka.apache.org
Subject: [DISCUSS] KIP-1004: Enforce tasks.max property in Kafka Connect

Hi all,

I'd like to open up KIP-1004 for discussion:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-1004%3A+Enforce+tasks.max+property+in+Kafka+Connect

As a brief summary: this KIP proposes that the Kafka Connect runtime start
failing connectors that generate a greater number of tasks than the
tasks.max property, with an optional emergency override that can be used to
continue running these (probably-buggy) connectors if absolutely necessary.

I'll be taking time off most of the next three weeks, so response latency
may be a bit higher than usual, but I wanted to kick off the discussion in
case we can land this in time for the upcoming 3.7.0 release.

Cheers,

Chris




Re: [PATCH 0/8] USB fixes: xHCI error handling

2023-11-20 Thread Hector Martin



On 2023/11/20 11:09, Marek Vasut wrote:
> On 11/20/23 00:17, Shantur Rathore wrote:
>> On Sun, Nov 19, 2023 at 8:08 PM Marek Vasut  wrote:
>>>
>>> On 10/27/23 01:16, Hector Martin wrote:
>>>> This series is the first of a few bundles of USB fixes we have been
>>>> carrying downstream on the Asahi U-Boot branch for a few months.
>>>>
>>>> Most importantly, this related set of patches makes xHCI error/stall
>>>> recovery more robust (or work at all in some cases). There are also a
>>>> couple patches fixing other xHCI bugs and adding better debug logs.
>>>>
>>>> I believe this should fix this Fedora bug too:
>>>>
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2244305
>>>
>>> Was there ever a V2 of these patches I might've missed ?
>>
>> Is it this one?
>> https://patchwork.ozlabs.org/project/uboot/list/?series=379807
> 
> I think so, thanks.
> 
> And uh, my question therefore it, is there a V3 which addresses the 3/8 
> and 8/8 comment ?

Not yet, no. Sorry, I probably won't have time to work on this in a
while, currently busy with other stuff.


- Hector


[Openvpn-users] 2FA question

2023-11-19 Thread Richard Hector

Hi all,

I've been experimenting with 2FA - with IPFire as the server, but I 
don't think that's relevant to my question.


My understanding is that OpenVPN renegotiates keys every few minutes. It 
appears that when this happens, I also need to enter a new token. If 
that's true, it makes using 2FA rather impractical, or at least irritating.


Have I understood this correctly? Or am I missing something?
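
Or is this what the server-side auth-gen-token option is meant to solve? A
sketch, assuming OpenVPN 2.4+ on the server:

reneg-sec 3600        # renegotiation interval (the default is hourly, not minutes)
auth-gen-token 43200  # issue a 12h session token that is reused on renegotiation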

Thanks,
Richard


___
Openvpn-users mailing list
Openvpn-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/openvpn-users


Re: [PATCH v2 06/17] iommu: Add iommu_fwspec_alloc/dealloc()

2023-11-19 Thread Hector Martin



On 2023/11/19 17:10, Hector Martin wrote:
> On 2023/11/15 23:05, Jason Gunthorpe wrote:
>> Allow fwspec to exist independently from the dev->iommu by providing
>> functions to allow allocating and freeing the raw struct iommu_fwspec.
>>
>> Reflow the existing paths to call the new alloc/dealloc functions.
>>
>> Reviewed-by: Jerry Snitselaar 
>> Signed-off-by: Jason Gunthorpe 
>> ---
>>  drivers/iommu/iommu.c | 82 ---
>>  include/linux/iommu.h | 11 +-
>>  2 files changed, 72 insertions(+), 21 deletions(-)
>>
>> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
>> index 18a82a20934d53..86bbb9e75c7e03 100644
>> --- a/drivers/iommu/iommu.c
>> +++ b/drivers/iommu/iommu.c
>> @@ -361,10 +361,8 @@ static void dev_iommu_free(struct device *dev)
>>  struct dev_iommu *param = dev->iommu;
>>  
>>  dev->iommu = NULL;
>> -if (param->fwspec) {
>> -fwnode_handle_put(param->fwspec->iommu_fwnode);
>> -kfree(param->fwspec);
>> -}
>> +if (param->fwspec)
>> +iommu_fwspec_dealloc(param->fwspec);
>>  kfree(param);
>>  }
>>  
>> @@ -2920,10 +2918,61 @@ const struct iommu_ops *iommu_ops_from_fwnode(struct 
>> fwnode_handle *fwnode)
>>  return ops;
>>  }
>>  
>> +static int iommu_fwspec_assign_iommu(struct iommu_fwspec *fwspec,
>> + struct device *dev,
>> + struct fwnode_handle *iommu_fwnode)
>> +{
>> +const struct iommu_ops *ops;
>> +
>> +if (fwspec->iommu_fwnode) {
>> +/*
>> + * fwspec->iommu_fwnode is the first iommu's fwnode. In the rare
>> + * case of multiple iommus for one device they must point to the
>> + * same driver, checked via same ops.
>> + */
>> +ops = iommu_ops_from_fwnode(iommu_fwnode);
> 
> This carries over a related bug from the original code: If a device has
> two IOMMUs and the first one probes but the second one defers, ops will
> be NULL here and the check will fail with EINVAL.
> 
> Adding a check for that case here fixes it:
> 
>   if (!ops)
>   return driver_deferred_probe_check_state(dev);
> 
> With that, for the whole series:
> 
> Tested-by: Hector Martin 
> 
> I can't specifically test for the probe races the series intends to fix
> though, since that bug we only hit extremely rarely. I'm just testing
> that nothing breaks.

Actually no, this fix is not sufficient. If the first IOMMU is ready
then the xlate path allocates dev->iommu, which then
__iommu_probe_device takes as a sign that all IOMMUs are ready and does
the device init. Then when the xlate comes along again after suceeding
with the second IOMMU, __iommu_probe_device sees the device is already
in a group and never initializes the second IOMMU, leaving the device
with only one IOMMU.

This patch fixes it, but honestly, at this point I have no idea how to
"properly" fix this. There is *way* too much subtlety in this whole
codepath.

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 2477dec29740..2e4baf0572e7 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2935,6 +2935,12 @@ int iommu_fwspec_of_xlate(struct iommu_fwspec *fwspec, struct device *dev,
        int ret;

        ret = iommu_fwspec_assign_iommu(fwspec, dev, iommu_fwnode);
+       if (ret == -EPROBE_DEFER) {
+               mutex_lock(&iommu_probe_device_lock);
+               if (dev->iommu)
+                       dev_iommu_free(dev);
+               mutex_unlock(&iommu_probe_device_lock);
+       }
        if (ret)
                return ret;

> 
>> +if (fwspec->ops != ops)
>> +return -EINVAL;
>> +return 0;
>> +}
>> +
>> +if (!fwspec->ops) {
>> +ops = iommu_ops_from_fwnode(iommu_fwnode);
>> +if (!ops)
>> +return driver_deferred_probe_check_state(dev);
>> +fwspec->ops = ops;
>> +}
>> +
>> +of_node_get(to_of_node(iommu_fwnode));
>> +fwspec->iommu_fwnode = iommu_fwnode;
>> +return 0;
>> +}
>> +
>> +struct iommu_fwspec *iommu_fwspec_alloc(void)
>> +{
>> +struct iommu_fwspec *fwspec;
>> +
>> +fwspec = kzalloc(sizeof(*fwspec), GFP_KERNEL);
>> +if (!fwspec)
>> +return ERR_PTR(-ENOMEM);
>> +return fwspec;
>> +}
>> +
>> +void iommu_fwspec_dealloc(struct iommu_fwspec *fws

Re: [PATCH v2 06/17] iommu: Add iommu_fwspec_alloc/dealloc()

2023-11-19 Thread Hector Martin



On 2023/11/19 17:10, Hector Martin wrote:
> On 2023/11/15 23:05, Jason Gunthorpe wrote:
>> Allow fwspec to exist independently from the dev->iommu by providing
>> functions to allow allocating and freeing the raw struct iommu_fwspec.
>>
>> Reflow the existing paths to call the new alloc/dealloc functions.
>>
>> Reviewed-by: Jerry Snitselaar 
>> Signed-off-by: Jason Gunthorpe 
>> ---
>>  drivers/iommu/iommu.c | 82 ---
>>  include/linux/iommu.h | 11 +-
>>  2 files changed, 72 insertions(+), 21 deletions(-)
>>
>> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
>> index 18a82a20934d53..86bbb9e75c7e03 100644
>> --- a/drivers/iommu/iommu.c
>> +++ b/drivers/iommu/iommu.c
>> @@ -361,10 +361,8 @@ static void dev_iommu_free(struct device *dev)
>>  struct dev_iommu *param = dev->iommu;
>>  
>>  dev->iommu = NULL;
>> -if (param->fwspec) {
>> -fwnode_handle_put(param->fwspec->iommu_fwnode);
>> -kfree(param->fwspec);
>> -}
>> +if (param->fwspec)
>> +iommu_fwspec_dealloc(param->fwspec);
>>  kfree(param);
>>  }
>>  
>> @@ -2920,10 +2918,61 @@ const struct iommu_ops *iommu_ops_from_fwnode(struct 
>> fwnode_handle *fwnode)
>>  return ops;
>>  }
>>  
>> +static int iommu_fwspec_assign_iommu(struct iommu_fwspec *fwspec,
>> + struct device *dev,
>> + struct fwnode_handle *iommu_fwnode)
>> +{
>> +const struct iommu_ops *ops;
>> +
>> +if (fwspec->iommu_fwnode) {
>> +/*
>> + * fwspec->iommu_fwnode is the first iommu's fwnode. In the rare
>> + * case of multiple iommus for one device they must point to the
>> + * same driver, checked via same ops.
>> + */
>> +ops = iommu_ops_from_fwnode(iommu_fwnode);
> 
> This carries over a related bug from the original code: If a device has
> two IOMMUs and the first one probes but the second one defers, ops will
> be NULL here and the check will fail with EINVAL.
> 
> Adding a check for that case here fixes it:
> 
>   if (!ops)
>   return driver_deferred_probe_check_state(dev);
> 
> With that, for the whole series:
> 
> Tested-by: Hector Martin 
> 
> I can't specifically test for the probe races the series intends to fix
> though, since that bug we only hit extremely rarely. I'm just testing
> that nothing breaks.

Actually no, this fix is not sufficient. If the first IOMMU is ready
then the xlate path allocates dev->iommu, which then
__iommu_probe_device takes as a sign that all IOMMUs are ready and does
the device init. Then when the xlate comes along again after suceeding
with the second IOMMU, __iommu_probe_device sees the device is already
in a group and never initializes the second IOMMU, leaving the device
with only one IOMMU.

This patch fixes it, but honestly, at this point I have no idea how to
"properly" fix this. There is *way* too much subtlety in this whole
codepath.

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 2477dec29740..2e4baf0572e7 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2935,6 +2935,12 @@ int iommu_fwspec_of_xlate(struct iommu_fwspec *fwspec, struct device *dev,
         int ret;
 
         ret = iommu_fwspec_assign_iommu(fwspec, dev, iommu_fwnode);
+        if (ret == -EPROBE_DEFER) {
+                mutex_lock(&iommu_probe_device_lock);
+                if (dev->iommu)
+                        dev_iommu_free(dev);
+                mutex_unlock(&iommu_probe_device_lock);
+        }
         if (ret)
                 return ret;
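
To spell out the sequence (a sketch using mainline function names; the
exact call chain in this series may differ):

    first probe attempt (only the first IOMMU ready):
      of_iommu_xlate(dev, iommu_A)   ok; dev->iommu gets allocated
      of_iommu_xlate(dev, iommu_B)   returns -EPROBE_DEFER, but
                                     dev->iommu stays allocated
      __iommu_probe_device(dev)      sees dev->iommu, treats the device
                                     as ready, inits it with A only

    deferred retry (second IOMMU now ready):
      of_iommu_xlate(dev, ...)       both calls succeed this time
      __iommu_probe_device(dev)      device is already in a group, so
                                     the second IOMMU is never attached

Freeing the half-built dev->iommu on -EPROBE_DEFER, as in the hunk
above, lets the retry start from a clean slate.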

> 
>> +if (fwspec->ops != ops)
>> +return -EINVAL;
>> +return 0;
>> +}
>> +
>> +if (!fwspec->ops) {
>> +ops = iommu_ops_from_fwnode(iommu_fwnode);
>> +if (!ops)
>> +return driver_deferred_probe_check_state(dev);
>> +fwspec->ops = ops;
>> +}
>> +
>> +of_node_get(to_of_node(iommu_fwnode));
>> +fwspec->iommu_fwnode = iommu_fwnode;
>> +return 0;
>> +}
>> +
>> +struct iommu_fwspec *iommu_fwspec_alloc(void)
>> +{
>> +struct iommu_fwspec *fwspec;
>> +
>> +fwspec = kzalloc(sizeof(*fwspec), GFP_KERNEL);
>> +if (!fwspec)
>> +return ERR_PTR(-ENOMEM);
>> +return fwspec;
>> +}
>> +
>> +void iommu_fwspec_dealloc(struct iommu_fwspec *fws

Re: [PATCH v2 06/17] iommu: Add iommu_fwspec_alloc/dealloc()

2023-11-19 Thread Hector Martin
On 2023/11/15 23:05, Jason Gunthorpe wrote:
> Allow fwspec to exist independently from the dev->iommu by providing
> functions to allow allocating and freeing the raw struct iommu_fwspec.
> 
> Reflow the existing paths to call the new alloc/dealloc functions.
> 
> Reviewed-by: Jerry Snitselaar 
> Signed-off-by: Jason Gunthorpe 
> ---
>  drivers/iommu/iommu.c | 82 ---
>  include/linux/iommu.h | 11 +-
>  2 files changed, 72 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 18a82a20934d53..86bbb9e75c7e03 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -361,10 +361,8 @@ static void dev_iommu_free(struct device *dev)
>   struct dev_iommu *param = dev->iommu;
>  
>   dev->iommu = NULL;
> - if (param->fwspec) {
> - fwnode_handle_put(param->fwspec->iommu_fwnode);
> - kfree(param->fwspec);
> - }
> + if (param->fwspec)
> + iommu_fwspec_dealloc(param->fwspec);
>   kfree(param);
>  }
>  
> @@ -2920,10 +2918,61 @@ const struct iommu_ops *iommu_ops_from_fwnode(struct 
> fwnode_handle *fwnode)
>   return ops;
>  }
>  
> +static int iommu_fwspec_assign_iommu(struct iommu_fwspec *fwspec,
> +  struct device *dev,
> +  struct fwnode_handle *iommu_fwnode)
> +{
> + const struct iommu_ops *ops;
> +
> + if (fwspec->iommu_fwnode) {
> + /*
> +  * fwspec->iommu_fwnode is the first iommu's fwnode. In the rare
> +  * case of multiple iommus for one device they must point to the
> +  * same driver, checked via same ops.
> +  */
> + ops = iommu_ops_from_fwnode(iommu_fwnode);

This carries over a related bug from the original code: If a device has
two IOMMUs and the first one probes but the second one defers, ops will
be NULL here and the check will fail with EINVAL.

Adding a check for that case here fixes it:

if (!ops)
return driver_deferred_probe_check_state(dev);

With that, for the whole series:

Tested-by: Hector Martin 

I can't specifically test for the probe races the series intends to fix,
though, since we only hit that bug extremely rarely. I'm just testing
that nothing breaks.

> + if (fwspec->ops != ops)
> + return -EINVAL;
> + return 0;
> + }
> +
> + if (!fwspec->ops) {
> + ops = iommu_ops_from_fwnode(iommu_fwnode);
> + if (!ops)
> + return driver_deferred_probe_check_state(dev);
> + fwspec->ops = ops;
> + }
> +
> + of_node_get(to_of_node(iommu_fwnode));
> + fwspec->iommu_fwnode = iommu_fwnode;
> + return 0;
> +}
> +
> +struct iommu_fwspec *iommu_fwspec_alloc(void)
> +{
> + struct iommu_fwspec *fwspec;
> +
> + fwspec = kzalloc(sizeof(*fwspec), GFP_KERNEL);
> + if (!fwspec)
> + return ERR_PTR(-ENOMEM);
> + return fwspec;
> +}
> +
> +void iommu_fwspec_dealloc(struct iommu_fwspec *fwspec)
> +{
> + if (!fwspec)
> + return;
> +
> + if (fwspec->iommu_fwnode)
> + fwnode_handle_put(fwspec->iommu_fwnode);
> + kfree(fwspec);
> +}
> +
>  int iommu_fwspec_init(struct device *dev, struct fwnode_handle *iommu_fwnode,
> const struct iommu_ops *ops)
>  {
>   struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> + int ret;
>  
>   if (fwspec)
>   return ops == fwspec->ops ? 0 : -EINVAL;
> @@ -2931,29 +2980,22 @@ int iommu_fwspec_init(struct device *dev, struct 
> fwnode_handle *iommu_fwnode,
>   if (!dev_iommu_get(dev))
>   return -ENOMEM;
>  
> - fwspec = kzalloc(sizeof(*fwspec), GFP_KERNEL);
> - if (!fwspec)
> - return -ENOMEM;
> + fwspec = iommu_fwspec_alloc();
> + if (IS_ERR(fwspec))
> + return PTR_ERR(fwspec);
>  
> - of_node_get(to_of_node(iommu_fwnode));
> - fwspec->iommu_fwnode = iommu_fwnode;
>   fwspec->ops = ops;
> + ret = iommu_fwspec_assign_iommu(fwspec, dev, iommu_fwnode);
> + if (ret) {
> + iommu_fwspec_dealloc(fwspec);
> + return ret;
> + }
> +
>   dev_iommu_fwspec_set(dev, fwspec);
>   return 0;
>  }
>  EXPORT_SYMBOL_GPL(iommu_fwspec_init);
>  
> -void iommu_fwspec_free(struct device *dev)
> -{
> - struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> -
> - if (fwspec) {
> -

Re: Default DNS lookup command?

2023-11-12 Thread Richard Hector

On 31/10/23 16:27, Max Nikulin wrote:

On 30/10/2023 14:03, Richard Hector wrote:

On 24/10/23 06:01, Max Nikulin wrote:

getent -s dns hosts zircon

Ah, thanks. But I don't feel too bad about not finding that ... 
'service' is not defined in that file, 'dns' doesn't occur, and 
searching for 'hosts' doesn't give anything useful either. I guess 
reading nsswitch.conf(5) is required.


Do you mean that "hosts" entry in your /etc/nsswitch.conf lacks "dns"? 
Even systemd nss plugins recommend to keep it as a fallback. If you get 
no results then your resolver or DNS server may not be configured to 
resolve single-label names. Try some full name


     getent -s dns ahosts debian.org
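
For illustration, the two pieces involved might look like this (the
"hosts" line below is just one plausible layout, not a recommendation
for any particular setup):

    # /etc/nsswitch.conf -- keep "dns" so lookups can fall back to DNS
    hosts: files resolve [!UNAVAIL=return] dns

    # query the dns service directly, bypassing the nsswitch ordering
    getent -s dns ahosts debian.org

    # compare with the default ordering
    getent ahosts debian.org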


Sorry for the confusion (and delay) - I think I was referring to the 
getent man page, rather than the config file.


Richard



Re: systemd service oddness with openvpn

2023-11-12 Thread Richard Hector

On 12/11/23 04:47, Kamil Jońca wrote:

Richard Hector  writes:


Hi all,

I have a machine that runs as an openvpn server. It works fine; the
VPN stays up.


Are you sure? Have your clients connected, and so on?


Yes. I can ssh to the machines at the other end.


However, after running for a while, I get these repeatedly in syslog:

Nov 07 12:17:24 ovpn2 openvpn[213741]: Options error: In [CMD-LINE]:1:
Error opening configuration file: opvn2.conf

Here you have something like a typo (opvn2c.conf - I would expect ovpn2.conf)


Bingo - I was confused by the extra c, but that's not what you were 
referring to.


The logrotate postrotate line has

systemctl restart openvpn-server@opvn2

which is the source of the misspelling.

So it's trying to restart the wrong service.
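
For the record, the corrected stanza would presumably be just:

    postrotate
        systemctl restart openvpn-server@ovpn2
    endscript

with the instance name matching the real config file, ovpn2.conf.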

To be honest, I haven't been very happy with the way the services get 
made up on the fly like that, only to fail ... it's bitten me in other 
ways before.


Thank you very much :-)

Richard



Re: systemd service oddness with openvpn

2023-11-11 Thread Richard Hector

On 7/11/23 12:41, Richard Hector wrote:

Hi all,

I have a machine that runs as an openvpn server. It works fine; the VPN 
stays up.


However, after running for a while, I get these repeatedly in syslog:


I don't know if anyone's watching, but ...

It appears that this happens when logrotate restarts openvpn. I just 
have "systemctl restart openvpn-server@ovpn2" in my postrotate section 
in the logrotate config.


I've seen other people recommend using 'copytruncate' instead of 
restarting openvpn, and others suggest that openvpn should be 
configured to log via syslog, or that it should just log to stdout so 
that init (systemd) can capture it.
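
As a rough sketch, the copytruncate variant would look something like
this (the log path and rotation schedule are placeholders, not taken
from my actual config):

    /var/log/openvpn/ovpn2.log {
        weekly
        rotate 4
        compress
        missingok
        copytruncate
    }

With copytruncate, logrotate copies the log and truncates the original
in place, so openvpn keeps writing to the same file descriptor and no
restart is needed - at the cost of possibly losing a few lines written
between the copy and the truncate.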


I'm not sure what the best option is here - and I still don't know why 
restarting it causes this failure.


Richard


