[Desktop-packages] [Bug 2006641] Re: hwacc branches fail to build

2023-02-09 Thread Hector CAO
** Changed in: chromium-browser (Ubuntu)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Desktop
Packages, which is subscribed to chromium-browser in Ubuntu.
https://bugs.launchpad.net/bugs/2006641

Title:
  hwacc branches fail to build

Status in chromium-browser package in Ubuntu:
  In Progress

Bug description:
  Currently the hwacc branches of snap_from_source fail to build.

  This is caused by the fact that the guide-0.9.1 patches were written
  against Chromium 107 and no longer apply cleanly.

  build logs here: 
  
https://launchpad.net/~chromium-team/+snap/chromium-snap-from-source-hwacc-beta

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/chromium-browser/+bug/2006641/+subscriptions


-- 
Mailing list: https://launchpad.net/~desktop-packages
Post to : desktop-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~desktop-packages
More help   : https://help.launchpad.net/ListHelp


Bug#1030843: needrestart: lxc exception

2023-02-08 Thread Richard Hector
Package: needrestart
Version: 3.5-4+deb11u2
Severity: wishlist

Dear Maintainer,

Can needrestart leave lxc.service unselected by default? It restarts all
the containers, which is often not desirable ... I can deselect it if I
notice, but I don't always.

[automatic system info omitted - running reportbug on a different
machine]

Package: needrestart
Version: 3.5-4+deb11u2

Architecture: amd64 (x86_64)
Kernel: Linux 5.10.0-21-amd64

Cheers,
Richard



[jira] [Commented] (KAFKA-14132) Remaining PowerMock to Mockito tests

2023-02-06 Thread Hector Geraldino (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17684947#comment-17684947
 ] 

Hector Geraldino commented on KAFKA-14132:
--

Perfect. Entered https://issues.apache.org/jira/browse/KAFKA-14683 and 
https://issues.apache.org/jira/browse/KAFKA-14684 to track progress
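
For anyone picking these up, here is a minimal before/after sketch of the kind
of rewrite the two tickets track, assuming a hypothetical worker test that
mocks SinkTask (the test body below is illustrative and not copied from the
actual test classes):

{code:java}
// Illustrative sketch of an EasyMock/PowerMock -> Mockito rewrite; only
// SinkTask is a real Connect interface, the test body is hypothetical.

// Before (EasyMock/PowerMock): record expectations, then replay and verify all.
//   SinkTask task = EasyMock.createMock(SinkTask.class);
//   task.put(EasyMock.anyObject());
//   EasyMock.expectLastCall();
//   PowerMock.replayAll();
//   ... exercise the worker ...
//   PowerMock.verifyAll();

// After (Mockito): no replay step; interactions are verified after the fact.
import static org.mockito.ArgumentMatchers.anyCollection;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.util.Collections;
import org.apache.kafka.connect.sink.SinkTask;

public class WorkerSinkTaskMigrationSketch {
    public void sketchedTest() {
        SinkTask task = mock(SinkTask.class);
        // ... exercise the code under test, which eventually calls task.put(...)
        task.put(Collections.emptyList()); // stand-in for the real interaction
        verify(task).put(anyCollection());
    }
}
{code}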

> Remaining PowerMock to Mockito tests
> 
>
> Key: KAFKA-14132
> URL: https://issues.apache.org/jira/browse/KAFKA-14132
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Christo Lolov
>Priority: Major
>
> {color:#de350b}Some of the tests below use EasyMock as well. For those 
> migrate both PowerMock and EasyMock to Mockito.{color}
> Unless stated in brackets the tests are in the connect module.
> A list of tests which still require to be moved from PowerMock to Mockito as 
> of 2nd of August 2022 which do not have a Jira issue and do not have pull 
> requests I am aware of which are opened:
> {color:#ff8b00}InReview{color}
> {color:#00875a}Merged{color}
>  # {color:#00875a}ErrorHandlingTaskTest{color} (owner: [~shekharrajak])
>  # {color:#00875a}SourceTaskOffsetCommiterTest{color} (owner: Christo)
>  # {color:#00875a}WorkerMetricsGroupTest{color} (owner: Divij)
>  # {color:#00875a}WorkerTaskTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}ErrorReporterTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}RetryWithToleranceOperatorTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}WorkerErrantRecordReporterTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}ConnectorsResourceTest{color} (owner: [~mdedetrich-aiven])
>  # {color:#ff8b00}StandaloneHerderTest{color} (owner: [~mdedetrich-aiven]) 
> ([https://github.com/apache/kafka/pull/12728])
>  # KafkaConfigBackingStoreTest (owner: [~mdedetrich-aiven])
>  # {color:#00875a}KafkaOffsetBackingStoreTest{color} (owner: Christo) 
> ([https://github.com/apache/kafka/pull/12418])
>  # {color:#ff8b00}KafkaBasedLogTest{color} (owner: [~mdedetrich-aiven])
>  # RetryUtilTest (owner: [~mdedetrich-aiven] )
>  # {color:#00875a}RepartitionTopicTest{color} (streams) (owner: Christo)
>  # {color:#00875a}StateManagerUtilTest{color} (streams) (owner: Christo)
> *The coverage report for the above tests after the change should be >= to 
> what the coverage is now.*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14684) Replace EasyMock and PowerMock with Mockito in WorkerSinkTaskThreadedTest

2023-02-06 Thread Hector Geraldino (Jira)
Hector Geraldino created KAFKA-14684:


 Summary: Replace EasyMock and PowerMock with Mockito in 
WorkerSinkTaskThreadedTest
 Key: KAFKA-14684
 URL: https://issues.apache.org/jira/browse/KAFKA-14684
 Project: Kafka
  Issue Type: Sub-task
  Components: KafkaConnect
Reporter: Hector Geraldino
Assignee: Hector Geraldino






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14132) Remaining PowerMock to Mockito tests

2023-02-06 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino updated KAFKA-14132:
-
Description: 
{color:#de350b}Some of the tests below use EasyMock as well. For those migrate 
both PowerMock and EasyMock to Mockito.{color}

Unless stated in brackets the tests are in the connect module.

A list of tests which still require to be moved from PowerMock to Mockito as of 
2nd of August 2022 which do not have a Jira issue and do not have pull requests 
I am aware of which are opened:

{color:#ff8b00}InReview{color}
{color:#00875a}Merged{color}
 # {color:#00875a}ErrorHandlingTaskTest{color} (owner: [~shekharrajak])
 # {color:#00875a}SourceTaskOffsetCommiterTest{color} (owner: Christo)
 # {color:#00875a}WorkerMetricsGroupTest{color} (owner: Divij)
 # {color:#00875a}WorkerTaskTest{color} (owner: [~yash.mayya])
 # {color:#00875a}ErrorReporterTest{color} (owner: [~yash.mayya])
 # {color:#00875a}RetryWithToleranceOperatorTest{color} (owner: [~yash.mayya])
 # {color:#00875a}WorkerErrantRecordReporterTest{color} (owner: [~yash.mayya])
 # {color:#00875a}ConnectorsResourceTest{color} (owner: [~mdedetrich-aiven])
 # {color:#ff8b00}StandaloneHerderTest{color} (owner: [~mdedetrich-aiven]) 
([https://github.com/apache/kafka/pull/12728])
 # KafkaConfigBackingStoreTest (owner: [~mdedetrich-aiven])
 # {color:#00875a}KafkaOffsetBackingStoreTest{color} (owner: Christo) 
([https://github.com/apache/kafka/pull/12418])
 # {color:#ff8b00}KafkaBasedLogTest{color} (owner: [~mdedetrich-aiven])
 # RetryUtilTest (owner: [~mdedetrich-aiven] )
 # {color:#00875a}RepartitionTopicTest{color} (streams) (owner: Christo)
 # {color:#00875a}StateManagerUtilTest{color} (streams) (owner: Christo)

*The coverage report for the above tests after the change should be >= to what 
the coverage is now.*

  was:
{color:#de350b}Some of the tests below use EasyMock as well. For those migrate 
both PowerMock and EasyMock to Mockito.{color}

Unless stated in brackets the tests are in the connect module.

A list of tests which still require to be moved from PowerMock to Mockito as of 
2nd of August 2022 which do not have a Jira issue and do not have pull requests 
I am aware of which are opened:

{color:#ff8b00}InReview{color}
{color:#00875a}Merged{color}
 # {color:#00875a}ErrorHandlingTaskTest{color} (owner: [~shekharrajak])
 # {color:#00875a}SourceTaskOffsetCommiterTest{color} (owner: Christo)
 # {color:#00875a}WorkerMetricsGroupTest{color} (owner: Divij)
 # WorkerSinkTaskTest (owner: ??) 
 # WorkerSinkTaskThreadedTest (owner: ??)
 # {color:#00875a}WorkerTaskTest{color} (owner: [~yash.mayya])
 # {color:#00875a}ErrorReporterTest{color} (owner: [~yash.mayya])
 # {color:#00875a}RetryWithToleranceOperatorTest{color} (owner: [~yash.mayya])
 # {color:#00875a}WorkerErrantRecordReporterTest{color} (owner: [~yash.mayya])
 # {color:#00875a}ConnectorsResourceTest{color} (owner: [~mdedetrich-aiven])
 # {color:#ff8b00}StandaloneHerderTest{color} (owner: [~mdedetrich-aiven]) 
([https://github.com/apache/kafka/pull/12728])
 # KafkaConfigBackingStoreTest (owner: [~mdedetrich-aiven])
 # {color:#00875a}KafkaOffsetBackingStoreTest{color} (owner: Christo) 
([https://github.com/apache/kafka/pull/12418])
 # {color:#ff8b00}KafkaBasedLogTest{color} (owner: [~mdedetrich-aiven])
 # RetryUtilTest (owner: [~mdedetrich-aiven] )
 # {color:#00875a}RepartitionTopicTest{color} (streams) (owner: Christo)
 # {color:#00875a}StateManagerUtilTest{color} (streams) (owner: Christo)

*The coverage report for the above tests after the change should be >= to what 
the coverage is now.*


> Remaining PowerMock to Mockito tests
> 
>
> Key: KAFKA-14132
> URL: https://issues.apache.org/jira/browse/KAFKA-14132
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Christo Lolov
>Priority: Major
>
> {color:#de350b}Some of the tests below use EasyMock as well. For those 
> migrate both PowerMock and EasyMock to Mockito.{color}
> Unless stated in brackets the tests are in the connect module.
> A list of tests which still require to be moved from PowerMock to Mockito as 
> of 2nd of August 2022 which do not have a Jira issue and do not have pull 
> requests I am aware of which are opened:
> {color:#ff8b00}InReview{color}
> {color:#00875a}Merged{color}
>  # {color:#00875a}ErrorHandlingTaskTest{color} (owner: [~shekharrajak])
>  # {color:#00875a}SourceTaskOffsetCommiterTest{color} (owner: Christo)
>  # {color:#00875a}WorkerMetricsGroupTest{color} (owner: Divij)
>  # {color:#00875a}WorkerTaskTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}ErrorReporterTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}RetryWithToleranceOperatorTest{color} (owner


[jira] [Created] (KAFKA-14683) Replace EasyMock and PowerMock with Mockito in WorkerSinkTaskTest

2023-02-06 Thread Hector Geraldino (Jira)
Hector Geraldino created KAFKA-14683:


 Summary: Replace EasyMock and PowerMock with Mockito in 
WorkerSinkTaskTest
 Key: KAFKA-14683
 URL: https://issues.apache.org/jira/browse/KAFKA-14683
 Project: Kafka
  Issue Type: Sub-task
  Components: KafkaConnect
Reporter: Hector Geraldino
Assignee: Hector Geraldino






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14132) Remaining PowerMock to Mockito tests

2023-02-03 Thread Hector Geraldino (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17683961#comment-17683961
 ] 

Hector Geraldino commented on KAFKA-14132:
--

Hey [~divijvaidya],

I'm putting together a KIP around
[https://github.com/apache/kafka/pull/13193], in which I'll propose adding new
metric counters for filtered (skipped) records. The POC is code complete, but
before it is ready I need to add new test cases to the *WorkerSinkTaskTest* and
*WorkerSinkTaskThreadedTest* unit tests. These two are assigned to you, and I
was wondering if you're OK with me picking them up, as that would unblock my
other work. I did the same for
https://issues.apache.org/jira/browse/KAFKA-14659 (I had to raise
[https://github.com/apache/kafka/pull/13191] before adding tests for that bug
fix).

Wdyt?

> Remaining PowerMock to Mockito tests
> 
>
> Key: KAFKA-14132
> URL: https://issues.apache.org/jira/browse/KAFKA-14132
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Christo Lolov
>Assignee: Christo Lolov
>Priority: Major
>
> {color:#de350b}Some of the tests below use EasyMock as well. For those 
> migrate both PowerMock and EasyMock to Mockito.{color}
> Unless stated in brackets the tests are in the connect module.
> A list of tests which still require to be moved from PowerMock to Mockito as 
> of 2nd of August 2022 which do not have a Jira issue and do not have pull 
> requests I am aware of which are opened:
> {color:#ff8b00}InReview{color}
> {color:#00875a}Merged{color}
>  # {color:#00875a}ErrorHandlingTaskTest{color} (owner: [~shekharrajak])
>  # {color:#00875a}SourceTaskOffsetCommiterTest{color} (owner: Christo)
>  # {color:#00875a}WorkerMetricsGroupTest{color} (owner: Divij)
>  # WorkerSinkTaskTest (owner: Divij) *WIP* 
>  # WorkerSinkTaskThreadedTest (owner: Divij) *WIP*
>  # {color:#00875a}WorkerTaskTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}ErrorReporterTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}RetryWithToleranceOperatorTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}WorkerErrantRecordReporterTest{color} (owner: [~yash.mayya])
>  # {color:#00875a}ConnectorsResourceTest{color} (owner: [~mdedetrich-aiven])
>  # {color:#ff8b00}StandaloneHerderTest{color} (owner: [~mdedetrich-aiven]) 
> ([https://github.com/apache/kafka/pull/12728])
>  # KafkaConfigBackingStoreTest (owner: [~mdedetrich-aiven])
>  # {color:#00875a}KafkaOffsetBackingStoreTest{color} (owner: Christo) 
> ([https://github.com/apache/kafka/pull/12418])
>  # {color:#ff8b00}KafkaBasedLogTest{color} (owner: [~mdedetrich-aiven])
>  # RetryUtilTest (owner: [~mdedetrich-aiven] )
>  # {color:#00875a}RepartitionTopicTest{color} (streams) (owner: Christo)
>  # {color:#00875a}StateManagerUtilTest{color} (streams) (owner: Christo)
> *The coverage report for the above tests after the change should be >= to 
> what the coverage is now.*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14659) source-record-write-[rate|total] metrics include filtered records

2023-02-02 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino reassigned KAFKA-14659:


Assignee: Hector Geraldino

> source-record-write-[rate|total] metrics include filtered records
> -
>
> Key: KAFKA-14659
> URL: https://issues.apache.org/jira/browse/KAFKA-14659
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Chris Beard
>Assignee: Hector Geraldino
>Priority: Minor
>
> Source tasks in Kafka connect offer two sets of metrics (documented in 
> [ConnectMetricsRegistry.java|https://github.com/apache/kafka/blob/72cfc994f5675be349d4494ece3528efed290651/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/ConnectMetricsRegistry.java#L173-L191]):
> ||Metric||Description||
> |source-record-poll-rate|The average per-second number of records 
> produced/polled (before transformation) by this task belonging to the named 
> source connector in this worker.|
> |source-record-write-rate|The average per-second number of records output 
> from the transformations and written to Kafka for this task belonging to the 
> named source connector in this worker. This is after transformations are 
> applied and excludes any records filtered out by the transformations.|
> There are also corresponding "-total" metrics that capture the total number 
> of records polled and written for the metrics above, respectively.
> In short, the "poll" metrics capture the number of messages sourced 
> pre-transformation/filtering, and the "write" metrics should capture the 
> number of messages ultimately written to Kafka post-transformation/filtering. 
> However, the implementation of the {{source-record-write-*}}  metrics 
> _includes_ records filtered out by transformations (and also records that 
> result in produce failures with the config {{{}errors.tolerance=all{}}}).
> h3. Details
> In 
> [AbstractWorkerSourceTask.java|https://github.com/apache/kafka/blob/a382acd31d1b53cd8695ff9488977566083540b1/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/AbstractWorkerSourceTask.java#L389-L397],
>  each source record is passed through the transformation chain where it is 
> potentially filtered out, checked to see if it was in fact filtered out, and 
> if so it is accounted for in the internal metrics via 
> {{{}counter.skipRecord(){}}}.
> {code:java}
> for (final SourceRecord preTransformRecord : toSend) { 
> retryWithToleranceOperator.sourceRecord(preTransformRecord);
> final SourceRecord record = 
> transformationChain.apply(preTransformRecord);
> final ProducerRecord producerRecord = 
> convertTransformedRecord(record);
> if (producerRecord == null || retryWithToleranceOperator.failed()) {  
>   
> counter.skipRecord();
> recordDropped(preTransformRecord);
> continue;
> }
> ...
> {code}
> {{SourceRecordWriteCounter.skipRecord()}} is implemented as follows:
> {code:java}
> 
> public SourceRecordWriteCounter(int batchSize, SourceTaskMetricsGroup 
> metricsGroup) {
> assert batchSize > 0;
> assert metricsGroup != null;
> this.batchSize = batchSize;
> counter = batchSize;
> this.metricsGroup = metricsGroup;
> }
> public void skipRecord() {
> if (counter > 0 && --counter == 0) {
> finishedAllWrites();
> }
> }
> 
> private void finishedAllWrites() {
> if (!completed) {
> metricsGroup.recordWrite(batchSize - counter);
> completed = true;
> }
> }
> {code}
> For example: If a batch starts with 100 records, {{batchSize}} and 
> {{counter}} will both be initialized to 100. If all 100 records get filtered 
> out, {{counter}} will be decremented 100 times, and 
> {{{}finishedAllWrites(){}}}will record the value 100 to the underlying 
> {{source-record-write-*}}  metrics rather than 0, the correct value according 
> to the documentation for these metrics.
> h3. Solutions
> Assuming the documentation correctly captures the intent of the 
> {{source-record-write-*}}  metrics, it seems reasonable to fix these metrics 
> such that filtered records do not get counted.
> It may also be useful to add additional metrics to capture the rate and total 
> number of records filtered out by transformations, which would require a KIP.
> I'm not sure what the best way of accounting for produce failures in the case 
> of {{errors.tolerance=all}} is yet. Maybe these failures deserve their own 
> new metrics?
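
One possible shape for such a fix, sketched against the SourceRecordWriteCounter
snippet quoted above (an illustration of the idea only, not the actual patch;
the skipped field is introduced here just for the sketch): track skipped
records separately and exclude them when the batch completes.

{code:java}
// Hypothetical sketch only -- not the actual Kafka change. It extends the
// SourceRecordWriteCounter fragment above with a separate skip count so that
// filtered/failed records are excluded from the source-record-write-* metrics.
private int skipped = 0;

public void skipRecord() {
    skipped++;
    if (counter > 0 && --counter == 0) {
        finishedAllWrites();
    }
}

private void finishedAllWrites() {
    if (!completed) {
        // Report only records actually written: the batch size minus records
        // still pending minus records that were skipped.
        metricsGroup.recordWrite(batchSize - counter - skipped);
        completed = true;
    }
}
{code}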



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-14060) Replace EasyMock and PowerMock with Mockito in AbstractWorkerSourceTaskTest

2023-02-02 Thread Hector Geraldino (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-14060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17683418#comment-17683418
 ] 

Hector Geraldino commented on KAFKA-14060:
--

Hey [~ChrisEgerton], 

I wrote a patch for https://issues.apache.org/jira/browse/KAFKA-14659, but when 
I was about to write tests for it I found out that this test was still using 
PowerMock/EasyMock, which is a blocker. I assigned the ticket to myself, and 
plan to raise a PR in a day or two.

> Replace EasyMock and PowerMock with Mockito in AbstractWorkerSourceTaskTest
> ---
>
> Key: KAFKA-14060
> URL: https://issues.apache.org/jira/browse/KAFKA-14060
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Hector Geraldino
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14060) Replace EasyMock and PowerMock with Mockito in AbstractWorkerSourceTaskTest

2023-02-01 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino reassigned KAFKA-14060:


Assignee: Hector Geraldino  (was: Chris Egerton)

> Replace EasyMock and PowerMock with Mockito in AbstractWorkerSourceTaskTest
> ---
>
> Key: KAFKA-14060
> URL: https://issues.apache.org/jira/browse/KAFKA-14060
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Reporter: Chris Egerton
>Assignee: Hector Geraldino
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[DISCUSS] KIP-901: Add connectorDeleted flag when stopping tasks

2023-01-24 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hi everyone,

I've submitted KIP-901, which adds an overloaded Task#stop(boolean 
connectorDeleted) method to the public Kafka Connect APIs:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-901%3A+Add+connectorDeleted+flag+when+stopping+tasks

This KIP can be seen as a companion (or counterpart) of KIP-883, which aims to 
provide the same feature but at the Connector level (there's a separate 
discussion thread on this list). The KIP also supersedes "KIP-419: Safely 
notify Kafka Connect SourceTask is stopped". 

The main goal is to let the task being stopped know that the stop is due to
the connector being deleted, so the task can perform any cleanup actions as
part of the connector's deletion process.
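
For illustration, the overload could look roughly like this on the Task
interface (a sketch of the shape described above, not the exact text of the
KIP; the default implementation is just one possible way to keep existing
tasks source-compatible):

    // Sketch only -- see the KIP for the actual proposal. Only the relevant
    // methods of org.apache.kafka.connect.connector.Task are shown.
    public interface Task {

        // Existing method: stop this task.
        void stop();

        // Proposed overload: connectorDeleted is true when the task is being
        // stopped because its connector is being deleted, so the task can
        // clean up external resources as part of the deletion process.
        default void stop(boolean connectorDeleted) {
            stop();
        }
    }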

I look forward to your feedback and comments.

Thanks!
Hector

[jira] [Updated] (KAFKA-14651) Add connectorDeleted flag when stopping tasks

2023-01-24 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino updated KAFKA-14651:
-
Description: 
Jira ticket for 
[KIP-901|https://cwiki.apache.org/confluence/display/KAFKA/KIP-901%3A+Add+connectorDeleted+flag+when+stopping+tasks]

It would be useful for Connectors to know when their instance is being
deleted. This gives connector tasks a chance to perform any cleanup routines
as part of the connector removal process.

  was:It would be useful for Connectors to know when its instance is being 
deleted. This will give a chance to connectors to perform any cleanup tasks 
(e.g. deleting external resources, or deleting offsets) before the connector is 
completely removed from the cluster.


> Add connectorDeleted flag when stopping tasks
> -
>
> Key: KAFKA-14651
> URL: https://issues.apache.org/jira/browse/KAFKA-14651
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
>
> Jira ticket for 
> [KIP-901|https://cwiki.apache.org/confluence/display/KAFKA/KIP-901%3A+Add+connectorDeleted+flag+when+stopping+tasks]
> It would be useful for Connectors to know when its instance is being deleted. 
> This will give a chance to connector tasks to perform any cleanup routines 
> before as part of the connector removal process.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14651) Add connectorDeleted flag when stopping tasks

2023-01-24 Thread Hector Geraldino (Jira)
Hector Geraldino created KAFKA-14651:


 Summary: Add connectorDeleted flag when stopping tasks
 Key: KAFKA-14651
 URL: https://issues.apache.org/jira/browse/KAFKA-14651
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Hector Geraldino
Assignee: Hector Geraldino


It would be useful for Connectors to know when their instance is being
deleted. This gives connectors a chance to perform any cleanup tasks (e.g.
deleting external resources, or deleting offsets) before the connector is
completely removed from the cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HDFS-16895) NamenodeHeartbeatService should use credentials of logged in user

2023-01-20 Thread Hector Sandoval Chaverri (Jira)
Hector Sandoval Chaverri created HDFS-16895:
---

 Summary: NamenodeHeartbeatService should use credentials of logged 
in user
 Key: HDFS-16895
 URL: https://issues.apache.org/jira/browse/HDFS-16895
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rbf
Reporter: Hector Sandoval Chaverri


NamenodeHeartbeatService has been found to log errors when querying protected
Namenode JMX APIs. We have been able to work around this by running kinit with
the DFS_ROUTER_KEYTAB_FILE_KEY and DFS_ROUTER_KERBEROS_PRINCIPAL_KEY on the
router.

While investigating a solution, we found that issuing the request inside a
UserGroupInformation.getLoginUser().doAs() call does not require running kinit
beforehand.
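
A minimal sketch of that approach (the wrapper method and the queryJmx()
placeholder are illustrative; UserGroupInformation.getLoginUser() and doAs()
are the existing Hadoop security APIs):

{code:java}
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

public class JmxAsLoginUserSketch {
    // Placeholder standing in for the existing FederationUtil.getJmx(...) call.
    static String queryJmx(String query) {
        return "{}";
    }

    // Run the JMX request with the credentials of the logged-in user (the
    // router principal from its keytab), so no manual kinit is required.
    static String queryJmxAsLoginUser(String query)
            throws IOException, InterruptedException {
        return UserGroupInformation.getLoginUser().doAs(
                (PrivilegedExceptionAction<String>) () -> queryJmx(query));
    }
}
{code}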

The error logged is:
{noformat}
2022-08-16 21:35:00,265 ERROR 
org.apache.hadoop.hdfs.server.federation.router.FederationUtil: Cannot parse 
JMX output for Hadoop:service=NameNode,name=FSNamesystem* from server 
ltx1-yugiohnn03-ha1.grid.linkedin.com:50070
org.apache.hadoop.security.authentication.client.AuthenticationException: Error 
while authenticating with endpoint: 
http://ltx1-yugiohnn03-ha1.grid.linkedin.com:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem*
at sun.reflect.GeneratedConstructorAccessor55.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.wrapExceptionWithMessage(KerberosAuthenticator.java:232)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:219)
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:350)
at 
org.apache.hadoop.hdfs.web.URLConnectionFactory.openConnection(URLConnectionFactory.java:186)
at 
org.apache.hadoop.hdfs.server.federation.router.FederationUtil.getJmx(FederationUtil.java:82)
at 
org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateJMXParameters(NamenodeHeartbeatService.java:352)
at 
org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.getNamenodeStatusReport(NamenodeHeartbeatService.java:295)
at 
org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.updateState(NamenodeHeartbeatService.java:218)
at 
org.apache.hadoop.hdfs.server.federation.router.NamenodeHeartbeatService.periodicInvoke(NamenodeHeartbeatService.java:172)
at 
org.apache.hadoop.hdfs.server.federation.router.PeriodicService$1.run(PeriodicService.java:178)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: 
org.apache.hadoop.security.authentication.client.AuthenticationException: 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.doSpnegoSequence(KerberosAuthenticator.java:360)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:204)
... 15 more
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed 
to find any Kerberos tgt)
at 
sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
at 
sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
at 
sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
at 
sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
at 
sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
at 
sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:336)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator$1.run(KerberosAuthenticator.java:310)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422


RE: Are ZFS snapshots unsafe when PGSQL is spreading through multiple zpools?

2023-01-18 Thread HECTOR INGERTO
I wanted to understand the underlying issue.

I use ZFS snapshots instead of a “correct” backup because, with only two
machines, they give me backups on the main machine and on the secondary, which
also acts as a hot spare.

To accomplish the same with a proper backup I would need three nodes: the main,
the hot-spare replica, and the backup itself.



From: Laurenz Albe <laurenz.a...@cybertec.at>
Sent: Wednesday, 18 January 2023 11:02
To: HECTOR INGERTO <hector_...@hotmail.com>; Magnus Hagander <mag...@hagander.net>
CC: pgsql-gene...@postgresql.org
Subject: Re: Are ZFS snapshots unsafe when PGSQL is spreading through multiple
zpools?

On Tue, 2023-01-17 at 15:22 +, HECTOR INGERTO wrote:
> > Another case: a transaction COMMITs, and a slightly later transaction reads 
> > the data
> > and sets a hint bit.  If the snapshot of the file system with the data 
> > directory in it
> > is slightly later than the snapshot of the file system with "pg_wal", the 
> > COMMIT might
> > not be part of the snapshot, but the hint bit could be.
> >
> > Then these uncommitted data could be visible if you recover from the 
> > snapshot.
>
> Thank you all. I have it clearer now.
>
> As a last point. Making the snapshot to the WAL dataset first or last would 
> make any difference?

Imagine you run DROP TABLE.  During the implicit COMMIT at the end of the 
statement,
the files behind the table are deleted.  If the snapshot of "pg_wal" is earlier 
than
the snapshot of the data files, you end up with a table that is not yet dropped,
but the files are gone.

I won't try to find an example if you now ask what if no checkpoint ends 
between the
statements, the snapshot on "pg_wal" is earlier and we don't run DROP TABLE.

Why do you go to all this effort rather than performing a correct backup?

Yours,
Laurenz Albe



Re: Setting up bindfs mount in LXC container

2023-01-17 Thread Richard Hector

On 18/01/23 16:38, Max Nikulin wrote:

On 18/01/2023 03:52, Richard Hector wrote:

On 17/01/23 23:52, Max Nikulin wrote:


lxc.idmap = u 0 100000 1000
lxc.idmap = u 1000 1000 1

lxc.mount.entry = /home/richard/sitename/doc_root 
srv/sitename/doc_root none bind,optional,create=dir


My goal is not to map container users to host users, but to allow a 
container user (human user) to access a directory as another container 
user (non-human owner of files). This should also be doable for 
multiple human users for the same site.


Do you mean mapping several users (human and service ones) from a single 
container to the same host UID? The approach I suggested works for 1:1 
mapping. Another technique is group permissions and ACLs, but I would 
not call it straightforward. A user may create a file that belongs to 
wrong group or inaccessible by another user.


I'll use more detail :-)

I have a Wordpress site. The directory /srv/sitename/doc_root, and most 
of the directories under it, are owned by user 'sitename'.


PHP runs as 'sitename-run', which has access (via group 'sitename') to 
read all of that, but not write it. Some subdirectories, eg 
.../doc_root/wp-content/uploads, are group-writeable so that it can save 
things there.


An authorised site maintainer, eg me ('richard') (but there may be any 
number of others), needs to be able to write under /srv/sitename, so I 
use bindfs to mount /srv/sitename under /home/richard/sitename, which 
presents it as owned by me, and translates the ownership back to 
'sitename' when I write to it. So each human user sees the site as owned 
by them, but it's all mapped to 'sitename' on the fly.


These users I guess map to host users, but I'm not particularly 
interested in that ... actually I should care more, because it actually 
maps to a real but unrelated user id on the host, which could have bad 
implications - but I think that's a separate issue.


I'm not ignoring the rest of your message; I'll look at that separately :-)

Cheers,
Richard



Re: Setting up bindfs mount in LXC container

2023-01-17 Thread Richard Hector

On 17/01/23 23:52, Max Nikulin wrote:

On 17/01/2023 04:06, Richard Hector wrote:


I'm using bindfs in my web LXC containers to allow particular users to 
write to their site docroot as the correct user.


I am not familiar with bindfs, so I may miss something important for 
your use case.


First of all I am unsure why you prefer bindfs instead of mapping some 
container users to host users using namespaces. With the following 
configuration 1000 inside a container and on the host is the same UID:


lxc.idmap = u 0 100000 1000
lxc.idmap = u 1000 1000 1
lxc.idmap = u 1001 101001 64535
lxc.idmap = g 0 100000 1000
lxc.idmap = g 1000 1000 1
lxc.idmap = g 1001 101001 64535

lxc.mount.entry = /home/richard/sitename/doc_root /srv/sitename/doc_root 
none bind,optional,create=dir


Disclaimer - I haven't actually tried any of your suggestions yet.

My goal is not to map container users to host users, but to allow a 
container user (human user) to access a directory as another container 
user (non-human owner of files). This should also be doable for multiple 
human users for the same site.



In /usr/local/bin/fuse.hook:


I would look into lxcfs hook for inspiration


Interesting; will do. Not sure exactly where to start, but will get there.


In /usr/local/bin/fuse.hook.s2:

lxc-device -n ${LXC_NAME} add /dev/fuse


Is there any reason why it can not be done using lxc.mount.entry in the 
container config?


Is that usable for adding a device file? The only way I found to do that 
is using lxc-device from outside the container. mknod inside doesn't work.



lxc-attach -n ${LXC_NAME} /usr/local/bin/bindfs_mount


I would consider adding a systemd unit inside container. Unsure if could 
be done using an udev rule.


That might be better, but it does need to rely on the device existing first.

If I don't use the at job, but run those commands manually after boot, 
it works fine with no error messages.


Unsure if it is relevant, but it is better to run lxc-start and 
lxc-attach as a systemd unit with Delegate=yes configuration, either a 
temporary one (systemd-run) or configured as a service. It ensures 
proper cgroup and scope. Otherwise some cryptic errors may happen.


So even for running stuff manually, run it from systemd? Interesting, 
will investigate further. I wasn't aware of systemd-run.


Thanks,
Richard



RE: Are ZFS snapshots unsafe when PGSQL is spreading through multiple zpools?

2023-01-17 Thread HECTOR INGERTO
> Another case: a transaction COMMITs, and a slightly later transaction reads 
> the data
> and sets a hint bit.  If the snapshot of the file system with the data 
> directory in it
> is slightly later than the snapshot of the file system with "pg_wal", the 
> COMMIT might
> not be part of the snapshot, but the hint bit could be.
>
> Then these uncommitted data could be visible if you recover from the snapshot.
>
> Yours,
> Laurenz Albe

Thank you all. I have it clearer now.

As a last point. Making the snapshot to the WAL dataset first or last would 
make any difference?




Setting up bindfs mount in LXC container

2023-01-16 Thread Richard Hector

Hi all,

I'm using bindfs in my web LXC containers to allow particular users to 
write to their site docroot as the correct user.


Getting this to work has been really hacky, and while it does seem to 
work, I get log messages saying it didn't ...


In /var/lib/lxc/<container>/config:

lxc.hook.start-host = /usr/local/bin/fuse.hook


In /usr/local/bin/fuse.hook:

#!/bin/bash
# Run stage 2 via at(1) so this hook returns immediately; errors from the at
# command are appended to the per-container hook log.
at now + 1 minute <<END 2>>/var/log/lxc/${LXC_NAME}-hook-error.log
/usr/local/bin/fuse.hook.s2
END


In /usr/local/bin/fuse.hook.s2:

lxc-device -n ${LXC_NAME} add /dev/fuse
lxc-attach -n ${LXC_NAME} /usr/local/bin/bindfs_mount


In /usr/local/bin/bindfs_mount (in the container):

#!/bin/bash
file='/usr/local/etc/bindfs_mounts'
while read line; do
  mount "${line}"
done < "${file}"


In /usr/local/etc/bindfs_mounts (in the container):

/home/richard/<sitename>/doc_root


In /etc/fstab (in the container) (single line wrapped by MUA):

/srv/<sitename>/doc_root /home/richard/<sitename>/doc_root fuse.bindfs
noauto,--force-user=richard,--force-group=richard,--create-for-user=<sitename>,--create-for-group=<sitename>
0 0



I'm sure shell experts (or LXC experts) will tell me this 2-stage 
process is unnecessary, or that there is a better way to do it, but IIRC 
it doesn't work if lxc is waiting for the hook to finish; other stuff 
needs to happen before the device creation works.



At boot, however, I get these messages emailed from the at job (3 lines, 
wrapped by MUA):


lxc-device: : commands.c: lxc_cmd_add_bpf_device_cgroup: 
1185 Message too long - Failed to add new bpf device cgroup rule
lxc-device: : lxccontainer.c: add_remove_device_node: 
4657 set_cgroup_item failed while adding the device node
lxc-device: : tools/lxc_device.c: main: 153 Failed to add 
/dev/fuse to 



The device file is created correctly, and the mounts work.

Oh - and interestingly, this only seems to happen when the host boots. 
If I just reboot (or shutdown and start) the container, it works fine.


It doesn't matter if I increase the delay on the at job.

If I don't use the at job, but run those commands manually after boot, 
it works fine with no error messages.


Any hints?

I suspect my limited understanding of cgroups is contributing to my 
problems ...


Cheers,
Richard



[kscreenlocker] [Bug 428424] Open Laptop Lid doesn't turn on Display (Wayland)

2023-01-16 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=428424

Hector Martin  changed:

   What|Removed |Added

 CC||mar...@marcan.st

--- Comment #18 from Hector Martin  ---
Are we conflating two issues here? What I see on Apple Silicon machines is that
KDE is configured to *suspend* on lid close (or you manually suspend it prior
to closing the lid) then indeed it wakes up and turns the screen on when you
open the lid. However, if it is configured to *turn off the screen* on lid
close, the screen doesn't turn on when you open it until you press a key.

-- 
You are receiving this mail because:
You are watching all bug changes.

RE: Are ZFS snapshots unsafe when PGSQL is spreading through multiple zpools?

2023-01-16 Thread HECTOR INGERTO
> The database relies on the data being consistent when it performs crash 
> recovery.
> Imagine that a checkpoint is running while you take your snapshot.  The 
> checkpoint
> syncs a data file with a new row to disk.  Then it writes a WAL record and 
> updates
> the control file.  Now imagine that the table with the new row is on a 
> different
> file system, and your snapshot captures the WAL and the control file, but not
> the new row (it was still sitting in the kernel page cache when the snapshot 
> was taken).
> You end up with a lost row.
>
> That is only one scenario.  Many other ways of corruption can happen.

Can we say then that the risk comes only from the possibility of a checkpoint 
running inside the time gap between the non-simultaneous snapshots?


RE: Are ZFS snapshots unsafe when PGSQL is spreading through multiple zpools?

2023-01-16 Thread HECTOR INGERTO
I understand that I shouldn't do it, but could we discuss the technical
details of why silent DB corruption can occur with non-atomic snapshots?




RE: Are ZFS snapshots unsafe when PGSQL is spreading through multiple zpools?

2023-01-15 Thread HECTOR INGERTO

> But you cannot and should not rely on snapshots alone

That’s only for non-atomic (multiple-pool) snapshots, isn’t it?

If I need to rely only on ZFS (automated) snapshots, then would the best option
be to have two DBs, each one in its own pool: one HDD DB and one SSD DB? The
backend code would then need to know which DB holds the requested data.

From: Magnus Hagander <mag...@hagander.net>
Sent: Sunday, 15 January 2023 20:36
To: HECTOR INGERTO <hector_...@hotmail.com>
CC: pgsql-gene...@postgresql.org
Subject: Re: Are ZFS snapshots unsafe when PGSQL is spreading through multiple
zpools?



On Sun, Jan 15, 2023 at 8:18 PM HECTOR INGERTO <hector_...@hotmail.com> wrote:
Hello everybody,

I’m using PostgreSQL on openZFS. I use ZFS snapshots as a backup + hotspare 
method.

>From man zfs-snapshot: “Snapshots are taken atomically, so that all snapshots 
>correspond to the same moment in time.” So if a PSQL instance is started from 
>a zfs snapshot, it will start to replay the WAL from the last checkpoint, in 
>the same way it would do in a crash or power loss scenario. So from my 
>knowledge, ZFS snapshots can be used to rollback to a previous point in time. 
>Also, sending those snapshots to other computers will allow you to have 
>hotspares and remote backups. If I’m wrong here, I would appreciate being told 
>about it because I’m basing the whole question on this premise.

On the other hand, we have the tablespace PGSQL feature, which is great because 
it allows “unimportant” big data to be written into cheap HDD and frequently 
used data into fast NVMe.

So far, so good. The problem is when both ideas are merged. Then, snapshots 
from different pools are NOT atomical, snapshot on the HDD pool isn’t going to 
be done at the same exact time as the one on the SSD pool, and I don’t know 
enough about PGSQL internals to know how dangerous this is. So here is where I 
would like to ask for your help with the following questions:

First of all, what kind of problem can this lead to? Are we talking about 
potential whole DB corruption or only the loss of a few of the latest 
transactions?

Silent data corruption. *not* just losing your latest transaction.


In second place, if I’m initializing a corrupted PGSQL instance because ZFS 
snapshots are from different pools and slightly different times, am I going to 
notice it somehow or is it going to fail silently?

Silent. You might notice at the application level. Might.


In third and last place, is there some way to quantify the amount of risk taken 
when snapshotting a PGSQL instance spread across two (or more) different pools?


"Don't do it".

If you can't get atomic snapshots, don't do it, period.

You can use them together with a regular online backup. That is 
pg_start_backup() //  // pg_stop_backup() together 
with log archiving. That's a perfectly valid method. But you cannot and should 
not rely on snapshots alone.

--
 Magnus Hagander
 Me: https://www.hagander.net/
 Work: https://www.redpill-linpro.com/



Are ZFS snapshots unsafe when PGSQL is spreading through multiple zpools?

2023-01-15 Thread HECTOR INGERTO
Hello everybody,

I’m using PostgreSQL on openZFS. I use ZFS snapshots as a backup + hotspare 
method.

From man zfs-snapshot: “Snapshots are taken atomically, so that all snapshots
correspond to the same moment in time.” So if a PSQL instance is started from
a zfs snapshot, it will start to replay the WAL from the last checkpoint, in
the same way it would do in a crash or power loss scenario. So to my
knowledge, ZFS snapshots can be used to roll back to a previous point in time.
Also, sending those snapshots to other computers will allow you to have
hot spares and remote backups. If I’m wrong here, I would appreciate being told
about it because I’m basing the whole question on this premise.

On the other hand, we have the tablespace PGSQL feature, which is great because 
it allows “unimportant” big data to be written into cheap HDD and frequently 
used data into fast NVMe.

So far, so good. The problem is when both ideas are merged. Then, snapshots 
from different pools are NOT atomical, snapshot on the HDD pool isn’t going to 
be done at the same exact time as the one on the SSD pool, and I don’t know 
enough about PGSQL internals to know how dangerous this is. So here is where I 
would like to ask for your help with the following questions:

First of all, what kind of problem can this lead to? Are we talking about 
potential whole DB corruption or only the loss of a few of the latest 
transactions?

In second place, if I’m initializing a corrupted PGSQL instance because ZFS 
snapshots are from different pools and slightly different times, am I going to 
notice it somehow or is it going to fail silently?

In third and last place, is there some way to quantify the amount of risk taken 
when snapshotting a PGSQL instance spread across two (or more) different pools?

Thanks for your time,


Héctor


Re: HOW TO OUTPUT DATA GOT FROM combined queryset IN DJANGO TEMPLATE (html file) ?

2023-01-14 Thread Hector Mwaky

   
1. In the windows_games view in others/views.py, you are using the filter()
method on the Action and Adventure models to get the action and adventure
games that have the 'os' field set to 'windows'. Then you are using the
itertools.chain() function to combine the two querysets into a single list,
and storing that list in the variable combined_list. The context variable is
not created correctly; it should be context = {'combined_list': combined_list}.

2. In the template others/os/windows_game.html, to output the results of the
combined_list you should use a for loop to iterate over the list and display
each item. Also, it seems that you are trying to access the game_pic attribute
of the objects in the combined_list, but the models you have defined (Action
and Adventure) do not have a field called game_pic.

3. For example:

   {% for game in combined_list %}
     {{ game.name }}
   {% endfor %}

4. It would be a good idea to add a __str__ method to the models in order to
return the name of the game, and also to add a field game_pic if you want to
display the image of the game:

   class Action(models.Model):
       # max_length values here are only examples; CharField requires one.
       name = models.CharField(max_length=100)
       os = models.CharField(max_length=20, choices=OS)
       game_pic = models.ImageField(upload_to='action_game_pic/')

       def __str__(self):
           return self.name

On Thursday, 12 January 2023 at 18:27:03 UTC+3 samuelb...@gmail.com wrote:

> Hey!  Am having a problem. And l request for your Help.
>
> Am having 3 apps in Django project
> - action
> - adventure
> - others
>
>
> #action/ models.py
>
> 
> class Action(models.Model):
> name=models.Charfield()
> os= models.Charfield( choices=OS)
>
>
>
> #adventure/models.py
>
>
> 
> class Adventure(models.Model):
>  name=models.Charfield()
>  os= models.Charfield( choices=OS)
>
>
>
> #Others/views.py
>
>
> from itertools import chain
>
> from action.models import Action
>
> from adventure.models import Adventure
>
>
> def windows_games(request):
>
> win_action = Action.objects.filter(os='windows')
>
> win_adventure = Adventure.objects.filter(os='windows')
> combined_list = list(chain(win_action,win_adventure))
>
> context = ['combined_list':combined_list,] 
>
>  return render(request, 'others/os/windows_game.html' , context)
>
>
>
> #others/os/windows_game.html
>
> .
> 
>  
>  {{combined_list.name}}
> .
>
>
>
>
>
>
>
>
> 1). I need to correct me in #others/ views.py  if there is any mistake 
> done.
>
> 2). I would like to know how to know how to write the tag that outputs the 
> results in #others/os/windows_game.html because I tried that but outputs 
> nothing.
> And l would like to be in form of,  
> #{{combined_list.game_pic}}, etc
>
>
> Please help me.
> I'm Samuel
>
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-users/0b3af5d4-bb40-430c-9016-c2c7fa6108fan%40googlegroups.com.


[plasma-pa] [Bug 442379] When using Natural Scrolling, scrolling direction to change volume is inverted

2023-01-13 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=442379

--- Comment #8 from Hector Martin  ---
(In reply to paul from comment #6)
> As a user of natural/inverted scrolling for years, I would argue that this
> is not actually a bug, but working as expected.  "Invert scrolling" means
> invert it _everywhere_, including the volume applet. 
> It also makes more sense if you picture a vertical volume slider (with 100%
> volume at the top) with a handle that works the same as any other scroll
> bar: you do scrollwheel-UP to push that handle down (i.e. lower volume) and
> vice versa.

It makes no sense on a touchpad. "Natural scrolling" on a touchpad means you
drag window content in the direction you want it to go. That happens to be the
opposite of the old default for scrolling windows, but not for sliders and
volume controls. Right now, enabling natural scrolling means you scroll down to
turn the volume up, which makes no sense. There is no "wheel" metaphor to make
an inverted direction ever make sense like there is on a mouse.

Ultimately, the root cause of all this confusion is that when people first
defined wheel scrolling for window content, they did so based on
*viewport/scrollbar movement* ("scroll down" means "look down" which means
"move the content up"). But that is not the case for any other context in which
the scroll wheel is used, where down and up are directly mapped. "Invert
scrolling" and "Natural scrolling" are therefore not, really, the same concept.
"Invert" might be expected to "invert everything" under some interpretations,
but there is nothing natural about that. What people naturally expect from
"Natural scrolling" is that window content scrolling flips from
viewport-centric to content-centric, and nothing else.

-- 
You are receiving this mail because:
You are watching all bug changes.

[plasma-pa] [Bug 442379] When using Natural Scrolling, scrolling direction to change volume is inverted

2023-01-13 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=442379

Hector Martin  changed:

   What|Removed |Added

 CC||mar...@marcan.st

--- Comment #7 from Hector Martin  ---
See also #442789, since this issue affects basically all non-scroll window
actions as far as I can tell (sliders, etc).

-- 
You are receiving this mail because:
You are watching all bug changes.

[Powerdevil] [Bug 450551] Battery charge limit is not preserved after reboot on ASUS (and ThinkPad) laptops supporting charge limits; need to write it on every boot

2022-12-25 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=450551

Hector Martin  changed:

   What|Removed |Added

 CC||mar...@marcan.st

--- Comment #30 from Hector Martin  ---
FYI, this is the case on Apple laptops too. They actually don't support charge
thresholds at all, just a charge behavior toggle (inhibit charge/not), and the
OS is supposed to do the rest. We'll be emulating the thresholds in the kernel
(for the convenience of userspace and because that means we can make them work
in s2idle sleep). There is no flash memory to store any of these settings, so
the OS has to re-set them on every boot.

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: RFC: Handling of multiple EFI System Partitions

2022-12-19 Thread Hector Martin
On 19/12/2022 19.52, Ilias Apalodimas wrote:
> Hi Janne, 
> 
> [...]
> 
>>>> function that can be called from board-specific code that sets the 
>>>> UUID.
>>>>
>>>> Thoughts?  Would such a feature be useful on other hardware 
>>>> platforms?
>>>
>>> efi/boot/bootaarch64.efi is only a fallback if load options are not 
>>> set up. Once the operating system has generated a load option it is 
>>> not used anymore.
>>
>> Setting load options from operating systems is currently not 
>> implemented. The only readily available method to store variables is a 
>> file in the ESP. This obviously cannot be supported as a UEFI runtime 
>> service while the operating system is using the same disk.
> 
> Yes, but I'd skip the 'currently not implemented' part.  If you store your EFI
> variables in a file on the ESP, we will *never* be able to (sanely) support 
> SetVariable
> at runtime. 
> 
>> It might be possible to use NOR flash as UEFI variable store. This could 
>> cause issues with the primary boot loader iboot which we can not avoid 
>> or with macOS in dual boot configurations.
> 
> It is possible.  In fact we already have code in U-Boot that stores the EFI
> variables in an RPMB partition of the eMMC.  We also have (unfortunately
> not yet upstreamed) code that stores the in an i2c eeprom in the secure
> world. 
> 
> This is again situational though and none of these applies to MBPs.
> SetVariable at runtime in an RPMB comes with it's own set of problems.
> You basically need to replace the runtime services with OS provided ones...
> The I2C one works fine.
> 
> But leaving the implementation details aside, what we need to keep in mind
> is that being able to support SetVariable-RT primarily depends on the
> *hardware*, and there's going to be hardware where we'll never be able to
> support it.

As far as I'm concerned, the NOR is an implementation detail under
Apple's control and I would NAK any attempt at shoehorning EFI variables
in there. This is global storage and Apple already have their own NVRAM
format for boot settings (based on CHRP). Trying to abuse it for our own
purposes is asking for trouble, since we can't coordinate anything like
that with Apple. Plus there's a good chance they'll ditch the NOR and go
NVMe-only in the future (they already do it like that on iOS devices).
And if anything goes wrong we make user systems unbootable. Plus we
still have the problem that there is a logical OS environment split
before EFI, which means we'd still need multiple ESPs and an independent
EFI variable store for each. And then if the EFI services owns the NOR,
we *still* need to provide an Apple NVRAM interface to the OS, since we
do want to be able to mutate that for things like configuring boot
settings (boot chime, next boot OS, etc.) at the Apple/iBoot layer.

In my opinion, the only sane way to get EFI variables to work here is to
use ubootefi.var and teach OSes how to manipulate it directly, in lieu
of runtime services being available (or perhaps with some kind of
callback layer so the OS can provide ESP file R/W services back to the
runtime services). I'm not sure it's worth actually doing this, but I
can't think of any other viable option if we decide we really want it.

- Hector


Re: RFC: Handling of multiple EFI System Partitions

2022-12-19 Thread Hector Martin
On 19/12/2022 04.40, Heinrich Schuchardt wrote:
> The MacBooks only have one drive. Why would you want two ESPs on one drive?

The boot model of these machines is fundamentally different from EFI.

There is top-level, built-in multiboot support. The boot stages are:

=== global ===
1. iBoot1 (global bootloader + global firmware)
=== per OS / security context ===
2. iBoot2 (OS loader + runtime firmware)
3. Whatever we want

Global firmware has ABI compatibility guarantees, but runtime firmware
does not. That means that runtime firmware must be, to some extent,
paired to the OS that is running under it. macOS implements this by
always upgrading firmware in tandem with the OS. We implement this by
having our OS support a subset of "blessed" firmwares, and automatically
selecting the correct firmware for a to-be-installed OS at installation
time.

We do *not* control the firmware load process and we *cannot* replace
already running firmware with our own.

We extend this system by provisioning an EFI environment as step 3. This
allows downstream OSes to use a more familiar boot environment. But
since this EFI environment is *necessarily* tied to what the prior boot
stages consider to be *one* OS with *one* set of firmware, running
multiple OSes under the same EFI environment is a problem. There are
also likely to be further issues in the future once we start integrating
more with the platform's secure boot and SEP (think TPM) support, since
from its point of view one top-level OS is one OS, not several.

Hence, multiple ESPs, one per top-level OS, with users expected to only
install one persistent OS to each ESP (but retaining the ability to e.g.
boot from USB for recovery, under the understanding that live-booted
OSes would have to support the firmware in question and aren't going to
try to do system management tasks that are the responsibility of the
owner OS).

Additionally, this setup lets us define device trees and update the m1n1
second-stage loader with each OS, which is particularly important until
all DT schemas are stabilized, since we can't guarantee backwards
compatibility between DTs and OSes (although we try, we do have to break
it sometimes). The DT, m1n1 (which populates runtime DT options), u-boot
(DT consumer), and the OS (DT consumer) are all involved in this
process, and need to be compatible to varying degrees. If the installed
OS owns the ESP, it can take responsibility for updating the
DT/m1n1/u-boot together with itself, which solves the compatibility
issue (and makes the whole thing way more seamless for users).

I know "multiple ESPs" sounds weird in the context of traditional EFI
systems, but it's the best we could come up with to shoehorn EFI into
this very-much-not-EFI platform.

Further reading:
https://github.com/AsahiLinux/docs/wiki/Open-OS-Ecosystem-on-Apple-Silicon-Macs

> Why can't the Asahi team use the current UEFI bootflow? We should avoid 
> unneeded deviations. Can the current deviations be removed in Asahi Linux?

If you can come up with a better idea that actually works on these
platforms and solves all the issues I mentioned above, I'm all ears.

- Hector


Re: [PATCH 1/2] cmd: exit: Fix return value propagation out of environment scripts

2022-12-19 Thread Hector Palacios

Hi Marek

On 18/12/22 21:46, Marek Vasut wrote:

Make sure the 'exit' command as well as 'exit $val' command exits
from environment scripts immediately and propagates return value
out of those scripts fully. That means the following behavior is
expected:

"
=> setenv foo 'echo bar ; exit 1' ; run foo ; echo $?
bar
1
=> setenv foo 'echo bar ; exit 0' ; run foo ; echo $?
bar
0
=> setenv foo 'echo bar ; exit -2' ; run foo ; echo $?
bar
0
"

As well as the following behavior:

"
=> setenv foo 'echo bar ; exit 3 ; echo fail'; run foo; echo $?
bar
3
=> setenv foo 'echo bar ; exit 1 ; echo fail'; run foo; echo $?
bar
1
=> setenv foo 'echo bar ; exit 0 ; echo fail'; run foo; echo $?
bar
0
=> setenv foo 'echo bar ; exit -1 ; echo fail'; run foo; echo $?
bar
0
=> setenv foo 'echo bar ; exit -2 ; echo fail'; run foo; echo $?
bar
0
=> setenv foo 'echo bar ; exit ; echo fail'; run foo; echo $?
bar
0
"

Fixes: 8c4e3b79bd0 ("cmd: exit: Fix return value")
Signed-off-by: Marek Vasut 
---
Cc: Adrian Vovk 
Cc: Hector Palacios 
Cc: Pantelis Antoniou 
Cc: Simon Glass 
Cc: Tom Rini 
---
  cmd/exit.c|  7 +--
  common/cli.c  |  7 ---
  common/cli_hush.c | 21 +++--
  3 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/cmd/exit.c b/cmd/exit.c
index 2c7132693ad..7bf241ec732 100644
--- a/cmd/exit.c
+++ b/cmd/exit.c
@@ -10,10 +10,13 @@
  static int do_exit(struct cmd_tbl *cmdtp, int flag, int argc,
char *const argv[])
  {
+   int r;
+
+   r = 0;
 if (argc > 1)
-   return dectoul(argv[1], NULL);
+   r = simple_strtoul(argv[1], NULL, 10);

-   return 0;
+   return -r - 2;
  }

  U_BOOT_CMD(
diff --git a/common/cli.c b/common/cli.c
index a47d6a3f2b4..ba45dad2db5 100644
--- a/common/cli.c
+++ b/common/cli.c
@@ -146,7 +146,7 @@ int run_commandf(const char *fmt, ...)
  #if defined(CONFIG_CMD_RUN)
  int do_run(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[])
  {
-   int i;
+   int i, ret;

 if (argc < 2)
 return CMD_RET_USAGE;
@@ -160,8 +160,9 @@ int do_run(struct cmd_tbl *cmdtp, int flag, int argc, char 
*const argv[])
 return 1;
 }

-   if (run_command(arg, flag | CMD_FLAG_ENV) != 0)
-   return 1;
+   ret = run_command(arg, flag | CMD_FLAG_ENV);
+   if (ret)
+   return ret;
 }
 return 0;
  }
diff --git a/common/cli_hush.c b/common/cli_hush.c
index 1467ff81b35..b8940b19735 100644
--- a/common/cli_hush.c
+++ b/common/cli_hush.c
@@ -1902,7 +1902,7 @@ static int run_list_real(struct pipe *pi)
 last_return_code = -rcode - 2;
 return -2;  /* exit */
 }
-   last_return_code=(rcode == 0) ? 0 : 1;
+   last_return_code = rcode;
  #endif
  #ifndef __U_BOOT__
 pi->num_progs = save_num_progs; /* restore number of programs 
*/
@@ -3212,7 +3212,15 @@ static int parse_stream_outer(struct in_str *inp, int 
flag)
 printf("exit not allowed from main input 
shell.\n");
 continue;
 }
-   break;
+   /*
+* DANGER
+* Return code -2 is special in this context,
+* it indicates exit from inner pipe instead
+* of return code itself, the return code is
+* stored in 'last_return_code' variable!
+* DANGER
+*/
+   return -2;
 }
 if (code == -1)
 flag_repeat = 0;
@@ -3249,9 +3257,9 @@ int parse_string_outer(const char *s, int flag)
  #endif /* __U_BOOT__ */
  {
 struct in_str input;
+   int rcode;
  #ifdef __U_BOOT__
 char *p = NULL;
-   int rcode;
 if (!s)
 return 1;
 if (!*s)
@@ -3263,11 +3271,12 @@ int parse_string_outer(const char *s, int flag)
 	setup_string_in_str(&input, p);
 	rcode = parse_stream_outer(&input, flag);
 free(p);
-   return rcode;
+   return rcode == -2 ? last_return_code : rcode;
 } else {
  #endif
 	setup_string_in_str(&input, s);
-   return parse_stream_outer(&input, flag);
+   rcode = parse_stream_outer(&input, flag);
+   return rcode == -2 ? last_return_code : rcode;
  #ifdef __U_BOOT__
 }
  #endif
@@ -3287,7 +3296,7 @@ int parse_file_outer(void)
 	setup_file_in_str(&input);
  #endif
 	rcode = parse_stream_outer(&input, FLAG_PARSE_SEMICOLON);
-   return

[jenkinsci/nexus-platform-plugin]

2022-12-13 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/bump-innersource-dependencies-08bceb
  Home:   https://github.com/jenkinsci/nexus-platform-plugin

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/bump-innersource-dependencies-08bceb/c509f9-00%40github.com.


Re: cmd: exit: Exit functionality broken

2022-12-13 Thread Hector Palacios

Hi Max,

On 12/13/22 13:24, Max van den Biggelaar wrote:

Hi,

I have a question regarding the U-Boot exit command. We are currently using 
mainline U-Boot 2022.04 version to provide our embedded systems with a 
bootloader image. To start our firmware via U-Boot environment, we use a 
bootscript to start our firmware. However, when we tried to exit a bootscript 
with the exit command, the bootscript was never exited.

After debugging to investigate the problem, we found this commit 
(https://github.com/u-boot/u-boot/commit/8c4e3b79bd0bb76eea16869e9666e19047c0d005)
 in mainline U-Boot:
cmd: exit: Fix return value


In case exit is called in a script without parameter, the command
returns -2 ; in case exit is called with a numerical parameter,
the command returns -2 and lower. This leads to the following problem:
=> setenv foo 'echo bar ; exit 1' ; run foo ; echo $?
bar
0
=> setenv foo 'echo bar ; exit 0' ; run foo ; echo $?
bar
0
=> setenv foo 'echo bar ; exit -2' ; run foo ; echo $?
bar
0
That is, no matter what the 'exit' command argument is, the return
value is always 0 and so it is not possible to use script return
value in subsequent tests.

Fix this and simplify the exit command such that if exit is called with
no argument, the command returns 0, just like 'true' in cmd/test.c. In
case the command is called with any argument that is positive integer,
the argument is set as return value.
=> setenv foo 'echo bar ; exit 1' ; run foo ; echo $?
bar
1
=> setenv foo 'echo bar ; exit 0' ; run foo ; echo $?
bar
0
=> setenv foo 'echo bar ; exit -2' ; run foo ; echo $?
bar
0

Note that this does change ABI established in 2004 , although it is
unclear whether that ABI was originally OK or not.

Fixes: 
c26e454
Signed-off-by: Marek Vasut 
Cc: Pantelis Antoniou 
Cc: Tom Rini 

This commit does solve the problem of returning the correct value given to the 
exit command, but this breaks the following source code in common/cli_hush.c:
https://github.com/u-boot/u-boot/blob/master/common/cli_hush.c#L3207

In the previous versions of U-Boot, such as 2020.04, the exit command returned 
-2, which was expected of the exit command API. However, after the patch above 
to fix the return value, -2 was never returned and the functionality to exit a 
bootscript or mainline U-Boot shell is not supported anymore. Thus, by the 
patch above the return value is fixed, but the functionality of the exit 
command is broken.

My question is if the functionality of this patch is fully tested/qualified to 
push in mainline U-Boot source code? And if so, is the functionality of the 
exit command also going to be fixed so that in future U-Boot source code 
releases bootscripts can be exited with this command?


I believe Marek's commit must be reverted as having 'exit' return a code 
other than 0 (success) or 1 (error) was never part of U-Boot. I don't 
know if reverting may break newer scripts, but now many scripts are 
broken because 'exit' does not currently work.


Anyway, Marek wanted to give it a second thought. See 
https://www.mail-archive.com/u-boot%40lists.denx.de/msg456830.html


Regards
--
Héctor Palacios



[Powerdevil] [Bug 444029] Disable keyboard backlight on laptop lid close

2022-12-06 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=444029

Hector Martin  changed:

   What|Removed |Added

 CC||mar...@marcan.st

--- Comment #3 from Hector Martin  ---
This is a major power management bug, as leaving the keyboard backlight on
while the laptop is closed wastes significant battery power. On An Apple M1
MacBook Air, having the backlight on at full brightness with the lid closed
causes a 40% (!) drop in idle battery life (from 28 hours to 17 hours, measured
with an idle KDE Plasma session with the lid closed and the latest
linux-asahi-edge kernel).

It's "just an LED backlight", but the impact is *massive* on machines with good
power management like Apple Silicon Macs.

I recommend increasing the priority, as this really isn't "wishlist" for some
machines, it's critical functionality. I'm sure it matters less for machines
that chew through batteries even while doing nothing, but not all do :)

-- 
You are receiving this mail because:
You are watching all bug changes.

[jenkinsci/nexus-platform-plugin] 3d81f1: Removing unused IAC constant

2022-12-06 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/main
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: 3d81f1858493c4205d69729867c008f47f311b44
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/3d81f1858493c4205d69729867c008f47f311b44
  Author: Hector Hurtado 
  Date:   2022-12-06 (Tue, 06 Dec 2022)

  Changed paths:
M src/main/java/org/sonatype/nexus/ci/iq/RemoteScanner.groovy

  Log Message:
  ---
  Removing unused IAC constant


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/main/fb557c-3d81f1%40github.com.


[jenkinsci/nexus-platform-plugin] fb557c: INT-7518 Removing support for IAC scanning (#241)

2022-12-06 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/main
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: fb557c735a84a93b811bb111564b05721a998d62
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/fb557c735a84a93b811bb111564b05721a998d62
  Author: Hector Danilo Hurtado Olaya 
  Date:   2022-12-06 (Tue, 06 Dec 2022)

  Changed paths:
M src/main/java/org/sonatype/nexus/ci/iq/RemoteScanner.groovy
M src/test/java/org/sonatype/nexus/ci/iq/RemoteScannerTest.groovy

  Log Message:
  ---
  INT-7518 Removing support for IAC scanning (#241)


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/main/16e7dc-fb557c%40github.com.


[jenkinsci/nexus-platform-plugin]

2022-12-06 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-7518-remove-support-for-iac-scanning
  Home:   https://github.com/jenkinsci/nexus-platform-plugin

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-7518-remove-support-for-iac-scanning/6d32c9-00%40github.com.


[jenkinsci/nexus-platform-plugin] 6d32c9: INT-7518 Removinging support for IAC scanning

2022-12-06 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-7518-remove-support-for-iac-scanning
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: 6d32c96b3111aa97f8d3c2a7828961131a2f47b5
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/6d32c96b3111aa97f8d3c2a7828961131a2f47b5
  Author: Hector Hurtado 
  Date:   2022-12-06 (Tue, 06 Dec 2022)

  Changed paths:
M src/main/java/org/sonatype/nexus/ci/iq/RemoteScanner.groovy
M src/test/java/org/sonatype/nexus/ci/iq/RemoteScannerTest.groovy

  Log Message:
  ---
  INT-7518 Removinging support for IAC scanning


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-7518-remove-support-for-iac-scanning/00-6d32c9%40github.com.


[Touch-packages] [Bug 1998058] Re: dpkg error libflac8_1.3.2-1ubuntu0.1_i386.deb

2022-12-03 Thread Hector Poirot
Hi, thanks for the feedback.

The Workaround in Bug 500042 described in the Wiki (clearing the
Package.triggers & reinstall) solved the issue. The install went
through.

Note: the command to purge the ureadahead package was not necessary in
the workaround as it is not installed on my system (not sure why).

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to flac in Ubuntu.
https://bugs.launchpad.net/bugs/1998058

Title:
  dpkg error libflac8_1.3.2-1ubuntu0.1_i386.deb

Status in flac package in Ubuntu:
  Incomplete

Bug description:
  Hi the Recent security patch for libflac8 is not installing :

  Preparing to unpack .../libflac8_1.3.2-1ubuntu0.1_i386.deb ...
  dpkg: error processing archive 
/var/cache/apt/archives/libflac8_1.3.2-1ubuntu0.1_i386.deb (--unpack):
   triggers ci file contains unknown directive 'libcrypto'
  Errors were encountered while processing:
   /var/cache/apt/archives/libflac8_1.3.2-1ubuntu0.1_i386.deb
  E: Sub-process /usr/bin/dpkg returned an error code (1)

  
  I am running : 

  Distributor ID:   Ubuntu
  Description:  Ubuntu 18.04.6 LTS
  Release:  18.04
  Codename: bionic

  4.15.0-191-generic

  libflac8:
Installed: 1.3.2-1
Candidate: 1.3.2-1ubuntu0.1
Version table:
   1.3.2-1ubuntu0.1 500
  500 http://ca.archive.ubuntu.com/ubuntu bionic-security/main i386 
Packages
  500 http://ca.archive.ubuntu.com/ubuntu bionic-updates/main i386 
Packages
   *** 1.3.2-1 500
  500 http://ca.archive.ubuntu.com/ubuntu bionic/main i386 Packages
  100 /var/lib/dpkg/status

  
  Thank you

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/flac/+bug/1998058/+subscriptions


-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[jenkinsci/nexus-platform-plugin]

2022-12-01 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/bump-innersource-dependencies-18fae9
  Home:   https://github.com/jenkinsci/nexus-platform-plugin

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/bump-innersource-dependencies-18fae9/c17ae4-00%40github.com.


Re: [DISCUSS] KIP-883: Add delete callback method to Connector API

2022-11-30 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Thanks for your feedback Chris,

1. I think the behavior should remain the same as it is today. The worker stops 
the connector when its configuration is updated, and if the update is a 
deletion, it won't start the connector again. If an error happens during stop() 
today, the statusListener will update the backing store with a FAILED state. 
The only thing that changes on this path is that the Connector#stop() method 
will include an additional boolean parameter, so the connector knows that the 
reason it is being stopped is because of a deletion, and can perform additional 
actions if necessary. 

2. I agree; at first I thought it made sense, but after reading KIP-875 and 
finding out that connectors can use custom offsets topics to store offsets, I 
think this idea needs more refinement. There's probably a way to reuse the work 
proposed by this KIP with the "Automatically delete offsets with connectors" 
feature mentioned on the "Future work" section of KIP-875, and am happy to 
explore it more.

3. I didn't consider that. There is some asymmetry in how the StandaloneHerder 
handles this (tasks are stopped before the connector is) versus the 
DistributedHerder. One option would be not to handle this in the 
#processConnectorConfigUpdates(...) method, but instead to wait for the 
RebalanceListener#onRevoked(...) callback, which already stops the revoked 
connectors and tasks synchronously. The idea would be to enhance this to check 
the configState store and, if the configuration of the revoked connector(s) is 
gone, let the connector know about that fact when stopping it (by the 
aforementioned mechanism). I'll update the KIP and PR if you think it is 
worth it.

4. That's correct. As the KIP motivates, we have connectors that need to do 
some provisioning/setup when they are deployed (we run connectors for internal 
clients), and when tenants delete a connector, we don't have a clear signal 
that allows us to cleanup those resources. The goal is probably similar to 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-419%3A+Safely+notify+Kafka+Connect+SourceTask+is+stopped,
 just took a different approach.
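
For illustration, here is a minimal sketch of how a connector could use the 
proposed overload (the stop(boolean) overload exists only in this KIP's 
proposal and is not part of released Kafka Connect; the connector and its 
cleanup logic below are made up for illustration):

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import org.apache.kafka.common.config.ConfigDef;
    import org.apache.kafka.connect.connector.Task;
    import org.apache.kafka.connect.sink.SinkConnector;

    public class ProvisioningSinkConnector extends SinkConnector {

        @Override
        public void start(Map<String, String> props) {
            // provision external resources for this connector instance here
        }

        @Override
        public void stop() {
            // regular shutdown (rebalance, reconfiguration): keep external assets
        }

        // Proposed overload: the worker passes true only when the connector
        // configuration has been deleted from the cluster.
        public void stop(boolean isDeleted) {
            stop();
            if (isDeleted) {
                // best-effort cleanup of externally provisioned assets
            }
        }

        @Override
        public Class<? extends Task> taskClass() {
            return null; // task class omitted in this sketch
        }

        @Override
        public List<Map<String, String>> taskConfigs(int maxTasks) {
            return Collections.emptyList();
        }

        @Override
        public ConfigDef config() {
            return new ConfigDef();
        }

        @Override
        public String version() {
            return "0.1";
        }
    }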


From: dev@kafka.apache.org At: 11/29/22 15:31:31 UTC-5:00To:  
dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-883: Add delete callback method to Connector API

Hi Hector,

Thanks for the KIP! Here are my initial thoughts:

1. I like the simplicity of an overloaded stop method, but there is some
asymmetry between stopping a connector and deleting one. If a connector is
stopped (for rebalance, to be reconfigured, etc.) and a failure occurs
then, the failure will be clearly visible in the REST API via, e.g., the
GET /connectors/{connector}/status endpoint. If a connector is deleted and
a failure occurs, with the current proposal, users won't have the same
level of visibility. How can we clearly surface failures caused during the
"destroy" phase of a connector's lifecycle to users?

2. I don't think that this new feature should be used to control (delete)
offsets for connectors. We're addressing that separately in KIP-875, and it
could be a source of headaches for users if they discover that some
connectors' offsets persist across deletion/recreation while others do not.
If anything, we should explicitly recommend against this kind of logic in
the Javadocs for the newly-introduced method.

3. Is it worth trying to give all of the connector's tasks a chance to shut
down before invoking "stop(true)" on the Connector? If so, any thoughts on
how we can accomplish that?

4. Just to make sure we're on the same page--this feature is not being
proposed so that connectors can try to delete the data that they've
produced (i.e., that sink connectors have written to the external system,
or that source connectors have written to Kafka), right?

Cheers,

Chris

On Thu, Nov 17, 2022 at 5:31 PM Hector Geraldino (BLOOMBERG/ 919 3RD A) <
hgerald...@bloomberg.net> wrote:

> Hi,
>
> I've updated the KIP with the new #stop(boolean isDeleted) overloaded
> method, and have also amended the PR and JIRA tickets. I also added a
> couple entries to the "Rejected alternatives" section with the reasons why
> I pivoted from introducing new callback methods to retrofit the existing
> one.
>
> Please let me know what your thoughts are.
>
> Cheers,
> Hector
>
> From: Hector Geraldino (BLOOMBERG/ 919 3RD A) At: 11/16/22 17:38:59
> UTC-5:00To:  dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-883: Add delete callback method to Connector API
>
> Hi Mickael,
>
> I agree that the new STOPPED state proposed in KIP-875 will improve the
> connector lifecycle. The changes proposed in this KIP aim to cover the gap
> where connectors need to actually be deleted, but because the API doesn't
> provide any hooks, external assets are left lingering where they shouldn't.
>
> I agree that this proposal

Re: [Ietf-dkim] Remove the signature! (was: Re: DKIM reply mitigations: re-opening the DKIM working group)

2022-11-30 Thread Hector Santos

> On Nov 20, 2022, at 6:01 PM, Murray S. Kucherawy  wrote:
> 
> 
> 
> On Sun, Nov 20, 2022, 11:08 Dave Crocker  > wrote:
>> Seriously.  DKIM is intended as a transit-time mechanism.  When delivery 
>> occurs, transit is done.  So DKIM has done its job and can (safely?) be 
>> removed.
> 
> 
> One of the informational RFCs the original working group produced discussed 
> this. A reason (maybe the reason) the envelope was not included in the signed 
> content was so that the signature could survive without an envelope, meaning 
> it could be retrieved from a mailbox and re-verified.
> 
> I don't know, though, if anyone does this regularly, but it's been shown to 
> be useful in some circumstances. 


My input would be since UUCP days, the importer stripped the “From 
return-address“ header (note no colon) once the final destination was 
determined (local user existed, bounce was no longer necessary). 

With SMTP, this evolved to a "Return-Path:”  header and the same plug and play 
stripping action occurred.  It is the reason the MUA could never rely on the 
“From “ or “Return-Path:” header existing when the mail was picked up, and why 
DKIM does not recommend hash binding the Return-Path header to the signature. 

Side note: early mail systems did RFC822-to-internal transformations into a 
proprietary fixed header structure; only the minimal headers were read, 
like:

From
To
Subject:
Date:

And only for replies:

Reply-to:

In short, when the MDA is reached,  nothing else matters and all else is 
overhead.

With the bigger ESPs now beginning to honor strong SPF and/or DKIM policy 
protocol models to resolve the many indirect-path mail issues, the 
receiver/router needs to be fully aware of the incoming SPF-protected and/or 
DKIM-signed message without a policy wrapper, i.e. DMARC. Authorized 
modifications are needed, which may include removing/stripping overhead 
headers that attempt to keep mail-path processing change history and would be 
a problem at the ESP.  

The authorized mail processor/router has one goal: properly deliver the 
mail to the MDA or make it available for pickup (POP) or reading (IMAP, web) 
by different MUAs.



—
HLS___
Ietf-dkim mailing list
Ietf-dkim@ietf.org
https://www.ietf.org/mailman/listinfo/ietf-dkim


[Touch-packages] [Bug 1998058] [NEW] dpkg error libflac8_1.3.2-1ubuntu0.1_i386.deb

2022-11-27 Thread Hector Poirot
Public bug reported:

Hi the Recent security patch for libflac8 is not installing :

Preparing to unpack .../libflac8_1.3.2-1ubuntu0.1_i386.deb ...
dpkg: error processing archive 
/var/cache/apt/archives/libflac8_1.3.2-1ubuntu0.1_i386.deb (--unpack):
 triggers ci file contains unknown directive 'libcrypto'
Errors were encountered while processing:
 /var/cache/apt/archives/libflac8_1.3.2-1ubuntu0.1_i386.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)


I am running : 

Distributor ID: Ubuntu
Description:Ubuntu 18.04.6 LTS
Release:18.04
Codename:   bionic

4.15.0-191-generic

libflac8:
  Installed: 1.3.2-1
  Candidate: 1.3.2-1ubuntu0.1
  Version table:
 1.3.2-1ubuntu0.1 500
500 http://ca.archive.ubuntu.com/ubuntu bionic-security/main i386 
Packages
500 http://ca.archive.ubuntu.com/ubuntu bionic-updates/main i386 
Packages
 *** 1.3.2-1 500
500 http://ca.archive.ubuntu.com/ubuntu bionic/main i386 Packages
100 /var/lib/dpkg/status


Thank you

** Affects: flac (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Touch seeded packages, which is subscribed to flac in Ubuntu.
https://bugs.launchpad.net/bugs/1998058

Title:
  dpkg error libflac8_1.3.2-1ubuntu0.1_i386.deb

Status in flac package in Ubuntu:
  New

Bug description:
  Hi the Recent security patch for libflac8 is not installing :

  Preparing to unpack .../libflac8_1.3.2-1ubuntu0.1_i386.deb ...
  dpkg: error processing archive 
/var/cache/apt/archives/libflac8_1.3.2-1ubuntu0.1_i386.deb (--unpack):
   triggers ci file contains unknown directive 'libcrypto'
  Errors were encountered while processing:
   /var/cache/apt/archives/libflac8_1.3.2-1ubuntu0.1_i386.deb
  E: Sub-process /usr/bin/dpkg returned an error code (1)

  
  I am running : 

  Distributor ID:   Ubuntu
  Description:  Ubuntu 18.04.6 LTS
  Release:  18.04
  Codename: bionic

  4.15.0-191-generic

  libflac8:
Installed: 1.3.2-1
Candidate: 1.3.2-1ubuntu0.1
Version table:
   1.3.2-1ubuntu0.1 500
  500 http://ca.archive.ubuntu.com/ubuntu bionic-security/main i386 
Packages
  500 http://ca.archive.ubuntu.com/ubuntu bionic-updates/main i386 
Packages
   *** 1.3.2-1 500
  500 http://ca.archive.ubuntu.com/ubuntu bionic/main i386 Packages
  100 /var/lib/dpkg/status

  
  Thank you

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/flac/+bug/1998058/+subscriptions


-- 
Mailing list: https://launchpad.net/~touch-packages
Post to : touch-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~touch-packages
More help   : https://help.launchpad.net/ListHelp


[jira] [Updated] (HADOOP-18535) Implement token storage solution based on MySQL

2022-11-22 Thread Hector Sandoval Chaverri (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Sandoval Chaverri updated HADOOP-18535:
--
Description: 
Hadoop RBF supports custom implementations of secret managers. At the moment, 
the only available implementation is ZKDelegationTokenSecretManagerImpl, which 
stores tokens and delegation keys in Zookeeper.

During our investigation, we found that the performance of routers is limited 
by the writes to the Zookeeper token store, which impacts requests for token 
creation, renewal and cancellation. An alternative secret manager 
implementation has been created, based on MySQL, to handle a higher number of 
writes.

We measured the throughput of each token operation (create/renew/cancel) on 
different setups and obtained the following results:
 # Sending requests directly to Namenode (no RBF):
Token creations: 290 reqs per sec
Token renewals: 86 reqs per sec
Token cancellations: 97 reqs per sec

 # Sending requests to routers using Zookeeper based secret manager:
Token creations: 31 reqs per sec
Token renewals: 29 reqs per sec
Token cancellations: 40 reqs per sec
 # Sending requests to routers using SQL based secret manager:
Token creations: 241 reqs per sec
Token renewals: 103 reqs per sec
Token cancellations: 114 reqs per sec

We noticed a significant improvement when using a SQL secret manager, 
comparable to the throughput offered by Namenodes.

  was:
Hadoop RBF supports custom implementations of secret managers. At the moment, 
the only available implementation is ZKDelegationTokenSecretManagerImpl, which 
stores tokens and delegation keys in Zookeeper.

During our investigation, we found that the performance of routers is limited 
by the writes to the Zookeeper token store, which impacts requests for token 
creation, renewal and cancellation. An alternative secret manager 
implementation has been created, based on MySQL, to handle a higher number of 
writes.

We measured the throughput of each token operation (create/renew/cancel) on 
different setups and obtained the following results:
 # Sending requests directly to Namenode (no RBF):
Token creations: 290 reqs per sec
Token renewals: 86 reqs per sec
Token cancellations: 97 reqs per sec


 # Sending requests to routers using Zookeeper based secret manager:
Token creations: 31 reqs per sec
Token renewals: 29 reqs per sec
Token cancellations: 40 reqs per sec
 # Sending requests to routers using SQL based secret manager:
Token creations: 241 reqs per sec
Token renewals: 103 reqs per sec
Token cancellations: 114 reqs per sec

We noticed a significant improvement when using a SQL secret manager, 
comparable to the throughput offered by Namenodes. For this reason, 


> Implement token storage solution based on MySQL
> ---
>
> Key: HADOOP-18535
> URL: https://issues.apache.org/jira/browse/HADOOP-18535
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>    Assignee: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
>
> Hadoop RBF supports custom implementations of secret managers. At the moment, 
> the only available implementation is ZKDelegationTokenSecretManagerImpl, 
> which stores tokens and delegation keys in Zookeeper.
> During our investigation, we found that the performance of routers is limited 
> by the writes to the Zookeeper token store, which impacts requests for token 
> creation, renewal and cancellation. An alternative secret manager 
> implementation has been created, based on MySQL, to handle a higher number of 
> writes.
> We measured the throughput of each token operation (create/renew/cancel) on 
> different setups and obtained the following results:
>  # Sending requests directly to Namenode (no RBF):
> Token creations: 290 reqs per sec
> Token renewals: 86 reqs per sec
> Token cancellations: 97 reqs per sec
>  # Sending requests to routers using Zookeeper based secret manager:
> Token creations: 31 reqs per sec
> Token renewals: 29 reqs per sec
> Token cancellations: 40 reqs per sec
>  # Sending requests to routers using SQL based secret manager:
> Token creations: 241 reqs per sec
> Token renewals: 103 reqs per sec
> Token cancellations: 114 reqs per sec
> We noticed a significant improvement when using a SQL secret manager, 
> comparable to the throughput offered by Namenodes.
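
As a rough illustration of what the storage layer of such a SQL-backed secret
manager could look like (plain JDBC against a hypothetical MySQL table; the
class name, schema and methods below are illustrative and are not the ones
used by the actual HADOOP-18535 patch):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class JdbcTokenStore {
        // Assumed table: tokens(seq_num BIGINT PRIMARY KEY, token_info VARBINARY(2048))
        private final DataSource dataSource; // e.g. a pooled MySQL DataSource

        public JdbcTokenStore(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public void storeToken(long seqNum, byte[] tokenInfo) throws SQLException {
            String sql = "REPLACE INTO tokens (seq_num, token_info) VALUES (?, ?)";
            try (Connection c = dataSource.getConnection();
                 PreparedStatement ps = c.prepareStatement(sql)) {
                ps.setLong(1, seqNum);
                ps.setBytes(2, tokenInfo);
                ps.executeUpdate();
            }
        }

        public byte[] getToken(long seqNum) throws SQLException {
            String sql = "SELECT token_info FROM tokens WHERE seq_num = ?";
            try (Connection c = dataSource.getConnection();
                 PreparedStatement ps = c.prepareStatement(sql)) {
                ps.setLong(1, seqNum);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getBytes(1) : null;
                }
            }
        }

        public void removeToken(long seqNum) throws SQLException {
            String sql = "DELETE FROM tokens WHERE seq_num = ?";
            try (Connection c = dataSource.getConnection();
                 PreparedStatement ps = c.prepareStatement(sql)) {
                ps.setLong(1, seqNum);
                ps.executeUpdate();
            }
        }
    }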



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18535) Implement token storage solution based on MySQL

2022-11-22 Thread Hector Sandoval Chaverri (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Sandoval Chaverri updated HADOOP-18535:
--
Description: 
Hadoop RBF supports custom implementations of secret managers. At the moment, 
the only available implementation is ZKDelegationTokenSecretManagerImpl, which 
stores tokens and delegation keys in Zookeeper.

During our investigation, we found that the performance of routers is limited 
by the writes to the Zookeeper token store, which impacts requests for token 
creation, renewal and cancellation. An alternative secret manager 
implementation has been created, based on MySQL, to handle a higher number of 
writes.

We measured the throughput of each token operation (create/renew/cancel) on 
different setups and obtained the following results:
 # Sending requests directly to Namenode (no RBF):
Token creations: 290 reqs per sec
Token renewals: 86 reqs per sec
Token cancellations: 97 reqs per sec


 # Sending requests to routers using Zookeeper based secret manager:
Token creations: 31 reqs per sec
Token renewals: 29 reqs per sec
Token cancellations: 40 reqs per sec
 # Sending requests to routers using SQL based secret manager:
Token creations: 241 reqs per sec
Token renewals: 103 reqs per sec
Token cancellations: 114 reqs per sec

We noticed a significant improvement when using a SQL secret manager, 
comparable to the throughput offered by Namenodes. For this reason, 

  was:
Hadoop RBF supports custom implementations of secret managers. At the moment, 
the only available implementation is ZKDelegationTokenSecretManagerImpl, which 
stores tokens and delegation keys in Zookeeper.

During our investigation, we found that the performance of routers is limited 
by the writes to the Zookeeper token store, which impacts requests for token 
creation, renewal and cancellation. An alternative secret manager 
implementation will be made available, based on MySQL, to handle a higher 
number of writes.

 


> Implement token storage solution based on MySQL
> ---
>
> Key: HADOOP-18535
> URL: https://issues.apache.org/jira/browse/HADOOP-18535
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hector Sandoval Chaverri
>    Assignee: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
>
> Hadoop RBF supports custom implementations of secret managers. At the moment, 
> the only available implementation is ZKDelegationTokenSecretManagerImpl, 
> which stores tokens and delegation keys in Zookeeper.
> During our investigation, we found that the performance of routers is limited 
> by the writes to the Zookeeper token store, which impacts requests for token 
> creation, renewal and cancellation. An alternative secret manager 
> implementation has been created, based on MySQL, to handle a higher number of 
> writes.
> We measured the throughput of each token operation (create/renew/cancel) on 
> different setups and obtained the following results:
>  # Sending requests directly to Namenode (no RBF):
> Token creations: 290 reqs per sec
> Token renewals: 86 reqs per sec
> Token cancellations: 97 reqs per sec
>  # Sending requests to routers using Zookeeper based secret manager:
> Token creations: 31 reqs per sec
> Token renewals: 29 reqs per sec
> Token cancellations: 40 reqs per sec
>  # Sending requests to routers using SQL based secret manager:
> Token creations: 241 reqs per sec
> Token renewals: 103 reqs per sec
> Token cancellations: 114 reqs per sec
> We noticed a significant improvement when using a SQL secret manager, 
> comparable to the throughput offered by Namenodes. For this reason, 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18535) Implement token storage solution based on MySQL

2022-11-21 Thread Hector Sandoval Chaverri (Jira)
Hector Sandoval Chaverri created HADOOP-18535:
-

 Summary: Implement token storage solution based on MySQL
 Key: HADOOP-18535
 URL: https://issues.apache.org/jira/browse/HADOOP-18535
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Hector Sandoval Chaverri
Assignee: Hector Sandoval Chaverri


Hadoop RBF supports custom implementations of secret managers. At the moment, 
the only available implementation is ZKDelegationTokenSecretManagerImpl, which 
stores tokens and delegation keys in Zookeeper.

During our investigation, we found that the performance of routers is limited 
by the writes to the Zookeeper token store, which impacts requests for token 
creation, renewal and cancellation. An alternative secret manager 
implementation will be made available, based on MySQL, to handle a higher 
number of writes.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18535) Implement token storage solution based on MySQL

2022-11-21 Thread Hector Sandoval Chaverri (Jira)
Hector Sandoval Chaverri created HADOOP-18535:
-

 Summary: Implement token storage solution based on MySQL
 Key: HADOOP-18535
 URL: https://issues.apache.org/jira/browse/HADOOP-18535
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Hector Sandoval Chaverri
Assignee: Hector Sandoval Chaverri


Hadoop RBF supports custom implementations of secret managers. At the moment, 
the only available implementation is ZKDelegationTokenSecretManagerImpl, which 
stores tokens and delegation keys in Zookeeper.

During our investigation, we found that the performance of routers is limited 
by the writes to the Zookeeper token store, which impacts requests for token 
creation, renewal and cancellation. An alternative secret manager 
implementation will be made available, based on MySQL, to handle a higher 
number of writes.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PATCH] cli_hush: fix 'exit' cmd that was not exiting scripts

2022-11-21 Thread Hector Palacios

On 11/21/22 09:55, Hector Palacios wrote:

Hi Marek,

On 11/19/22 15:12, Marek Vasut wrote:

On 11/18/22 12:19, Hector Palacios wrote:

Commit 8c4e3b79bd0bb76eea16869e9666e19047c0d005 supposedly
passed one-level up the argument passed to 'exit' but it also
broke 'exit' purpose of stopping a script.

In reality, even if 'do_exit()' is capable of returning any
integer, the cli only admits '1' or '0' as return values.

This commit respects the current implementation to allow 'exit'
to at least return '1' for future processing, but returns
when the command being run is 'exit'.

Before this:

  => setenv foo 'echo bar ; exit 3 ; echo should not see this'; 
run foo; echo $?

  bar
  should not see this
  0
  => setenv foo 'echo bar ; exit 1 ; echo should not see this'; 
run foo; echo $?

  bar
  should not see this
  0
  => setenv foo 'echo bar ; exit 0 ; echo should not see this'; 
run foo; echo $?

  bar
  should not see this
  0
  => setenv foo 'echo bar ; exit -1 ; echo should not see this'; 
run foo; echo $?

  bar
  should not see this
  0
  => setenv foo 'echo bar ; exit -2 ; echo should not see this'; 
run foo; echo $?

  bar
  should not see this
  0
  => setenv foo 'echo bar ; exit ; echo should not see this'; run 
foo; echo $?

  bar
  should not see this
  0

After this:

 => setenv foo 'echo bar ; exit 3 ; echo should not see 
this'; run foo; echo $?

 bar
 1
 => setenv foo 'echo bar ; exit 1 ; echo should not see 
this'; run foo; echo $?

 bar
 1
 => setenv foo 'echo bar ; exit 0 ; echo should not see 
this'; run foo; echo $?

 bar
 0
 => setenv foo 'echo bar ; exit -1 ; echo should not see 
this'; run foo; echo $?

 bar
 0
 => setenv foo 'echo bar ; exit -2 ; echo should not see 
this'; run foo; echo $?

 bar
 0
 => setenv foo 'echo bar ; exit ; echo should not see this'; 
run foo; echo $?

 bar
 0

Reported-by: Adrian Vovk 
Signed-off-by: Hector Palacios 
---
  common/cli_hush.c | 4 
  1 file changed, 4 insertions(+)

diff --git a/common/cli_hush.c b/common/cli_hush.c
index 1467ff81b35b..9fe8b87e02d7 100644
--- a/common/cli_hush.c
+++ b/common/cli_hush.c
@@ -1902,6 +1902,10 @@ static int run_list_real(struct pipe *pi)
  last_return_code = -rcode - 2;
  return -2;  /* exit */
  }
+ if (!strcmp(pi->progs->argv[0], "exit")) {
+ last_return_code = rcode;
+ return rcode;   /* exit */
+ }
  last_return_code=(rcode == 0) ? 0 : 1;
  #endif
  #ifndef __U_BOOT__


Looking at the code just above this change 'if (rcode < -1)
last_return_code = -rcode - 2', that explains the odd 'return -r - 2' in
cmd/exit.c I think.


That's what I thought, too. The cli captures a -2 as the number to exit 
a  script, and with -rcode -2 was exiting and returning a 0.
Instead of capturing a magic number, I'm suggesting to capture 'exit' 
command.




I wonder, can we somehow fix the return code handling in cmd/exit.c
instead, so that it would cover both this behavior listed in this patch,
and 8c4e3b79bd0 ("cmd: exit: Fix return value") ? The cmd/exit.c seems
like the right place to fix it.


I didn't revert or touched 8c4e3b79bd0 but if what you wanted to do with 
that commit is to return any positive integer to the upper layers, I 
must say that just doesn't work because the cli_hush only processes 1 
(failure) or 0 (success), so there's no way for something such as 'exit 
3' to produce a $? of 3.
I think the 'exit' command should only be used with this old U-Boot 
standard of considering 1 a failure and 0 a success.


I could remove the 'if (rcode < -1)  last_return_code = -rcode - 2', 
which doesn't add much value now, but other than that I'm unsure of what 
you have in mind as to fix cmd/exit.c.


I just saw that my patch causes a data abort in 'if' conditionals when 
accessing argv[0].


Maybe we'd rather simply revert 8c4e3b79bd0 ("cmd: exit: Fix return 
value") and let the exit command return 0 in all cases, as it is 
documented, at least until we find a proper solution.

--
Héctor Palacios



Re: [PATCH] cli_hush: fix 'exit' cmd that was not exiting scripts

2022-11-21 Thread Hector Palacios

Hi Marek,

On 11/19/22 15:12, Marek Vasut wrote:

On 11/18/22 12:19, Hector Palacios wrote:

Commit 8c4e3b79bd0bb76eea16869e9666e19047c0d005 supposedly
passed one-level up the argument passed to 'exit' but it also
broke 'exit' purpose of stopping a script.

In reality, even if 'do_exit()' is capable of returning any
integer, the cli only admits '1' or '0' as return values.

This commit respects the current implementation to allow 'exit'
to at least return '1' for future processing, but returns
when the command being run is 'exit'.

Before this:

  => setenv foo 'echo bar ; exit 3 ; echo should not see this'; 
run foo; echo $?

  bar
  should not see this
  0
  => setenv foo 'echo bar ; exit 1 ; echo should not see this'; 
run foo; echo $?

  bar
  should not see this
  0
  => setenv foo 'echo bar ; exit 0 ; echo should not see this'; 
run foo; echo $?

  bar
  should not see this
  0
  => setenv foo 'echo bar ; exit -1 ; echo should not see this'; 
run foo; echo $?

  bar
  should not see this
  0
  => setenv foo 'echo bar ; exit -2 ; echo should not see this'; 
run foo; echo $?

  bar
  should not see this
  0
  => setenv foo 'echo bar ; exit ; echo should not see this'; run 
foo; echo $?

  bar
  should not see this
  0

After this:

 => setenv foo 'echo bar ; exit 3 ; echo should not see this'; 
run foo; echo $?

 bar
 1
 => setenv foo 'echo bar ; exit 1 ; echo should not see this'; 
run foo; echo $?

 bar
 1
 => setenv foo 'echo bar ; exit 0 ; echo should not see this'; 
run foo; echo $?

 bar
 0
 => setenv foo 'echo bar ; exit -1 ; echo should not see 
this'; run foo; echo $?

 bar
 0
 => setenv foo 'echo bar ; exit -2 ; echo should not see 
this'; run foo; echo $?

 bar
 0
 => setenv foo 'echo bar ; exit ; echo should not see this'; 
run foo; echo $?

 bar
 0

Reported-by: Adrian Vovk 
Signed-off-by: Hector Palacios 
---
  common/cli_hush.c | 4 
  1 file changed, 4 insertions(+)

diff --git a/common/cli_hush.c b/common/cli_hush.c
index 1467ff81b35b..9fe8b87e02d7 100644
--- a/common/cli_hush.c
+++ b/common/cli_hush.c
@@ -1902,6 +1902,10 @@ static int run_list_real(struct pipe *pi)
  last_return_code = -rcode - 2;
  return -2;  /* exit */
  }
+ if (!strcmp(pi->progs->argv[0], "exit")) {
+ last_return_code = rcode;
+ return rcode;   /* exit */
+ }
  last_return_code=(rcode == 0) ? 0 : 1;
  #endif
  #ifndef __U_BOOT__


Looking at the code just above this change 'if (rcode < -1)
last_return_code = -rcode - 2', that explains the odd 'return -r - 2' in
cmd/exit.c I think.


That's what I thought, too. The cli captures a -2 as the number to exit 
a  script, and with -rcode -2 was exiting and returning a 0.
Instead of capturing a magic number, I'm suggesting to capture 'exit' 
command.




I wonder, can we somehow fix the return code handling in cmd/exit.c
instead, so that it would cover both this behavior listed in this patch,
and 8c4e3b79bd0 ("cmd: exit: Fix return value") ? The cmd/exit.c seems
like the right place to fix it.


I didn't revert or touched 8c4e3b79bd0 but if what you wanted to do with 
that commit is to return any positive integer to the upper layers, I 
must say that just doesn't work because the cli_hush only processes 1 
(failure) or 0 (success), so there's no way for something such as 'exit 
3' to produce a $? of 3.
I think the 'exit' command should only be used with this old U-Boot 
standard of considering 1 a failure and 0 a success.


I could remove the 'if (rcode < -1)  last_return_code = -rcode - 2', 
which doesn't add much value now, but other than that I'm unsure of what 
you have in mind as to fix cmd/exit.c.





btw. it would be good to write a unit test for this, since it is
becoming messy.


Regards
--
Héctor Palacios



[PATCH] cli_hush: fix 'exit' cmd that was not exiting scripts

2022-11-18 Thread Hector Palacios
Commit 8c4e3b79bd0bb76eea16869e9666e19047c0d005 supposedly
passed one-level up the argument passed to 'exit' but it also
broke 'exit' purpose of stopping a script.

In reality, even if 'do_exit()' is capable of returning any
integer, the cli only admits '1' or '0' as return values.

This commit respects the current implementation to allow 'exit'
to at least return '1' for future processing, but returns
when the command being run is 'exit'.

Before this:

=> setenv foo 'echo bar ; exit 3 ; echo should not see this'; run foo; 
echo $?
bar
should not see this
0
=> setenv foo 'echo bar ; exit 1 ; echo should not see this'; run foo; 
echo $?
bar
should not see this
0
=> setenv foo 'echo bar ; exit 0 ; echo should not see this'; run foo; 
echo $?
bar
should not see this
0
=> setenv foo 'echo bar ; exit -1 ; echo should not see this'; run foo; 
echo $?
bar
should not see this
0
=> setenv foo 'echo bar ; exit -2 ; echo should not see this'; run foo; 
echo $?
bar
should not see this
0
=> setenv foo 'echo bar ; exit ; echo should not see this'; run foo; 
echo $?
bar
should not see this
0

After this:

=> setenv foo 'echo bar ; exit 3 ; echo should not see this'; run foo; 
echo $?
bar
1
=> setenv foo 'echo bar ; exit 1 ; echo should not see this'; run foo; 
echo $?
bar
1
=> setenv foo 'echo bar ; exit 0 ; echo should not see this'; run foo; 
echo $?
bar
0
=> setenv foo 'echo bar ; exit -1 ; echo should not see this'; run foo; 
echo $?
bar
0
=> setenv foo 'echo bar ; exit -2 ; echo should not see this'; run foo; 
echo $?
bar
0
=> setenv foo 'echo bar ; exit ; echo should not see this'; run foo; 
echo $?
bar
0

Reported-by: Adrian Vovk 
Signed-off-by: Hector Palacios 
---
 common/cli_hush.c | 4 
 1 file changed, 4 insertions(+)

diff --git a/common/cli_hush.c b/common/cli_hush.c
index 1467ff81b35b..9fe8b87e02d7 100644
--- a/common/cli_hush.c
+++ b/common/cli_hush.c
@@ -1902,6 +1902,10 @@ static int run_list_real(struct pipe *pi)
last_return_code = -rcode - 2;
return -2;  /* exit */
}
+   if (!strcmp(pi->progs->argv[0], "exit")) {
+   last_return_code = rcode;
+   return rcode;   /* exit */
+   }
last_return_code=(rcode == 0) ? 0 : 1;
 #endif
 #ifndef __U_BOOT__


Re: [DISCUSS] KIP-883: Add delete callback method to Connector API

2022-11-17 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hi,

I've updated the KIP with the new #stop(boolean isDeleted) overloaded method, 
and have also amended the PR and JIRA tickets. I also added a couple entries to 
the "Rejected alternatives" section with the reasons why I pivoted from 
introducing new callback methods to retrofit the existing one.

Please let me know what your thoughts are.

Cheers,
Hector 

From: Hector Geraldino (BLOOMBERG/ 919 3RD A) At: 11/16/22 17:38:59 UTC-5:00To: 
 dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-883: Add delete callback method to Connector API

Hi Mickael,

I agree that the new STOPPED state proposed in KIP-875 will improve the 
connector lifecycle. The changes proposed in this KIP aim to cover the gap 
where connectors need to actually be deleted, but because the API doesn't 
provide any hooks, external assets are left lingering where they shouldn't.

I agree that this proposal is similar to KIP-419; maybe the main difference is 
its focus on Tasks, whereas KIP-883 proposes changes to the Connector. My goal 
is to figure out the correct semantics for notifying connectors that they're 
being stopped because the connector has been deleted. 

Now, computing the "deleted" state in both the Standalone and Distributed 
herders is not hard, so the question is: when shall the connector be notified? 
The "easiest" option would be to do it by calling an overloaded 
Connector#stop(deleted) method, but there are other - more expressive - ways, 
like providing an 'onDelete()' or 'destroy()' method. 

I'm leaning towards adding an overload method (less complexity, known corner 
cases), and will amend the KIP with the reasoning behind that decision soon.

Thanks for your feedback! 

From: dev@kafka.apache.org At: 11/16/22 11:13:17 UTC-5:00To:  
dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-883: Add delete callback method to Connector API

Hi Hector,

Thanks for the KIP.

One tricky aspect is that currently there's no real way to stop a
connector so to do so people often just delete them temporarily.
KIP-875 proposes adding a mechanism to properly stop connectors which
should reduce the need to deleting them and avoid doing potentially
expensive cleanup operations repetitively.

This KIP also reminds me of KIP-419:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-419%3A+Safely+notify+Kafka
+Connect+SourceTask+is+stopped.
Is it guaranteed the new delete callback will be the last method
called?

Thanks,
Mickael


On Tue, Nov 15, 2022 at 5:40 PM Sagar  wrote:
>
> Hey Hector,
>
> Thanks for the KIP. I have a minor suggestion in terms of naming. Since
> this is a callback method, would it make sense to call it onDelete()?
>
> Also, the failure scenarios discussed by Greg would need handling. Among
> other things, I like the idea of having a timeout for graceful shutdown or
> else try a force shutdown. What do you think about that approach?
>
> Thanks!
> Sagar.
>
> On Sat, Nov 12, 2022 at 1:53 AM Hector Geraldino (BLOOMBERG/ 919 3RD A) <
> hgerald...@bloomberg.net> wrote:
>
> > Thanks Greg for taking your time to review not just the KIP but also the
> > PR.
> >
> > 1. You made very valid points regarding the behavior of the destroy()
> > callback for connectors that don't follow the happy path. After thinking
> > about it, I decided to tweak the implementation a bit and have the
> > destroy() method be called during the worker shutdown: this means it will
> > share the same guarantees the connector#stop() method has. An alternative
> > implementation can be to have an overloaded connector#stop(boolean deleted)
> > method that signals a connector that it is being stopped due to deletion,
> > but I think that having a separate destroy() method provides clearer
> > semantics.
> >
> > I'll make sure to amend the KIP with these details.
> >
> > 3. Without going too deep on the types of operations that can be performed
> > by a connector when it's being deleted, I can imagine the
> > org.apache.kafka.connect.source.SourceConnector base class having a default
> > implementation that deletes the connector's offsets automatically
> > (controlled by a property); this is in the context of KIP-875 (first-class
> > offsets support in Kafka Connect). Similar behaviors can be introduced for
> > the SinkConnector, however I'm not sure if this KIP is the right place to
> > discuss all the possibilities, or if we should keep it more
> > narrowly focused on providing a callback mechanism for when connectors are
> > deleted, and what the expectations are around this newly introduced method.
> > What do you think?
> >
> >
> > From: dev@kafka.apache.org At: 11/09/22 16:55:04 UTC-5:00To:
> > dev@kafka.apache.org
> > Subject: Re: [DISCUSS] KIP-883: Add delete callback method to Co

[jira] [Updated] (KAFKA-14354) Add 'isDeleted' parameter when stopping a Connector

2022-11-17 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino updated KAFKA-14354:
-
Description: It would be useful for connectors to know when their instance is 
being deleted. This will give connectors a chance to perform any cleanup 
tasks (e.g. deleting external resources, or deleting offsets) before the 
connector is completely removed from the cluster.  (was: It would be useful to 
have a callback method added to the Connector API, so connectors extending the 
SourceConnector and SinkConnector classes can be notified when their connector 
instance is being deleted. This will give a chance to connectors to perform any 
cleanup tasks (e.g. deleting external resources, or deleting offsets) before 
the connector is completely removed from the cluster.)

> Add 'isDeleted' parameter when stopping a Connector
> ---
>
> Key: KAFKA-14354
> URL: https://issues.apache.org/jira/browse/KAFKA-14354
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
>
> It would be useful for connectors to know when their instance is being deleted. 
> This will give connectors a chance to perform any cleanup tasks (e.g. 
> deleting external resources, or deleting offsets) before the connector is 
> completely removed from the cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14354) Add 'isDeleted' parameter when stopping a Connector

2022-11-17 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino updated KAFKA-14354:
-
Summary: Add 'isDeleted' parameter when stopping a Connector  (was: Add 
'destroyed()' callback method to Connector API)

> Add 'isDeleted' parameter when stopping a Connector
> ---
>
> Key: KAFKA-14354
> URL: https://issues.apache.org/jira/browse/KAFKA-14354
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
>
> It would be useful to have a callback method added to the Connector API, so 
> connectors extending the SourceConnector and SinkConnector classes can be 
> notified when their connector instance is being deleted. This will give a 
> chance to connectors to perform any cleanup tasks (e.g. deleting external 
> resources, or deleting offsets) before the connector is completely removed 
> from the cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jenkinsci/nexus-platform-plugin] 08f9dd: Updating release notes for version 3.16.459.vcdf273...

2022-11-17 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/main
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: 08f9dd58dab798347cf46bd3860ab16b16fde76d
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/08f9dd58dab798347cf46bd3860ab16b16fde76d
  Author: Hector Hurtado 
  Date:   2022-11-17 (Thu, 17 Nov 2022)

  Changed paths:
M README.md

  Log Message:
  ---
  Updating release notes for version 3.16.459.vcdf273b_29f8c


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/main/b8a811-08f9dd%40github.com.


[jenkinsci/nexus-platform-plugin]

2022-11-17 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/bump-innersource-dependencies-cc40cb
  Home:   https://github.com/jenkinsci/nexus-platform-plugin

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/bump-innersource-dependencies-cc40cb/926c47-00%40github.com.


[krita] [Bug 461660] Bug at startup

2022-11-16 Thread Hector
https://bugs.kde.org/show_bug.cgi?id=461660

Hector  changed:

   What|Removed |Added

 Resolution|--- |NOT A BUG
 Status|REPORTED|RESOLVED

--- Comment #1 from Hector  ---
A week has passed and I remembered my report. Now I can't reproduce this bug in
any builds. Maybe it's related to something else. So you can close it.

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: [DISCUSS] KIP-883: Add delete callback method to Connector API

2022-11-16 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hi Mickael,

I agree that the new STOPPED state proposed in KIP-875 will improve the 
connector lifecycle. The changes proposed in this KIP aim to cover the gap 
where connectors need to actually be deleted, but because the API doesn't 
provide any hooks, external assets are left lingering where they shouldn't.

I agree that this proposal is similar to KIP-419, maybe the main difference is 
their focus on Tasks whereas KIP-883 proposes changes to the Connector. My goal 
is to figure out the correct semantics for notifying connectors that they're 
being stopped because the connector has been deleted. 

Now, computing the "deleted" state in both the Standalone and Distributed 
herders is not hard, so the question is: when shall the connector be notified? 
The "easiest" option would be to do it by calling an overloaded 
Connector#stop(deleted) method, but there are other - more expressive - ways, 
like providing an 'onDelete()' or 'destroy()' method. 

I'm leaning towards adding an overload method (less complexity, known corner 
cases), and will amend the KIP with the reasoning behind that decision soon.

Thanks for your feedback! 

From: dev@kafka.apache.org At: 11/16/22 11:13:17 UTC-5:00To:  
dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-883: Add delete callback method to Connector API

Hi Hector,

Thanks for the KIP.

One tricky aspect is that currently there's no real way to stop a
connector, so people often just delete them temporarily.
KIP-875 proposes adding a mechanism to properly stop connectors, which
should reduce the need to delete them and avoid doing potentially
expensive cleanup operations repetitively.

This KIP also reminds me of KIP-419:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-419%3A+Safely+notify+Kafka
+Connect+SourceTask+is+stopped.
Is it guaranteed the new delete callback will be the last method
called?

Thanks,
Mickael


On Tue, Nov 15, 2022 at 5:40 PM Sagar  wrote:
>
> Hey Hector,
>
> Thanks for the KIP. I have a minor suggestion in terms of naming. Since
> this is a callback method, would it make sense to call it onDelete()?
>
> Also, the failure scenarios discussed by Greg would need handling. Among
> other things, I like the idea of having a timeout for graceful shutdown or
> else try a force shutdown. What do you think about that approach?
>
> Thanks!
> Sagar.
>
> On Sat, Nov 12, 2022 at 1:53 AM Hector Geraldino (BLOOMBERG/ 919 3RD A) <
> hgerald...@bloomberg.net> wrote:
>
> > Thanks Greg for taking your time to review not just the KIP but also the
> > PR.
> >
> > 1. You made very valid points regarding the behavior of the destroy()
> > callback for connectors that don't follow the happy path. After thinking
> > about it, I decided to tweak the implementation a bit and have the
> > destroy() method be called during the worker shutdown: this means it will
> > share the same guarantees the connector#stop() method has. An alternative
> > implementation can be to have an overloaded connector#stop(boolean deleted)
> > method that signals a connector that it is being stopped due to deletion,
> > but I think that having a separate destroy() method provides clearer
> > semantics.
> >
> > I'll make sure to amend the KIP with these details.
> >
> > 3. Without going too deep on the types of operations that can be performed
> > by a connector when it's being deleted, I can imagine the
> > org.apache.kafka.connect.source.SourceConnector base class having a default
> > implementation that deletes the connector's offsets automatically
> > (controlled by a property); this is in the context of KIP-875 (first-class
> > offsets support in Kafka Connect). Similar behaviors can be introduced for
> > the SinkConnector, however I'm not sure if this KIP is the right place to
> > discuss all the possibilities, or if we should keep it more
> > narrowly focused on providing a callback mechanism for when connectors are
> > deleted, and what the expectations are around this newly introduced method.
> > What do you think?
> >
> >
> > From: dev@kafka.apache.org At: 11/09/22 16:55:04 UTC-5:00To:
> > dev@kafka.apache.org
> > Subject: Re: [DISCUSS] KIP-883: Add delete callback method to Connector API
> >
> > Hi Hector,
> >
> > Thanks for the KIP!
> >
> > This is certainly missing functionality from the native Connect framework,
> > and we should try to make it possible to inform connectors about this part
> > of their lifecycle.
> > However, as with most functionality that was left out of the initial
> > implementation of the framework, the details are more challenging to work
> > out.
> >
> > 1. What happens when the destroy call throws an error, how d

Re: [DISCUSS] KIP-883: Add delete callback method to Connector API

2022-11-16 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hi Sagar,

Thanks for your feedback! I actually renamed the method from "deleted()" to 
"destroyed()", which I think conveys the intention more clearly. I can 
certainly rename it to be 'onDeleted()', although I feel any method named 
onXXX() belongs to a listener class :)

Regarding failure scenarios, an option I'm considering is to just provide an 
overloaded Connector#stop(boolean deleted) method that is called during 
WorkerConnector#doShutdown(). This has the advantage of providing the same 
semantics that the current Connector#stop() has, with the caveat that the API 
won't be as expressive. Also, the extra 'cleanup' bits that were supposed to 
happen when a connector is deleted might not happen at all if the connector 
doesn't stop before the configured timeout (and is therefore cancelled).

At this point I think the simplest option would be to provide an overloaded 
method (with a default implementation) that connectors can override. Wdyt?
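
As a usage illustration only (the connector and ExternalClient names below are 
made up, not part of any real plugin), a connector overriding such an overload 
could look roughly like this:

// Hypothetical connector overriding the proposed stop(boolean) overload.
public class ExampleCleanupConnector {

    private final ExternalClient client = new ExternalClient();

    public void stop(boolean isDeleted) {
        if (isDeleted) {
            // Only when the connector is removed (not on restart/rebalance):
            // drop external assets that would otherwise be left lingering.
            client.deleteStagingTables();
        }
        stop();
    }

    public void stop() {
        // Regular shutdown path, shared by both code paths.
        client.close();
    }

    // Stand-in for whatever client the connector uses for the external system.
    static class ExternalClient {
        void deleteStagingTables() { /* delete connector-owned tables */ }
        void close() { /* release connections */ }
    }
}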

From: dev@kafka.apache.org At: 11/15/22 11:40:26 UTC-5:00To:  
dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-883: Add delete callback method to Connector API

Hey Hector,

Thanks for the KIP. I have a minor suggestion in terms of naming. Since
this is a callback method, would it make sense to call it onDelete()?

Also, the failure scenarios discussed by Greg would need handling. Among
other things, I like the idea of having a timeout for graceful shutdown or
else try a force shutdown. What do you think about that approach?

Thanks!
Sagar.

On Sat, Nov 12, 2022 at 1:53 AM Hector Geraldino (BLOOMBERG/ 919 3RD A) <
hgerald...@bloomberg.net> wrote:

> Thanks Greg for taking your time to review not just the KIP but also the
> PR.
>
> 1. You made very valid points regarding the behavior of the destroy()
> callback for connectors that don't follow the happy path. After thinking
> about it, I decided to tweak the implementation a bit and have the
> destroy() method be called during the worker shutdown: this means it will
> share the same guarantees the connector#stop() method has. An alternative
> implementation can be to have an overloaded connector#stop(boolean deleted)
> method that signals a connector that it is being stopped due to deletion,
> but I think that having a separate destroy() method provides clearer
> semantics.
>
> I'll make sure to amend the KIP with these details.
>
> 3. Without going too deep on the types of operations that can be performed
> by a connector when it's being deleted, I can imagine the
> org.apache.kafka.connect.source.SourceConnector base class having a default
> implementation that deletes the connector's offsets automatically
> (controlled by a property); this is in the context of KIP-875 (first-class
> offsets support in Kafka Connect). Similar behaviors can be introduced for
> the SinkConnector, however I'm not sure if this KIP is the right place to
> discuss all the possibilities, or if we should keep it more
> narrowly focused on providing a callback mechanism for when connectors are
> deleted, and what the expectations are around this newly introduced method.
> What do you think?
>
>
> From: dev@kafka.apache.org At: 11/09/22 16:55:04 UTC-5:00To:
> dev@kafka.apache.org
> Subject: Re: [DISCUSS] KIP-883: Add delete callback method to Connector API
>
> Hi Hector,
>
> Thanks for the KIP!
>
> This is certainly missing functionality from the native Connect framework,
> and we should try to make it possible to inform connectors about this part
> of their lifecycle.
> However, as with most functionality that was left out of the initial
> implementation of the framework, the details are more challenging to work
> out.
>
> 1. What happens when the destroy call throws an error, how does the
> framework respond?
>
> This is unspecified in the KIP, and it appears that your proposed changes
> could cause the herder to fail.
> From the perspective of operators & connector developers, what is a
> reasonable expectation to have for failure of a destroy?
> I could see operators wanting both a graceful-delete to make use of this
> new feature, and a force-delete for when the graceful-delete fails.
> A connector developer could choose to swallow all errors encountered, or
> fail-fast to indicate to the operator that there is an issue with the
> graceful-delete flow.
> If the alternative is crashing the herder, connector developers may choose
> to hide serious errors, which is undesirable.
>
> 2. What happens when the destroy() call takes a long time to complete, or
> is interrupted?
>
> It appears that your implementation serially destroy()s each appropriate
> connector, and may prevent the herder thread from making progress while the
> operation is ongoing.
> We have previously had to patch Connect to perform al

Re: [Ietf-dkim] DKIM reply mitigations: re-opening the DKIM working group

2022-11-16 Thread Hector Santos


> On Nov 11, 2022, at 11:46 AM, Barry Leiba  wrote:
> 
> Indeed...
> The issue here is this:
> 
> 1. I get a (free) account on free-email.com.

Ok 

> 2. I send myself email from my account to my account.  Of course,
> free-email signs it, because it's sent from me to me: why would it
> not?

The wcSMTP router logic will not sign this path because it never reaches the 
remote target outbound queue where it may be signed.  

The router will export a message that targets a locally-hosted domain but it 
goes into the Inbound Import Queue rather than the Remote target outbound 
queue.  I have debated signing the Export->Local-Queue->Import mail. It is 
a matter of moving the location of the wcDKIM signer, which is now in the router 
outbound logic.


> 3. I take that signed message and cart it over somewhere else, sending
> it out to 10,000,000 recipients through somewhere else's
> infrastructure.  It's legitimately signed by free-email.com.

Ok

> 4. Of course, it fails SPF validation.  But DKIM verifies and is
> aligned to spam...@free-email.com, because there you go.
> 
> That's the attack.  It's happening all the time.  If between 1 and 2
> we could use x= to cause the signature to time out, we'd be OK.


For failed SPF validation, I believe we need to honor the handling expected by 
the domain owner, first and foremost.  Meaning Exclusive, Strong policies 
should always be honored.  The weak and partial/soft policies are what have added 
handling unknowns (and unfortunately, we have not focused on leveraging the 
LOOKUP opportunities to extend policies).

> The trouble is that we have to make x= broad enough to deal with
> legitimate delays.  And during that legitimate time, it's trivial for
> a spammer to send out millions of spam messages.  Crap.  So x= doesn't
> help.

I had a 2006 draft to deal with expiration for time-shifted, time delayed 
verification.

Partial DKIM Verifier Support using a DKIM-Received Trace Header
https://datatracker.ietf.org/doc/html/draft-santos-dkim-rcvd-00

I have to review it to see if it can apply here in some manner 16 yrs later.

> 
> We have to look at other options.  We thought of this when we designed
> DKIM, but couldn't come up with anything that would work.  

> We have new
> experience since then, and we want to look at alternatives, and decide
> whether priorities have changed, use cases, have changed, and so on.
> 
> It's entirely possible that we still can't fix it without breaking use
> cases that we're not willing to break.  But we have to try.


I have always been a strong advocate for extended policies for what I believe 
is the new "normal" for SMTP receiver lookups.  For the most part, related to 
SPF/DKIM/DMARC, we have these lookups today:

5321: SPF
5322: DKIM, DMARC

We can leverage the archived PRA/SUBMITTER protocol.

SMTP Service Extension for Indicating the Responsible Submitter of an E-Mail 
Message
https://www.rfc-editor.org/rfc/rfc4405
Purported Responsible Address in E-Mail Messages
https://www.rfc-editor.org/rfc/rfc4407

which passes then 5322 PRA to 5321 via the SUBMITTER ESMTP extension:

MAIL FROM: SUBMITTER=PRA

The PRA is typically the 5322.FROM

This gives the ESMTP transaction an optimization and a heads-up on the expected 
DMARC domain handling policy prior to transferring the DATA payload.

ESMTP receivers that enable RFC4405 will immediately see it being used by 
compliant ESMTP senders.

We have used it for SPF lookups over the years.


—
HLS
___
Ietf-dkim mailing list
Ietf-dkim@ietf.org
https://www.ietf.org/mailman/listinfo/ietf-dkim


Re: [DISCUSS] KIP-883: Add delete callback method to Connector API

2022-11-11 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Thanks Greg for taking your time to review not just the KIP but also the PR. 

1. You made very valid points regarding the behavior of the destroy() callback 
for connectors that don't follow the happy path. After thinking about it, I 
decided to tweak the implementation a bit and have the destroy() method be 
called during the worker shutdown: this means it will share the same guarantees 
the connector#stop() method has. An alternative implementation can be to have 
an overloaded connector#stop(boolean deleted) method that signals a connector 
that it is being stopped due to deletion, but I think that having a separate 
destroy() method provides clearer semantics.

I'll make sure to amend the KIP with these details.

3. Without going too deep on the types of operations that can be performed by a 
connector when it's being deleted, I can imagine the 
org.apache.kafka.connect.source.SourceConnector base class having a default 
implementation that deletes the connector's offsets automatically (controlled 
by a property); this is in the context of KIP-875 (first-class offsets support 
in Kafka Connect). Similar behaviors can be introduced for the SinkConnector, 
however I'm not sure if this KIP is the right place to discuss all the 
possibilities, or if we should keep it more narrowly focused on providing a 
callback mechanism for when connectors are deleted, and what the expectations 
are around this newly introduced method. What do you think?
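
As a sketch of that idea only (the config key and OffsetsAdmin hook below are 
made up; nothing like them exists in the Connect API today, and KIP-875 is what 
would eventually provide such a hook):

// Illustrative only: a base class that deletes the connector's offsets when
// the connector is removed, gated by a config property.
public abstract class OffsetCleaningSourceConnector {

    public static final String DELETE_OFFSETS_ON_REMOVE_CONFIG = "offsets.delete.on.remove";

    private boolean deleteOffsetsOnRemove;
    private OffsetsAdmin offsetsAdmin;      // would be provided by the runtime

    public void stop(boolean isDeleted) {
        if (isDeleted && deleteOffsetsOnRemove && offsetsAdmin != null) {
            offsetsAdmin.deleteOffsets(connectorName());
        }
        stop();
    }

    public abstract void stop();

    protected abstract String connectorName();

    // Hypothetical facade over the offsets store.
    public interface OffsetsAdmin {
        void deleteOffsets(String connectorName);
    }
}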


From: dev@kafka.apache.org At: 11/09/22 16:55:04 UTC-5:00To:  
dev@kafka.apache.org
Subject: Re: [DISCUSS] KIP-883: Add delete callback method to Connector API

Hi Hector,

Thanks for the KIP!

This is certainly missing functionality from the native Connect framework,
and we should try to make it possible to inform connectors about this part
of their lifecycle.
However, as with most functionality that was left out of the initial
implementation of the framework, the details are more challenging to work
out.

1. What happens when the destroy call throws an error, how does the
framework respond?

This is unspecified in the KIP, and it appears that your proposed changes
could cause the herder to fail.
From the perspective of operators & connector developers, what is a
reasonable expectation to have for failure of a destroy?
I could see operators wanting both a graceful-delete to make use of this
new feature, and a force-delete for when the graceful-delete fails.
A connector developer could choose to swallow all errors encountered, or
fail-fast to indicate to the operator that there is an issue with the
graceful-delete flow.
If the alternative is crashing the herder, connector developers may choose
to hide serious errors, which is undesirable.

2. What happens when the destroy() call takes a long time to complete, or
is interrupted?

It appears that your implementation serially destroy()s each appropriate
connector, and may prevent the herder thread from making progress while the
operation is ongoing.
We have previously had to patch Connect to perform all connector and task
operations on a background thread, because some connector method
implementations can stall indefinitely.
Connect also has the notion of "cancelling" a connector/task if a graceful
shutdown timeout operation takes too long. Perhaps some of that design or
machinery may be useful to protect this method call as well.
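
(For illustration, the usual shape of that protection is to run the callback on
a background thread, bound it with a timeout, and give up if it overruns. The
sketch below shows just that pattern; it is not the actual Connect worker code.)

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch of bounding a potentially slow connector callback with a timeout.
public final class BoundedCallbackRunner {

    public static boolean runWithTimeout(Runnable callback, long timeoutMs) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<?> future = executor.submit(callback);
            future.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;                     // completed gracefully
        } catch (TimeoutException e) {
            return false;                    // caller may fall back to a force-delete
        } catch (Exception e) {
            return false;                    // callback failed; don't crash the herder
        } finally {
            executor.shutdownNow();          // interrupts a hung callback
        }
    }
}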

More specific to the destroy() call itself, what happens when a connector
completes part of a destroy operation and then cannot complete the
remainder, either due to timing out or a worker crashing?
What is the contract with the connector developer about this method? Is the
destroy() only started exactly once during the lifetime of the connector,
or may it be retried?

3. What should be considered a reasonable custom implementation of the
destroy() call? What resources should it clean up by default?

I think we can broadly categorize the state a connector mutates among the
following
* Framework-managed state (e.g. source offsets, consumer offsets)
* Implementation detail state (e.g. debezium db history topic, audit
tables, temporary accounts)
* Third party system data (e.g. the actual data being written by a sink
connector)
* Third party system metadata (e.g. tables in a database, delivery
receipts, permissions)

I think it's apparent that the framework-managed state cannot/should not be
interacted with by the destroy() call. However, the framework could be
changed to clean up these resources at the same time that destroy() is
called. Is that out-of-scope of this proposal, and better handled by manual
intervention?
From the text of the KIP, I think it explicitly includes the Implementation
detail state, which should not be depended on externally and should be safe
to clean up during a destroy(). I think this is completely reasonable.
Are the third-party data and metadata out-of-scope for this proposal? Can
we officially recommend against it, or should we accommodate users

[krita] [Bug 461660] New: Bug at startup

2022-11-10 Thread Hector
https://bugs.kde.org/show_bug.cgi?id=461660

Bug ID: 461660
   Summary: Bug at startup
Classification: Applications
   Product: krita
   Version: nightly build (please specify the git hash!)
  Platform: Microsoft Windows
OS: Microsoft Windows
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: * Unknown
  Assignee: krita-bugs-n...@kde.org
  Reporter: misha.bossm...@yandex.ru
  Target Milestone: ---

I'm not good at describing bugs by all the rules, sorry.

In the nightly builds, I noticed that sometimes Krita does not start the first
time. It turned out that the process starts, but freezes in some kind of loop.
It consumes CPU, but takes up less than 30 MB of RAM. At the same time, I
can start more Krita processes, which load as expected. But the first
process keeps consuming CPU, at about 15-20% in my case. 
I also noticed that the nightly version takes longer on the "loading
resource type" step during startup. I tried to delete resourcecache, but I didn't
notice any difference. 

Windows 10 (21h2, 22h2). 
Only in Krita Nightly. The oldest one I have is from October 3, so I don't know
since... The build from October 10 works the same way.
Krita is not installed. Only portable from binary-factory.

-- 
You are receiving this mail because:
You are watching all bug changes.

[jira] [Updated] (KAFKA-14354) Add 'destroyed()' callback method to Connector API

2022-11-07 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino updated KAFKA-14354:
-
Summary: Add 'destroyed()' callback method to Connector API  (was: Add 
delete callback method to Connector API)

> Add 'destroyed()' callback method to Connector API
> --
>
> Key: KAFKA-14354
> URL: https://issues.apache.org/jira/browse/KAFKA-14354
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
>
> It would be useful to have a callback method added to the Connector API, so 
> connectors extending the SourceConnector and SinkConnector classes can be 
> notified when their connector instance is being deleted. This will give a 
> chance to connectors to perform any cleanup tasks (e.g. deleting external 
> resources, or deleting offsets) before the connector is completely removed 
> from the cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PATCH v4] i2c/pasemi: PASemi I2C controller IRQ enablement

2022-11-06 Thread Hector Martin
On 05/11/2022 20.56, Arminder Singh wrote:
> This patch adds IRQ support to the PASemi I2C controller driver to
> increase the performance of I2C transactions on platforms with PASemi I2C
> controllers. While primarily intended for Apple silicon platforms, this
> patch should also help in enabling IRQ support for older PASemi hardware
> as well should the need arise.
> 
> This version of the patch has been tested on an M1 Ultra Mac Studio,
> as well as an M1 MacBook Pro, and userspace launches successfully
> while using the IRQ path for I2C transactions.
> 
> Signed-off-by: Arminder Singh 
> ---
> This version of the patch fixes some reliability issues brought up by
> Hector and Sven in the v3 patch email thread. First, this patch
> increases the timeout value in pasemi_smb_waitready to 100ms from 10ms,
> as the original 10ms timeout in the driver was incorrect according to the
> controller's datasheet as Hector pointed out in the v3 patch email thread.
> This incorrect timeout had caused some issues with the tps6598x controller
> on Apple silicon platforms.
> 
> This version of the patch also adds a reg_write to REG_IMASK in the IRQ
> handler, because as Sven pointed out in the previous thread, the I2C
> transaction interrupt is level sensitive so not masking the interrupt in
> REG_IMASK will cause the interrupt to trigger again when it leaves the IRQ
> handler until it reaches the call to reg_write after the completion expires.
> 
> Patch changelog:
> 
> v3 to v4 changes:
>  - Increased the timeout value for I2C transactions to 100ms, as the original
>10ms timeout in the driver was incorrect according to the I2C chip's
>datasheet. Mitigates an issue with the tps6598x controller on Apple
>silicon platforms.
>  - Added a reg_write to REG_IMASK inside the IRQ handler, which prevents
>the IRQ from triggering again after leaving the IRQ handler, as the
>IRQ is level-sensitive.
> 
> v2 to v3 changes:
>  - Fixed some whitespace and alignment issues found in v2 of the patch
> 
> v1 to v2 changes:
>  - moved completion setup from pasemi_platform_i2c_probe to
>pasemi_i2c_common_probe to allow PASemi and Apple platforms to share
>common completion setup code in case PASemi hardware gets IRQ support
>added
>  - initialized the status variable in pasemi_smb_waitready when going down
>the non-IRQ path
>  - removed an unnecessary cast of dev_id in the IRQ handler
>  - fixed alignment of struct member names in i2c-pasemi-core.h
>(addresses Christophe's feedback in the original submission)
>  - IRQs are now disabled after the wait_for_completion_timeout call
>instead of inside the IRQ handler
>(prevents the IRQ from going off after the completion times out)
>  - changed the request_irq call to a devm_request_irq call to obviate
>the need for a remove function and a free_irq call
>(thanks to Sven for pointing this out in the original submission)
>  - added a reinit_completion call to pasemi_reset 
>as a failsafe to prevent missed interrupts from causing the completion
>to never complete (thanks to Arnd Bergmann for pointing this out)
>  - removed the bitmask variable in favor of just using the value
>directly (it wasn't used anywhere else)
> 
> v3: 
> https://lore.kernel.org/linux-i2c/mn2pr01mb5358ed8fc32c0cfaebd4a0e19f...@mn2pr01mb5358.prod.exchangelabs.com/T/
> 
> v2: 
> https://lore.kernel.org/linux-i2c/mn2pr01mb535821c8058c7814b2f8eedf9f...@mn2pr01mb5358.prod.exchangelabs.com/T/
> 
> v1: 
> https://lore.kernel.org/linux-i2c/mn2pr01mb535838492432c910f2381f929f...@mn2pr01mb5358.prod.exchangelabs.com/T/
> 
>  drivers/i2c/busses/i2c-pasemi-core.c | 32 ++++
>  drivers/i2c/busses/i2c-pasemi-core.h |  5 
>  drivers/i2c/busses/i2c-pasemi-platform.c |  6 +
>  3 files changed, 38 insertions(+), 5 deletions(-)
> 

Reviewed-by: Hector Martin 

- Hector


[DISCUSS] KIP-883: Add delete callback method to Connector API

2022-11-03 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hi everyone,

I've submitted KIP-883, which introduces a callback to the public Connector API 
that is called when a connector is deleted:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-883%3A+Add+delete+callback+method+to+Connector+API

It adds a new `deleted()` method (open to better naming suggestions) to the 
org.apache.kafka.connect.connector.Connector abstract class, which will be 
invoked by Connect workers when a connector is being deleted. 
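
Roughly, the proposed shape is as follows (a simplified stand-in, shown only to 
illustrate the default no-op; see the KIP for the exact signature):

// Simplified stand-in for org.apache.kafka.connect.connector.Connector,
// showing only the newly proposed callback.
public abstract class Connector {

    public abstract void start(java.util.Map<String, String> props);

    public abstract void stop();

    // Proposed callback: invoked by the worker when the connector is being
    // deleted from the cluster. No-op by default so existing connectors are
    // unaffected; implementations may override it to clean up external resources.
    public void deleted() {
    }
}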

Feedback and comments are welcome.

Thank you!
Hector



[jira] [Updated] (KAFKA-14354) Add delete callback method to Connector API

2022-11-03 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino updated KAFKA-14354:
-
Description: It would be useful to have a callback method added to the 
Connector API, so connectors extending the SourceConnector and SinkConnector 
classes can be notified when their connector instance is being deleted. This 
will give a chance to connectors to perform any cleanup tasks (e.g. deleting 
external resources, or deleting offsets) before the connector is completely 
removed from the cluster.  (was: KIP-795: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator

The AbstractCoordinator should have a companion public interface that is part 
of Kafka's public API, so backwards compatibility can be maintained in future 
versions of the client libraries)

> Add delete callback method to Connector API
> ---
>
> Key: KAFKA-14354
> URL: https://issues.apache.org/jira/browse/KAFKA-14354
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
>
> It would be useful to have a callback method added to the Connector API, so 
> connectors extending the SourceConnector and SinkConnector classes can be 
> notified when their connector instance is being deleted. This will give a 
> chance to connectors to perform any cleanup tasks (e.g. deleting external 
> resources, or deleting offsets) before the connector is completely removed 
> from the cluster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14354) Add delete callback method to Connector API

2022-11-03 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino updated KAFKA-14354:
-
Priority: Minor  (was: Major)

> Add delete callback method to Connector API
> ---
>
> Key: KAFKA-14354
> URL: https://issues.apache.org/jira/browse/KAFKA-14354
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Minor
>
> KIP-795: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator
> The AbstractCoordinator should have a companion public interface that is part 
> of Kafka's public API, so backwards compatibility can be maintained in future 
> versions of the client libraries



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-14354) Add delete callback method to Connector API

2022-11-03 Thread Hector Geraldino (Jira)
Hector Geraldino created KAFKA-14354:


 Summary: Add delete callback method to Connector API
 Key: KAFKA-14354
 URL: https://issues.apache.org/jira/browse/KAFKA-14354
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Hector Geraldino
Assignee: Hector Geraldino


KIP-795: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator

The AbstractCoordinator should have a companion public interface that is part 
of Kafka's public API, so backwards compatibility can be maintained in future 
versions of the client libraries



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-14354) Add delete callback method to Connector API

2022-11-03 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino updated KAFKA-14354:
-
Component/s: KafkaConnect
 (was: clients)

> Add delete callback method to Connector API
> ---
>
> Key: KAFKA-14354
> URL: https://issues.apache.org/jira/browse/KAFKA-14354
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Major
>
> KIP-795: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator
> The AbstractCoordinator should have a companion public interface that is part 
> of Kafka's public API, so backwards compatibility can be maintained in future 
> versions of the client libraries



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-13434) Add a public API for AbstractCoordinator

2022-11-03 Thread Hector Geraldino (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Geraldino resolved KAFKA-13434.
--
Resolution: Won't Do

KIP has been discarded

> Add a public API for AbstractCoordinator
> 
>
> Key: KAFKA-13434
> URL: https://issues.apache.org/jira/browse/KAFKA-13434
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>    Reporter: Hector Geraldino
>Assignee: Hector Geraldino
>Priority: Major
>
> KIP-795: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator
> The AbstractCoordinator should have a companion public interface that is part 
> of Kafka's public API, so backwards compatibility can be maintained in future 
> versions of the client libraries



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PATCH v3] i2c/pasemi: PASemi I2C controller IRQ enablement

2022-11-01 Thread Hector Martin
On 07/10/2022 09.42, Arminder Singh wrote:
> This patch adds IRQ support to the PASemi I2C controller driver to 
> increase the performance of I2C transactions on platforms with PASemi I2C 
> controllers. While primarily intended for Apple silicon platforms, this 
> patch should also help in enabling IRQ support for older PASemi hardware 
> as well should the need arise.
> 
> Signed-off-by: Arminder Singh 
> ---
> This version of the patch has been tested on an M1 Ultra Mac Studio,
> as well as an M1 MacBook Pro, and userspace launches successfully
> while using the IRQ path for I2C transactions.
>
[...]

Please increase the timeout to 100ms for v4. 10ms was always wrong (the
datasheet says the hardware clock stretching timeout is 25ms, and most
i2c drivers have much larger timeouts), and with the tighter timing
achievable with the IRQ patchset we are seeing timeouts in tipd
controller requests which can clock-stretch for ~10ms themselves,
followed by a spiral of errors as the driver has pretty poor error
recovery. Increasing the timeout fixes the immediate problem/regression.

Other than that, I now have a patch that makes the whole timeout/error
detection/recovery much more robust, but I can submit it after this goes
in :)

- Hector


Re: [PATCH v2] drm/format-helper: Only advertise supported formats for conversion

2022-10-31 Thread Hector Martin
On 01/11/2022 01.15, Justin Forbes wrote:
> On Thu, Oct 27, 2022 at 8:57 AM Hector Martin  wrote:
>>
>> drm_fb_build_fourcc_list() currently returns all emulated formats
>> unconditionally as long as the native format is among them, even though
>> not all combinations have conversion helpers. Although the list is
>> arguably provided to userspace in precedence order, userspace can pick
>> something out-of-order (and thus break when it shouldn't), or simply
>> only support a format that is unsupported (and thus think it can work,
>> which results in the appearance of a hang as FB blits fail later on,
>> instead of the initialization error you'd expect in this case).
>>
>> Add checks to filter the list of emulated formats to only those
>> supported for conversion to the native format. This presumes that there
>> is a single native format (only the first is checked, if there are
>> multiple). Refactoring this API to drop the native list or support it
>> properly (by returning the appropriate emulated->native mapping table)
>> is left for a future patch.
>>
>> The simpledrm driver is left as-is with a full table of emulated
>> formats. This keeps all currently working conversions available and
>> drops all the broken ones (i.e. this a strict bugfix patch, adding no
>> new supported formats nor removing any actually working ones). In order
>> to avoid proliferation of emulated formats, future drivers should
>> advertise only XRGB as the sole emulated format (since some
>> userspace assumes its presence).
>>
>> This fixes a real user regression where the ?RGB2101010 support commit
>> started advertising it unconditionally where not supported, and KWin
>> decided to start to use it over the native format and broke, but also
>> fixes the spurious RGB565/RGB888 formats which have been wrongly
>> unconditionally advertised since the dawn of simpledrm.
>>
>> Fixes: 6ea966fca084 ("drm/simpledrm: Add [AX]RGB2101010 formats")
> 
> 
>> Cc: sta...@vger.kernel.org
>> Signed-off-by: Hector Martin 
> 
> There is a CC for stable on here, but this patch does not apply in any
> way on 6.0 or older kernels as the fourcc bits and considerable churn
> came in with the 6.1 merge window.  You don't happen to have a
> backport of this to 6.0 do you?

v1 is probably closer to such a backport, and I offered to figure it out
on Matrix but I heard you're already working on it ;)

- Hector


[yakuake] [Bug 363333] Processes started in yakuake terminals block indefinitely some time after switching to a different VT

2022-10-31 Thread Hector Martin
https://bugs.kde.org/show_bug.cgi?id=36

Hector Martin  changed:

   What|Removed |Added

 CC||hec...@marcansoft.com

--- Comment #8 from Hector Martin  ---
This also affects Konsole. Switching to another VT hangs processes that are
producing output, even if Konsole is minimized or the active tab is not the one
producing output.

-- 
You are receiving this mail because:
You are watching all bug changes.

[jira] [Commented] (FLINK-29609) Clean up jobmanager deployment on suspend after recording savepoint info

2022-10-30 Thread Hector Miuler Malpica Gallegos (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17626372#comment-17626372
 ] 

Hector Miuler Malpica Gallegos commented on FLINK-29609:


[~sriramgr] In my opinion, this should only happen in application mode; in 
session mode it should continue to exist, waiting for a new job.

> Clean up jobmanager deployment on suspend after recording savepoint info
> 
>
> Key: FLINK-29609
> URL: https://issues.apache.org/jira/browse/FLINK-29609
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Sriram Ganesh
>Priority: Major
> Fix For: kubernetes-operator-1.3.0
>
>
> Currently, in case of suspending with savepoint, the jobmanager pod will 
> linger there forever after cancelling the job.
> This is currently used to ensure consistency in case the 
> operator/cancel-with-savepoint operation fails.
> Once we are sure however that the savepoint has been recorded and the job is 
> shut down, we should clean up all the resources. Optionally we can make this 
> configurable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PATCH v2] drm/format-helper: Only advertise supported formats for conversion

2022-10-29 Thread Hector Martin
On 28/10/2022 17.07, Thomas Zimmermann wrote:
> In yesterday's discussion on IRC, it was said that several devices 
> advertise ARGB framebuffers when the hardware actually uses XRGB? Is 
> there hardware that supports transparent primary planes?

ARGB8888 hardware probably exists in the form of embedded systems with
preconfigured blending. For example, one could imagine an OSD-type setup
where there is a hardware video scaler controlled entirely outside of
DRM/KMS (probably by a horrible vendor driver), and the overlay
framebuffer is exposed via simpledrm as a dumb memory region, and
expects ARGB8888 to work. So ideally, we wouldn't expose XRGB8888 on
ARGB8888 systems.

But there is this problem:

arch/arm64/boot/dts/qcom/msm8998-oneplus-common.dtsi:
   format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sdm630-sony-xperia-nile.dtsi:
   format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sdm660-xiaomi-lavender.dts:
   format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sdm845-shift-axolotl.dts:
format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sdm850-samsung-w737.dts:
format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sm6125-sony-xperia-seine-pdx201.dts:
   format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sm6350-sony-xperia-lena-pdx213.dts:
   format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sm7225-fairphone-fp4.dts:
format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sm8150-sony-xperia-kumano.dtsi:
   format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi:
   format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sm8350-sony-xperia-sagami.dtsi:
   format = "a8r8g8b8";
arch/arm64/boot/dts/socionext/uniphier-ld20-akebi96.dts:
format = "a8r8g8b8";

I'm pretty sure those phones don't have transparent screens, nor
magically put video planes below the firmware framebuffer. If there are
12 device trees for phones in mainline which lie about having alpha
support, who knows how many more exist outside? If we stop advertising
pretend-XRGB8888 on them, I suspect we're going to break a lot of
software...

Of course, there is one "correct" solution here: have an actual
xrgb8888->argb8888 conversion helper that just clears the high byte.
Then those platforms lying about having alpha and using xrgb8888 from
userspace will take a performance hit, but they should arguably just fix
their device tree in that case. Maybe this is the way to go in this
case? Note that there would be no inverse conversion (no advertising
argb8888 on xrgb8888 backends), so that one would be dropped vs. what we
have today. This effectively keeps the "xrgb8888 helpers and nothing
else" rule while actually supporting it for argb8888 backend
framebuffers correctly. Any platforms actually wanting to use argb8888
framebuffers with meaningful alpha should be configuring their userspace
to preferentially render directly to argb8888 to avoid the perf hit anyway.

- Hector


[PATCH v2] drm/format-helper: Only advertise supported formats for conversion

2022-10-27 Thread Hector Martin
drm_fb_build_fourcc_list() currently returns all emulated formats
unconditionally as long as the native format is among them, even though
not all combinations have conversion helpers. Although the list is
arguably provided to userspace in precedence order, userspace can pick
something out-of-order (and thus break when it shouldn't), or simply
only support a format that is unsupported (and thus think it can work,
which results in the appearance of a hang as FB blits fail later on,
instead of the initialization error you'd expect in this case).

Add checks to filter the list of emulated formats to only those
supported for conversion to the native format. This presumes that there
is a single native format (only the first is checked, if there are
multiple). Refactoring this API to drop the native list or support it
properly (by returning the appropriate emulated->native mapping table)
is left for a future patch.

The simpledrm driver is left as-is with a full table of emulated
formats. This keeps all currently working conversions available and
drops all the broken ones (i.e. this a strict bugfix patch, adding no
new supported formats nor removing any actually working ones). In order
to avoid proliferation of emulated formats, future drivers should
advertise only XRGB8888 as the sole emulated format (since some
userspace assumes its presence).

This fixes a real user regression where the ?RGB2101010 support commit
started advertising it unconditionally where not supported, and KWin
decided to start to use it over the native format and broke, but also
fixes the spurious RGB565/RGB888 formats which have been wrongly
unconditionally advertised since the dawn of simpledrm.

Fixes: 6ea966fca084 ("drm/simpledrm: Add [AX]RGB2101010 formats")
Fixes: 11e8f5fd223b ("drm: Add simpledrm driver")
Cc: sta...@vger.kernel.org
Signed-off-by: Hector Martin 
---
I'm proposing this alternative approach after a heated discussion on
IRC. I'm out of ideas, if y'all don't like this one you can figure it
out for yourseves :-)

Changes since v1:
This v2 moves all the changes to the helper (so they will apply to
the upcoming ofdrm, though ofdrm also needs to be fixed to trim its
format table to only formats that should be emulated, probably only
XRGB, to avoid further proliferating the use of conversions),
and avoids touching more than one file. The API still needs cleanup
as mentioned (supporting more than one native format is fundamentally
broken, since the helper would need to tell the driver *what* native
format to use for *each* emulated format somehow), but all current and
planned users only pass in one native format, so this can (and should)
be fixed later.

Aside: After other IRC discussion, I'm testing nuking the
XRGB2101010 <-> ARGB2101010 advertisement (which does not involve
conversion) by removing those entries from simpledrm in the Asahi Linux
downstream tree. As far as I'm concerned, it can be removed if nobody
complains (by removing those entries from the simpledrm array), if
maintainers are generally okay with removing advertised formats at all.
If so, there might be other opportunities for further trimming the list
non-native formats advertised to userspace.

Tested with KWin-X11, KWin-Wayland, GNOME-X11, GNOME-Wayland, and Weston
on both XRGB2101010 and RGB simpledrm framebuffers.

 drivers/gpu/drm/drm_format_helper.c | 66 -
 1 file changed, 47 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/drm_format_helper.c 
b/drivers/gpu/drm/drm_format_helper.c
index e2f76621453c..3ee59bae9d2f 100644
--- a/drivers/gpu/drm/drm_format_helper.c
+++ b/drivers/gpu/drm/drm_format_helper.c
@@ -807,6 +807,38 @@ static bool is_listed_fourcc(const uint32_t *fourccs, 
size_t nfourccs, uint32_t
return false;
 }
 
+static const uint32_t conv_from_xrgb8888[] = {
+   DRM_FORMAT_XRGB8888,
+   DRM_FORMAT_ARGB8888,
+   DRM_FORMAT_XRGB2101010,
+   DRM_FORMAT_ARGB2101010,
+   DRM_FORMAT_RGB565,
+   DRM_FORMAT_RGB888,
+};
+
+static const uint32_t conv_from_rgb565_888[] = {
+   DRM_FORMAT_XRGB8888,
+   DRM_FORMAT_ARGB8888,
+};
+
+static bool is_conversion_supported(uint32_t from, uint32_t to)
+{
+   switch (from) {
+   case DRM_FORMAT_XRGB8888:
+   case DRM_FORMAT_ARGB8888:
+   return is_listed_fourcc(conv_from_xrgb8888, 
ARRAY_SIZE(conv_from_xrgb8888), to);
+   case DRM_FORMAT_RGB565:
+   case DRM_FORMAT_RGB888:
+   return is_listed_fourcc(conv_from_rgb565_888, 
ARRAY_SIZE(conv_from_rgb565_888), to);
+   case DRM_FORMAT_XRGB2101010:
+   return to == DRM_FORMAT_ARGB2101010;
+   case DRM_FORMAT_ARGB2101010:
+   return to == DRM_FORMAT_XRGB2101010;
+   default:
+   return false;
+   }
+}
+
 /**
  * drm_fb_build_fourcc_list - Filters a list of supported color formats against
  *the device's native formats
@@

Re: [PATCH] drm/simpledrm: Only advertise formats that are supported

2022-10-27 Thread Hector Martin
On 27/10/2022 20.08, Thomas Zimmermann wrote:
> We currently have two DRM drivers that call drm_fb_build_fourcc_list(): 
> simpledrm and ofdrm. I've been very careful to keep the format selection 
> in sync between them. (That's the reason why the helper exists at all.) 
> If the drivers start to use different logic, it will only become more 
> chaotic.
> 
> The format array of ofdrm is at [1]. At a minimum, ofdrm should get the 
> same fix as simpledrm.

Looks like this was merged recently, so I didn't see it on my tree (I
was basing off of 6.1-rc2).

Since this patch is a regression fix, it should be applied to drm-fixes
(and automatically picked up by stable folks) soon to be fixed in 6.1,
and then we can fix whatever is needed in ofdrm separately in drm-tip.
As long as ofdrm is ready for the new behavior prior to the merge of
drm-tip with 6.1, there will be no breakage.

In this case, the change required to ofdrm is probably just to replace
that array with just DRM_FORMAT_XRGB8888 (which should be the only
supported fallback format for new drivers) and then to add a test to
only expose it for formats for which we actually have conversion
helpers, similar to what the the switch() enumerates here. That logic
could later be moved into the helper as a refactor.

>>  /* Primary plane */
>> +switch (format->format) {
> 
> I trust you when you say that ->XRGB8888 is not enough. But 
> although I've read your replies, I still don't understand why this 
> switch is necessary.
> 
> Why don't we call drm_fb_build_fourcc_list() with the native 
> format/formats and let it append a number of formats, such as adding 
> XRGB888, adding ARGB if necessary, adding ARGB2101010 if necessary. 
> Each with a elaborate comment why and which userspace needs the format. (?)

That would be fine to do, it would just be moving the logic to the
helper. That kind of refactoring is better suited for subsequent
patches. This is a regression fix, it attempts to minimize the amount of
refactoring, which means keeping the logic in simpledrm, to make it
easier to review for correctness.

Also, that would change the API of that function, which would likely
make the merge with the new ofdrm painful. The way things are now, a
small fix to ofdrm will make it compatible with both the state before
and after this patch, which means the merge will go through painlessly,
and then we can just refactor everything once everything is in the same
tree.

- Hector


Re: [PATCH] drm/simpledrm: Only advertise formats that are supported

2022-10-27 Thread Hector Martin
On 27/10/2022 19.13, Hector Martin wrote:
> Until now, simpledrm unconditionally advertised all formats that can be
> supported natively as conversions. However, we don't actually have a
> full conversion matrix of helpers. Although the list is arguably
> provided to userspace in precedence order, userspace can pick something
> out-of-order (and thus break when it shouldn't), or simply only support
> a format that is unsupported (and thus think it can work, which results
> in the appearance of a hang as FB blits fail later on, instead of the
> initialization error you'd expect in this case).
> 
> Split up the format table into separate ones for each required subset,
> and then pick one based on the native format. Also remove the
> native<->conversion overlap check from the helper (which doesn't make
> sense any more, since the native format is advertised anyway and this
> way RGB565/RGB888 can share a format table), and instead print the same
> message in simpledrm when the native format is not one for which we have
> conversions at all.
> 
> This fixes a real user regression where the ?RGB2101010 support commit
> started advertising it unconditionally where not supported, and KWin
> decided to start to use it over the native format, but also fixes
> the spurious RGB565/RGB888 formats which have been wrongly
> unconditionally advertised since the dawn of simpledrm.
> 
> Note: this patch is merged because splitting it into two patches, one
> for the helper and one for simpledrm, would regress at the midpoint
> regardless of the order. If simpledrm is changed first, that would break
> working conversions to RGB565/RGB888 (since those share a table that
> does not include the native formats). If the helper is changed first, it
> would start spuriously advertising all conversion formats when the
> native format doesn't have any supported conversions at all.
> 
> Acked-by: Pekka Paalanen 
> Fixes: 6ea966fca084 ("drm/simpledrm: Add [AX]RGB2101010 formats")
> Fixes: 11e8f5fd223b ("drm: Add simpledrm driver")
> Cc: sta...@vger.kernel.org
> Signed-off-by: Hector Martin 
> ---
>  drivers/gpu/drm/drm_format_helper.c | 15 ---
>  drivers/gpu/drm/tiny/simpledrm.c| 62 +
>  2 files changed, 55 insertions(+), 22 deletions(-)
> 

To answer some issues that came up on IRC:

Q: Why not move this logic / the tables to the helper?
A: Because simpledrm is the only user so far, and this patch is Cc:
stable because we have an actual regression that broke KDE. I'm going
for the minimal patch that keeps everything that worked to this day
working, and stops advertising things that never worked, no more, no
less. Future refactoring can always happen later (and is probably a good
idea when other drivers start using the helper).

Q: XRGB8888 is supposed to be the only canonical format. Why not just
drop everything but conversions to/from XRGB8888?
A: Because that would regress things that work today, and could break
existing userspace on some platforms. That may be a good idea, but I
think we should fix the bugs first, and leave the discussion of whether
we want to actually remove existing functionality for later.

Q: Why not just add a conversion from XRGB2101010 to XRGB8888?
A: Because that would only fix KDE, and would make it slower vs. not
advertising XRGB2101010 at all (double conversions, plus kernel
conversion can be slower). Plus, it doesn't make any sense as it only
fills in one entry in the conversion matrix. If we wanted to actually
fill out the conversion matrix, and thus support everything simpledrm
has advertised to date correctly, we would need helpers for:

rgb565->rgb888
rgb888->rgb565
rgb565->xrgb2101010
rgb888->xrgb2101010
xrgb2101010->rgb565
xrgb2101010->rgb888
xrgb2101010->xrgb8888

That seems like overkill and unlikely to actually help anyone, it'd just
give userspace more options to shoot itself in the foot with a
sub-optimal format choice. And it's a pile of code.
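
(For a sense of scale, here is a rough sketch -- not actual DRM code --
of the inner loop that just one of those helpers, xrgb2101010->rgb565,
would need: pull out the 10-bit channels, keep the top 5/6/5 bits,
repack. The real helpers additionally deal with clipping, line pitches
and destination caching, and you'd need a variant of this for each of
the seven entries above.)

#include <stdint.h>
#include <stddef.h>

static void sketch_xrgb2101010_to_rgb565_line(uint16_t *dst,
					      const uint32_t *src,
					      size_t pixels)
{
	for (size_t i = 0; i < pixels; i++) {
		uint32_t px = src[i];
		uint16_t r = (px >> 20) & 0x3ff;	/* bits 29:20 */
		uint16_t g = (px >> 10) & 0x3ff;	/* bits 19:10 */
		uint16_t b = px & 0x3ff;		/* bits 9:0 */

		/* keep the most significant 5/6/5 bits of each channel */
		dst[i] = ((r >> 5) << 11) | ((g >> 4) << 5) | (b >> 5);
	}
}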

- Hector


[PATCH] drm/simpledrm: Only advertise formats that are supported

2022-10-27 Thread Hector Martin
Until now, simpledrm unconditionally advertised all formats that can be
supported natively as conversions. However, we don't actually have a
full conversion matrix of helpers. Although the list is arguably
provided to userspace in precedence order, userspace can pick something
out-of-order (and thus break when it shouldn't), or simply only support
a format that is unsupported (and thus think it can work, which results
in the appearance of a hang as FB blits fail later on, instead of the
initialization error you'd expect in this case).

Split up the format table into separate ones for each required subset,
and then pick one based on the native format. Also remove the
native<->conversion overlap check from the helper (which doesn't make
sense any more, since the native format is advertised anyway and this
way RGB565/RGB888 can share a format table), and instead print the same
message in simpledrm when the native format is not one for which we have
conversions at all.

This fixes a real user regression where the ?RGB2101010 support commit
started advertising it unconditionally where not supported, and KWin
decided to start to use it over the native format, but also fixes
the spurious RGB565/RGB888 formats which have been wrongly
unconditionally advertised since the dawn of simpledrm.

Note: this patch is merged because splitting it into two patches, one
for the helper and one for simpledrm, would regress at the midpoint
regardless of the order. If simpledrm is changed first, that would break
working conversions to RGB565/RGB888 (since those share a table that
does not include the native formats). If the helper is changed first, it
would start spuriously advertising all conversion formats when the
native format doesn't have any supported conversions at all.

Acked-by: Pekka Paalanen 
Fixes: 6ea966fca084 ("drm/simpledrm: Add [AX]RGB2101010 formats")
Fixes: 11e8f5fd223b ("drm: Add simpledrm driver")
Cc: sta...@vger.kernel.org
Signed-off-by: Hector Martin 
---
 drivers/gpu/drm/drm_format_helper.c | 15 ---
 drivers/gpu/drm/tiny/simpledrm.c| 62 +
 2 files changed, 55 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/drm_format_helper.c b/drivers/gpu/drm/drm_format_helper.c
index e2f76621453c..c60c13f3a872 100644
--- a/drivers/gpu/drm/drm_format_helper.c
+++ b/drivers/gpu/drm/drm_format_helper.c
@@ -864,20 +864,6 @@ size_t drm_fb_build_fourcc_list(struct drm_device *dev,
++fourccs;
}
 
-   /*
-* The plane's atomic_update helper converts the framebuffer's color format
-* to a native format when copying to device memory.
-*
-* If there is not a single format supported by both, device and
-* driver, the native formats are likely not supported by the conversion
-* helpers. Therefore *only* support the native formats and add a
-* conversion helper ASAP.
-*/
-   if (!found_native) {
-   drm_warn(dev, "Format conversion helpers required to add extra formats.\n");
-   goto out;
-   }
-
/*
 * The extra formats, emulated by the driver, go second.
 */
@@ -898,7 +884,6 @@ size_t drm_fb_build_fourcc_list(struct drm_device *dev,
++fourccs;
}
 
-out:
return fourccs - fourccs_out;
 }
 EXPORT_SYMBOL(drm_fb_build_fourcc_list);
diff --git a/drivers/gpu/drm/tiny/simpledrm.c b/drivers/gpu/drm/tiny/simpledrm.c
index 18489779fb8a..1257411f3d44 100644
--- a/drivers/gpu/drm/tiny/simpledrm.c
+++ b/drivers/gpu/drm/tiny/simpledrm.c
@@ -446,22 +446,48 @@ static int simpledrm_device_init_regulators(struct simpledrm_device *sdev)
  */
 
 /*
- * Support all formats of simplefb and maybe more; in order
- * of preference. The display's update function will do any
+ * Support the subset of formats that we have conversion helpers for,
+ * in order of preference. The display's update function will do any
  * conversion necessary.
  *
  * TODO: Add blit helpers for remaining formats and uncomment
  *   constants.
  */
-static const uint32_t simpledrm_primary_plane_formats[] = {
+
+/*
+ * Supported conversions to RGB565 and RGB888:
+ *   from [AX]RGB8888
+ */
+static const uint32_t simpledrm_primary_plane_formats_base[] = {
+   DRM_FORMAT_XRGB8888,
+   DRM_FORMAT_ARGB8888,
+};
+
+/*
+ * Supported conversions to [AX]RGB8888:
+ *   A/X variants (no-op)
+ *   from RGB565
+ *   from RGB888
+ */
+static const uint32_t simpledrm_primary_plane_formats_xrgb8888[] = {
	DRM_FORMAT_XRGB8888,
	DRM_FORMAT_ARGB8888,
+   DRM_FORMAT_RGB888,
DRM_FORMAT_RGB565,
//DRM_FORMAT_XRGB1555,
//DRM_FORMAT_ARGB1555,
-   DRM_FORMAT_RGB888,
+};
+
+/*
+ * Supported conversions to [AX]RGB2101010:
+ *   A/X variants (no-op)
+ *   from [AX]RGB8888
+ */
+static const uint32_t simpledrm_primary_plane_formats_xrgb2101010[] = {
	DRM_FORMAT_XRGB2101010,

[jira] [Commented] (FLINK-29609) Clean up jobmanager deployment on suspend after recording savepoint info

2022-10-26 Thread Hector Miuler Malpica Gallegos (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-29609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17624585#comment-17624585
 ] 

Hector Miuler Malpica Gallegos commented on FLINK-29609:


Please take into account stateless batch processes, which, once they have 
finished processing, should clean up all their resources

> Clean up jobmanager deployment on suspend after recording savepoint info
> 
>
> Key: FLINK-29609
> URL: https://issues.apache.org/jira/browse/FLINK-29609
> Project: Flink
>  Issue Type: Improvement
>  Components: Kubernetes Operator
>Reporter: Gyula Fora
>Assignee: Sriram Ganesh
>Priority: Major
> Fix For: kubernetes-operator-1.3.0
>
>
> Currently, in case of suspending with a savepoint, the jobmanager pod will 
> linger there forever after cancelling the job.
> This is currently used to ensure consistency in case the 
> operator/cancel-with-savepoint operation fails.
> Once we are sure however that the savepoint has been recorded and the job is 
> shut down, we should clean up all the resources. Optionally we can make this 
> configurable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: bindfs for web docroot - is this sane?

2022-10-11 Thread Richard Hector

On 12/10/22 00:26, Dan Ritter wrote:

Richard Hector wrote:

Hi all,

I host a few websites, mostly Wordpress.

I prefer to have the site files (mostly) owned by an owner user, and php-fpm
runs as a different user, so that it can't write its own code. For uploads,
those directories are group-writeable.

Then for site developers (who might be contractors to my client) to be able
to update the site, they need read/write access to the docroot, but I don't
want them all logging in using the same account/credentials.

So I've set up bindfs ( https://bindfs.org/ ) with the following fstab line
(example at this stage):

/srv/wptest-home/doc_root /home/richard/wptest-home/doc_root fuse.bindfs 
--force-user=richard,--force-group=richard,--create-for-user=wptest-home,--create-for-group=wptest-home
0 0

That means they can see their own 'view' of the docroot under their own home
directory, and they can create files as needed, which will have the correct
owner under /srv. I haven't yet looked at what happens with the uploaded and
cached files which are owned by the php user; hopefully that works ok.

This means I don't need to worry about sudo and similar things, or
chown/chgrp - which in turn means I should be able to offer sftp as an
alternative to full ssh logins. It can probably even be chrooted.

Does that sound like a sane plan? Are there gotchas I haven't spotted?


That's a solution which has worked in similar situations in the
past, but it runs into problems with accountability and
debugging.

The better solution is to use a versioning system -- git is the
default these days, subversion will certainly work -- and
require your site developers to make their changes to the
version controlled repository. The repo is either automatically
(cron, usually) or manually (dev sends an email or a ticket)
updated on the web host.


I agree that a git-based deployment scheme would be good. However, I 
understand that it's considered bad practice for the docroot to itself 
be a git repo, which means writing scripts to check out the right 
version and then deploy it (which might also help with setting the right 
permissions).


I'm also not entirely comfortable with either a cron or ticket-based 
trigger - I'd want to look into either git hooks (but that's on the 
wrong machine), or maybe a webapp with a deploy button.


And then there's the issue of what is in git and what isn't, and how to 
customise the installation after checkout - eg setting the site name/url 
to distinguish it from the dev/staging site or whatever, setting db 
passwords etc. More stuff for the deployment script to do, I guess.


So I like this idea, but it's a lot more work. And I have to convince my 
clients and/or their devs to use it, which might require learning git. 
And I'm not necessarily good enough at git myself to do that teaching well.



- devs don't get accounts on the web host at all


They might need it anyway, for running wp cli commands etc (especially 
given the privilege separation which means that installing plugins via 
the WP admin pages won't work - or would you include the plugins in the 
git repo?)



- you can resolve the conflicts of two people working on the
   same site

True.


- automatic backups, assuming you have a repo not on this server


I have backups of the web server; backups of the repo as well would be good.


- easy revert to a previous version


True.


- easy deployment to multiple servers for load balancing

True, though I'm not at that level at this point.


Drawbacks:

- devs have to have a local webserver to test their changes

Yes, or a dev server/site provided by me


- devs have to follow the process

And have to know how, yes


- someone has to resolve conflicts or decide what the deployed
   version is

True anyway

Note that this method doesn't stop the dev(s) using git anyway.

In summary, I think I want to offer a git-based method, but I think it 
would work ok in combination with this, which is initially simpler.


It sounds like there's nothing fundamentally broken about it, at least :-)

Cheers,
Richard



Re: bindfs for web docroot - is this sane?

2022-10-11 Thread Richard Hector

On 11/10/22 22:40, hede wrote:

On 11.10.2022 10:03 Richard Hector wrote:

[...]
Then for site developers (who might be contractors to my client) to be
able to update teh site, they need read/write access to the docroot,
but I don't want them all logging in using the same
account/credentials.
[...]
Does that sound like a sane plan? Are there gotchas I haven't spotted?


I think I'm not able to assess the bind-mount question, but...
Isn't that a use case for ACLs? (incl. default ACLs for the webservers 
user here?)


Yes, probably. However, I looked at ACLs earlier (months ago at least), 
and they did my head in ...


Files will then still be owned by the user who created them. But your 
default-user has all  (predefined) rights on them.


Having them owned by the user that created them is good for 
accountability, but bad for glancing at ls output to see if everything 
looks right.


I'd probably prefer that because - by instinct - I have a bad feeling 
regarding security if one user can slip/foist(?) a file to be "created" 
by some other user. But that's only a feeling without knowing all the 
circumstances.


They can only have it owned by one specific user, but I acknowledge 
possible issues there.


And this way it's always clear which users have access by looking at the 
ACLs, while bind mounts defined elsewhere are (maybe) less 
transparent. And you always know who created them, if something goes 
wrong, for example.


Nothing is clear to me when I look at ACLs :-) I do have the output of 
'last' (for a while) to see who is likely to have created them.


On the other hand, if you know of a good resource for better 
understanding ACLs, preferably with examples that are similar to my use 
case, I'd love to see it :-)


?) I'm not native English and slip or foist are maybe the wrong terms / 
wrongly translated. The context is that one user creates files and the 
system marks them as "created by" some other user.


Seem fine to me :-) But they're owned by the other user; I wouldn't 
assume that that user created them. Especially when that user isn't 
directly a person.


Thanks,
Richard



bindfs for web docroot - is this sane?

2022-10-11 Thread Richard Hector

Hi all,

I host a few websites, mostly Wordpress.

I prefer to have the site files (mostly) owned by an owner user, and 
php-fpm runs as a different user, so that it can't write its own code. 
For uploads, those directories are group-writeable.


Then for site developers (who might be contractors to my client) to be 
able to update the site, they need read/write access to the docroot, but 
I don't want them all logging in using the same account/credentials.


So I've set up bindfs ( https://bindfs.org/ ) with the following fstab 
line (example at this stage):


/srv/wptest-home/doc_root /home/richard/wptest-home/doc_root fuse.bindfs 
--force-user=richard,--force-group=richard,--create-for-user=wptest-home,--create-for-group=wptest-home 
0 0


That means they can see their own 'view' of the docroot under their own 
home directory, and they can create files as needed, which will have the 
correct owner under /srv. I haven't yet looked at what happens with the 
uploaded and cached files which are owned by the php user; hopefully 
that works ok.


This means I don't need to worry about sudo and similar things, or 
chown/chgrp - which in turn means I should be able to offer sftp as an 
alternative to full ssh logins. It can probably even be chrooted.


Does that sound like a sane plan? Are there gotchas I haven't spotted?

Cheers,
Richard



Re: nginx.conf woes

2022-10-10 Thread Richard Hector

On 3/10/22 02:07, Patrick Kirk wrote:

Hi all,

I have 2 sites to run from one server.  Both are based on ASP.Net Core.  
Both have SSL certs from letsencrypt.  One works perfectly.  The other 
sort of works.


Firstly, I notice that cleardragon.com and kirks.net resolve to 
different addresses, though maybe cloudflare forwards kirks.net to the 
same place. But the setups are different.


Or maybe you're using a different dns or other system to reach your 
pre-production system.


If I go to http://localhost:5100 by redirecting to 
https://localhost:5101 and then it warns of an invalid certificate.


I'm a bit unclear on this - I guess these are both the upstreams? The 
upstream (ASP thing?) also redirects http to https?


Is nginx supposed to handle its upstream redirecting it to https?

Anyway, the invalid cert is expected, because you presumably don't have 
a cert for 'localhost'.


If 
I try lynx http://cleardragon.com a similar redirect takes place and I 
get a "Alert!: Unable to connect to remote host" error and lynx closes down.


I see a website, but then again, maybe I'm looking at the production 
site and you're not.


This redirect is presumably the one in your nginx config, rather than 
the one done by the upstream.


Does connecting explicitly to https://cleardragon.com also fail?



When I do sudo tail -f /var/log/nginx/error.log I see: 2022/10/02 
12:44:22 [notice] 1624399#1624399: signal process started


I don't know about this - lots of people report it, but I don't see 
answers. But it's a notice rather than an error.


Cheers,
Richard



Re:[DISCUSS] KIP-874: TopicRoundRobinAssignor

2022-10-07 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hi Mathieu. I took a look at your KIP and have a couple questions. 

If the goal is to do the partition assignments at a topic level, wouldn't 
having single-partition topics solve this problem? 

You also mentioned that your goal is to minimize the potential of a poison pill 
message breaking all members of a group (by keeping track of which topics have 
'failed'), but it is not clear how this can be achieved with this assignor. If 
we imagine a scenario where:

* A group has 3 members (A, B, C)
* Members are subscribed to 3 topics (T1, T2, T3)
* Each member is assigned one topic (A[T1], B[T2], C[T3])
* One member fails to consume from a topic/partition (B[T2]), and goes into 
failed state 

How will the group leader know that T2 should not be re-assigned on the next 
rebalance? Can you elaborate a bit more on the mechanisms used to communicate 
this state to the other group members?

Thanks

From: dev@kafka.apache.org At: 10/05/22 03:47:33 UTC-4:00To:  
dev@kafka.apache.org
Subject: [DISCUSS] KIP-874: TopicRoundRobinAssignor

Hi Kafka Developers,

My proposal is to add a new partition assignment strategy at the topic
level in order to:
 - provide better data consistency per consumed topic in case of an exception
 - provide a more thread-safe solution for the consumer
in the case where there are multiple consumers and multiple topics.

Here is the link to the KIP with all the explanations:
https://cwiki.apache.org/confluence/x/XozGDQ

Thank you in advance for your feedback,
Mathieu




[Warzone2100-commits] [Warzone2100/warzone2100] c1cb49: Added difficulty selector to debug menu

2022-10-06 Thread Hector Lucero via Warzone2100-commits
  Branch: refs/heads/master
  Home:   https://github.com/Warzone2100/warzone2100
  Commit: c1cb494d171add05d98feab5df664092affac4c7
  
https://github.com/Warzone2100/warzone2100/commit/c1cb494d171add05d98feab5df664092affac4c7
  Author: kammy 
  Date:   2022-10-07 (Fri, 07 Oct 2022)

  Changed paths:
M src/levels.cpp
M src/qtscript.cpp
M src/qtscript.h
M src/wzscriptdebug.cpp

  Log Message:
  ---
  Added difficulty selector to debug menu




___
Warzone2100-commits mailing list
Warzone2100-commits@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/warzone2100-commits


Connector API callbacks for create/delete events

2022-10-05 Thread Hector Geraldino (BLOOMBERG/ 919 3RD A)
Hi,

We have some custom connectors that require provisioning external resources 
(think of creating queues, S3 buckets, or activating accounts) when the 
connector instance is created, but that also need to clean up these resources 
(delete, deactivate) when the connector instance is deleted.

The connector API (org.apache.kafka.connect.connector.Connector) provides 
start() and stop() methods and, while we can probably work around this in the 
start() method by checking whether the initialization of external resources has 
already been done, there is currently no hook that a connector can use to 
perform any cleanup task when it is deleted.

I'm planning to write a KIP that enhances the Connector API by having methods 
that are invoked by the Herder when connectors are created and/or deleted; but 
before doing so, I wanted to ask the community if there are already some 
workarounds that can be used to achieve these tasks.

Thank you!

[jenkinsci/nexus-platform-plugin]

2022-10-05 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-7295-solving-antlr-version-mismatch
  Home:   https://github.com/jenkinsci/nexus-platform-plugin

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-7295-solving-antlr-version-mismatch/fea48a-00%40github.com.


[jenkinsci/nexus-platform-plugin] 7da0a8: INT-7295 Solving ANTLR version mismatch (#227)

2022-10-05 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/main
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: 7da0a8ed0e1121973df72051ad93ac299bdc3963
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/7da0a8ed0e1121973df72051ad93ac299bdc3963
  Author: Hector Danilo Hurtado Olaya 
  Date:   2022-10-05 (Wed, 05 Oct 2022)

  Changed paths:
M pom.xml

  Log Message:
  ---
  INT-7295 Solving ANTLR version mismatch (#227)


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/main/869515-7da0a8%40github.com.


[jenkinsci/nexus-platform-plugin] fea48a: INT-7295 Solving ANTLR version mismatch

2022-10-04 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-7295-solving-antlr-version-mismatch
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: fea48a3c615dddc78e83b1bdc9ccc3f516dde5a2
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/fea48a3c615dddc78e83b1bdc9ccc3f516dde5a2
  Author: Hector Hurtado 
  Date:   2022-10-04 (Tue, 04 Oct 2022)

  Changed paths:
M pom.xml

  Log Message:
  ---
  INT-7295 Solving ANTLR version mismatch


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-7295-solving-antlr-version-mismatch/00-fea48a%40github.com.


[jenkinsci/nexus-platform-plugin] 869515: INT-7293 Making stage unstable for Jenkins and Blu...

2022-10-04 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/main
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: 8695153b5c4c2891b1fb88cd959b5303b67f3373
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/8695153b5c4c2891b1fb88cd959b5303b67f3373
  Author: Hector Danilo Hurtado Olaya 
  Date:   2022-10-04 (Tue, 04 Oct 2022)

  Changed paths:
M pom.xml
M src/main/java/org/sonatype/nexus/ci/iq/PolicyEvaluatorExecution.groovy
M 
src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorIntegrationTest.groovy

  Log Message:
  ---
  INT-7293 Making stage unstable for Jenkins and Blue Ocen graphs (#226)


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/main/0be6cc-869515%40github.com.


[jenkinsci/nexus-platform-plugin]

2022-10-04 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-7293-making-stage-unstable
  Home:   https://github.com/jenkinsci/nexus-platform-plugin

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-7293-making-stage-unstable/f4f795-00%40github.com.


[jenkinsci/nexus-platform-plugin] f4f795: INT-7293 Making stage unstable for Jenkins and Blu...

2022-10-03 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-7293-making-stage-unstable
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: f4f7958ab2a8d2b7c7c38d3395400c9ba8316668
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/f4f7958ab2a8d2b7c7c38d3395400c9ba8316668
  Author: Hector Hurtado 
  Date:   2022-10-03 (Mon, 03 Oct 2022)

  Changed paths:
M pom.xml
M src/main/java/org/sonatype/nexus/ci/iq/PolicyEvaluatorExecution.groovy
M 
src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorIntegrationTest.groovy

  Log Message:
  ---
  INT-7293 Making stage unstable for Jenkins and Blue Ocen graphs


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-7293-making-stage-unstable/00-f4f795%40github.com.


[jenkinsci/nexus-platform-plugin]

2022-09-27 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/INT-7283-updating-integrations-links
  Home:   https://github.com/jenkinsci/nexus-platform-plugin

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-7283-updating-integrations-links/0d3d78-00%40github.com.


[jenkinsci/nexus-platform-plugin] 0be6cc: INT-7283 Updating integrations help pages links (#...

2022-09-27 Thread 'Hector Danilo Hurtado Olaya' via Jenkins Commits
  Branch: refs/heads/main
  Home:   https://github.com/jenkinsci/nexus-platform-plugin
  Commit: 0be6ccc80c3ee91a8bc3f241fb3cb0229903c108
  
https://github.com/jenkinsci/nexus-platform-plugin/commit/0be6ccc80c3ee91a8bc3f241fb3cb0229903c108
  Author: Hector Danilo Hurtado Olaya 
  Date:   2022-09-27 (Tue, 27 Sep 2022)

  Changed paths:
M README.md
M docs/overview.md
M 
src/main/resources/org/sonatype/nexus/ci/iq/PolicyEvaluationReportAction/index.jelly

  Log Message:
  ---
  INT-7283 Updating integrations help pages links (#225)


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/main/5833c3-0be6cc%40github.com.

