Re: [DISCUSS] KIP-883: Add delete callback method to Connector API
Hi Sagar, Thanks for your feedback! I actually renamed the method from "deleted()" to "destroyed()", which I think conveys the intention more clearly. I can certainly rename it to be 'onDeleted()', although I feel any method named onXXX() belongs to a listener class :) Regarding failure scenarios, an option I'm considering is to just provide an overloaded Connector#stop(boolean deleted) method that is called during WorkerConnector#doShutdown(). This has the advantage of providing the same semantics that the current Connector#stop() has, with the caveat that the API won't be as expressive. Also, the extra 'cleanup' bits that were supposed to happen when a connector is deleted might not happen at all if the connector doesn't stop before the configured timeout (and is therefore cancelled). At this point I think the simplest option would be to provide an overloaded method (with a default implementation) that connectors can override. Wdyt? From: dev@kafka.apache.org At: 11/15/22 11:40:26 UTC-5:00 To: dev@kafka.apache.org Subject: Re: [DISCUSS] KIP-883: Add delete callback method to Connector API Hey Hector, Thanks for the KIP. I have a minor suggestion in terms of naming. Since this is a callback method, would it make sense to call it onDelete()? Also, the failure scenarios discussed by Greg would need handling. Among other things, I like the idea of having a timeout for graceful shutdown or else trying a force shutdown. What do you think about that approach? Thanks! Sagar. On Sat, Nov 12, 2022 at 1:53 AM Hector Geraldino (BLOOMBERG/ 919 3RD A) < hgerald...@bloomberg.net> wrote: > Thanks Greg for taking your time to review not just the KIP but also the > PR. > > 1. You made very valid points regarding the behavior of the destroy() > callback for connectors that don't follow the happy path.
After thinking > about it, I decided to tweak the implementation a bit and have the > destroy() method be called during the worker shutdown: this means it will > share the same guarantees the connector#stop() method has. An alternative > implementation could be to have an overloaded connector#stop(boolean deleted) > method that signals a connector that it is being stopped due to deletion, > but I think that having a separate destroy() method provides clearer > semantics. > > I'll make sure to amend the KIP with these details. > > 3. Without going too deep on the types of operations that can be performed > by a connector when it's being deleted, I can imagine the > org.apache.kafka.connect.source.SourceConnector base class having a default > implementation that deletes the connector's offsets automatically > (controlled by a property); this is in the context of KIP-875 (first-class > offsets support in Kafka Connect). Similar behaviors can be introduced for > the SinkConnector; however, I'm not sure if this KIP is the right place to > discuss all the possibilities, or if we should keep it more > narrowly focused on providing a callback mechanism for when connectors are > deleted, and what the expectations are around this newly introduced method. > What do you think? > > > From: dev@kafka.apache.org At: 11/09/22 16:55:04 UTC-5:00 To: > dev@kafka.apache.org > Subject: Re: [DISCUSS] KIP-883: Add delete callback method to Connector API > > Hi Hector, > > Thanks for the KIP! > > This is certainly missing functionality from the native Connect framework, > and we should try to make it possible to inform connectors about this part > of their lifecycle. > However, as with most functionality that was left out of the initial > implementation of the framework, the details are more challenging to work > out. > > 1. What happens when the destroy call throws an error, how does the > framework respond?
> > This is unspecified in the KIP, and it appears that your proposed changes > could cause the herder to fail. > From the perspective of operators & connector developers, what is a > reasonable expectation to have for failure of a destroy? > I could see operators wanting both a graceful-delete to make use of this > new feature, and a force-delete for when the graceful-delete fails. > A connector developer could choose to swallow all errors encountered, or > fail-fast to indicate to the operator that there is an issue with the > graceful-delete flow. > If the alternative is crashing the herder, connector developers may choose > to hide serious errors, which is undesirable. > > 2. What happens when the destroy() call takes a long time to complete, or > is interrupted? > > It appears that your implementation serially destroy()s each appropriate > connector, and may prevent the herder thread from making progress while the > operation is ongoing. > We have previously had to patch Connect to perform al
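The overloaded-stop option Hector floats upthread can be sketched roughly as follows. This is a hypothetical simplification, not the actual org.apache.kafka.connect.connector.Connector class; the class and field names here are illustrative only:

```java
// Sketch of the Connector#stop(boolean deleted) overload idea: a default
// implementation delegates to the existing stop(), so connectors that
// don't care about deletion compile and behave unchanged.
// (Simplified stand-in for the real Connect API; names are illustrative.)
abstract class Connector {
    // Existing lifecycle method.
    public abstract void stop();

    // Proposed overload: 'deleted' signals the connector is being removed
    // from the cluster, not just stopped for a restart or rebalance.
    public void stop(boolean deleted) {
        stop();
    }
}

class ExampleConnector extends Connector {
    boolean cleanedUp = false;

    @Override
    public void stop() {
        // normal shutdown work would go here
    }

    @Override
    public void stop(boolean deleted) {
        stop();
        if (deleted) {
            cleanedUp = true; // e.g. drop external resources or offsets
        }
    }
}
```

Connectors that do not override the overload keep the exact semantics of today's stop(), which is the backward-compatibility property the reply describes.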
Re: [Ietf-dkim] DKIM replay mitigations: re-opening the DKIM working group
> On Nov 11, 2022, at 11:46 AM, Barry Leiba wrote: > > Indeed... > The issue here is this: > > 1. I get a (free) account on free-email.com. Ok > 2. I send myself email from my account to my account. Of course, > free-email signs it, because it's sent from me to me: why would it > not? The wcSMTP router logic will not sign this path because it never reaches the remote target outbound queue where it may be signed. The router will export a message that targets a locally-hosted domain, but it goes into the Inbound Import Queue rather than the Remote target outbound queue. I have debated signing the Export->Local-Queue->Import mail. It is a matter of moving the location of the wcDKIM signer, which is now at the router outbound logic. > 3. I take that signed message and cart it over somewhere else, sending > it out to 10,000,000 recipients through somewhere else's > infrastructure. It's legitimately signed by free-email.com. Ok > 4. Of course, it fails SPF validation. But DKIM verifies and is > aligned to spam...@free-email.com, because there you go. > > That's the attack. It's happening all the time. If between 1 and 2 > we could use x= to cause the signature to time out, we'd be OK. For failed SPF validation, I believe we need to honor the handling expected by the domain owner, first and foremost. Meaning Exclusive, Strong policies should always be honored. The weak and partial/soft policies are what have added handling unknowns (and unfortunately, we have not focused on leveraging the LOOKUP opportunities to extend policies). > The trouble is that we have to make x= broad enough to deal with > legitimate delays. And during that legitimate time, it's trivial for > a spammer to send out millions of spam messages. Crap. So x= doesn't > help. I had a 2006 draft to deal with expiration for time-shifted, time-delayed verification.
Partial DKIM Verifier Support using a DKIM-Received Trace Header https://datatracker.ietf.org/doc/html/draft-santos-dkim-rcvd-00 I have to review it to see if it can apply here in some manner 16 yrs later. > > We have to look at other options. We thought of this when we designed > DKIM, but couldn't come up with anything that would work. > We have new > experience since then, and we want to look at alternatives, and decide > whether priorities have changed, use cases have changed, and so on. > > It's entirely possible that we still can't fix it without breaking use > cases that we're not willing to break. But we have to try. I have always been a strong advocate for extended policies for what I believe is the new "normal" for SMTP receiver lookups. For the most part, related to SPF/DKIM/DMARC, we have these lookups today: 5321: SPF 5322: DKIM, DMARC We can leverage the archived PRA/SUBMITTER protocol. SMTP Service Extension for Indicating the Responsible Submitter of an E-Mail Message https://www.rfc-editor.org/rfc/rfc4405 Purported Responsible Address in E-Mail Messages https://www.rfc-editor.org/rfc/rfc4407 which then passes the 5322 PRA to 5321 via the SUBMITTER ESMTP extension: MAIL FROM: SUBMITTER=PRA The PRA is typically the 5322.FROM. This gives ESMTP an optimization and a heads-up on the expected DMARC domain handling policy for the transaction prior to transferring the DATA payload. ESMTP receivers who enable RFC4405 will immediately see it being used by compliant ESMTP senders. We have used it for SPF lookups over the years. — HLS ___ Ietf-dkim mailing list Ietf-dkim@ietf.org https://www.ietf.org/mailman/listinfo/ietf-dkim
Re: [DISCUSS] KIP-883: Add delete callback method to Connector API
Thanks Greg for taking your time to review not just the KIP but also the PR. 1. You made very valid points regarding the behavior of the destroy() callback for connectors that don't follow the happy path. After thinking about it, I decided to tweak the implementation a bit and have the destroy() method be called during the worker shutdown: this means it will share the same guarantees the connector#stop() method has. An alternative implementation could be to have an overloaded connector#stop(boolean deleted) method that signals a connector that it is being stopped due to deletion, but I think that having a separate destroy() method provides clearer semantics. I'll make sure to amend the KIP with these details. 3. Without going too deep on the types of operations that can be performed by a connector when it's being deleted, I can imagine the org.apache.kafka.connect.source.SourceConnector base class having a default implementation that deletes the connector's offsets automatically (controlled by a property); this is in the context of KIP-875 (first-class offsets support in Kafka Connect). Similar behaviors can be introduced for the SinkConnector; however, I'm not sure if this KIP is the right place to discuss all the possibilities, or if we should keep it more narrowly focused on providing a callback mechanism for when connectors are deleted, and what the expectations are around this newly introduced method. What do you think? From: dev@kafka.apache.org At: 11/09/22 16:55:04 UTC-5:00 To: dev@kafka.apache.org Subject: Re: [DISCUSS] KIP-883: Add delete callback method to Connector API Hi Hector, Thanks for the KIP! This is certainly missing functionality from the native Connect framework, and we should try to make it possible to inform connectors about this part of their lifecycle. However, as with most functionality that was left out of the initial implementation of the framework, the details are more challenging to work out. 1.
What happens when the destroy call throws an error, how does the framework respond? This is unspecified in the KIP, and it appears that your proposed changes could cause the herder to fail. From the perspective of operators & connector developers, what is a reasonable expectation to have for failure of a destroy? I could see operators wanting both a graceful-delete to make use of this new feature, and a force-delete for when the graceful-delete fails. A connector developer could choose to swallow all errors encountered, or fail-fast to indicate to the operator that there is an issue with the graceful-delete flow. If the alternative is crashing the herder, connector developers may choose to hide serious errors, which is undesirable. 2. What happens when the destroy() call takes a long time to complete, or is interrupted? It appears that your implementation serially destroy()s each appropriate connector, and may prevent the herder thread from making progress while the operation is ongoing. We have previously had to patch Connect to perform all connector and task operations on a background thread, because some connector method implementations can stall indefinitely. Connect also has the notion of "cancelling" a connector/task if a graceful shutdown timeout operation takes too long. Perhaps some of that design or machinery may be useful to protect this method call as well. More specific to the destroy() call itself, what happens when a connector completes part of a destroy operation and then cannot complete the remainder, either due to timing out or a worker crashing? What is the contract with the connector developer about this method? Is the destroy() only started exactly once during the lifetime of the connector, or may it be retried? 3. What should be considered a reasonable custom implementation of the destroy() call? What resources should it clean up by default? 
I think we can broadly categorize the state a connector mutates among the following * Framework-managed state (e.g. source offsets, consumer offsets) * Implementation detail state (e.g. debezium db history topic, audit tables, temporary accounts) * Third party system data (e.g. the actual data being written by a sink connector) * Third party system metadata (e.g. tables in a database, delivery receipts, permissions) I think it's apparent that the framework-managed state cannot/should not be interacted with by the destroy() call. However, the framework could be changed to clean up these resources at the same time that destroy() is called. Is that out-of-scope of this proposal, and better handled by manual intervention? From the text of the KIP, I think it explicitly includes the Implementation detail state, which should not be depended on externally and should be safe to clean up during a destroy(). I think this is completely reasonable. Are the third-party data and metadata out-of-scope for this proposal? Can we officially recommend against it, or should we accommodate users
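Greg's point about stalling callbacks and Connect's graceful-shutdown-then-cancel machinery can be illustrated with a small timeout guard. This is only an illustrative sketch of the pattern, not Connect's actual worker code; the DestroyGuard name, timeout value, and error handling are assumptions:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch: run a potentially-stalling callback off the main (herder) thread
// and cancel it after a grace period, mirroring the "graceful shutdown,
// then cancel" approach described above. Names/timeouts are illustrative.
class DestroyGuard {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    /** Returns true if the callback finished within the timeout. */
    boolean runWithTimeout(Runnable destroyCallback, long timeoutMs) {
        Future<?> f = pool.submit(destroyCallback);
        try {
            f.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;
        } catch (TimeoutException e) {
            f.cancel(true); // interrupt the stalled destroy() instead of blocking the herder
            return false;
        } catch (InterruptedException | ExecutionException e) {
            // Swallow connector errors so a failing destroy() cannot take the
            // worker down; real code would log and surface the failure status.
            return false;
        }
    }

    void shutdown() {
        pool.shutdownNow();
    }
}
```

The boolean result is where the framework could decide between reporting a graceful delete and falling back to a force delete.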
[krita] [Bug 461660] New: Bug at startup
https://bugs.kde.org/show_bug.cgi?id=461660 Bug ID: 461660 Summary: Bug at startup Classification: Applications Product: krita Version: nightly build (please specify the git hash!) Platform: Microsoft Windows OS: Microsoft Windows Status: REPORTED Severity: normal Priority: NOR Component: * Unknown Assignee: krita-bugs-n...@kde.org Reporter: misha.bossm...@yandex.ru Target Milestone: --- I'm not good at describing bugs by all the rules, sorry. In the nightly builds, I noticed that sometimes Krita does not start the first time. It turned out that the process starts, but freezes in some kind of loop. It consumes CPU power, but takes up less than 30 MB of RAM. At the same time, I can start more processes with Krita, which will load as expected. But the first process will keep consuming CPU power; for me it's about 15-20%. And I also noticed that the nightly version takes longer to process "loading resource type" during startup. I tried to delete the resourcecache, but I didn't notice any difference. Windows 10 (21h2, 22h2). Only in Krita Nightly. The oldest one I have is from October 3, so I don't know since... Build from October 10 works the same way. Krita is not installed. Only portable from binary-factory. -- You are receiving this mail because: You are watching all bug changes.
[jira] [Updated] (KAFKA-14354) Add 'destroyed()' callback method to Connector API
[ https://issues.apache.org/jira/browse/KAFKA-14354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hector Geraldino updated KAFKA-14354: - Summary: Add 'destroyed()' callback method to Connector API (was: Add delete callback method to Connector API) > Add 'destroyed()' callback method to Connector API > -- > > Key: KAFKA-14354 > URL: https://issues.apache.org/jira/browse/KAFKA-14354 > Project: Kafka > Issue Type: Improvement > Components: KafkaConnect > Reporter: Hector Geraldino >Assignee: Hector Geraldino >Priority: Minor > > It would be useful to have a callback method added to the Connector API, so > connectors extending the SourceConnector and SinkConnector classes can be > notified when their connector instance is being deleted. This will give a > chance to connectors to perform any cleanup tasks (e.g. deleting external > resources, or deleting offsets) before the connector is completely removed > from the cluster. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PATCH v4] i2c/pasemi: PASemi I2C controller IRQ enablement
On 05/11/2022 20.56, Arminder Singh wrote: > This patch adds IRQ support to the PASemi I2C controller driver to > increase the performance of I2C transactions on platforms with PASemi I2C > controllers. While primarily intended for Apple silicon platforms, this > patch should also help in enabling IRQ support for older PASemi hardware > as well should the need arise. > > This version of the patch has been tested on an M1 Ultra Mac Studio, > as well as an M1 MacBook Pro, and userspace launches successfully > while using the IRQ path for I2C transactions. > > Signed-off-by: Arminder Singh > --- > This version of the patch fixes some reliability issues brought up by > Hector and Sven in the v3 patch email thread. First, this patch > increases the timeout value in pasemi_smb_waitready to 100ms from 10ms, > as the original 10ms timeout in the driver was incorrect according to the > controller's datasheet as Hector pointed out in the v3 patch email thread. > This incorrect timeout had caused some issues with the tps6598x controller > on Apple silicon platforms. > > This version of the patch also adds a reg_write to REG_IMASK in the IRQ > handler, because as Sven pointed out in the previous thread, the I2C > transaction interrupt is level sensitive, so not masking the interrupt in > REG_IMASK will cause the interrupt to trigger again when it leaves the IRQ > handler until it reaches the call to reg_write after the completion expires. > > Patch changelog: > > v3 to v4 changes: > - Increased the timeout value for I2C transactions to 100ms, as the original >10ms timeout in the driver was incorrect according to the I2C chip's >datasheet. Mitigates an issue with the tps6598x controller on Apple >silicon platforms. > - Added a reg_write to REG_IMASK inside the IRQ handler, which prevents >the IRQ from triggering again after leaving the IRQ handler, as the >IRQ is level-sensitive.
> > v2 to v3 changes: > - Fixed some whitespace and alignment issues found in v2 of the patch > > v1 to v2 changes: > - moved completion setup from pasemi_platform_i2c_probe to >pasemi_i2c_common_probe to allow PASemi and Apple platforms to share >common completion setup code in case PASemi hardware gets IRQ support >added > - initialized the status variable in pasemi_smb_waitready when going down >the non-IRQ path > - removed an unnecessary cast of dev_id in the IRQ handler > - fixed alignment of struct member names in i2c-pasemi-core.h >(addresses Christophe's feedback in the original submission) > - IRQs are now disabled after the wait_for_completion_timeout call >instead of inside the IRQ handler >(prevents the IRQ from going off after the completion times out) > - changed the request_irq call to a devm_request_irq call to obviate >the need for a remove function and a free_irq call >(thanks to Sven for pointing this out in the original submission) > - added a reinit_completion call to pasemi_reset >as a failsafe to prevent missed interrupts from causing the completion >to never complete (thanks to Arnd Bergmann for pointing this out) > - removed the bitmask variable in favor of just using the value >directly (it wasn't used anywhere else) > > v3: > https://lore.kernel.org/linux-i2c/mn2pr01mb5358ed8fc32c0cfaebd4a0e19f...@mn2pr01mb5358.prod.exchangelabs.com/T/ > > v2: > https://lore.kernel.org/linux-i2c/mn2pr01mb535821c8058c7814b2f8eedf9f...@mn2pr01mb5358.prod.exchangelabs.com/T/ > > v1: > https://lore.kernel.org/linux-i2c/mn2pr01mb535838492432c910f2381f929f...@mn2pr01mb5358.prod.exchangelabs.com/T/ > > drivers/i2c/busses/i2c-pasemi-core.c | 32 ++++ > drivers/i2c/busses/i2c-pasemi-core.h | 5 > drivers/i2c/busses/i2c-pasemi-platform.c | 6 + > 3 files changed, 38 insertions(+), 5 deletions(-) > Reviewed-by: Hector Martin - Hector
[DISCUSS] KIP-883: Add delete callback method to Connector API
Hi everyone, I've submitted KIP-883, which introduces a callback to the public Connector API called when deleting a connector: https://cwiki.apache.org/confluence/display/KAFKA/KIP-883%3A+Add+delete+callback+method+to+Connector+API It adds a new `deleted()` method (open to better naming suggestions) to the org.apache.kafka.connect.connector.Connector abstract class, which will be invoked by Connect workers when a connector is being deleted. Feedback and comments are welcome. Thank you! Hector
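For illustration, the proposed hook could look roughly like this: a no-op default method, so existing connectors are unaffected, which cleanup-aware connectors can override. This is a simplified, hypothetical stand-in for org.apache.kafka.connect.connector.Connector, not the real class, and the CleanupConnector example is invented for this sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the KIP-883 idea. (Hypothetical simplification of the
// real Connector abstract class; method bodies are illustrative only.)
abstract class Connector {
    public abstract void start();
    public abstract void stop();

    // Proposed: invoked by the worker when the connector is deleted.
    // No-op by default, so existing connectors need no changes.
    public void deleted() {
    }
}

class CleanupConnector extends Connector {
    final List<String> calls = new ArrayList<>();

    @Override public void start()   { calls.add("start"); }
    @Override public void stop()    { calls.add("stop"); }
    // Override the hook to clean up, e.g. external resources or offsets.
    @Override public void deleted() { calls.add("deleted"); }
}
```

The thread that follows is largely about pinning down the contract of this method: when it runs relative to stop(), what happens on failure or timeout, and whether it may be retried.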
[jira] [Updated] (KAFKA-14354) Add delete callback method to Connector API
[ https://issues.apache.org/jira/browse/KAFKA-14354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hector Geraldino updated KAFKA-14354: - Description: It would be useful to have a callback method added to the Connector API, so connectors extending the SourceConnector and SinkConnector classes can be notified when their connector instance is being deleted. This will give a chance to connectors to perform any cleanup tasks (e.g. deleting external resources, or deleting offsets) before the connector is completely removed from the cluster. (was: KIP-795: https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator The AbstractCoordinator should have a companion public interface that is part of Kafka's public API, so backwards compatibility can be maintained in future versions of the client libraries) > Add delete callback method to Connector API > --- > > Key: KAFKA-14354 > URL: https://issues.apache.org/jira/browse/KAFKA-14354 > Project: Kafka > Issue Type: Improvement > Components: KafkaConnect > Reporter: Hector Geraldino >Assignee: Hector Geraldino >Priority: Minor > > It would be useful to have a callback method added to the Connector API, so > connectors extending the SourceConnector and SinkConnector classes can be > notified when their connector instance is being deleted. This will give a > chance to connectors to perform any cleanup tasks (e.g. deleting external > resources, or deleting offsets) before the connector is completely removed > from the cluster. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-14354) Add delete callback method to Connector API
[ https://issues.apache.org/jira/browse/KAFKA-14354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hector Geraldino updated KAFKA-14354: - Priority: Minor (was: Major) > Add delete callback method to Connector API > --- > > Key: KAFKA-14354 > URL: https://issues.apache.org/jira/browse/KAFKA-14354 > Project: Kafka > Issue Type: Improvement > Components: KafkaConnect > Reporter: Hector Geraldino >Assignee: Hector Geraldino >Priority: Minor > > KIP-795: > https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator > The AbstractCoordinator should have a companion public interface that is part > of Kafka's public API, so backwards compatibility can be maintained in future > versions of the client libraries -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (KAFKA-14354) Add delete callback method to Connector API
Hector Geraldino created KAFKA-14354: Summary: Add delete callback method to Connector API Key: KAFKA-14354 URL: https://issues.apache.org/jira/browse/KAFKA-14354 Project: Kafka Issue Type: Improvement Components: clients Reporter: Hector Geraldino Assignee: Hector Geraldino KIP-795: https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator The AbstractCoordinator should have a companion public interface that is part of Kafka's public API, so backwards compatibility can be maintained in future versions of the client libraries -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-14354) Add delete callback method to Connector API
[ https://issues.apache.org/jira/browse/KAFKA-14354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hector Geraldino updated KAFKA-14354: - Component/s: KafkaConnect (was: clients) > Add delete callback method to Connector API > --- > > Key: KAFKA-14354 > URL: https://issues.apache.org/jira/browse/KAFKA-14354 > Project: Kafka > Issue Type: Improvement > Components: KafkaConnect > Reporter: Hector Geraldino >Assignee: Hector Geraldino >Priority: Major > > KIP-795: > https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator > The AbstractCoordinator should have a companion public interface that is part > of Kafka's public API, so backwards compatibility can be maintained in future > versions of the client libraries -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (KAFKA-13434) Add a public API for AbstractCoordinator
[ https://issues.apache.org/jira/browse/KAFKA-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hector Geraldino resolved KAFKA-13434. -- Resolution: Won't Do KIP has been discarded > Add a public API for AbstractCoordinator > > > Key: KAFKA-13434 > URL: https://issues.apache.org/jira/browse/KAFKA-13434 > Project: Kafka > Issue Type: Improvement > Components: clients > Reporter: Hector Geraldino >Assignee: Hector Geraldino >Priority: Major > > KIP-795: > https://cwiki.apache.org/confluence/display/KAFKA/KIP-795%3A+Add+public+APIs+for+AbstractCoordinator > The AbstractCoordinator should have a companion public interface that is part > of Kafka's public API, so backwards compatibility can be maintained in future > versions of the client libraries -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PATCH v3] i2c/pasemi: PASemi I2C controller IRQ enablement
On 07/10/2022 09.42, Arminder Singh wrote: > This patch adds IRQ support to the PASemi I2C controller driver to > increase the performance of I2C transactions on platforms with PASemi I2C > controllers. While primarily intended for Apple silicon platforms, this > patch should also help in enabling IRQ support for older PASemi hardware > as well should the need arise. > > Signed-off-by: Arminder Singh > --- > This version of the patch has been tested on an M1 Ultra Mac Studio, > as well as an M1 MacBook Pro, and userspace launches successfully > while using the IRQ path for I2C transactions. > [...] Please increase the timeout to 100ms for v4. 10ms was always wrong (the datasheet says the hardware clock stretching timeout is 25ms, and most i2c drivers have much larger timeouts), and with the tighter timing achievable with the IRQ patchset we are seeing timeouts in tipd controller requests which can clock-stretch for ~10ms themselves, followed by a spiral of errors as the driver has pretty poor error recovery. Increasing the timeout fixes the immediate problem/regression. Other than that, I now have a patch that makes the whole timeout/error detection/recovery much more robust, but I can submit it after this goes in :) - Hector
Re: [PATCH v2] drm/format-helper: Only advertise supported formats for conversion
On 01/11/2022 01.15, Justin Forbes wrote: > On Thu, Oct 27, 2022 at 8:57 AM Hector Martin wrote: >> >> drm_fb_build_fourcc_list() currently returns all emulated formats >> unconditionally as long as the native format is among them, even though >> not all combinations have conversion helpers. Although the list is >> arguably provided to userspace in precedence order, userspace can pick >> something out-of-order (and thus break when it shouldn't), or simply >> only support a format that is unsupported (and thus think it can work, >> which results in the appearance of a hang as FB blits fail later on, >> instead of the initialization error you'd expect in this case). >> >> Add checks to filter the list of emulated formats to only those >> supported for conversion to the native format. This presumes that there >> is a single native format (only the first is checked, if there are >> multiple). Refactoring this API to drop the native list or support it >> properly (by returning the appropriate emulated->native mapping table) >> is left for a future patch. >> >> The simpledrm driver is left as-is with a full table of emulated >> formats. This keeps all currently working conversions available and >> drops all the broken ones (i.e. this is a strict bugfix patch, adding no >> new supported formats nor removing any actually working ones). In order >> to avoid proliferation of emulated formats, future drivers should >> advertise only XRGB8888 as the sole emulated format (since some >> userspace assumes its presence). >> >> This fixes a real user regression where the [AX]RGB2101010 support commit >> started advertising it unconditionally where not supported, and KWin >> decided to start to use it over the native format and broke, but also >> fixes the spurious RGB565/RGB888 formats which have been wrongly >> unconditionally advertised since the dawn of simpledrm.
>> >> Fixes: 6ea966fca084 ("drm/simpledrm: Add [AX]RGB2101010 formats") >> Cc: sta...@vger.kernel.org >> Signed-off-by: Hector Martin > > There is a CC for stable on here, but this patch does not apply in any > way on 6.0 or older kernels as the fourcc bits and considerable churn > came in with the 6.1 merge window. You don't happen to have a > backport of this to 6.0 do you? v1 is probably closer to such a backport, and I offered to figure it out on Matrix but I heard you're already working on it ;) - Hector
[yakuake] [Bug 363333] Processes started in yakuake terminals block indefinitely some time after switching to a different VT
https://bugs.kde.org/show_bug.cgi?id=363333 Hector Martin changed: What|Removed |Added CC||hec...@marcansoft.com --- Comment #8 from Hector Martin --- This also affects Konsole. Switching to another VT hangs processes that are producing output, even if Konsole is minimized or the active tab is not the one producing output. -- You are receiving this mail because: You are watching all bug changes.
[jira] [Commented] (FLINK-29609) Clean up jobmanager deployment on suspend after recording savepoint info
[ https://issues.apache.org/jira/browse/FLINK-29609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17626372#comment-17626372 ] Hector Miuler Malpica Gallegos commented on FLINK-29609: [~sriramgr] In my opinion, this should only happen in application mode, in session mode it should continue to exist waiting for a new job. > Clean up jobmanager deployment on suspend after recording savepoint info > > > Key: FLINK-29609 > URL: https://issues.apache.org/jira/browse/FLINK-29609 > Project: Flink > Issue Type: Improvement > Components: Kubernetes Operator >Reporter: Gyula Fora >Assignee: Sriram Ganesh >Priority: Major > Fix For: kubernetes-operator-1.3.0 > > > Currently in case of suspending with savepoint. The jobmanager pod will > linger there forever after cancelling the job. > This is currently used to ensure consistency in case the > operator/cancel-with-savepoint operation fails. > Once we are sure however that the savepoint has been recorded and the job is > shut down, we should clean up all the resources. Optionally we can make this > configurable. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PATCH v2] drm/format-helper: Only advertise supported formats for conversion
On 28/10/2022 17.07, Thomas Zimmermann wrote:
> In yesterday's discussion on IRC, it was said that several devices
> advertise ARGB framebuffers when the hardware actually uses XRGB? Is
> there hardware that supports transparent primary planes?

ARGB hardware probably exists in the form of embedded systems with preconfigured blending. For example, one could imagine an OSD-type setup where there is a hardware video scaler controlled entirely outside of DRM/KMS (probably by a horrible vendor driver), and the overlay framebuffer is exposed via simpledrm as a dumb memory region, and expects ARGB to work.

So ideally, we wouldn't expose XRGB on ARGB systems. But there is this problem:

arch/arm64/boot/dts/qcom/msm8998-oneplus-common.dtsi:         format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sdm630-sony-xperia-nile.dtsi:        format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sdm660-xiaomi-lavender.dts:          format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sdm845-shift-axolotl.dts:            format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sdm850-samsung-w737.dts:             format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sm6125-sony-xperia-seine-pdx201.dts: format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sm6350-sony-xperia-lena-pdx213.dts:  format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sm7225-fairphone-fp4.dts:            format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sm8150-sony-xperia-kumano.dtsi:      format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sm8250-sony-xperia-edo.dtsi:         format = "a8r8g8b8";
arch/arm64/boot/dts/qcom/sm8350-sony-xperia-sagami.dtsi:      format = "a8r8g8b8";
arch/arm64/boot/dts/socionext/uniphier-ld20-akebi96.dts:      format = "a8r8g8b8";

I'm pretty sure those phones don't have transparent screens, nor magically put video planes below the firmware framebuffer. If there are 12 device trees for phones in mainline which lie about having alpha support, who knows how many more exist outside? If we stop advertising pretend-XRGB on them, I suspect we're going to break a lot of software...
Of course, there is one "correct" solution here: have an actual xrgb->argb conversion helper that just sets the high (alpha) byte. Then those platforms lying about having alpha and using xrgb from userspace will take a performance hit, but they should arguably just fix their device tree in that case. Maybe this is the way to go in this case?

Note that there would be no inverse conversion (no advertising argb on xrgb backends), so that one would be dropped vs. what we have today. This effectively keeps the "xrgb helpers and nothing else" rule while actually supporting it for argb backend framebuffers correctly.

Any platforms actually wanting to use argb framebuffers with meaningful alpha should be configuring their userspace to preferentially render directly to argb to avoid the perf hit anyway.

- Hector
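For illustration, such a conversion is conceptually tiny. This is a minimal userspace-style sketch (not the kernel helper itself; the function name is made up) that forces the alpha byte of each XRGB8888 pixel to fully opaque when writing to an ARGB8888 destination, so the result is not rendered transparent:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch of an XRGB8888 -> ARGB8888 line conversion:
 * the X (high) byte of the source is undefined, so force it to 0xff
 * (fully opaque) in the destination. */
static void xrgb8888_to_argb8888_line(uint32_t *dst, const uint32_t *src,
                                      size_t npixels)
{
	size_t i;

	for (i = 0; i < npixels; i++)
		dst[i] = src[i] | 0xff000000u; /* force A = 0xff */
}
```

On real ARGB hardware this is what keeps an XRGB-rendering userspace visible; on the "lying" platforms above the alpha byte is ignored anyway, so the extra OR only costs a bit of copy bandwidth.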
[PATCH v2] drm/format-helper: Only advertise supported formats for conversion
drm_fb_build_fourcc_list() currently returns all emulated formats unconditionally as long as the native format is among them, even though not all combinations have conversion helpers. Although the list is arguably provided to userspace in precedence order, userspace can pick something out-of-order (and thus break when it shouldn't), or simply only support a format that is unsupported (and thus think it can work, which results in the appearance of a hang as FB blits fail later on, instead of the initialization error you'd expect in this case).

Add checks to filter the list of emulated formats to only those supported for conversion to the native format. This presumes that there is a single native format (only the first is checked, if there are multiple). Refactoring this API to drop the native list or support it properly (by returning the appropriate emulated->native mapping table) is left for a future patch.

The simpledrm driver is left as-is with a full table of emulated formats. This keeps all currently working conversions available and drops all the broken ones (i.e. this is a strict bugfix patch, adding no new supported formats nor removing any actually working ones). In order to avoid proliferation of emulated formats, future drivers should advertise only XRGB8888 as the sole emulated format (since some userspace assumes its presence).

This fixes a real user regression where the ?RGB2101010 support commit started advertising it unconditionally where not supported, and KWin decided to start to use it over the native format and broke, but it also fixes the spurious RGB565/RGB888 formats which have been wrongly unconditionally advertised since the dawn of simpledrm.

Fixes: 6ea966fca084 ("drm/simpledrm: Add [AX]RGB2101010 formats")
Fixes: 11e8f5fd223b ("drm: Add simpledrm driver")
Cc: sta...@vger.kernel.org
Signed-off-by: Hector Martin
---
I'm proposing this alternative approach after a heated discussion on IRC.
I'm out of ideas; if y'all don't like this one you can figure it out for yourselves :-)

Changes since v1:

This v2 moves all the changes to the helper (so they will apply to the upcoming ofdrm, though ofdrm also needs to be fixed to trim its format table to only formats that should be emulated, probably only XRGB8888, to avoid further proliferating the use of conversions), and avoids touching more than one file.

The API still needs cleanup as mentioned (supporting more than one native format is fundamentally broken, since the helper would need to tell the driver *what* native format to use for *each* emulated format somehow), but all current and planned users only pass in one native format, so this can (and should) be fixed later.

Aside: After other IRC discussion, I'm testing nuking the XRGB2101010 <-> ARGB2101010 advertisement (which does not involve conversion) by removing those entries from simpledrm in the Asahi Linux downstream tree. As far as I'm concerned, it can be removed if nobody complains (by removing those entries from the simpledrm array), if maintainers are generally okay with removing advertised formats at all. If so, there might be other opportunities for further trimming the list of non-native formats advertised to userspace.

Tested with KWin-X11, KWin-Wayland, GNOME-X11, GNOME-Wayland, and Weston on both XRGB2101010 and RGB simpledrm framebuffers.
 drivers/gpu/drm/drm_format_helper.c | 66 -
 1 file changed, 47 insertions(+), 19 deletions(-)

diff --git a/drivers/gpu/drm/drm_format_helper.c b/drivers/gpu/drm/drm_format_helper.c
index e2f76621453c..3ee59bae9d2f 100644
--- a/drivers/gpu/drm/drm_format_helper.c
+++ b/drivers/gpu/drm/drm_format_helper.c
@@ -807,6 +807,38 @@ static bool is_listed_fourcc(const uint32_t *fourccs, size_t nfourccs, uint32_t
 	return false;
 }
 
+static const uint32_t conv_from_xrgb8888[] = {
+	DRM_FORMAT_XRGB8888,
+	DRM_FORMAT_ARGB8888,
+	DRM_FORMAT_XRGB2101010,
+	DRM_FORMAT_ARGB2101010,
+	DRM_FORMAT_RGB565,
+	DRM_FORMAT_RGB888,
+};
+
+static const uint32_t conv_from_rgb565_888[] = {
+	DRM_FORMAT_XRGB8888,
+	DRM_FORMAT_ARGB8888,
+};
+
+static bool is_conversion_supported(uint32_t from, uint32_t to)
+{
+	switch (from) {
+	case DRM_FORMAT_XRGB8888:
+	case DRM_FORMAT_ARGB8888:
+		return is_listed_fourcc(conv_from_xrgb8888, ARRAY_SIZE(conv_from_xrgb8888), to);
+	case DRM_FORMAT_RGB565:
+	case DRM_FORMAT_RGB888:
+		return is_listed_fourcc(conv_from_rgb565_888, ARRAY_SIZE(conv_from_rgb565_888), to);
+	case DRM_FORMAT_XRGB2101010:
+		return to == DRM_FORMAT_ARGB2101010;
+	case DRM_FORMAT_ARGB2101010:
+		return to == DRM_FORMAT_XRGB2101010;
+	default:
+		return false;
+	}
+}
+
 /**
  * drm_fb_build_fourcc_list - Filters a list of supported color formats against
  *	the device's native formats
 @@
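As a self-contained approximation of the conversion matrix in the patch above (the DRM_FORMAT_* values are redefined locally using the kernel's fourcc packing, and the helpers are reimplemented outside the kernel, so this is a sketch rather than the actual drm_format_helper code), the supported emulated-to-native checks can be exercised like this:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Same byte packing the kernel's fourcc_code() macro uses. */
#define FOURCC(a, b, c, d) ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
			    ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

#define DRM_FORMAT_RGB565      FOURCC('R', 'G', '1', '6')
#define DRM_FORMAT_RGB888      FOURCC('R', 'G', '2', '4')
#define DRM_FORMAT_XRGB8888    FOURCC('X', 'R', '2', '4')
#define DRM_FORMAT_ARGB8888    FOURCC('A', 'R', '2', '4')
#define DRM_FORMAT_XRGB2101010 FOURCC('X', 'R', '3', '0')
#define DRM_FORMAT_ARGB2101010 FOURCC('A', 'R', '3', '0')

/* Native formats reachable from an [AX]RGB8888 emulated format. */
static const uint32_t conv_from_xrgb8888[] = {
	DRM_FORMAT_XRGB8888, DRM_FORMAT_ARGB8888,
	DRM_FORMAT_XRGB2101010, DRM_FORMAT_ARGB2101010,
	DRM_FORMAT_RGB565, DRM_FORMAT_RGB888,
};

/* Native formats reachable from RGB565/RGB888. */
static const uint32_t conv_from_rgb565_888[] = {
	DRM_FORMAT_XRGB8888, DRM_FORMAT_ARGB8888,
};

static bool is_listed_fourcc(const uint32_t *fourccs, size_t n, uint32_t fourcc)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (fourccs[i] == fourcc)
			return true;
	return false;
}

/* Can the emulated format 'from' be converted to the native format 'to'? */
static bool is_conversion_supported(uint32_t from, uint32_t to)
{
	switch (from) {
	case DRM_FORMAT_XRGB8888:
	case DRM_FORMAT_ARGB8888:
		return is_listed_fourcc(conv_from_xrgb8888,
					sizeof(conv_from_xrgb8888) / sizeof(uint32_t), to);
	case DRM_FORMAT_RGB565:
	case DRM_FORMAT_RGB888:
		return is_listed_fourcc(conv_from_rgb565_888,
					sizeof(conv_from_rgb565_888) / sizeof(uint32_t), to);
	case DRM_FORMAT_XRGB2101010:
		return to == DRM_FORMAT_ARGB2101010;
	case DRM_FORMAT_ARGB2101010:
		return to == DRM_FORMAT_XRGB2101010;
	default:
		return false;
	}
}
```

Note the matrix is deliberately sparse: e.g. RGB565 to XRGB2101010 is rejected because no such blit helper exists, which is exactly the class of combination the unpatched helper would have advertised anyway.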
Re: [PATCH] drm/simpledrm: Only advertise formats that are supported
On 27/10/2022 20.08, Thomas Zimmermann wrote:
> We currently have two DRM drivers that call drm_fb_build_fourcc_list():
> simpledrm and ofdrm. I've been very careful to keep the format selection
> in sync between them. (That's the reason why the helper exists at all.)
> If the drivers start to use different logic, it will only become more
> chaotic.
>
> The format array of ofdrm is at [1]. At a minimum, ofdrm should get the
> same fix as simpledrm.

Looks like this was merged recently, so I didn't see it on my tree (I was basing off of 6.1-rc2). Since this patch is a regression fix, it should be applied to drm-fixes (and automatically picked up by stable folks) soon to be fixed in 6.1, and then we can fix whatever is needed in ofdrm separately in drm-tip. As long as ofdrm is ready for the new behavior prior to the merge of drm-tip with 6.1, there will be no breakage.

In this case, the change required to ofdrm is probably just to replace that array with just DRM_FORMAT_XRGB8888 (which should be the only supported fallback format for new drivers) and then to add a test to only expose it for formats for which we actually have conversion helpers, similar to what the switch() enumerates here. That logic could later be moved into the helper as a refactor.

>> 	/* Primary plane */
>> +	switch (format->format) {
>
> I trust you when you say that ->XRGB is not enough. But
> although I've read your replies, I still don't understand why this
> switch is necessary.
>
> Why don't we call drm_fb_build_fourcc_list() with the native
> format/formats and let it append a number of formats, such as adding
> XRGB8888, adding ARGB8888 if necessary, adding ARGB2101010 if necessary.
> Each with an elaborate comment why and which userspace needs the format. (?)

That would be fine to do; it would just be moving the logic to the helper. That kind of refactoring is better suited for subsequent patches.
This is a regression fix; it attempts to minimize the amount of refactoring, which means keeping the logic in simpledrm, to make it easier to review for correctness. Also, that would change the API of that function, which would likely make the merge with the new ofdrm painful. The way things are now, a small fix to ofdrm will make it compatible with both the state before and after this patch, which means the merge will go through painlessly, and then we can just refactor everything once everything is in the same tree.

- Hector
Re: [PATCH] drm/simpledrm: Only advertise formats that are supported
On 27/10/2022 19.13, Hector Martin wrote:
> Until now, simpledrm unconditionally advertised all formats that can be supported natively as conversions. However, we don't actually have a full conversion matrix of helpers. Although the list is arguably provided to userspace in precedence order, userspace can pick something out-of-order (and thus break when it shouldn't), or simply only support a format that is unsupported (and thus think it can work, which results in the appearance of a hang as FB blits fail later on, instead of the initialization error you'd expect in this case).
>
> Split up the format table into separate ones for each required subset, and then pick one based on the native format. Also remove the native<->conversion overlap check from the helper (which doesn't make sense any more, since the native format is advertised anyway and this way RGB565/RGB888 can share a format table), and instead print the same message in simpledrm when the native format is not one for which we have conversions at all.
>
> This fixes a real user regression where the ?RGB2101010 support commit started advertising it unconditionally where not supported, and KWin decided to start to use it over the native format, but it also fixes the spurious RGB565/RGB888 formats which have been wrongly unconditionally advertised since the dawn of simpledrm.
>
> Note: this patch is merged because splitting it into two patches, one for the helper and one for simpledrm, would regress at the midpoint regardless of the order. If simpledrm is changed first, that would break working conversions to RGB565/RGB888 (since those share a table that does not include the native formats). If the helper is changed first, it would start spuriously advertising all conversion formats when the native format doesn't have any supported conversions at all.
>
> Acked-by: Pekka Paalanen
> Fixes: 6ea966fca084 ("drm/simpledrm: Add [AX]RGB2101010 formats")
> Fixes: 11e8f5fd223b ("drm: Add simpledrm driver")
> Cc: sta...@vger.kernel.org
> Signed-off-by: Hector Martin
> ---
> drivers/gpu/drm/drm_format_helper.c | 15 ---
> drivers/gpu/drm/tiny/simpledrm.c    | 62 +
> 2 files changed, 55 insertions(+), 22 deletions(-)

To answer some issues that came up on IRC:

Q: Why not move this logic / the tables to the helper?
A: Because simpledrm is the only user so far, and this patch is Cc: stable because we have an actual regression that broke KDE. I'm going for the minimal patch that keeps everything that worked to this day working, and stops advertising things that never worked, no more, no less. Future refactoring can always happen later (and is probably a good idea when other drivers start using the helper).

Q: XRGB8888 is supposed to be the only canonical format. Why not just drop everything but conversions to/from XRGB8888?
A: Because that would regress things that work today, and could break existing userspace on some platforms. That may be a good idea, but I think we should fix the bugs first, and leave the discussion of whether we want to actually remove existing functionality for later.

Q: Why not just add a conversion from XRGB2101010 to XRGB8888?
A: Because that would only fix KDE, and would make it slower vs. not advertising XRGB2101010 at all (double conversions, plus kernel conversion can be slower). Plus, it doesn't make any sense as it only fills in one entry in the conversion matrix. If we wanted to actually fill out the conversion matrix, and thus support everything simpledrm has advertised to date correctly, we would need helpers for:

rgb565->rgb888
rgb888->rgb565
rgb565->xrgb2101010
rgb888->xrgb2101010
xrgb2101010->rgb565
xrgb2101010->rgb888
xrgb2101010->xrgb8888

That seems like overkill and unlikely to actually help anyone; it'd just give userspace more options to shoot itself in the foot with a sub-optimal format choice.
And it's a pile of code. - Hector
[PATCH] drm/simpledrm: Only advertise formats that are supported
Until now, simpledrm unconditionally advertised all formats that can be supported natively as conversions. However, we don't actually have a full conversion matrix of helpers. Although the list is arguably provided to userspace in precedence order, userspace can pick something out-of-order (and thus break when it shouldn't), or simply only support a format that is unsupported (and thus think it can work, which results in the appearance of a hang as FB blits fail later on, instead of the initialization error you'd expect in this case).

Split up the format table into separate ones for each required subset, and then pick one based on the native format. Also remove the native<->conversion overlap check from the helper (which doesn't make sense any more, since the native format is advertised anyway and this way RGB565/RGB888 can share a format table), and instead print the same message in simpledrm when the native format is not one for which we have conversions at all.

This fixes a real user regression where the ?RGB2101010 support commit started advertising it unconditionally where not supported, and KWin decided to start to use it over the native format, but it also fixes the spurious RGB565/RGB888 formats which have been wrongly unconditionally advertised since the dawn of simpledrm.

Note: this patch is merged because splitting it into two patches, one for the helper and one for simpledrm, would regress at the midpoint regardless of the order. If simpledrm is changed first, that would break working conversions to RGB565/RGB888 (since those share a table that does not include the native formats). If the helper is changed first, it would start spuriously advertising all conversion formats when the native format doesn't have any supported conversions at all.
Acked-by: Pekka Paalanen
Fixes: 6ea966fca084 ("drm/simpledrm: Add [AX]RGB2101010 formats")
Fixes: 11e8f5fd223b ("drm: Add simpledrm driver")
Cc: sta...@vger.kernel.org
Signed-off-by: Hector Martin
---
 drivers/gpu/drm/drm_format_helper.c | 15 ---
 drivers/gpu/drm/tiny/simpledrm.c    | 62 +
 2 files changed, 55 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/drm_format_helper.c b/drivers/gpu/drm/drm_format_helper.c
index e2f76621453c..c60c13f3a872 100644
--- a/drivers/gpu/drm/drm_format_helper.c
+++ b/drivers/gpu/drm/drm_format_helper.c
@@ -864,20 +864,6 @@ size_t drm_fb_build_fourcc_list(struct drm_device *dev,
 		++fourccs;
 	}
 
-	/*
-	 * The plane's atomic_update helper converts the framebuffer's color format
-	 * to a native format when copying to device memory.
-	 *
-	 * If there is not a single format supported by both, device and
-	 * driver, the native formats are likely not supported by the conversion
-	 * helpers. Therefore *only* support the native formats and add a
-	 * conversion helper ASAP.
-	 */
-	if (!found_native) {
-		drm_warn(dev, "Format conversion helpers required to add extra formats.\n");
-		goto out;
-	}
-
 	/*
 	 * The extra formats, emulated by the driver, go second.
 	 */
@@ -898,7 +884,6 @@ size_t drm_fb_build_fourcc_list(struct drm_device *dev,
 		++fourccs;
 	}
 
-out:
 	return fourccs - fourccs_out;
 }
 EXPORT_SYMBOL(drm_fb_build_fourcc_list);

diff --git a/drivers/gpu/drm/tiny/simpledrm.c b/drivers/gpu/drm/tiny/simpledrm.c
index 18489779fb8a..1257411f3d44 100644
--- a/drivers/gpu/drm/tiny/simpledrm.c
+++ b/drivers/gpu/drm/tiny/simpledrm.c
@@ -446,22 +446,48 @@ static int simpledrm_device_init_regulators(struct simpledrm_device *sdev)
  */
 
 /*
- * Support all formats of simplefb and maybe more; in order
- * of preference. The display's update function will do any
+ * Support the subset of formats that we have conversion helpers for,
+ * in order of preference. The display's update function will do any
  * conversion necessary.
  *
  * TODO: Add blit helpers for remaining formats and uncomment
  *	 constants.
  */
-static const uint32_t simpledrm_primary_plane_formats[] = {
+
+/*
+ * Supported conversions to RGB565 and RGB888:
+ *   from [AX]RGB8888
+ */
+static const uint32_t simpledrm_primary_plane_formats_base[] = {
+	DRM_FORMAT_XRGB8888,
+	DRM_FORMAT_ARGB8888,
+};
+
+/*
+ * Supported conversions to [AX]RGB8888:
+ *   A/X variants (no-op)
+ *   from RGB565
+ *   from RGB888
+ */
+static const uint32_t simpledrm_primary_plane_formats_xrgb8888[] = {
 	DRM_FORMAT_XRGB8888,
 	DRM_FORMAT_ARGB8888,
+	DRM_FORMAT_RGB888,
 	DRM_FORMAT_RGB565,
 	//DRM_FORMAT_XRGB1555,
 	//DRM_FORMAT_ARGB1555,
-	DRM_FORMAT_RGB888,
+};
+
+/*
+ * Supported conversions to [AX]RGB2101010:
+ *   A/X variants (no-op)
+ *   from [AX]RGB8888
+ */
+static const uint32_t simpledrm_primary_plane_formats_xrgb2101010[] = {
+	DRM_FORMAT_XRGB210
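A standalone sketch of how a driver might pick one of these per-native-format tables (illustrative names and simplified fourcc values, not the actual simpledrm code, which is truncated above):

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative stand-ins for the real DRM_FORMAT_* fourcc codes. */
enum {
	FMT_RGB565 = 1,
	FMT_RGB888,
	FMT_XRGB8888,
	FMT_ARGB8888,
	FMT_XRGB2101010,
	FMT_ARGB2101010,
};

/* One table of advertisable formats per native-format family. */
static const uint32_t formats_base[] = { FMT_XRGB8888, FMT_ARGB8888 };
static const uint32_t formats_xrgb8888[] = {
	FMT_XRGB8888, FMT_ARGB8888, FMT_RGB888, FMT_RGB565,
};
static const uint32_t formats_xrgb2101010[] = {
	FMT_XRGB2101010, FMT_ARGB2101010, FMT_XRGB8888, FMT_ARGB8888,
};

/* Pick the emulated-format table matching the native format;
 * returns NULL (and *nformats = 0) when no conversions exist. */
static const uint32_t *pick_format_table(uint32_t native, size_t *nformats)
{
	switch (native) {
	case FMT_RGB565:
	case FMT_RGB888:
		*nformats = sizeof(formats_base) / sizeof(formats_base[0]);
		return formats_base;
	case FMT_XRGB8888:
	case FMT_ARGB8888:
		*nformats = sizeof(formats_xrgb8888) / sizeof(formats_xrgb8888[0]);
		return formats_xrgb8888;
	case FMT_XRGB2101010:
	case FMT_ARGB2101010:
		*nformats = sizeof(formats_xrgb2101010) / sizeof(formats_xrgb2101010[0]);
		return formats_xrgb2101010;
	default:
		*nformats = 0;
		return NULL;
	}
}
```

The NULL case corresponds to the "native format has no conversions at all" warning the patch moves into simpledrm.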
[jira] [Commented] (FLINK-29609) Clean up jobmanager deployment on suspend after recording savepoint info
[ https://issues.apache.org/jira/browse/FLINK-29609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17624585#comment-17624585 ]

Hector Miuler Malpica Gallegos commented on FLINK-29609:

Please take into account the stateless batch processes, which, once finished processing, should clean up all the resources.

> Clean up jobmanager deployment on suspend after recording savepoint info
Re: bindfs for web docroot - is this sane?
On 12/10/22 00:26, Dan Ritter wrote:
> Richard Hector wrote:
>> Hi all,
>>
>> I host a few websites, mostly Wordpress. I prefer to have the site files (mostly) owned by an owner user, and php-fpm runs as a different user, so that it can't write its own code. For uploads, those directories are group-writeable.
>>
>> Then for site developers (who might be contractors to my client) to be able to update the site, they need read/write access to the docroot, but I don't want them all logging in using the same account/credentials.
>>
>> So I've set up bindfs ( https://bindfs.org/ ) with the following fstab line (example at this stage):
>>
>> /srv/wptest-home/doc_root /home/richard/wptest-home/doc_root fuse.bindfs --force-user=richard,--force-group=richard,--create-for-user=wptest-home,--create-for-group=wptest-home 0 0
>>
>> That means they can see their own 'view' of the docroot under their own home directory, and they can create files as needed, which will have the correct owner under /srv. I haven't yet looked at what happens with the uploaded and cached files which are owned by the php user; hopefully that works ok.
>>
>> This means I don't need to worry about sudo and similar things, or chown/chgrp - which in turn means I should be able to offer sftp as an alternative to full ssh logins. It can probably even be chrooted.
>>
>> Does that sound like a sane plan? Are there gotchas I haven't spotted?
>
> That's a solution which has worked in similar situations in the past, but it runs into problems with accountability and debugging.
>
> The better solution is to use a versioning system -- git is the default these days, subversion will certainly work -- and require your site developers to make their changes to the version controlled repository. The repo is either automatically (cron, usually) or manually (dev sends an email or a ticket) updated on the web host.

I agree that a git-based deployment scheme would be good.
However, I understand that it's considered bad practice for the docroot to itself be a git repo, which means writing scripts to check out the right version and then deploy it (which might also help with setting the right permissions). I'm also not entirely comfortable with either a cron or ticket-based trigger - I'd want to look into either git hooks (but that's on the wrong machine), or maybe a webapp with a deploy button.

And then there's the issue of what is in git and what isn't, and how to customise the installation after checkout - eg setting the site name/url to distinguish it from the dev/staging site or whatever, setting db passwords etc. More stuff for the deployment script to do, I guess.

So I like this idea, but it's a lot more work. And I have to convince my clients and/or their devs to use it, which might require learning git. And I'm not necessarily good enough at git myself to do that teaching well.

> - devs don't get accounts on the web host at all

They might need it anyway, for running wp cli commands etc (especially given the privilege separation which means that installing plugins via the WP admin pages won't work - or would you include the plugins in the git repo?)

> - you can resolve the conflicts of two people working on the same site

True.

> - automatic backups, assuming you have a repo not on this server

I have backups of the web server; backups of the repo as well would be good.

> - easy revert to a previous version

True.

> - easy deployment to multiple servers for load balancing

True, though I'm not at that level at this point.

> Drawbacks:
> - devs have to have a local webserver to test their changes

Yes, or a dev server/site provided by me

> - devs have to follow the process

And have to know how, yes

> - someone has to resolve conflicts or decide what the deployed version is

True anyway

Note that this method doesn't stop the dev(s) using git anyway.
In summary, I think I want to offer a git-based method, but I think it would work ok in combination with this, which is initially simpler. It sounds like there's nothing fundamentally broken about it, at least :-) Cheers, Richard
Re: bindfs for web docroot - is this sane?
On 11/10/22 22:40, hede wrote:
> On 11.10.2022 10:03 Richard Hector wrote:
>> [...] Then for site developers (who might be contractors to my client) to be able to update the site, they need read/write access to the docroot, but I don't want them all logging in using the same account/credentials. [...] Does that sound like a sane plan? Are there gotchas I haven't spotted?
>
> I think I'm not able to assess the bind-mount question, but... Isn't that a use case for ACLs? (incl. default ACLs for the webserver's user here?)

Yes, probably. However, I looked at ACLs earlier (months ago at least), and they did my head in ...

> Files will then still be owned by the user who created them. But your default-user has all (predefined) rights on them.

Having them owned by the user that created them is good for accountability, but bad for glancing at ls output to see if everything looks right.

> I'd probably prefer that because - by instinct - I have a bad feeling regarding security if one user can slip/foist(?) a file to be "created" by some other user. But that's only a feeling without knowing all the circumstances.

They can only have it owned by one specific user, but I acknowledge possible issues there.

> And this way it's always clear which users have access by looking at the ACLs, while bind mounts defined elsewhere are (maybe) less transparent. And you always know who created them, if something goes wrong, for example.

Nothing is clear to me when I look at ACLs :-) I do have the output of 'last' (for a while) to see who is likely to have created them.

On the other hand, if you know of a good resource for better understanding ACLs, preferably with examples that are similar to my use case, I'd love to see it :-)

> ?) I'm not a native English speaker and slip or foist are maybe the wrong terms / wrongly translated. The context is that one user creates files and the system marks them as "created by" some other user.
They seem fine to me :-) But they're owned by the other user; I wouldn't assume that that user created them. Especially when that user isn't directly a person.

Thanks,
Richard
bindfs for web docroot - is this sane?
Hi all,

I host a few websites, mostly Wordpress. I prefer to have the site files (mostly) owned by an owner user, and php-fpm runs as a different user, so that it can't write its own code. For uploads, those directories are group-writeable.

Then for site developers (who might be contractors to my client) to be able to update the site, they need read/write access to the docroot, but I don't want them all logging in using the same account/credentials.

So I've set up bindfs ( https://bindfs.org/ ) with the following fstab line (example at this stage):

/srv/wptest-home/doc_root /home/richard/wptest-home/doc_root fuse.bindfs --force-user=richard,--force-group=richard,--create-for-user=wptest-home,--create-for-group=wptest-home 0 0

That means they can see their own 'view' of the docroot under their own home directory, and they can create files as needed, which will have the correct owner under /srv. I haven't yet looked at what happens with the uploaded and cached files which are owned by the php user; hopefully that works ok.

This means I don't need to worry about sudo and similar things, or chown/chgrp - which in turn means I should be able to offer sftp as an alternative to full ssh logins. It can probably even be chrooted.

Does that sound like a sane plan? Are there gotchas I haven't spotted?

Cheers,
Richard
Re: nginx.conf woes
On 3/10/22 02:07, Patrick Kirk wrote:
> Hi all,
>
> I have 2 sites to run from one server. Both are based on ASP.Net Core. Both have SSL certs from letsencrypt. One works perfectly. The other sort of works.

Firstly, I notice that cleardragon.com and kirks.net resolve to different addresses, though maybe cloudflare forwards kirks.net to the same place. But the setups are different. Or maybe you're using a different dns or other system to reach your pre-production system.

> If I go to http://localhost:5100 it responds by redirecting to https://localhost:5101 and then it warns of an invalid certificate.

I'm a bit unclear on this - I guess these are both the upstreams? The upstream (ASP thing?) also redirects http to https? Is nginx supposed to handle its upstream redirecting it to https? Anyway, the invalid cert is expected, because you presumably don't have a cert for 'localhost'.

> If I try lynx http://cleardragon.com a similar redirect takes place and I get an "Alert!: Unable to connect to remote host" error and lynx closes down.

I see a website, but then again, maybe I'm looking at the production site and you're not. This redirect is presumably the one in your nginx config, rather than the one done by the upstream. Does connecting explicitly to https://cleardragon.com also fail?

> When I do sudo tail -f /var/log/nginx/error.log I see:
> 2022/10/02 12:44:22 [notice] 1624399#1624399: signal process started

I don't know about this - lots of people report it, but I don't see answers. But it's a notice rather than an error.

Cheers,
Richard
Re: [DISCUSS] KIP-874: TopicRoundRobinAssignor
Hi Mathieu. I took a look at your KIP and have a couple of questions.

If the goal is to do the partition assignments at a topic level, wouldn't having single-partition topics solve this problem?

You also mentioned that your goal is to minimize the potential of a poison pill message breaking all members of a group (by keeping track of which topics have 'failed'), but it is not clear how this can be achieved with this assignor. If we imagine a scenario where:

* A group has 3 members (A, B, C)
* Members are subscribed to 3 topics (T1, T2, T3)
* Each member is assigned one topic (A[T1], B[T2], C[T3])
* One member fails to consume from a topic/partition (B[T2]), and goes into failed state

How will the group leader know that T2 should not be re-assigned on the next rebalance? Can you elaborate a bit more on the mechanisms used to communicate this state to the other group members?

Thanks

From: dev@kafka.apache.org At: 10/05/22 03:47:33 UTC-4:00 To: dev@kafka.apache.org
Subject: [DISCUSS] KIP-874: TopicRoundRobinAssignor

Hi Kafka Developers,

My proposal is to add a new partition assignment strategy at the topic level, in order to:
- have better data consistency per consumed topic in case of exception
- have a solution that is much more thread-safe for the consumer

in case there are multiple consumers and multiple topics. Here is the link to the KIP with all the explanations: https://cwiki.apache.org/confluence/x/XozGDQ

Thank you in advance for your feedback,
Mathieu
[Warzone2100-commits] [Warzone2100/warzone2100] c1cb49: Added difficulty selector to debug menu
Branch: refs/heads/master
Home: https://github.com/Warzone2100/warzone2100
Commit: c1cb494d171add05d98feab5df664092affac4c7
    https://github.com/Warzone2100/warzone2100/commit/c1cb494d171add05d98feab5df664092affac4c7
Author: kammy
Date: 2022-10-07 (Fri, 07 Oct 2022)

Changed paths:
  M src/levels.cpp
  M src/qtscript.cpp
  M src/qtscript.h
  M src/wzscriptdebug.cpp

Log Message:
-----------
Added difficulty selector to debug menu

___
Warzone2100-commits mailing list
Warzone2100-commits@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/warzone2100-commits
Connector API callbacks for create/delete events
Hi,

We have some custom connectors that require provisioning external resources (think of creating queues, S3 buckets, or activating accounts) when the connector instance is created, but that also need to clean up these resources (delete, deactivate) when the connector instance is deleted.

The connector API (org.apache.kafka.connect.connector.Connector) provides start() and stop() methods and, while we can probably work around the start() method to check if the initialization of external resources has been done, there is currently no hook that a connector can use to perform any cleanup task when it is deleted.

I'm planning to write a KIP that enhances the Connector API with methods that are invoked by the Herder when connectors are created and/or deleted; but before doing so, I wanted to ask the community if there are already some workarounds that can be used to achieve these tasks.

Thank you!
[jenkinsci/nexus-platform-plugin]
Branch: refs/heads/INT-7295-solving-antlr-version-mismatch Home: https://github.com/jenkinsci/nexus-platform-plugin -- You received this message because you are subscribed to the Google Groups "Jenkins Commits" group. To unsubscribe from this group and stop receiving emails from it, send an email to jenkinsci-commits+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-7295-solving-antlr-version-mismatch/fea48a-00%40github.com.
[jenkinsci/nexus-platform-plugin] 7da0a8: INT-7295 Solving ANTLR version mismatch (#227)
Branch: refs/heads/main Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 7da0a8ed0e1121973df72051ad93ac299bdc3963 https://github.com/jenkinsci/nexus-platform-plugin/commit/7da0a8ed0e1121973df72051ad93ac299bdc3963 Author: Hector Danilo Hurtado Olaya Date: 2022-10-05 (Wed, 05 Oct 2022) Changed paths: M pom.xml Log Message: --- INT-7295 Solving ANTLR version mismatch (#227)
[jenkinsci/nexus-platform-plugin] fea48a: INT-7295 Solving ANTLR version mismatch
Branch: refs/heads/INT-7295-solving-antlr-version-mismatch Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: fea48a3c615dddc78e83b1bdc9ccc3f516dde5a2 https://github.com/jenkinsci/nexus-platform-plugin/commit/fea48a3c615dddc78e83b1bdc9ccc3f516dde5a2 Author: Hector Hurtado Date: 2022-10-04 (Tue, 04 Oct 2022) Changed paths: M pom.xml Log Message: --- INT-7295 Solving ANTLR version mismatch
[jenkinsci/nexus-platform-plugin] 869515: INT-7293 Making stage unstable for Jenkins and Blu...
Branch: refs/heads/main Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 8695153b5c4c2891b1fb88cd959b5303b67f3373 https://github.com/jenkinsci/nexus-platform-plugin/commit/8695153b5c4c2891b1fb88cd959b5303b67f3373 Author: Hector Danilo Hurtado Olaya Date: 2022-10-04 (Tue, 04 Oct 2022) Changed paths: M pom.xml M src/main/java/org/sonatype/nexus/ci/iq/PolicyEvaluatorExecution.groovy M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorIntegrationTest.groovy Log Message: --- INT-7293 Making stage unstable for Jenkins and Blue Ocen graphs (#226)
[jenkinsci/nexus-platform-plugin]
Branch: refs/heads/INT-7293-making-stage-unstable Home: https://github.com/jenkinsci/nexus-platform-plugin
[jenkinsci/nexus-platform-plugin] f4f795: INT-7293 Making stage unstable for Jenkins and Blu...
Branch: refs/heads/INT-7293-making-stage-unstable Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: f4f7958ab2a8d2b7c7c38d3395400c9ba8316668 https://github.com/jenkinsci/nexus-platform-plugin/commit/f4f7958ab2a8d2b7c7c38d3395400c9ba8316668 Author: Hector Hurtado Date: 2022-10-03 (Mon, 03 Oct 2022) Changed paths: M pom.xml M src/main/java/org/sonatype/nexus/ci/iq/PolicyEvaluatorExecution.groovy M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorIntegrationTest.groovy Log Message: --- INT-7293 Making stage unstable for Jenkins and Blue Ocen graphs
[jenkinsci/nexus-platform-plugin]
Branch: refs/heads/INT-7283-updating-integrations-links Home: https://github.com/jenkinsci/nexus-platform-plugin
[jenkinsci/nexus-platform-plugin] 0be6cc: INT-7283 Updating integrations help pages links (#...
Branch: refs/heads/main Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 0be6ccc80c3ee91a8bc3f241fb3cb0229903c108 https://github.com/jenkinsci/nexus-platform-plugin/commit/0be6ccc80c3ee91a8bc3f241fb3cb0229903c108 Author: Hector Danilo Hurtado Olaya Date: 2022-09-27 (Tue, 27 Sep 2022) Changed paths: M README.md M docs/overview.md M src/main/resources/org/sonatype/nexus/ci/iq/PolicyEvaluationReportAction/index.jelly Log Message: --- INT-7283 Updating integrations help pages links (#225)
[jenkinsci/nexus-platform-plugin] 0d3d78: INT-7283 Updating integrations help pages links
Branch: refs/heads/INT-7283-updating-integrations-links Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 0d3d781914e770229407349f7767df966694641d https://github.com/jenkinsci/nexus-platform-plugin/commit/0d3d781914e770229407349f7767df966694641d Author: Hector Hurtado Date: 2022-09-26 (Mon, 26 Sep 2022) Changed paths: M README.md M docs/overview.md M src/main/resources/org/sonatype/nexus/ci/iq/PolicyEvaluationReportAction/index.jelly Log Message: --- INT-7283 Updating integrations help pages links
[jenkinsci/nexus-platform-plugin]
Branch: refs/heads/bump-innersource-dependencies-55f298 Home: https://github.com/jenkinsci/nexus-platform-plugin
[jenkinsci/nexus-platform-plugin]
Branch: refs/heads/INT-7228-fixing-build-report-for-notify-actions Home: https://github.com/jenkinsci/nexus-platform-plugin
[jenkinsci/nexus-platform-plugin] 510954: INT-7228 Fixing build report for notify actions (#...
Branch: refs/heads/main Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 51095484ba67207ffbd9077fb551c5c9d48aa305 https://github.com/jenkinsci/nexus-platform-plugin/commit/51095484ba67207ffbd9077fb551c5c9d48aa305 Author: Hector Danilo Hurtado Olaya Date: 2022-09-20 (Tue, 20 Sep 2022) Changed paths: M src/main/java/org/sonatype/nexus/ci/iq/PolicyEvaluationReportUtil.groovy M src/main/resources/org/sonatype/nexus/ci/iq/PolicyEvaluationReportAction/index.jelly M src/test/java/org/sonatype/nexus/ci/iq/PolicyEvaluationReportActionTest.groovy Log Message: --- INT-7228 Fixing build report for notify actions (#223)
[jenkinsci/nexus-platform-plugin] 79e1fb: INT-7228 Fixing build report for notify actions
Branch: refs/heads/INT-7228-fixing-build-report-for-notify-actions Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 79e1fbb116daa02803970648f70c81937bdfab2b https://github.com/jenkinsci/nexus-platform-plugin/commit/79e1fbb116daa02803970648f70c81937bdfab2b Author: Hector Hurtado Date: 2022-09-16 (Fri, 16 Sep 2022) Changed paths: M src/main/java/org/sonatype/nexus/ci/iq/PolicyEvaluationReportUtil.groovy M src/main/resources/org/sonatype/nexus/ci/iq/PolicyEvaluationReportAction/index.jelly M src/test/java/org/sonatype/nexus/ci/iq/PolicyEvaluationReportActionTest.groovy Log Message: --- INT-7228 Fixing build report for notify actions
Re: [DNG] Chimaera CPU stuck
On 9/14/22 14:54, Luciano Mannucci wrote: On Wed, 14 Sep 2022 12:37:41 -0500 Hector Gonzalez Jaime via Dng wrote: kernel:[ 7336.007287] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:0] if I write to the disk via dd nothing wrong happens... Luciano. Check which scheduler you are using; for virtual machine loads you might want to use "deadline". Assuming your disk is sda, the first command checks your scheduler, the second changes it to deadline: cat /sys/block/sda/queue/scheduler echo "deadline" >/sys/block/sda/queue/scheduler Well, the disk seems to be "vda". Issuing root@bobby:~# cat /sys/block/vda/queue/scheduler gives: [mq-deadline] none Is it wrong? It's as it should be. Did you check this on the hypervisor? The use of vda suggests this was checked on a VM; please check the physical host, which is the one doing the I/O for your VM. The physical host is also the one that needs to have a few dedicated processors to perform I/O for the VMs. Luciano. -- Hector Gonzalez ca...@genac.org ___ Dng mailing list Dng@lists.dyne.org https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng
[jenkinsci/nexus-platform-plugin]
Branch: refs/heads/INT-7235-credentials-policy-violation Home: https://github.com/jenkinsci/nexus-platform-plugin
Re: [DNG] Chimaera CPU stuck
On 9/14/22 10:02, Luciano Mannucci wrote: On Wed, 14 Sep 2022 12:49:19 +0200 Luciano Mannucci wrote: vm.dirty_background_bytes=67108864 vm.dirty_bytes=268435456 Maybe this additional information is helpful: https://forum.proxmox.com/threads/io-performance-tuning.15893/ https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/ Hope that helps, Yes, it does! Works like a charm! I've been too quick... Now, only if the data comes from the local LAN (not crossing routers or firewalls) I still get kernel:[ 7336.007287] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:0] if I write to the disk via dd nothing wrong happens... Luciano. Check which scheduler you are using; for virtual machine loads you might want to use "deadline". Assuming your disk is sda, the first command checks your scheduler, the second changes it to deadline: cat /sys/block/sda/queue/scheduler echo "deadline" >/sys/block/sda/queue/scheduler -- Hector Gonzalez ca...@genac.org
[jenkinsci/nexus-platform-plugin] de1b4c: Adjusting Jenkins patch version
Branch: refs/heads/INT-7235-credentials-policy-violation Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: de1b4cfa44f37a0313be3fe98d17c5c3f25aba27 https://github.com/jenkinsci/nexus-platform-plugin/commit/de1b4cfa44f37a0313be3fe98d17c5c3f25aba27 Author: Hector Hurtado Date: 2022-09-14 (Wed, 14 Sep 2022) Changed paths: M pom.xml Log Message: --- Adjusting Jenkins patch version
[jenkinsci/nexus-platform-plugin] bd86d6: Updating Jenkins minimum supported version to 2.289
Branch: refs/heads/INT-7235-credentials-policy-violation Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: bd86d6352eccaa66c47cf7553ff929cc412762a6 https://github.com/jenkinsci/nexus-platform-plugin/commit/bd86d6352eccaa66c47cf7553ff929cc412762a6 Author: Hector Hurtado Date: 2022-09-14 (Wed, 14 Sep 2022) Changed paths: M pom.xml Log Message: --- Updating Jenkins minimum supported version to 2.289
[jenkinsci/nexus-platform-plugin] b12954: INT-7235 Adding proper scope to provided Jenkins d...
Branch: refs/heads/INT-7235-credentials-policy-violation Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: b12954a50b939f2939cbaa4adb5e2c5fc7476591 https://github.com/jenkinsci/nexus-platform-plugin/commit/b12954a50b939f2939cbaa4adb5e2c5fc7476591 Author: Hector Hurtado Date: 2022-09-14 (Wed, 14 Sep 2022) Changed paths: M pom.xml Log Message: --- INT-7235 Adding proper scope to provided Jenkins dependencies
[jenkinsci/nexus-platform-plugin] 5f64dd: INT-7158 Adding Organizations select on UI (#221)
Branch: refs/heads/main Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 5f64dd21ca0ace488ecc53d123681c6c3a51332a https://github.com/jenkinsci/nexus-platform-plugin/commit/5f64dd21ca0ace488ecc53d123681c6c3a51332a Author: Hector Danilo Hurtado Olaya Date: 2022-09-13 (Tue, 13 Sep 2022) Changed paths: M pom.xml M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorDescriptor.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorUtil.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep.groovy M src/main/java/org/sonatype/nexus/ci/iq/PolicyEvaluationHealthAction.groovy M src/main/java/org/sonatype/nexus/ci/iq/SelectedApplication.java M src/main/java/org/sonatype/nexus/ci/util/IqUtil.groovy M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep/config.groovy M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep/help-iqOrganization.html M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep/config.groovy M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep/help-iqOrganization.html M src/main/resources/org/sonatype/nexus/ci/iq/Messages.properties M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorDescriptorTest.groovy M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorTest.groovy M src/test/java/org/sonatype/nexus/ci/util/IqUtilTest.groovy Log Message: --- INT-7158 Adding Organizations select on UI (#221) * Adding Organizations select on UI
[jenkinsci/nexus-platform-plugin]
Branch: refs/heads/INT-7158-receiving-orgId-using-a-select-on-ui Home: https://github.com/jenkinsci/nexus-platform-plugin
[jenkinsci/nexus-platform-plugin] 40780f: Fixing tests docs
Branch: refs/heads/INT-7158-receiving-orgId-using-a-select-on-ui Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 40780f619e334425d1e554108c9377f115695bc5 https://github.com/jenkinsci/nexus-platform-plugin/commit/40780f619e334425d1e554108c9377f115695bc5 Author: Hector Hurtado Date: 2022-09-13 (Tue, 13 Sep 2022) Changed paths: M src/test/java/org/sonatype/nexus/ci/util/IqUtilTest.groovy Log Message: --- Fixing tests docs
[jenkinsci/nexus-platform-plugin] cccba8: Applying feedback changes
Branch: refs/heads/INT-7158-receiving-orgId-using-a-select-on-ui Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: cccba8b756e1d54cab4e2a88db5c4ccc8f63707a https://github.com/jenkinsci/nexus-platform-plugin/commit/cccba8b756e1d54cab4e2a88db5c4ccc8f63707a Author: Hector Hurtado Date: 2022-09-13 (Tue, 13 Sep 2022) Changed paths: M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorTest.groovy M src/test/java/org/sonatype/nexus/ci/util/IqUtilTest.groovy Log Message: --- Applying feedback changes
[jenkinsci/nexus-platform-plugin] 8f743e: Applying feedback changes
Branch: refs/heads/INT-7158-receiving-orgId-using-a-select-on-ui Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 8f743e5673c865907e7b946dce172c0c528ba3e5 https://github.com/jenkinsci/nexus-platform-plugin/commit/8f743e5673c865907e7b946dce172c0c528ba3e5 Author: Hector Hurtado Date: 2022-09-12 (Mon, 12 Sep 2022) Changed paths: M src/main/java/org/sonatype/nexus/ci/util/IqUtil.groovy M src/test/java/org/sonatype/nexus/ci/util/IqUtilTest.groovy Log Message: --- Applying feedback changes
Re: [DISCUSS] KIP-848: The Next Generation of the Consumer Rebalance Protocol
Yes, I agree that leadership could be modeled after partition assignment(s). However, I can think of some expanded versions of the 'leader election' use case that exist today in Schema Registry. The advantage of creating a different 'type' that isn't necessarily tied to topics/partitions (and is used only for resource management) would be that we can scale the number of resources it handles (akin to a Connect cluster increasing the number of connectors/tasks) without having to change topics/partitions; these partitions would never have any data (and can't be shrunk), as they would be used just for leadership. This is in the spirit of KIP-795. We can table this discussion for after the Connect discussion, as there will be more clarity on what extending the new protocol will look like. From: dev@kafka.apache.org At: 09/12/22 07:58:32 UTC-4:00 To: dev@kafka.apache.org Subject: Re: [DISCUSS] KIP-848: The Next Generation of the Consumer Rebalance Protocol Hi Hector, We definitely need to share internals with Connect APIs. That model would not scale otherwise. Regarding the schema registry, I wonder if we could just use the new protocol. At the end of the day, the schema registry wants to elect a single writer for a partition and the owner of the partition can be considered as the leader. I haven't really tried this out but that seems doable. What do you think? Best, David On Fri, Sep 9, 2022 at 8:45 PM Hector Geraldino (BLOOMBERG/ 919 3RD A) wrote: > > So it seems there's a consensus on having dedicated APIs for Connect, which means having a data model (group, member, assignment) and APIs (heartbeat request/response, assignment prepare and install) tailored specifically to Connect. I wonder if adding support for other coordinator group types (e.g. leadership, in the case of schema registry) will require similar assets (new model classes to express members and resources, custom heartbeats and assignment prepare/install APIs). 
> > I think that, as new use cases are considered, the core primitives of the new protocol will be generalized, so new types don't have to implement the whole stack (e.g. state machines), but only functions like detecting group metadata changes, or computing assignments of the resources handled by each type (Topic/Partitions in the case of consumer, Connector/Task in the case of Connect, Leadership in the case of Schema Registry, and so on). > > > From: dev@kafka.apache.org At: 08/12/22 09:31:36 UTC-4:00To: dev@kafka.apache.org > Subject: Re: [DISCUSS] KIP-848: The Next Generation of the Consumer Rebalance Protocol > > Thank you Guozhang/David for the feedback. Looks like there's agreement on > using separate APIs for Connect. I would revisit the doc and see what > changes are to be made. > > Thanks! > Sagar. > > On Tue, Aug 9, 2022 at 7:11 PM David Jacot > wrote: > > > Hi Sagar, > > > > Thanks for the feedback and the document. That's really helpful. I > > will take a look at it. > > > > Overall, it seems to me that both Connect and the Consumer could share > > the same underlying "engine". The main difference is that the Consumer > > assigns topic-partitions to members whereas Connect assigns tasks to > > workers. I see two ways to move forward: > > 1) We extend the new proposed APIs to support different resource types > > (e.g. partitions, tasks, etc.); or > > 2) We use new dedicated APIs for Connect. The dedicated APIs would be > > similar to the new ones but different on the content/resources and > > they would rely on the same engine on the coordinator side. > > > > I personally lean towards 2) because I am not a fan of overcharging > > APIs to serve different purposes. That being said, I am not opposed to > > 1) if we can find an elegant way to do it. > > > > I think that we can continue to discuss it here for now in order to > > ensure that this KIP is compatible with what we will do for Connect in > > the future. 
> > > > Best, > > David > > > > On Mon, Aug 8, 2022 at 2:41 PM David Jacot wrote: > > > > > > Hi all, > > > > > > I am back from vacation. I will go through and address your comments > > > in the coming days. Thanks for your feedback. > > > > > > Cheers, > > > David > > > > > > On Wed, Aug 3, 2022 at 10:05 PM Gregory Harris > > wrote: > > > > > > > > Hey All! > > > > > > > > Thanks for the KIP, it's wonderful to see cooperative rebalancing > > making it > > > > down the stack! > > > > > > > > I had a few questions: > > > > > > > > 1. The 'Rejected Alternatives' section describes how member epoch >
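The "generalized resource types" idea debated in this thread can be illustrated with a small sketch. None of this is actual Kafka API; it only shows how a single assignment engine could be parameterized over what it assigns (topic-partitions for consumers, tasks for Connect, a leadership token for Schema Registry), with the interface and class names invented for the example.

```java
import java.util.*;

// Hypothetical illustration: one assignment engine, generic over the
// resource type it distributes. Not part of any real Kafka interface.
interface ResourceAssignor<R> {
    Map<String, List<R>> assign(List<String> members, List<R> resources);
}

class RoundRobinResourceAssignor<R> implements ResourceAssignor<R> {
    @Override
    public Map<String, List<R>> assign(List<String> members, List<R> resources) {
        Map<String, List<R>> out = new LinkedHashMap<>();
        for (String m : members) {
            out.put(m, new ArrayList<>());
        }
        for (int i = 0; i < resources.size(); i++) {
            // Distribute resources to members in round-robin order.
            out.get(members.get(i % members.size())).add(resources.get(i));
        }
        return out;
    }
}
```

Under this framing, Schema Registry's leader election becomes the degenerate case of assigning a one-element resource list: whichever member receives the single "leader" token is the leader, with no data partitions involved.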
[jenkinsci/nexus-platform-plugin] d9d0ec: Adding some descriptions to new tests
Branch: refs/heads/INT-7158-receiving-orgId-using-a-select-on-ui Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: d9d0ecffb17ff2e4980e9685f539023087e297e4 https://github.com/jenkinsci/nexus-platform-plugin/commit/d9d0ecffb17ff2e4980e9685f539023087e297e4 Author: Hector Hurtado Date: 2022-09-12 (Mon, 12 Sep 2022) Changed paths: M src/test/java/org/sonatype/nexus/ci/util/IqUtilTest.groovy Log Message: --- Adding some descriptions to new tests
[jenkinsci/nexus-platform-plugin] 20fac0: Updating tests descriptions
Branch: refs/heads/INT-7158-receiving-orgId-using-a-select-on-ui Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 20fac010a9994128bb90495a9768ff4cab889739 https://github.com/jenkinsci/nexus-platform-plugin/commit/20fac010a9994128bb90495a9768ff4cab889739 Author: Hector Hurtado Date: 2022-09-12 (Mon, 12 Sep 2022) Changed paths: M src/test/java/org/sonatype/nexus/ci/util/IqUtilTest.groovy Log Message: --- Updating tests descriptions
[jenkinsci/nexus-platform-plugin] d7778f: Fixing tests
Branch: refs/heads/INT-7158-receiving-orgId-using-a-select-on-ui Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: d7778f373e2224022fb10b18f77499a7b3bb85b2 https://github.com/jenkinsci/nexus-platform-plugin/commit/d7778f373e2224022fb10b18f77499a7b3bb85b2 Author: Hector Hurtado Date: 2022-09-09 (Fri, 09 Sep 2022) Changed paths: M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorTest.groovy Log Message: --- Fixing tests
[jenkinsci/nexus-platform-plugin] ec8aa8: Adding Organizations select on UI
Branch: refs/heads/INT-7158-receiving-orgId-using-a-select-on-ui Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: ec8aa8c7d7d8d023a9ba7646262ff17cf0745aa3 https://github.com/jenkinsci/nexus-platform-plugin/commit/ec8aa8c7d7d8d023a9ba7646262ff17cf0745aa3 Author: Hector Hurtado Date: 2022-09-09 (Fri, 09 Sep 2022) Changed paths: M pom.xml M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorDescriptor.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorUtil.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep.groovy M src/main/java/org/sonatype/nexus/ci/iq/PolicyEvaluationHealthAction.groovy M src/main/java/org/sonatype/nexus/ci/iq/SelectedApplication.java M src/main/java/org/sonatype/nexus/ci/util/IqUtil.groovy M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep/config.groovy M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep/help-iqOrganization.html M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep/config.groovy M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep/help-iqOrganization.html M src/main/resources/org/sonatype/nexus/ci/iq/Messages.properties M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorDescriptorTest.groovy M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorTest.groovy M src/test/java/org/sonatype/nexus/ci/util/IqUtilTest.groovy Log Message: --- Adding Organizations select on UI
Re: [VOTE] KIP-848: The Next Generation of the Consumer Rebalance Protocol
+1 (non-binding) Really looking forward to the discussion on how other group types (especially Connect) will support this new protocol. From: dev@kafka.apache.org At: 09/09/22 04:32:46 UTC-4:00 To: dev@kafka.apache.org Subject: [VOTE] KIP-848: The Next Generation of the Consumer Rebalance Protocol Hi all, Thank you all for the very positive discussion about KIP-848. It looks like folks are very positive about it overall. I would like to start a vote on KIP-848, which introduces a brand new consumer rebalance protocol. The KIP is here: https://cwiki.apache.org/confluence/x/HhD1D. Best, David
Re: [DISCUSS] KIP-848: The Next Generation of the Consumer Rebalance Protocol
to perform a detailed analysis of the same and we > can have > > > > a > > > > >> > separate discussion thread for that as that would derail this > > > > discussion > > > > >> > thread. Let me know if that sounds good to you. > > > > >> > > > > > >> > Thanks! > > > > >> > Sagar. > > > > >> > > > > > >> > > > > > >> > > > > > >> > On Fri, Jul 15, 2022 at 5:47 PM David Jacot > > > > > > > >> > > > > > >> > wrote: > > > > >> > > > > > >> > > Hi Sagar, > > > > >> > > > > > > >> > > Thanks for your comments. > > > > >> > > > > > > >> > > 1) Yes. That refers to `Assignment#error`. Sure, I can > mention it. > > > > >> > > > > > > >> > > 2) The idea is to transition C from his current assignment to > his > > > > >> > > target assignment when he can move to epoch 3. When that > happens, > > > > the > > > > >> > > member assignment is updated and persisted with all its > assigned > > > > >> > > partitions even if they are not all revoked yet. In other > words, the > > > > >> > > member assignment becomes the target assignment. This is > basically > > > > an > > > > >> > > optimization to avoid having to write all the changes to the > log. > > > > The > > > > >> > > examples are based on the persisted state so I understand the > > > > >> > > confusion. Let me see if I can improve this in the > description. > > > > >> > > > > > > >> > > 3) Regarding Connect, it could reuse the protocol with a > client side > > > > >> > > assignor if it fits in the protocol. The assignment is about > > > > >> > > topicid-partitions + metadata, could Connect fit into this? > > > > >> > > > > > > >> > > Best, > > > > >> > > David > > > > >> > > > > > > >> > > On Fri, Jul 15, 2022 at 1:55 PM Sagar < > sagarmeansoc...@gmail.com> > > > > >> wrote: > > > > >> > > > > > > > >> > > > Hi David, > > > > >> > > > > > > > >> > > > Thanks for the KIP. 
I just had minor observations: > > > > >> > > > > > > > >> > > > 1) In the Assignment Error section in Client Side mode > Assignment > > > > >> > > process, > > > > >> > > > you mentioned => `In this case, the client side assignor can > > > > return > > > > >> an > > > > >> > > > error to the group coordinator`. In this case are you > referring to > > > > >> the > > > > >> > > > Assignor returning an AssignmentError that's listed down > towards > > > > the > > > > >> > end? > > > > >> > > > If yes, do you think it would make sense to mention this > > > > explicitly > > > > >> > here? > > > > >> > > > > > > > >> > > > 2) In the Case Studies section, I have a slight confusion, > not > > > > sure > > > > >> if > > > > >> > > > others have the same. Consider this step: > > > > >> > > > > > > > >> > > > When B heartbeats, the group coordinator transitions him to > epoch > > > > 3 > > > > >> > > because > > > > >> > > > B has no partitions to revoke. It persists the change and > reply. > > > > >> > > > > > > > >> > > >- Group (epoch=3) > > > > >> > > > - A > > > > >> > > > - B > > > > >> > > > - C > > > > >> > > >- Target Assignment (epoch=3) > > > > >> > > > - A - partitions=[foo-0] > > > > >> > > > - B - partitions=[foo-2] > > > > >> > > > - C - pa
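[Editor's note: a toy Python sketch, not the actual broker code, of the epoch-transition rule David describes above: the coordinator bumps a member to the target epoch only once that member has no partitions left to revoke.]

```python
def partitions_to_revoke(current, target):
    """Partitions the member currently owns that are absent from its target assignment."""
    return set(current) - set(target)

def maybe_advance_epoch(member_epoch, target_epoch, current, target):
    # A member moves to the target epoch only when nothing remains to be
    # revoked; otherwise it stays on its current epoch until revocation is done.
    if member_epoch < target_epoch and not partitions_to_revoke(current, target):
        return target_epoch
    return member_epoch

# B owns only foo-2, which is also in its target: nothing to revoke, so B moves to epoch 3.
assert maybe_advance_epoch(2, 3, ["foo-2"], ["foo-2"]) == 3
# A still owns foo-1, which it must revoke first: it stays on epoch 2 for now.
assert maybe_advance_epoch(2, 3, ["foo-0", "foo-1"], ["foo-0"]) == 2
```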
[jenkinsci/nexus-platform-plugin]
Branch: refs/heads/INT-7157-add-org-id-param-to-pipeline-syntax Home: https://github.com/jenkinsci/nexus-platform-plugin
[jenkinsci/nexus-platform-plugin] 3d30f5: INT-7157 add org id parameter to nexus IQ policy e...
Branch: refs/heads/main Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 3d30f5be329a0185caa8edb423fd9d4ddc688a7c https://github.com/jenkinsci/nexus-platform-plugin/commit/3d30f5be329a0185caa8edb423fd9d4ddc688a7c Author: Hector Danilo Hurtado Olaya Date: 2022-09-08 (Thu, 08 Sep 2022) Changed paths: M pom.xml M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluator.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorDescriptor.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorUtil.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep.groovy M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep/config.groovy A src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep/help-iqOrganization.html M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep/config.groovy A src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep/help-iqOrganization.html M src/main/resources/org/sonatype/nexus/ci/iq/Messages.properties M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorDescriptorTest.groovy M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorIntegrationTest.groovy M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorSlaveIntegrationTest.groovy M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorTest.groovy Log Message: --- INT-7157 add org id parameter to nexus IQ policy evaluation pipeline syntax (#219) * INT-7157 Add OrgId parameter to IQ Policy Evaluation Step
[jenkinsci/nexus-platform-plugin] fdaeb7: Updating ChangeLog
Branch: refs/heads/main Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: fdaeb7d17b1b9e64e6c7d62bb1d45c960ab43132 https://github.com/jenkinsci/nexus-platform-plugin/commit/fdaeb7d17b1b9e64e6c7d62bb1d45c960ab43132 Author: Hector Hurtado Date: 2022-09-07 (Wed, 07 Sep 2022) Changed paths: M README.md Log Message: --- Updating ChangeLog
[jenkinsci/nexus-platform-plugin]
Branch: refs/heads/bump-innersource-dependencies-e566d7 Home: https://github.com/jenkinsci/nexus-platform-plugin
[jenkinsci/nexus-platform-plugin] 7a1ebb: Applying feedback changes
Branch: refs/heads/INT-7157-add-org-id-param-to-pipeline-syntax Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 7a1ebbe8d4d8824e5539e6cb6fad179cb33a04bb https://github.com/jenkinsci/nexus-platform-plugin/commit/7a1ebbe8d4d8824e5539e6cb6fad179cb33a04bb Author: Hector Hurtado Date: 2022-09-07 (Wed, 07 Sep 2022) Changed paths: M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep/help-iqOrganization.html M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep/help-iqOrganization.html Log Message: --- Applying feedback changes
[jenkinsci/nexus-platform-plugin] 0de4b5: Adjusting UI for organization id
Branch: refs/heads/INT-7157-add-org-id-param-to-pipeline-syntax Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 0de4b5f151e5b949f9dd72b68d12a273580a5d77 https://github.com/jenkinsci/nexus-platform-plugin/commit/0de4b5f151e5b949f9dd72b68d12a273580a5d77 Author: Hector Hurtado Date: 2022-09-06 (Tue, 06 Sep 2022) Changed paths: M pom.xml M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep/config.groovy M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep/config.groovy Log Message: --- Adjusting UI for organization id
[jenkinsci/nexus-platform-plugin] 06b64d: Adding some tests to excersie the new org id param...
Branch: refs/heads/INT-7157-add-org-id-param-to-pipeline-syntax Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 06b64da81bd1ae8166d2e76d9ffd2d85c33e4b4e https://github.com/jenkinsci/nexus-platform-plugin/commit/06b64da81bd1ae8166d2e76d9ffd2d85c33e4b4e Author: Hector Hurtado Date: 2022-09-06 (Tue, 06 Sep 2022) Changed paths: M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorDescriptor.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep.groovy M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorDescriptorTest.groovy M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorIntegrationTest.groovy M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorTest.groovy Log Message: --- Adding some tests to excersie the new org id parameter
[jenkinsci/nexus-platform-plugin] 0c2ac5: Adding setter for the organization id
Branch: refs/heads/INT-7157-add-org-id-param-to-pipeline-syntax Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: 0c2ac52461b22757e65de1ed54fcb672f0616566 https://github.com/jenkinsci/nexus-platform-plugin/commit/0c2ac52461b22757e65de1ed54fcb672f0616566 Author: Hector Hurtado Date: 2022-09-05 (Mon, 05 Sep 2022) Changed paths: M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep.groovy Log Message: --- Adding setter for the organization id
[jenkinsci/nexus-platform-plugin] e27a87: Fixing tests
Branch: refs/heads/INT-7157-add-org-id-param-to-pipeline-syntax Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: e27a8773cb149985dc5323270c8171c510a64089 https://github.com/jenkinsci/nexus-platform-plugin/commit/e27a8773cb149985dc5323270c8171c510a64089 Author: Hector Hurtado Date: 2022-09-05 (Mon, 05 Sep 2022) Changed paths: M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorIntegrationTest.groovy Log Message: --- Fixing tests
[jenkinsci/nexus-platform-plugin] fe6a1a: INT-7157 Add OrgId parameter to IQ Policy Evaluati...
Branch: refs/heads/INT-7157-add-org-id-param-to-pipeline-syntax Home: https://github.com/jenkinsci/nexus-platform-plugin Commit: fe6a1af271956edcc8ebcaa3d633c8658cfa34cb https://github.com/jenkinsci/nexus-platform-plugin/commit/fe6a1af271956edcc8ebcaa3d633c8658cfa34cb Author: Hector Hurtado Date: 2022-09-05 (Mon, 05 Sep 2022) Changed paths: M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluator.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorUtil.groovy M src/main/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep.groovy M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep/config.groovy A src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorBuildStep/help-iqOrganization.html M src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep/config.groovy A src/main/resources/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorWorkflowStep/help-iqOrganization.html M src/main/resources/org/sonatype/nexus/ci/iq/Messages.properties M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorDescriptorTest.groovy M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorIntegrationTest.groovy M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorSlaveIntegrationTest.groovy M src/test/java/org/sonatype/nexus/ci/iq/IqPolicyEvaluatorTest.groovy Log Message: --- INT-7157 Add OrgId parameter to IQ Policy Evaluation Step
[kdenlive] [Bug 458557] New: Composition overlays do not extend to the length of the video when dragged and dropped.
https://bugs.kde.org/show_bug.cgi?id=458557 Bug ID: 458557 Summary: Composition overlays do not extend to the length of the video when dragged and dropped. Product: kdenlive Version: 22.08.0 Platform: Microsoft Windows OS: Microsoft Windows Status: REPORTED Severity: minor Priority: NOR Component: Effects & Transitions Assignee: vpi...@kde.org Reporter: bkast1...@gmail.com Target Milestone: --- Created attachment 151738 --> https://bugs.kde.org/attachment.cgi?id=151738 default size image Not a crash, just a "feature" that used to be present in older versions. The behavior in new versions, i.e. 22.08.0, is erratic, with some composition overlays expanding (such as with right click -> add composition) but others from the composition menu not expanding upon placement. When dragged and dropped from the composition menu, the default "band" overlay defaults to a tiny (horizontal) size that is almost useless, and you can't even change its size without zooming in by miles. I hold that the best behavior is to expand the band upon drag and drop, and if a smaller band is required it could be controlled by clipping or resizing. Maybe a config setting can adjust this default behavior if other people have other experiences during editing. STEPS TO REPRODUCE Have two videos, drag a composition between them from the composition menu. Upon placement, it will be nano-scale in size. OBSERVED RESULT The overlay band must be resized manually each and every time a composition is placed, which requires zooming in heavily; during high-speed workflow editing it's a game-changer not to have to do this. EXPECTED RESULT Older versions did not have this discrepancy. SOFTWARE/OS VERSIONS Windows: 10 Qt Version: 5.15.5 -- You are receiving this mail because: You are watching all bug changes.
Re: [PATCH] kern/efi/mm: Double the default heap size
On 21/08/2022 21.35, Daniel Axtens wrote: > Hi Hector, > > Thanks for your patch and for taking the trouble to put it together. > >> GRUB is already running out of memory on Apple M1 systems, causing >> graphics init to fail, as of the latest Git changes. Since dynamic >> growing of the heap isn't done yet, double the default heap size for >> now. > > Huh, weird - those changes have landed in git, see commit 887f98f0db43 > ("mm: Allow dynamically requesting additional memory regions") for the > overarching support and commit 1df2934822df ("kern/efi/mm: Implement > runtime addition of pages"). It's not done on PowerPC, but if you're in > EFI-land then it should complete. > > The only reason I can think of off the top of my head where you would be > having issues that your patch fixes is if we somehow need more memory to > even get to the point where we can ask firmware for more memory. I > suppose that's within the realm of possibility. Interesting. I missed the indirection through the function pointer... but either way, I do indeed have those commits in the broken tree that Arch Linux ARM started shipping yesterday (0c6c1aff2a, which isn't actually current master but it's from a couple weeks ago). The previous version was 2f4430cc0, which doesn't have it, so I wonder if there was actually a regression involved? What I see is that GRUB briefly flashes an out of memory error and fails to set the graphics mode, then ends up in text mode. My best guess without digging further is that it fails to allocate a framebuffer or console text buffer (since these machines have higher resolution screens than most, this might not have come up elsewhere). But I don't see why that would have to happen before it's allowed? > If my maths are right, this bumps up the initial allocation from 1M to > 2M. Correct. 
> I think your experience tends to disprove the hypothesis that we > could get away with a very small initial allocation (which was the > thinking when the initial dynamic allocation patch set went in), so I'm > wondering if we should take this opportunity to allocate 16M or 32M or > something. My powerpc proposal kept the initial allocation at 32MB, I > think that's probably sane for EFI too? I think that makes sense. - Hector ___ Grub-devel mailing list Grub-devel@gnu.org https://lists.gnu.org/mailman/listinfo/grub-devel
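[Editor's note: for concreteness, the allocation sizes discussed in this thread written out in bytes; a trivial sanity check, not GRUB code.]

```python
MiB = 1024 * 1024

# Heap sizes discussed in the thread above.
OLD_DEFAULT_HEAP = 1 * MiB    # the initial EFI heap allocation before the patch
NEW_DEFAULT_HEAP = 2 * MiB    # after doubling
PPC_PROPOSAL     = 32 * MiB   # the initial allocation proposed for powerpc

assert OLD_DEFAULT_HEAP == 0x100000
assert NEW_DEFAULT_HEAP == 0x200000
assert NEW_DEFAULT_HEAP == 2 * OLD_DEFAULT_HEAP
```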
[PATCH] kern/efi/mm: Double the default heap size
GRUB is already running out of memory on Apple M1 systems, causing graphics init to fail, as of the latest Git changes. Since dynamic growing of the heap isn't done yet, double the default heap size for now.

Signed-off-by: Hector Martin
---
 grub-core/kern/efi/mm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/grub-core/kern/efi/mm.c b/grub-core/kern/efi/mm.c
index d290c9a76270..377d8d3a1c1b 100644
--- a/grub-core/kern/efi/mm.c
+++ b/grub-core/kern/efi/mm.c
@@ -39,7 +39,7 @@
 #define MEMORY_MAP_SIZE	0x3000
 
 /* The default heap size for GRUB itself in bytes. */
-#define DEFAULT_HEAP_SIZE	0x100000
+#define DEFAULT_HEAP_SIZE	0x200000
 
 static void *finish_mmap_buf = 0;
 static grub_efi_uintn_t finish_mmap_size = 0;
-- 
2.35.1
[bug #62925] Out of memory error initializing graphics on some Apple M1 systems
URL: <https://savannah.gnu.org/bugs/?62925> Summary: Out of memory error initializing graphics on some Apple M1 systems Project: GNU GRUB Submitter: marcan Submitted: Sat 20 Aug 2022 11:32:41 AM UTC Category: Booting Severity: Major Priority: 5 - Normal Item Group: None Status: None Privacy: Public Assigned to: None Originator Name: Originator Email: Open/Closed: Open Release: Git master Discussion Lock: Any Reproducibility: Every Time Planned Release: None
___
Follow-up Comments:
Date: Sat 20 Aug 2022 11:32:41 AM UTC By: Hector Martin
As of grub-2:2.06.r297.g0c6c1aff2-1-aarch64 as packaged by Arch Linux ARM (which is vanilla grub as of that git revision, other than grub.d patches and trivial packaging-related changes), initializing graphics fails on Apple M1 Pro MacBook 14" systems, and probably others, with an out of memory error. This leads to a degraded text console with broken menu graphic characters. This didn't happen with the previous package (grub-2.06.r261.g2f4430cc0-1). The open source boot stack on these systems uses U-Boot as an EFI services provider. It seems GRUB is simply running out of heap as of recent changes. This trivial patch fixes it:

diff -urN grub-core/kern/efi/mm.c grub-core/kern/efi/mm.c
--- grub-core/kern/efi/mm.c	2022-08-20 20:17:23.975902981 +0900
+++ grub-core/kern/efi/mm.c	2022-08-20 20:17:16.268139945 +0900
@@ -39,7 +39,7 @@
 #define MEMORY_MAP_SIZE	0x3000
 
 /* The default heap size for GRUB itself in bytes. */
-#define DEFAULT_HEAP_SIZE	0x100000
+#define DEFAULT_HEAP_SIZE	0x200000
 
 static void *finish_mmap_buf = 0;
 static grub_efi_uintn_t finish_mmap_size = 0;
___
Reply to this item at: <https://savannah.gnu.org/bugs/?62925>
___
Message sent via Savannah https://savannah.gnu.org/
Re: Consumer Lag-Apache_kafka_JMX metrics
As far as I know, such a metric does not exist. Strictly speaking, consumer lag can be defined as the difference between the last produced offset (high watermark) and the last committed offset by the group, but such a metric has very little value without considering the time dimension. It'd be tricky for the broker to report on consumer 'lag', as the concept of lag itself varies. You already know about Burrow (and I recall reading about Uber's uGroup), and you already see that it considers a consumer lagging if it is not making enough progress in a sliding time window (10 mins?). But other tools/use cases can define lag using different criteria (e.g. the number of messages exceeds a threshold). I think because of these variances, it kinda makes sense for tools like Burrow (and others) to be used for this purpose, instead of having the broker dictate when consumers are lagging. Just my two cents From: dev@kafka.apache.org At: 08/16/22 15:06:16 UTC-4:00 To: us...@kafka.apache.org, show...@gmail.com, mmcfarl...@cavulus.com, dev@kafka.apache.org, scante...@gmail.com, ranlupov...@gmail.com, israele...@gmail.com Subject: Re: Consumer Lag-Apache_kafka_JMX metrics Hello Experts, Any info or pointers on my query please. On Mon, Aug 15, 2022 at 11:36 PM Kafka Life wrote: > Dear Kafka Experts > we need to monitor the consumer lag in kafka clusters 2.5.1 and 2.8.0 > versions of kafka in Grafana. > > 1/ What is the correct path for JMX metrics to evaluate Consumer Lag in > kafka cluster. > > 2/ I had thought it is FetcherLag but it looks like it is not as per the > link below. > > https://www.instaclustr.com/support/documentation/kafka/monitoring-information/fetcher-lag-metrics/#:~:text=Aggregated%20Fetcher%20Consumer%20Lag%20This%20metric%20aggregates%20lag,in%20sync%20with%20partitions%20that%20it%20is%20replicating > . 
> > Could one of you experts please guide on which JMX i should use for > consumer lag apart from kafka burrow or such intermediate tools > > Thanking you in advance > >
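[Editor's note: a minimal Python illustration of the two notions of lag contrasted in the reply above: an instantaneous offset lag, and a Burrow-style check that only flags a consumer when it makes no progress across a window of observations. All names here are made up for the example; this is not Burrow's actual algorithm.]

```python
def offset_lag(high_watermark, committed_offset):
    # Instantaneous lag: last produced offset minus last committed offset.
    return max(0, high_watermark - committed_offset)

def is_stalled(window):
    """Burrow-style check over a sliding window of (committed_offset, lag) samples:
    the consumer is considered lagging only if lag stays positive AND the
    committed offset never advances across the window."""
    if len(window) < 2:
        return False
    offsets = [o for o, _ in window]
    lags = [l for _, l in window]
    return all(l > 0 for l in lags) and offsets[-1] <= offsets[0]

assert offset_lag(1000, 900) == 100
# Committed offset advances while lag stays positive: making progress, not stalled.
assert not is_stalled([(100, 50), (150, 60), (200, 70)])
# Committed offset frozen with positive lag: stalled.
assert is_stalled([(100, 50), (100, 80), (100, 120)])
```

The point of the time dimension is visible in the two windows: both have positive lag throughout, but only the one with a frozen committed offset would be flagged.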
[konsole] [Bug 453545] Konsole resets font size after disconnecting from SSH session
https://bugs.kde.org/show_bug.cgi?id=453545 --- Comment #9 from Hector Martin --- Another workaround is to make the actual command running not be ssh, so konsole can't see it. For example, you could alias ssh to be `/usr/bin/time ssh`, that should stop konsole from picking it up.
[konsole] [Bug 453545] Konsole resets font size after disconnecting from SSH session
https://bugs.kde.org/show_bug.cgi?id=453545 Hector Martin changed: What|Removed |Added CC||hec...@marcansoft.com --- Comment #8 from Hector Martin --- Easy quick fix: `sudo rm /usr/lib64/qt5/plugins/konsoleplugins/konsole_sshmanagerplugin.so`. This is all caused by the new fancypants SSH manager plugin, but inexplicably there is no UI for enabling/disabling plugins. So if you don't use it, just nuke the file (of course, it will come back on package upgrades, but it beats living with text size resets all day).
Re: Migrating PostgreSQL Stored Procedures to MSSQL 2019 for example
I would suggest there is no easy way with a tool. PostgreSQL is powerful because you can write functions in different languages in addition to PL/pgSQL (Python, Perl, Tcl, JS, C++), and there are many popular extensions, so it is very possible to encapsulate complex application/business logic within the database. But if it is just a database with a bit of PL/pgSQL to expose data to a front-end application that does all the work, then maybe a tool might work. You may have noticed how powerful AWS SCT is at converting MSSQL & Oracle to PostgreSQL, but alas it will, I believe, only convert PostgreSQL to Postgres-alike databases (Aurora and MySQL)... Not to say someone somewhere hasn't done it, but a claimed strength of PostgreSQL is probably a hindrance in this instance. Hector Vass Data Engineer 07773 352559 On Fri, 12 Aug 2022, 11:24 Scott Simpson, wrote: > Hi, > > I need to migrate many PostgreSQL Stored Procedures and functions to MSSQL. > > I can't find anything online that seems to handle this task. > > Are there any tools that you have that can do this job? > > > *Kind Regards* > > > *Zellis* | Scott Simpson | Senior Engineer > > > > Thorpe Park > > United Kingdom > > Work : +44 (0)20 3986 3523 > > Email : scott.simp...@zellis.com > > Web : www.Zellis.com <http://www.zellis.com/> > > > -- > > *Zellis is the trading name for Zellis Holdings Ltd and its associated > companies "Zellis".* > > The contents of this email are confidential to Zellis and are solely for > the use of the intended recipient. If you received this email in error, > please inform the sender immediately and delete the email from your system. > Unless Zellis have given you express permission to do so, please do not > disclose, distribute or copy the contents of this email. > > Unless this email expressly states that it is a contractual offer or > acceptance, it is not sent with the intention of creating a legal > relationship and does not constitute an offer or acceptance which could > give rise to a contract. 
> > Any views expressed in this email are those of the individual sender > unless the email specifically states them to be the views of Zellis. > > Zellis Holdings Ltd - registered in England and Wales - Company No: > 10975623 - Registered Office: 740 Waterside Drive, Aztec West, Almondsbury, > Bristol, BS32 4UF, UK. >
Re: [RFR] po-debconf://lyskom-server
Hi, just a small suggestion, which may or may not be incorporated. Regards. On Tue, Aug 2, 2022 at 10:16, Camaleón (noela...@gmail.com) wrote: > > Hi, > Attached is the translation. > Regards, > -- > Camaleón -- ****** Hector Colina. Debian user, aka e1th0r Santiago de Chile Key fingerprint = E81B 8228 8919 EE27 85B7 A59B 357F 81F5 5CFC B481 Long live and prosperity! es.po.diff Description: Binary data
Re: [RFR] po-debconf://smb2www
Hi, just a small suggestion, which may or may not be incorporated. Attached is the diff with it. Regards. On Tue, Aug 2, 2022 at 10:17, Camaleón (noela...@gmail.com) wrote: > > Hi, > Attached is the translation. > Regards, > -- > Camaleón -- ****** Hector Colina. Debian user, aka e1th0r Santiago de Chile Key fingerprint = E81B 8228 8919 EE27 85B7 A59B 357F 81F5 5CFC B481 Long live and prosperity! es.po.diff Description: Binary data
Re: [RFR] po-debconf://watchdog
Hi, a small contribution: attached is a diff with the corrected word. Regards. On Tue, Aug 2, 2022 at 10:17, Camaleón (noela...@gmail.com) wrote: > > Hi, > Attached is the translation. > Regards, > -- > Camaleón -- ****** Hector Colina. Debian user, aka e1th0r Santiago de Chile Key fingerprint = E81B 8228 8919 EE27 85B7 A59B 357F 81F5 5CFC B481 Long live and prosperity! es.po.diff Description: Binary data
Re: usbhidaction(1) is unveil(2)ed too strictly to run programs.
Hi Ricardo: I tested the patch and it's working great. The solution seems obvious now that I see it :). suzaku@burningdawn:~ $ > doas rcctl stop usbhidaction suzaku@burningdawn:~ $ > doas usbhidaction -v -c /etc/usbhidaction.conf -f /dev/uhid2 PARSE:1 Consumer:Volume_Increment, 1, 'sndioctl output.level=+0.05' PARSE:2 Consumer:Volume_Decrement, 1, 'sndioctl output.level=-0.05' PARSE:3 Consumer:Mute, 1, 'sndioctl output.mute=!' PARSE:4 Consumer:Play/Pause, 1, 'mpc -q toggle' PARSE:5 Consumer:Scan_Previous_Track, 1, 'mpc -q prev' PARSE:6 Consumer:Scan_Next_Track, 1, 'mpc -q next' PARSE:7 Consumer:Random_Play, 1, 'mpc -q random' PARSE:8 Consumer:Stop, 1, 'mpc -q stop' PARSE:9 Consumer:Fast_Forward, 1, 'mpc -q seek +10' PARSE:10 Consumer:Rewind, 1, 'mpc -q seek -10' report size 2 executing 'mpc -q toggle' executing 'mpc -q prev' executing 'mpc -q random' executing 'mpc -q next' executing 'mpc -q seek -10' executing 'mpc -q seek +10' executing 'mpc -q stop' executing 'sndioctl output.level=+0.05' output.level=0.392 executing 'sndioctl output.mute=!' output.mute=0 executing 'sndioctl output.level=-0.05' output.level=0.341 ^C suzaku@burningdawn:~ $ > Thanks for the patch. Regards. HV On Mon, Aug 01, 2022 at 12:11:48PM +0100, Ricardo Mestre wrote: > ouch, how did I miss the call to execl(3) on docmd()? silly me! > > OK? > > Index: usbhidaction.c > === > RCS file: /cvs/src/usr.bin/usbhidaction/usbhidaction.c,v > retrieving revision 1.24 > diff -u -p -u -r1.24 usbhidaction.c > --- usbhidaction.c15 Dec 2021 11:23:09 - 1.24 > +++ usbhidaction.c1 Aug 2022 11:08:31 - > @@ -166,6 +166,8 @@ main(int argc, char **argv) > > if (unveil(conf, "r") == -1) > err(1, "unveil %s", conf); > + if (unveil(_PATH_BSHELL, "x") == -1) > + err(1, "unveil %s", _PATH_BSHELL); > if (unveil(NULL, NULL) == -1) > err(1, "unveil"); > > > > On 15:42 Sat 30 Jul , Theo de Raadt wrote: > > I suspect it should unveil("/", "x") > > > > It is better than not doing anything. > >
[PATCH] nvme: Do a clean NVMe shutdown
The brute-force controller disable method can end up racing controller initialization and causing a crash when we shut down Apple ANS2 NVMe controllers. Do a proper controlled shutdown, which does block until things are quiesced properly. This is nicer in general for all controllers.

Signed-off-by: Hector Martin
---
 drivers/nvme/nvme.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index a305305885ec..5fd2fb9ed6a6 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -27,9 +27,8 @@
 #define IO_TIMEOUT 30
 #define MAX_PRP_POOL 512
 
-static int nvme_wait_ready(struct nvme_dev *dev, bool enabled)
+static int nvme_wait_csts(struct nvme_dev *dev, u32 mask, u32 val)
 {
-	u32 bit = enabled ? NVME_CSTS_RDY : 0;
 	int timeout;
 	ulong start;
 
@@ -38,7 +37,7 @@ static int nvme_wait_ready(struct nvme_dev *dev, bool enabled)
 	start = get_timer(0);
 	while (get_timer(start) < timeout) {
-		if ((readl(&dev->bar->csts) & NVME_CSTS_RDY) == bit)
+		if ((readl(&dev->bar->csts) & mask) == val)
 			return 0;
 	}
 
@@ -295,7 +294,7 @@ static int nvme_enable_ctrl(struct nvme_dev *dev)
 	dev->ctrl_config |= NVME_CC_ENABLE;
 	writel(dev->ctrl_config, &dev->bar->cc);
 
-	return nvme_wait_ready(dev, true);
+	return nvme_wait_csts(dev, NVME_CSTS_RDY, NVME_CSTS_RDY);
 }
 
 static int nvme_disable_ctrl(struct nvme_dev *dev)
@@ -304,7 +303,16 @@ static int nvme_disable_ctrl(struct nvme_dev *dev)
 	dev->ctrl_config &= ~NVME_CC_ENABLE;
 	writel(dev->ctrl_config, &dev->bar->cc);
 
-	return nvme_wait_ready(dev, false);
+	return nvme_wait_csts(dev, NVME_CSTS_RDY, 0);
+}
+
+static int nvme_shutdown_ctrl(struct nvme_dev *dev)
+{
+	dev->ctrl_config &= ~NVME_CC_SHN_MASK;
+	dev->ctrl_config |= NVME_CC_SHN_NORMAL;
+	writel(dev->ctrl_config, &dev->bar->cc);
+
+	return nvme_wait_csts(dev, NVME_CSTS_SHST_MASK, NVME_CSTS_SHST_CMPLT);
 }
 
 static void nvme_free_queue(struct nvme_queue *nvmeq)
@@ -904,6 +912,13 @@ free_nvme:
 int nvme_shutdown(struct udevice *udev)
 {
 	struct nvme_dev *ndev = dev_get_priv(udev);
+	int ret;
+
+	ret = nvme_shutdown_ctrl(ndev);
+	if (ret < 0) {
+		printf("Error: %s: Shutdown timed out!\n", udev->name);
+		return ret;
+	}
 
 	return nvme_disable_ctrl(ndev);
 }
-- 
2.35.1
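The core of the refactor above is replacing a ready/not-ready boolean with a single polling helper parameterized by a CSTS mask and expected value, so the same loop serves enable (RDY=1), disable (RDY=0), and shutdown (SHST=complete). A rough sketch of that idea outside the driver, with the register read simulated (Python stand-in, not the actual C code; the NVME_CSTS_* constants mirror the NVMe spec bit definitions):

```python
import time

# Stand-in for the patch's nvme_wait_csts() helper: poll a status
# register until (csts & mask) == val, or give up after a timeout.
NVME_CSTS_RDY = 0x1          # CSTS.RDY: controller ready
NVME_CSTS_SHST_MASK = 0xC    # CSTS.SHST: shutdown status field (bits 3:2)
NVME_CSTS_SHST_CMPLT = 0x8   # shutdown processing complete

def wait_csts(read_csts, mask, val, timeout_s=1.0):
    """Return 0 once the masked status matches, -1 on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if (read_csts() & mask) == val:
            return 0
    return -1

# Enable waits for RDY=1, disable for RDY=0, shutdown for SHST=complete:
assert wait_csts(lambda: 0x1, NVME_CSTS_RDY, NVME_CSTS_RDY) == 0
assert wait_csts(lambda: 0x8, NVME_CSTS_SHST_MASK, NVME_CSTS_SHST_CMPLT) == 0
assert wait_csts(lambda: 0x0, NVME_CSTS_RDY, NVME_CSTS_RDY, timeout_s=0.01) == -1
```

The mask/value pair is what lets the shutdown path wait on a multi-bit field (SHST) rather than a single ready bit, which the old boolean interface could not express.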
usbhidaction(1) is unveil(2)ed too strictly to run programs.
Hello Misc,

TL;DR: usbhidaction(1) is unveil(2)ed too strictly to run programs.

I'm running:

kern.version=OpenBSD 7.1 (GENERIC.MP) #3: Sun May 15 10:27:01 MDT 2022
    r...@syspatch-71-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP

Complete dmesg at the bottom.

I use usbhidaction to make some global mappings for mpd. My usbhidaction.conf looks something like this:

/etc/usbhidaction.conf:
Consumer:Volume_Increment    1    sndioctl output.level=+0.05
Consumer:Volume_Decrement    1    sndioctl output.level=-0.05
Consumer:Mute                1    sndioctl output.mute=!
Consumer:Play/Pause          1    mpc -q toggle
Consumer:Scan_Previous_Track 1    mpc -q prev
Consumer:Scan_Next_Track     1    mpc -q next
Consumer:Random_Play         1    mpc -q random
Consumer:Stop                1    mpc -q stop
Consumer:Fast_Forward        1    mpc -q seek +10
Consumer:Rewind              1    mpc -q seek -10

The reason for using usbhidaction (as opposed to regular X binds) is that I'm not always running X. My GPU freezes every now and then (amdgpu), so most of the time I'm running X-less. Basically, I like those binds to be consistent whether I'm running X or not.

In 7.0, ucc(4) was introduced. This driver works for my USB ThinkPad keyboard but not with a home-made one with custom firmware. Both work with usbhidaction.

7.0, if I remember correctly, also added unveil to usbhidaction. The unveil does its job flawlessly, completely blocking access to anything other than the config file, but it also blocks access to any programs configured in it, defeating the point of usbhidaction.

The question is then: what's the best approach to solve this?

1. Completely remove usbhidaction's unveil call. This would decrease security, so I'm sure it's not an option.

2. Unveil each of the programs named in the config file. This would work for the initial setup, but if usbhidaction gets a SIGHUP it won't be able to unveil new programs named in the config file. That in turn forces a restart of the service, defeating the point of reloading. Still, it's an improvement over it not working at all.
3. Fix my custom keyboard so it works with ucc. This I will do, as there's obviously something wrong in how I report the keys. But I don't know if there's a way to tell ucc what to do on keypresses: if I have mpd and mpv running, which one should react? Can I map this outside of X as well?

4. Or I'm using usbhidaction wrong and should fix my setup, in which case I'd like some pointers on how to do so.

For the time being, I disable ucc on boot and have patched the unveil calls out of usbhidaction. It's working fine and I don't mind a few patches, but I suspect there's a better way to deal with this.

Regards,
HV

--
OpenBSD 7.1 (GENERIC.MP) #3: Sun May 15 10:27:01 MDT 2022
    r...@syspatch-71-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 8532971520 (8137MB)
avail mem = 8257073152 (7874MB)
random: good seed from bootblocks
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xed530 (58 entries)
bios0: vendor American Megatrends Inc. version "F3" date 04/01/2015
bios0: Gigabyte Technology Co., Ltd. 990FXA-UD5 R5
acpi0 at bios0: ACPI 4.0
acpi0: sleep states S0 S3 S4 S5
acpi0: tables DSDT FACP APIC FPDT MCFG HPET SSDT
acpi0: wakeup devices SBAZ(S4) P0PC(S4) GEC_(S4) UHC1(S4) UHC2(S4) USB3(S4) UHC4(S4) USB5(S4) UHC6(S4) UHC7(S4) PE20(S4) GBE_(S4) PE21(S4) PE22(S4) PE23(S4) PC02(S4)
[...]
acpitimer0 at acpi0: 3579545 Hz, 32 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 16 (boot processor)
cpu0: AMD FX(tm)-4170 Quad-Core Processor, 4219.97 MHz, 15-01-02
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,MWAIT,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,AES,XSAVE,AVX,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,EAPICSP,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,IBS,XOP,SKINIT,WDT,FMA4,NODEID,TOPEXT,CPCTR,ITSC
cpu0: 64KB 64b/line 2-way I-cache, 16KB 64b/line 4-way D-cache, 2MB 64b/line 16-way L2 cache
cpu0: ITLB 48 4KB entries fully associative, 24 4MB entries fully associative
cpu0: DTLB 32 4KB entries fully associative, 32 4MB entries fully associative
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
cpu0: apic clock running at 200MHz
cpu0: mwait min=64, max=64, IBE
cpu1 at mainbus0: apid 17 (application processor)
cpu1: AMD FX(tm)-4170 Quad-Core Processor, 421.85 MHz, 15-01-02
cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,MWAIT,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,AES,XSAVE,AVX,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,EAPICSP,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,IBS,XOP,SKINIT,WDT,FMA4,NODEID,TOPEXT,CPCTR,ITSC
cpu1: 64KB 64b/line 2-way I-cache,
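For option 2 in the message above (unveil each program named in the config), the program names would have to be collected at config-parse time. A rough sketch of that extraction, as a Python stand-in for what usbhidaction's C parser would have to do (hypothetical helper, not usbhidaction's actual code):

```python
# Collect the programs named in a usbhidaction.conf-style file, so each
# could be unveil(2)ed with "x" permission at startup (option 2 above).
# Hypothetical helper; usbhidaction's real parser is in C.
def programs_in_config(text):
    progs = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Fields: usage item, value, then the command line to run.
        parts = line.split(None, 2)
        if len(parts) == 3:
            progs.add(parts[2].split()[0])
    return progs

conf = """\
Consumer:Volume_Increment 1 sndioctl output.level=+0.05
Consumer:Play/Pause 1 mpc -q toggle
Consumer:Stop 1 mpc -q stop
"""
assert programs_in_config(conf) == {"sndioctl", "mpc"}
```

The SIGHUP problem the message describes follows directly from this: once unveil(NULL, NULL) locks the set of permitted paths, programs added to the config on reload cannot be unveiled without restarting the process.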
Re: Thoughts on logcheck?
On 30/07/22 10:20, Andy Smith wrote:
> Hello,
>
> On Fri, Jul 29, 2022 at 04:30:19PM +1200, Richard Hector wrote:
>> My thought is to configure rsyslog to create extra logfiles, equivalent to syslog and auth.log (the two files that logcheck monitors by default), which only log messages at priority 'warning' or above, and configure logcheck to monitor those instead. This should cut down the amount of filter maintenance considerably. Does this sound like a reasonable idea?
>
> Personally I wouldn't (and don't) do it. It sounds like a bunch of work only to end up with things that get logged anyway (as you noted) plus the risk of missing other interesting things.

I started by enabling the extra logs on one system. I found I saw _more_ interesting things, because they weren't hidden by mountains of other stuff. That's in the boot-time kernel messages, btw. I only got 14 lines (total, not filtered by logcheck) when I was only showing warning or higher, rather than the screeds I normally see. I never had time to go through all those, even to read and understand them, let alone write filters, deciding what was important, what wasn't, and whether the same messages with different values would be too.

I think this will be useful to me, and the work isn't much, because it's the same for every system (or at least every system that runs logcheck) and I can push it out with ansible, whereas the filters have to be much more system- (or service-)specific. The full logs are of course still there if I need to go back and look for something.

> I don't find writing logcheck filters to be a particularly big time sink. But if you do then it might alter the balance for you.

Thanks for your input :-)

Richard
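For context on the filter work being traded off in this thread: logcheck ignore rules are one extended regex per line (typically under /etc/logcheck/ignore.d.server/), and logcheck drops any log line a rule matches. A hypothetical rule for one noisy sshd message, checked here with Python's re module as a stand-in for logcheck's egrep-style matching (the rule text and log line are illustrative, not shipped logcheck rules):

```python
import re

# Hypothetical logcheck-style ignore rule: drop "connection closed
# before auth" noise from sshd (e.g. someone telnetting to port 22).
rule = re.compile(
    r"^[A-Z][a-z]{2} [ 0-9]{2} [0-9:]{8} \S+ sshd\[[0-9]+\]: "
    r"Connection (reset|closed) by \S+ port [0-9]+( \[preauth\])?$"
)

noise = "Jul 29 16:30:19 myhost sshd[1234]: Connection closed by 192.0.2.7 port 51515 [preauth]"
signal = "Jul 29 16:30:19 myhost sshd[1234]: Accepted publickey for root from 192.0.2.7"

assert rule.match(noise)        # would be suppressed
assert not rule.match(signal)   # would still be reported
```

Anchoring at both ends and matching the PID and port with character classes (rather than literal values) is what keeps one rule covering the "same message with different values" case discussed above.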
Thoughts on logcheck?
Hi all,

I've used logcheck for ages, to email me about potential problems from my log files. I end up spending a lot of time scanning the emails, and then occasionally a bunch of time updating the filter rules to stop most of those messages coming through.

My thought is to configure rsyslog to create extra logfiles, equivalent to syslog and auth.log (the two files that logcheck monitors by default), which only log messages at priority 'warning' or above, and configure logcheck to monitor those instead. This should cut down the amount of filter maintenance considerably.

Does this sound like a reasonable idea?

A quick test does show that I'll still get messages I can't do much about - e.g. I telnetted to the ssh port and closed the connection, and my logfile reported that interaction as an error. That kind of thing should still be easily filtered, though.

I think I'd want to create a completely fresh set of filters, rather than using the supplied defaults, but I'm not sure about that yet.

Cheers,
Richard
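The rsyslog side of this idea could look something like the following fragment (file and log names are illustrative, not a tested configuration; the selectors use rsyslog's traditional facility.priority syntax, where '.warn' means warning and above):

```
# /etc/rsyslog.d/60-warn-only.conf (hypothetical name)
# Warning-and-above mirrors of syslog and auth.log, for logcheck.
*.warn;auth,authpriv.none    -/var/log/syslog.warn
auth,authpriv.warn            /var/log/auth.warn.log
```

logcheck would then be pointed at these two files instead of syslog and auth.log via /etc/logcheck/logcheck.logfiles.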
[jenkinsci/nexus-platform-plugin]
Branch: refs/heads/bump-innersource-dependencies-2e24e1
Home: https://github.com/jenkinsci/nexus-platform-plugin

--
You received this message because you are subscribed to the Google Groups "Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/bump-innersource-dependencies-2e24e1/fadb7f-00%40github.com.
[jenkinsci/nexus-platform-plugin]
Branch: refs/heads/INT-6954-improve-policy-evaluation-summary-message
Home: https://github.com/jenkinsci/nexus-platform-plugin

To view this discussion on the web visit https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-6954-improve-policy-evaluation-summary-message/8dbfa3-00%40github.com.
[jenkinsci/nexus-platform-plugin] e34f33: INT-6954 Improving summary message for policy eval...
Branch: refs/heads/main
Home: https://github.com/jenkinsci/nexus-platform-plugin
Commit: e34f3389728ba103f1ecfbeb9c80cf5394d72cd4
    https://github.com/jenkinsci/nexus-platform-plugin/commit/e34f3389728ba103f1ecfbeb9c80cf5394d72cd4
Author: Hector Danilo Hurtado Olaya
Date: 2022-07-27 (Wed, 27 Jul 2022)

Changed paths:
  M src/main/java/org/sonatype/nexus/ci/iq/PolicyEvaluationHealthAction.groovy
  M src/main/resources/org/sonatype/nexus/ci/iq/Messages.properties
  M src/main/resources/org/sonatype/nexus/ci/iq/PolicyEvaluationHealthAction/summary.groovy
  M src/main/resources/org/sonatype/nexus/ci/iq/PolicyEvaluationProjectAction/jobMain.groovy
  M src/test/resources/org/sonatype/nexus/ci/quality/RuleSetAll.groovy

Log Message:
-----------
INT-6954 Improving summary message for policy evaluations (#214)

To view this discussion on the web visit https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/main/2bc0bc-e34f33%40github.com.
[jenkinsci/nexus-platform-plugin] 8dbfa3: INT-6954 Improving summary message for policy eval...
Branch: refs/heads/INT-6954-improve-policy-evaluation-summary-message
Home: https://github.com/jenkinsci/nexus-platform-plugin
Commit: 8dbfa38af8128984fe8a253e2c4e1b8d09495b55
    https://github.com/jenkinsci/nexus-platform-plugin/commit/8dbfa38af8128984fe8a253e2c4e1b8d09495b55
Author: Hector Hurtado
Date: 2022-07-26 (Tue, 26 Jul 2022)

Changed paths:
  M src/main/java/org/sonatype/nexus/ci/iq/PolicyEvaluationHealthAction.groovy
  M src/main/resources/org/sonatype/nexus/ci/iq/Messages.properties
  M src/main/resources/org/sonatype/nexus/ci/iq/PolicyEvaluationHealthAction/summary.groovy
  M src/main/resources/org/sonatype/nexus/ci/iq/PolicyEvaluationProjectAction/jobMain.groovy
  M src/test/resources/org/sonatype/nexus/ci/quality/RuleSetAll.groovy

Log Message:
-----------
INT-6954 Improving summary message for policy evaluations

To view this discussion on the web visit https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-6954-improve-policy-evaluation-summary-message/00-8dbfa3%40github.com.
Re: [DNG] OpenVPN 2.5.1-3+devuan1 packaging vs best practices
On 7/26/22 10:00, Ken Dibble wrote:
> On 7/25/22 09:29, Ken Dibble wrote:
>> This is the first time I have seen this with any package. I have no idea whether it has happened with packages not installed on my systems. It is my understanding that best practice is noexec on /tmp and that this is a Debian recommendation. Here is the relevant line from /etc/fstab:
>>
>> tmpfs /tmp tmpfs defaults,noatime,mode=1777,nosuid,noexec,nodev 0 0
>>
>> Here is the error message:
>>
>> sudo apt-get dist-upgrade
>> [...]
>> Preconfiguring packages ...
>> Can't exec "/tmp/openvpn.config.NDxHMl": Permission denied at /usr/lib/x86_64-linux-gnu/perl-base/IPC/Open3.pm line 178.
>> open2: exec of /tmp/openvpn.config.NDxHMl configure 2.5.1-3+devuan1 failed: Permission denied at /usr/share/perl5/Debconf/ConfModule.pm line 59.
>> [...]
>>
>> The (apparent) recommendation from bug report 129289 in 2002 is to set APT::ExtractTemplates::TempDir in apt.conf to some directory which is mounted with exec, and:
>>
>> "As of version 0.5.8, apt supports TMPDIR for determining where apt-extracttemplates puts its temporary files. If you have a noexec /tmp, use this or other documented means to make apt-extracttemplates use a directory that does accept executables."
>>
>> As of 2018, Bug #887099, merged with sundry other bug reports of the same type:
>>
>> Control: reassign -1 debconf 1.5.61
>> Control: forcemerge 566247 -1
>> "This appears to be a generic issue in debconf, so I'm reassigning it to debconf and merging it with the existing bugs tracking the same issue."
>>
>> There doesn't seem to be any activity after that. Is there a best practice for the method of selecting and setting this directory?
>>
>> Thanks,
>> Ken
>
> Replying to my own message:
>
> It appears that this problem with debconf has been around for two decades and the maintainers are at odds with the Debian position about /tmp and noexec. That being said, I am going with
>
> echo "APT::ExtractTemplates::TempDir \"/var/tmp\";" >/etc/apt/apt.conf.d/50extracttemplates
>
> unless someone has a better idea or a reason not to.
> I am aware that Debian does not by default clean up /var/tmp and it will be my responsibility to check it for things left around.

This would just make /var/tmp the target for attacks instead of /tmp. If you protect /tmp with noexec, you should do the same with /var/tmp. I think you could use any root-writable dir; I don't see why it would need to be writable by all users if apt* is running as root.

If you think it's simpler, you can create a file, say /etc/apt/apt.conf.d/99-remounttmp.conf, with this:

DPkg
{
    // Auto re-mounting of an exec-only /tmp
    Pre-Invoke { "mount -o remount,exec /tmp"; };
    Post-Invoke { "test ${NO_APT_REMOUNT:-no} = yes || mount -o remount,noexec /tmp || true"; };
};

I don't remember where I found this, but I have used it for a while.

> Thanks,
> Ken

-- 
Hector Gonzalez
ca...@genac.org

___
Dng mailing list
Dng@lists.dyne.org
https://mailinglists.dyne.org/cgi-bin/mailman/listinfo/dng
[jenkinsci/nexus-platform-plugin]
Branch: refs/heads/INT-7002-check-compatibility-with-jenkins-2.346.2
Home: https://github.com/jenkinsci/nexus-platform-plugin

To view this discussion on the web visit https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/INT-7002-check-compatibility-with-jenkins-2.346.2/226b91-00%40github.com.
[jenkinsci/nexus-platform-plugin] 2bc0bc: Adding tests for Jenkins 2.346.2 (#213)
Branch: refs/heads/main
Home: https://github.com/jenkinsci/nexus-platform-plugin
Commit: 2bc0bc7c72e99dba8f24234fa972db680babf113
    https://github.com/jenkinsci/nexus-platform-plugin/commit/2bc0bc7c72e99dba8f24234fa972db680babf113
Author: Hector Danilo Hurtado Olaya
Date: 2022-07-25 (Mon, 25 Jul 2022)

Changed paths:
  M Jenkinsfile.sonatype.extra-tests

Log Message:
-----------
Adding tests for Jenkins 2.346.2 (#213)

To view this discussion on the web visit https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/nexus-platform-plugin/push/refs/heads/main/7b777b-2bc0bc%40github.com.
Bug#1015887: debian-installer: Adding https repo doesn't work without manually installing ca-certificates
On 23/07/22 23:01, Cyril Brulebois wrote:
> As mentioned by Julien, getting the installer's syslog (compressed, to make sure it reaches the mailing list) would help understand what's going on.

Oh - uncompressed, it made it into the BTS, but not to the list. Here's a compressed version.

Cheers,
Richard

[Attachment: syslog.gz (application/gzip)]
Bug#1015887: debian-installer: Adding https repo doesn't work without manually installing ca-certificates
On 23/07/22 18:07, Geert Stappers wrote:
> Control: severity -1 wishlist

Why? Because there's a workaround? Is everyone expected to be able to find that workaround?

https is an option provided in the installer, that apparently doesn't work (at least with the netinst installer), and it's not immediately clear why. Essentially, I think it's a showstopper for anyone who doesn't know how to investigate further.

>> It all works fine using http for the mirror.
> And the archive mirror content is secured by checksums and signatures.

The point being that https isn't necessary? A different issue, I think.

>> I'm happy to do further testing with the VM; the thin client is less convenient as it has a job to do.
> Another job that will help: Find other bug reports that ask for installing ca-certificates. Yeah, I recall have I seen such requests before.

Not sure how to do that. The BTS UI doesn't seem to allow searching on the content of bug discussions; only subject and other metadata. I can't see any other debian-installer bugs that mention ca-certificates in the subject.

Cheers,
Richard
Bug#1015887: debian-installer: Adding https repo doesn't work without manually installing ca-certificates
Package: debian-installer
Severity: important

Dear Maintainer,

Using the netinst bullseye 11.4 installer:
https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-11.4.0-amd64-netinst.iso

I chose to add a network mirror, using https, and the default 'deb.debian.org'. I used (non-graphical) Expert Mode.

The problem first showed up when tasksel only displayed 'standard system utilities'. When I went ahead with that, the next screen was a red 'Installation step failed' screen. The log on tty4 showed various dependency problems.

I tried to 'chroot /target' and 'apt update', which showed certificate problems. I then ran 'apt install ca-certificates', which worked (installing from the cd image?), after which 'apt update' worked, and I was also able to continue successfully with the installer.

I was able to reproduce this in a (kvm/qemu) VM (which is where I confirmed my steps); the original problem was on an HP Thin Client (t520). In both cases only 8G of storage was available.

It all works fine using http for the mirror.

I'm happy to do further testing with the VM; the thin client is less convenient as it has a job to do.

-- System Information:
Debian Release: 11.4
  APT prefers stable-updates
  APT policy: (500, 'stable-updates'), (500, 'stable-security'), (500, 'stable')
Architecture: amd64 (x86_64)
Kernel: Linux 5.10.0-16-amd64 (SMP w/2 CPU threads)
Locale: LANG=en_NZ.UTF-8, LC_CTYPE=en_NZ.UTF-8 (charmap=UTF-8), LANGUAGE=en_NZ:en
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled