Re: [PR] [SPARK-48391][CORE]Using addAll instead of add function in fromAccumulatorInfos method of TaskMetrics Class [spark]
JoshRosen commented on code in PR #46705: URL: https://github.com/apache/spark/pull/46705#discussion_r1613904865

## core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:

```diff
@@ -328,16 +328,15 @@ private[spark] object TaskMetrics extends Logging {
    */
   def fromAccumulators(accums: Seq[AccumulatorV2[_, _]]): TaskMetrics = {
     val tm = new TaskMetrics
-    for (acc <- accums) {
+    val (innerAccums, externalAccums) = accums.
+      partition(t => t.name.isDefined && tm.nameToAccums.contains(t.name.get))
```

Review Comment: Just thinking aloud here: do we do a lot of intermediate collection allocations in this `.partition` operation itself? E.g., under the hood, would we be resizing an array builder or doing a bunch of linked-list construction operations? I am wondering whether it might instead be faster (or at least more predictable / easier to reason about) to construct a Java `ArrayList`, imperatively append the external accumulators to it, then do a single `addAll` call at the end, e.g. something like:

```scala
def fromAccumulators(accums: Seq[AccumulatorV2[_, _]]): TaskMetrics = {
  val tm = new TaskMetrics
  val externalAccums = new java.util.ArrayList[AccumulatorV2[Any, Any]]()
  for (acc <- accums) {
    val name = acc.name
    if (name.isDefined && tm.nameToAccums.contains(name.get)) {
      val tmAcc = tm.nameToAccums(name.get).asInstanceOf[AccumulatorV2[Any, Any]]
      tmAcc.metadata = acc.metadata
      tmAcc.merge(acc.asInstanceOf[AccumulatorV2[Any, Any]])
    } else {
      externalAccums.add(acc)
    }
  }
  tm._externalAccums.addAll(externalAccums)
  tm
}
```

This is less net code churn and is probably either comparable in performance or faster, and it avoids any performance unpredictability as a function of the Seq type.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
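The imperative alternative JoshRosen sketches above — handle internal items in place, buffer the external ones, then append them with one `addAll` — generalizes beyond Spark. A minimal Java sketch of that single-pass pattern, with plain integers standing in for accumulators (all names here are hypothetical, not Spark APIs):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Predicate;

public class SinglePassPartition {

    // Partition items in one pass: "internal" items are handled immediately
    // (standing in for tmAcc.merge(acc)), the rest are buffered locally and
    // appended to the copy-on-write target with a single addAll call, so the
    // expensive backing-array copy happens at most once.
    static <T> void mergeOrBuffer(List<T> input, Predicate<T> isInternal,
                                  List<T> internalOut, CopyOnWriteArrayList<T> externalOut) {
        List<T> buffer = new ArrayList<>();
        for (T item : input) {
            if (isInternal.test(item)) {
                internalOut.add(item);
            } else {
                buffer.add(item);
            }
        }
        externalOut.addAll(buffer); // one copy instead of buffer.size() copies
    }

    public static void main(String[] args) {
        CopyOnWriteArrayList<Integer> external = new CopyOnWriteArrayList<>();
        List<Integer> internal = new ArrayList<>();
        mergeOrBuffer(List.of(1, 2, 3, 4, 5), n -> n % 2 == 0, internal, external);
        System.out.println(internal); // [2, 4]
        System.out.println(external); // [1, 3, 5]
    }
}
```

Unlike `partition`, this allocates exactly one intermediate buffer regardless of the input `Seq`/`List` type, which is the predictability the comment is after.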
Re: [PR] RATIS-2096. Add a conf to enable/disable zero copy. [ratis]
szetszwo merged PR #1099: URL: https://github.com/apache/ratis/pull/1099
Re: [PR] GUACAMOLE-1949 Nextcloud JWT Auth extension [guacamole-client]
pp7En commented on code in PR #984: URL: https://github.com/apache/guacamole-client/pull/984#discussion_r1613902183 ## extensions/guacamole-auth-nextcloud/src/main/java/org/apache/guacamole/auth/nextcloud/connection/ConnectionService.java: ## @@ -0,0 +1,309 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + + +package org.apache.guacamole.auth.nextcloud.connection; + +import com.google.inject.Inject; +import com.google.inject.Singleton; + +import java.util.ArrayList; +import java.util.Collection; +import java.util.Collections; +import java.util.Map; +import java.util.concurrent.ConcurrentHashMap; + +import org.apache.guacamole.GuacamoleException; +import org.apache.guacamole.GuacamoleResourceNotFoundException; +import org.apache.guacamole.GuacamoleServerException; +import org.apache.guacamole.auth.nextcloud.user.UserData; +import org.apache.guacamole.environment.Environment; +import org.apache.guacamole.io.GuacamoleReader; +import org.apache.guacamole.io.GuacamoleWriter; +import org.apache.guacamole.net.GuacamoleSocket; +import org.apache.guacamole.net.GuacamoleTunnel; +import org.apache.guacamole.net.InetGuacamoleSocket; +import org.apache.guacamole.net.SSLGuacamoleSocket; +import org.apache.guacamole.net.SimpleGuacamoleTunnel; +import org.apache.guacamole.net.auth.GuacamoleProxyConfiguration; +import org.apache.guacamole.protocol.ConfiguredGuacamoleSocket; +import org.apache.guacamole.protocol.GuacamoleClientInformation; +import org.apache.guacamole.protocol.GuacamoleConfiguration; +import org.apache.guacamole.token.TokenFilter; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Service which provides a centralized means of establishing connections, + * tracking/joining active connections, and retrieving associated data. + */ +@Singleton +public class ConnectionService { + +/** + * Logger for this class. + */ +private static final Logger logger = LoggerFactory.getLogger(ConnectionService.class); + +/** + * The Guacamole server environment. + */ +@Inject +private Environment environment; + +/** + * Mapping of the unique IDs of active connections (as specified within the + * UserData.Connection object) to the underlying connection ID (as returned + * via the Guacamole protocol handshake). 
Only connections with defined IDs + * are tracked here. + */ +private final ConcurrentHashMap activeConnections = Review Comment: Yes, I used guacamole-auth-json as a template for the extension. I forgot to clean it up properly before creating a pull request. I'm sorry about that.
Re: [PR] GH-40930: [Java] Implement a function to retrieve reference buffers in StringView [arrow]
conbench-apache-arrow[bot] commented on PR #41796: URL: https://github.com/apache/arrow/pull/41796#issuecomment-2130187420 After merging your PR, Conbench analyzed the 6 benchmarking runs that have been run so far on merge-commit 8a76082e3a4b31ba74063093a3f279726625e245. There were no benchmark performance regressions. The [full Conbench report](https://github.com/apache/arrow/runs/25393483187) has more details. It also includes information about 54 possible false positives for unstable benchmarks that are known to sometimes produce them.
Re: [PR] Introduces efSearch as a separate parameter in KNN{Byte:Float}VectorQuery [lucene]
shatejas commented on code in PR #13407: URL: https://github.com/apache/lucene/pull/13407#discussion_r1613901301

## lucene/core/src/java/org/apache/lucene/search/AbstractKnnVectorQuery.java:

```diff
@@ -54,14 +54,20 @@ abstract class AbstractKnnVectorQuery extends Query {

   protected final String field;
   protected final int k;
+  protected final int efSearch;
   private final Query filter;

-  public AbstractKnnVectorQuery(String field, int k, Query filter) {
+  public AbstractKnnVectorQuery(String field, int k, int efSearch, Query filter) {
```

Review Comment: We are forcing the user to think about what the efSearch value should be. The existing implementations of this abstract class are updated to keep using k as the efSearch value to maintain backward compatibility.
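The backward-compatibility strategy described above — existing callers implicitly keep `efSearch == k` while new callers must choose — is the classic constructor-overload pattern. A hypothetical sketch, not the actual Lucene class (the class name and the `efSearch >= k` validation are assumptions for illustration; the PR itself may validate differently):

```java
// Hypothetical stand-in for a kNN query type: the old two-int-free
// constructor delegates to the new one with efSearch defaulted to k,
// so existing call sites compile and behave exactly as before.
class KnnQuery {
    final String field;
    final int k;
    final int efSearch;

    // Legacy signature: backward compatible, efSearch defaults to k.
    KnnQuery(String field, int k) {
        this(field, k, k);
    }

    // New signature: forces the caller to pick an efSearch value.
    KnnQuery(String field, int k, int efSearch) {
        if (efSearch < k) {
            // Assumed invariant: searching fewer candidates than k
            // can never return k results.
            throw new IllegalArgumentException("efSearch must be >= k");
        }
        this.field = field;
        this.k = k;
        this.efSearch = efSearch;
    }
}
```

Usage: `new KnnQuery("vec", 10)` behaves like the pre-change code (`efSearch == 10`), while `new KnnQuery("vec", 10, 50)` widens the HNSW candidate search without changing the result count.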
Re: [PR] GUACAMOLE-1949 Nextcloud JWT Auth extension [guacamole-client]
pp7En commented on code in PR #984: URL: https://github.com/apache/guacamole-client/pull/984#discussion_r1613900234 ## extensions/guacamole-auth-nextcloud/src/main/java/org/apache/guacamole/auth/nextcloud/user/UserDataService.java: ## @@ -0,0 +1,370 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.apache.guacamole.auth.nextcloud.user; + +import com.fasterxml.jackson.databind.ObjectMapper; +import com.google.inject.Inject; +import com.google.inject.Provider; +import com.google.inject.Singleton; + +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; +import java.util.Set; + +import org.apache.guacamole.GuacamoleException; +import org.apache.guacamole.auth.nextcloud.ConfigurationService; +import org.apache.guacamole.auth.nextcloud.RequestValidationService; +import org.apache.guacamole.net.auth.Connection; +import org.apache.guacamole.net.auth.Credentials; +import org.apache.guacamole.net.auth.Directory; +import org.apache.guacamole.net.auth.User; +import org.apache.guacamole.net.auth.permission.ObjectPermissionSet; +import org.apache.guacamole.net.auth.simple.SimpleDirectory; +import org.apache.guacamole.net.auth.simple.SimpleObjectPermissionSet; +import org.apache.guacamole.net.auth.simple.SimpleUser; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Service for deriving Guacamole extension API data from UserData objects. + */ +@Singleton +public class UserDataService { + +/** + * Logger for this class. + */ +private static final Logger logger = LoggerFactory.getLogger(UserDataService.class); + +/** + * ObjectMapper for deserializing UserData objects. + */ +private static final ObjectMapper mapper = new ObjectMapper(); + +/** + * Denylist of single-use user data objects which have already been used. + */ +private final UserDataDenylist denylist = new UserDataDenylist(); + +/** + * Service for retrieving configuration information regarding the + * NextcloudJwtAuthenticationProvider. + */ +@Inject +private ConfigurationService confService; + +/** + * Service for testing the validity of HTTP requests. + */ +@Inject +private RequestValidationService requestService; + +/** + * Service for handling cryptography-related operations. 
+ */ +//@Inject +//private CryptoService cryptoService; Review Comment: Thanks, has also been removed.
Re: [PR] MSHADE-147: Add flag to disable jar signing verification [maven-shade-plugin]
mauro-rizzi-DSP commented on PR #122: URL: https://github.com/apache/maven-shade-plugin/pull/122#issuecomment-2130184980 Hey, I'm running into this while trying to shade one of my projects. I think not only should this be finished so we get the option to avoid the failure, but the logging also needs to be more informative than a generic "Invalid signature file digest for Manifest main attributes". If you're going to tell me one or more of my dependencies has an invalid signature, you should at least tell me which ones, so I can take action on that instead of filtering out the signature files as if they had no use.
Re: [I] Echarts keep rendering blank page when i tried to integrate it in vue 3, any help would be appreciated [echarts]
JimmyAx commented on issue #19426: URL: https://github.com/apache/echarts/issues/19426#issuecomment-2130184861 This might be #19414
Re: [PR] GUACAMOLE-1949 Nextcloud JWT Auth extension [guacamole-client]
pp7En commented on code in PR #984: URL: https://github.com/apache/guacamole-client/pull/984#discussion_r1613899283 ## extensions/guacamole-auth-nextcloud/src/main/java/org/apache/guacamole/auth/nextcloud/user/UserDataService.java: ## @@ -0,0 +1,370 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ */ + +package org.apache.guacamole.auth.nextcloud.user; + +import com.fasterxml.jackson.databind.ObjectMapper; +import com.google.inject.Inject; +import com.google.inject.Provider; +import com.google.inject.Singleton; + +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; +import java.util.Set; + +import org.apache.guacamole.GuacamoleException; +import org.apache.guacamole.auth.nextcloud.ConfigurationService; +import org.apache.guacamole.auth.nextcloud.RequestValidationService; +import org.apache.guacamole.net.auth.Connection; +import org.apache.guacamole.net.auth.Credentials; +import org.apache.guacamole.net.auth.Directory; +import org.apache.guacamole.net.auth.User; +import org.apache.guacamole.net.auth.permission.ObjectPermissionSet; +import org.apache.guacamole.net.auth.simple.SimpleDirectory; +import org.apache.guacamole.net.auth.simple.SimpleObjectPermissionSet; +import org.apache.guacamole.net.auth.simple.SimpleUser; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Service for deriving Guacamole extension API data from UserData objects. + */ +@Singleton +public class UserDataService { + +/** + * Logger for this class. + */ +private static final Logger logger = LoggerFactory.getLogger(UserDataService.class); + +/** + * ObjectMapper for deserializing UserData objects. + */ +private static final ObjectMapper mapper = new ObjectMapper(); + +/** + * Denylist of single-use user data objects which have already been used. + */ +private final UserDataDenylist denylist = new UserDataDenylist(); + +/** + * Service for retrieving configuration information regarding the + * NextcloudJwtAuthenticationProvider. + */ +@Inject +private ConfigurationService confService; + +/** + * Service for testing the validity of HTTP requests. + */ +@Inject +private RequestValidationService requestService; + +/** + * Service for handling cryptography-related operations. 
+ */ +//@Inject +//private CryptoService cryptoService; + +/** + * Provider for UserDataConnection instances. + */ +@Inject +private Provider userDataConnectionProvider; + +/** + * The name of the HTTP parameter from which base64-encoded, encrypted JSON + * data should be read. The value of this parameter, when decoded and + * decrypted, must be valid JSON prepended with the 32-byte raw binary + * signature generated through signing the JSON with the secret key using + * HMAC/SHA-256. + */ +public static final String ENCRYPTED_DATA_PARAMETER = "data"; + +/** + * Derives a new UserData object from the data contained within the given + * Credentials. If no such data is present, or the data present is invalid, + * null is returned. + * + * @param credentials + * The Credentials from which the new UserData object should be + * derived. + * + * @return + * A new UserData object derived from the data contained within the + * given Credentials, or null if no such data is present or if the data + * present is invalid. + */ +public UserData fromCredentials(Credentials credentials) { + +//String json; +//byte[] correctSignature; Review Comment: Oh, thanks for pointing that out. It has been removed.
Re: [PR] Use Duration instead of long+TimeUnit in ReadOnlyTStore.unreserve [accumulo]
DomGarguilo merged PR #4371: URL: https://github.com/apache/accumulo/pull/4371
[PR] chore(python): Fix sdist and add packaging check [arrow-nanoarrow]
paleolimbot opened a new pull request, #489: URL: https://github.com/apache/arrow-nanoarrow/pull/489 (no comment)
Re: [PR] chore(deps): bump commons-cli:commons-cli from 1.7.0 to 1.8.0 [shiro]
lprimak merged PR #1496: URL: https://github.com/apache/shiro/pull/1496
Re: [PR] [SPARK-48391][CORE]Using addAll instead of add function in fromAccumulatorInfos method of TaskMetrics Class [spark]
JoshRosen commented on code in PR #46705: URL: https://github.com/apache/spark/pull/46705#discussion_r1613896716

## core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:

```diff
@@ -328,16 +328,15 @@ private[spark] object TaskMetrics extends Logging {
    */
   def fromAccumulators(accums: Seq[AccumulatorV2[_, _]]): TaskMetrics = {
     val tm = new TaskMetrics
-    for (acc <- accums) {
+    val (innerAccums, externalAccums) = accums.
```

Review Comment:

> tm.nameToAccums is a fixed size LinkedHashMap, why do we need create another local variable?

I think the concern was that each call to `.nameToAccums` under the hood goes through a `lazy val`, which is slightly more expensive to access than an ordinary field. On the other hand, I think the old and new code perform the same total number of `nameToAccums` accesses:

- Before, we'd call `tm.nameToAccums` once per named accumulator to check whether it was a task metric, then a second time for each task metric to retrieve the actual value.
- In the new code we perform the same calls but in a different sequence: we now do all of the initial calls in a batch, then do the second "retrieve the actual accum" calls in a second pass.

Given that we're performing the same total number of calls, I don't think the new code represents a performance regression w.r.t. those accesses, so the suggestion of storing `val nameToAccums = tm.nameToAccums` would just be a small performance optimization rather than a regression fix.
Re: [PR] chore(deps): bump commons-cli:commons-cli from 1.7.0 to 1.8.0 [shiro]
lprimak merged PR #1495: URL: https://github.com/apache/shiro/pull/1495
Re: [PR] remove processing/scheduling logic from StreamingDataflowWorker [beam]
m-trieu commented on code in PR #31317: URL: https://github.com/apache/beam/pull/31317#discussion_r1613894213 ## runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/StreamingModeExecutionContext.java: ## @@ -193,31 +180,33 @@ public void start( for (StepContext stepContext : stepContexts) { stepContext.start( stateReader, -inputDataWatermark, +work.watermarks().inputDataWatermark(), Review Comment: Done, added a TODO. I will probably update all usages where `Instant inputDataWatermark, Instant outputDatamark, Instant synchronizedProcessingTime` are used together with `Work.Watermarks`. Since this is the case, I will move `Watermarks` outside of Work.java.
Re: [PR] GUACAMOLE-600: Add support for setting SSH and SFTP timeouts. [guacamole-server]
necouchman commented on PR #414: URL: https://github.com/apache/guacamole-server/pull/414#issuecomment-2130175336 Actually, converting this back to a draft. Since I used a previous change to collect the common socket code together in a single place, this needs to be reworked a bit...
[PR] NO-ISSUE: Increase dev-deployment memory and fix dev-ui [incubator-kie-tools]
thiagoelg opened a new pull request, #2370: URL: https://github.com/apache/incubator-kie-tools/pull/2370 (no comment)
Re: [PR] Bump github/codeql-action from 3.25.5 to 3.25.6 [commons-jxpath]
garydgregory merged PR #152: URL: https://github.com/apache/commons-jxpath/pull/152
Re: [PR] Bump codecov/codecov-action from 4.4.0 to 4.4.1 [commons-jxpath]
garydgregory merged PR #151: URL: https://github.com/apache/commons-jxpath/pull/151
Re: [I] [Bug] echarts import map bug(empty canvas, but hover can see tips) [echarts]
JimmyAx commented on issue #19414: URL: https://github.com/apache/echarts/issues/19414#issuecomment-2130174480 Here is a minimal sample that will reproduce the bug. The canvas is empty but the tooltip shows up when hovered. ECharts must be imported using importmap for the bug to appear; a script tag will work as expected. JSFiddle: https://jsfiddle.net/saltyprogrammer/ovLq8cd3/1/ For reference the HTML is also provided here, should it no longer reproduce in JSFiddle: [echarts-importmap.html.txt](https://github.com/apache/echarts/files/15438780/echarts-importmap.html.txt)
Re: [I] Add filter feature for DataSource section of Router page [druid]
vogievetsky closed issue #16370: Add filter feature for DataSource section of Router page URL: https://github.com/apache/druid/issues/16370
Re: [PR] KAFKA-15541: Add oldest-iterator-open-since-ms metric [kafka]
mjsax commented on code in PR #16041: URL: https://github.com/apache/kafka/pull/16041#discussion_r1613892705

## streams/src/main/java/org/apache/kafka/streams/state/internals/MeteredKeyValueStore.java:

```diff
@@ -169,6 +172,10 @@ private void registerMetrics() {
     iteratorDurationSensor = StateStoreMetrics.iteratorDurationSensor(taskId.toString(), metricsScope, name(), streamsMetrics);
     StateStoreMetrics.addNumOpenIteratorsGauge(taskId.toString(), metricsScope, name(), streamsMetrics, (config, now) -> numOpenIterators.get());
+    StateStoreMetrics.addOldestOpenIteratorGauge(taskId.toString(), metricsScope, name(), streamsMetrics,
+        (config, now) -> openIterators.isEmpty() ? null :
+            openIterators.stream().mapToLong(MeteredIterator::startTimestamp).min().getAsLong()
```

Review Comment:

> in particular for two Iterators with the same timestamp.

Yes. We need something that allows for duplicates... Not sure if there is any Java standard library impl... Some people implement a custom multi-set on top of a tree-map: https://stackoverflow.com/questions/12565587/does-java-have-a-multiset-data-structure-like-the-one-in-c-stl (not sure if it's better?) -- seems to be similar to what you propose with `ConcurrentSkipListSet`?
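The "multi-set on top of a tree-map" idea floated above can be sketched as a reference-counted sorted map: duplicate timestamps are counted, so closing one of two iterators that opened at the same millisecond does not lose the other. This is an illustrative stand-in, not the Kafka Streams code — `TimestampMultiset` and its methods are invented names, and while each operation here is individually atomic, an add/remove pair is not linearizable as a unit:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;

// Minimal multiset of iterator start timestamps on a concurrent sorted map.
class TimestampMultiset {

    // timestamp -> number of currently open iterators with that timestamp
    private final ConcurrentSkipListMap<Long, Integer> counts = new ConcurrentSkipListMap<>();

    void add(long ts) {
        counts.merge(ts, 1, Integer::sum); // bump the count, inserting if absent
    }

    void remove(long ts) {
        // Drop the entry only when the last iterator with this timestamp closes.
        counts.computeIfPresent(ts, (k, c) -> c == 1 ? null : c - 1);
    }

    // Oldest open start timestamp, or null when no iterators are open --
    // the same null-when-empty contract as the gauge lambda in the diff.
    Long oldest() {
        Map.Entry<Long, Integer> first = counts.firstEntry();
        return first == null ? null : first.getKey();
    }
}
```

Compared with the `stream().min()` in the diff, `firstEntry()` is O(log n) rather than a full scan per gauge read, at the cost of maintaining the map on every iterator open/close.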
Re: [PR] [Web Console] Datasource page support search datasource by keyword [druid]
vogievetsky merged PR #16371: URL: https://github.com/apache/druid/pull/16371
Re: [PR] [SPARK-48391][CORE]Using addAll instead of add function in fromAccumulatorInfos method of TaskMetrics Class [spark]
JoshRosen commented on code in PR #46705: URL: https://github.com/apache/spark/pull/46705#discussion_r1613891588

## core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala:

```diff
@@ -328,16 +328,15 @@ private[spark] object TaskMetrics extends Logging {
    */
   def fromAccumulators(accums: Seq[AccumulatorV2[_, _]]): TaskMetrics = {
     val tm = new TaskMetrics
-    for (acc <- accums) {
+    val (innerAccums, externalAccums) = accums.
+      partition(t => t.name.isDefined && tm.nameToAccums.contains(t.name.get))
+    for (acc <- innerAccums) {
       val name = acc.name
-      if (name.isDefined && tm.nameToAccums.contains(name.get)) {
-        val tmAcc = tm.nameToAccums(name.get).asInstanceOf[AccumulatorV2[Any, Any]]
-        tmAcc.metadata = acc.metadata
-        tmAcc.merge(acc.asInstanceOf[AccumulatorV2[Any, Any]])
-      } else {
-        tm._externalAccums.add(acc)
-      }
+      val tmAcc = tm.nameToAccums(name.get).asInstanceOf[AccumulatorV2[Any, Any]]
+      tmAcc.metadata = acc.metadata
+      tmAcc.merge(acc.asInstanceOf[AccumulatorV2[Any, Any]])
     }
+    tm._externalAccums.addAll(externalAccums.asJava)
```

Review Comment: Yeah, it looks like `CopyOnWriteArrayList.addAll` performs at most one array copy: https://github.com/AdoptOpenJDK/openjdk-jdk12u/blame/master/src/java.base/share/classes/java/util/concurrent/CopyOnWriteArrayList.java#L724-L751
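The cost difference being discussed — `addAll` copies the copy-on-write backing array at most once, while N individual `add` calls copy it N times — is not observable through the public API, so the small demo below only checks that the two approaches produce the same list; the copy counts come from the JDK source linked above:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class AddAllDemo {
    public static void main(String[] args) {
        List<String> batch = new ArrayList<>(List.of("a", "b", "c"));

        // N individual add() calls: each one copies the entire backing array,
        // so appending N elements does O(N^2) total element copies.
        CopyOnWriteArrayList<String> perElement = new CopyOnWriteArrayList<>();
        for (String s : batch) {
            perElement.add(s);
        }

        // One addAll(): at most a single backing-array copy for the whole batch.
        CopyOnWriteArrayList<String> bulk = new CopyOnWriteArrayList<>();
        bulk.addAll(batch);

        System.out.println(perElement.equals(bulk)); // true
    }
}
```

This is why batching external accumulators into a local buffer and calling `addAll` once, as in the PR, is cheaper than the old per-element `add` loop when `_externalAccums` is a `CopyOnWriteArrayList`.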
Re: [PR] Bump codecov/codecov-action from 4.4.0 to 4.4.1 [commons-fileupload]
garydgregory merged PR #316: URL: https://github.com/apache/commons-fileupload/pull/316
Re: [PR] Bump github/codeql-action from 3.25.5 to 3.25.6 [commons-fileupload]
garydgregory merged PR #317: URL: https://github.com/apache/commons-fileupload/pull/317
Re: [PR] rework remote console [tinkerpop]
vkagamlyk merged PR #2611: URL: https://github.com/apache/tinkerpop/pull/2611
Re: [PR] GUACAMOLE-600: Add support for setting SSH and SFTP timeouts. [guacamole-server]
necouchman commented on code in PR #414: URL: https://github.com/apache/guacamole-server/pull/414#discussion_r1613891115 ## src/common-ssh/ssh.c: ## @@ -453,17 +455,43 @@ guac_common_ssh_session* guac_common_ssh_create_session(guac_client* client, return NULL; } -/* Connect */ -if (connect(fd, current_address->ai_addr, -current_address->ai_addrlen) == 0) { +/* Set socket to non-blocking */ +fcntl(fd, F_SETFL, O_NONBLOCK); + +/* Set up timeout. */ +fd_set fdset; +FD_ZERO(&fdset); +FD_SET(fd, &fdset); + +struct timeval tv; +tv.tv_sec = timeout; /* 10 second timeout */ Review Comment: Fixed via rebase.
Re: [PR] [flink] #31390 emit watermark with empty source [beam]
Abacn commented on PR #31391: URL: https://github.com/apache/beam/pull/31391#issuecomment-2130169518 Thanks, taking a look. At the same time, I have a couple of questions (not directly related to the change): - This sounds similar to #30969; what is the difference here? - I also observed a similar issue with JmsIO on the Dataflow runner ("watermark does not increase when there is no incoming data for a while") and the fix #30337 didn't work. I am wondering if #31390 is generic at the SDK level and a fix could be posed in general?
Re: [PR] remove processing/scheduling logic from StreamingDataflowWorker [beam]
m-trieu commented on code in PR #31317: URL: https://github.com/apache/beam/pull/31317#discussion_r1613890068 ## runners/google-cloud-dataflow-java/worker/src/main/java/org/apache/beam/runners/dataflow/worker/StreamingModeExecutionContext.java: ## @@ -152,37 +154,22 @@ public boolean workIsFailed() { public void start( @Nullable Object key, - Windmill.WorkItem work, - Instant inputDataWatermark, - @Nullable Instant outputDataWatermark, - @Nullable Instant synchronizedProcessingTime, + Work work, WindmillStateReader stateReader, SideInputStateFetcher sideInputStateFetcher, - Windmill.WorkItemCommitRequest.Builder outputBuilder, - @Nullable Supplier<Boolean> workFailed) { + Windmill.WorkItemCommitRequest.Builder outputBuilder) { this.key = key; -this.work = work; -this.workIsFailed = (workFailed != null) ? workFailed : () -> Boolean.FALSE; +this.work = work.getWorkItem(); Review Comment: done
Re: [PR] Fix Google System tests to satisfy MyPy project_id checks [airflow]
potiuk merged PR #39817: URL: https://github.com/apache/airflow/pull/39817
Re: [PR] GUACAMOLE-600: Add support for setting SSH and SFTP timeouts. [guacamole-server]
necouchman commented on code in PR #414: URL: https://github.com/apache/guacamole-server/pull/414#discussion_r1613889314 ## src/common-ssh/ssh.c: ## @@ -453,17 +455,43 @@ guac_common_ssh_session* guac_common_ssh_create_session(guac_client* client, return NULL; } -/* Connect */ -if (connect(fd, current_address->ai_addr, -current_address->ai_addrlen) == 0) { +/* Set socket to non-blocking */ +fcntl(fd, F_SETFL, O_NONBLOCK); + +/* Set up timeout. */ +fd_set fdset; +FD_ZERO(&fdset); +FD_SET(fd, &fdset); + +struct timeval tv; +tv.tv_sec = timeout; /* 10 second timeout */ Review Comment: Hmmm... going to guess that was some copy-pasta there, particularly since we don't usually put comments out to the right of the code...
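As an aside, the connect-with-timeout pattern under review here (set the socket non-blocking, then bound the connect attempt with `select()`) has a rough JDK analogue in `Socket.connect(SocketAddress, int)`. A hedged Java sketch for comparison (hypothetical helper, not Guacamole code):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectTimeoutSketch {
    // Socket.connect(SocketAddress, int) bounds the connect attempt the same
    // way the fcntl(O_NONBLOCK) + select() dance does in the C code above.
    static boolean tryConnect(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            // Covers SocketTimeoutException (timeout) and refused connections.
            return false;
        }
    }

    public static void main(String[] args) {
        // 192.0.2.1 is TEST-NET-1 (RFC 5737) and should never be reachable.
        System.out.println(tryConnect("192.0.2.1", 22, 500));
    }
}
```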
Re: [PR] [FLINK-35435] Add timeout Configuration to Async Sink [flink]
dannycranmer commented on code in PR #24839: URL: https://github.com/apache/flink/pull/24839#discussion_r1613883830 ## flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/AsyncSinkBase.java: ## @@ -54,6 +54,8 @@ public abstract class AsyncSinkBase private final long maxBatchSizeInBytes; private final long maxTimeInBufferMS; private final long maxRecordSizeInBytes; +private Long requestTimeoutMS; +private Boolean failOnTimeout; Review Comment: Why `Boolean` and not `boolean`? ## flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/AsyncSinkBase.java: ## @@ -54,6 +54,8 @@ public abstract class AsyncSinkBase private final long maxBatchSizeInBytes; private final long maxTimeInBufferMS; private final long maxRecordSizeInBytes; +private Long requestTimeoutMS; +private Boolean failOnTimeout; Review Comment: Also, to avoid null checks, we could default request timeout ms to -1 for off. ## flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/AsyncSinkBase.java: ## @@ -54,6 +54,8 @@ public abstract class AsyncSinkBase private final long maxBatchSizeInBytes; private final long maxTimeInBufferMS; private final long maxRecordSizeInBytes; +private Long requestTimeoutMS; +private Boolean failOnTimeout; Review Comment: Make `final` ## flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/sink/writer/AsyncSinkWriter.java: ## @@ -181,15 +187,88 @@ public abstract class AsyncSinkWriter During checkpointing, the sink needs to ensure that there are no outstanding in-flight * requests. + * + * This method is deprecated in favor of {@code submitRequestEntries(List<RequestEntryT> * requestEntries, ResultHandler<RequestEntryT> resultHandler)} + * * @param requestEntries a set of request entries that should be sent to the destination * @param requestToRetry the {@code accept} method should be called on this Consumer once the * processing of the {@code requestEntries} are complete. Any entries that encountered * difficulties in persisting should be re-queued through {@code requestToRetry} by * including that element in the collection of {@code RequestEntryT}s passed to the {@code * accept} method. All other elements are assumed to have been successfully persisted. */ -protected abstract void submitRequestEntries( -List<RequestEntryT> requestEntries, Consumer<List<RequestEntryT>> requestToRetry); +@Deprecated +protected void submitRequestEntries( +List<RequestEntryT> requestEntries, Consumer<List<RequestEntryT>> requestToRetry) { +throw new UnsupportedOperationException( +"This method is deprecated. Please override the method that accepts a ResultHandler."); +} + +/** + * This method specifies how to persist buffered request entries into the destination. It is + * implemented when support for a new destination is added. + * + * The method is invoked with a set of request entries according to the buffering hints (and + * the valid limits of the destination). The logic then needs to create and execute the request + * asynchronously against the destination (ideally by batching together multiple request entries + * to increase efficiency). The logic also needs to identify individual request entries that + * were not persisted successfully and resubmit them using the {@code requestToRetry} callback. + * + * From a threading perspective, the mailbox thread will call this method and initiate the + * asynchronous request to persist the {@code requestEntries}. NOTE: The client must support Review Comment: Well... You could spin up a thread pool in the sink, and not necessarily in the client
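The `boolean`-vs-`Boolean` and `-1`-as-off suggestions above can be sketched in isolation (hypothetical class and field names, not the actual Flink `AsyncSinkBase`): primitive `final` fields need no null checks, and a sentinel value keeps the "timeout disabled" case explicit.

```java
public class TimeoutConfigSketch {
    // Sentinel: -1 means "request timeout disabled", avoiding a nullable Long.
    static final long TIMEOUT_DISABLED = -1L;

    private final long requestTimeoutMs;   // primitive + final: no null checks
    private final boolean failOnTimeout;   // primitive + final: no null checks

    TimeoutConfigSketch(long requestTimeoutMs, boolean failOnTimeout) {
        this.requestTimeoutMs = requestTimeoutMs;
        this.failOnTimeout = failOnTimeout;
    }

    boolean timeoutEnabled() {
        return requestTimeoutMs != TIMEOUT_DISABLED;
    }

    public static void main(String[] args) {
        TimeoutConfigSketch off = new TimeoutConfigSketch(TIMEOUT_DISABLED, false);
        TimeoutConfigSketch on = new TimeoutConfigSketch(30_000L, true);
        System.out.println(off.timeoutEnabled()); // prints false
        System.out.println(on.timeoutEnabled());  // prints true
    }
}
```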
Re: [PR] GH-41818: [C++][Parquet] normalize dictionary encoding to use RLE_DICTIONARY [arrow]
github-actions[bot] commented on PR #41819: URL: https://github.com/apache/arrow/pull/41819#issuecomment-2130166538 :warning: GitHub issue #41818 **has been automatically assigned in GitHub** to PR creator.
Re: [PR] KAFKA-16832: LeaveGroup API for upgrading ConsumerGroup [kafka]
dongnuo123 commented on code in PR #16057: URL: https://github.com/apache/kafka/pull/16057#discussion_r1613887650 ## group-coordinator/src/main/java/org/apache/kafka/coordinator/group/GroupMetadataManager.java: ## @@ -4424,14 +4425,128 @@ private ConsumerGroupMember validateConsumerGroupMember( * @param context The request context. * @param request The actual LeaveGroup request. * + * @return The LeaveGroup response and the records to append. + */ +public CoordinatorResult classicGroupLeave( +RequestContext context, +LeaveGroupRequestData request +) throws UnknownMemberIdException, GroupIdNotFoundException { +Group group = groups.get(request.groupId(), Long.MAX_VALUE); + +if (group == null) { +throw new UnknownMemberIdException(String.format("Group %s not found.", request.groupId())); +} + +if (group.type() == CLASSIC) { +return classicGroupLeaveToClassicGroup((ClassicGroup) group, context, request); +} else { +return classicGroupLeaveToConsumerGroup((ConsumerGroup) group, context, request); +} +} + +/** + * Handle a classic LeaveGroupRequest to a ConsumerGroup. + * + * @param group The ConsumerGroup. + * @param context The request context. + * @param request The actual LeaveGroup request. + * + * @return The LeaveGroup response and the records to append. + */ +private CoordinatorResult classicGroupLeaveToConsumerGroup( +ConsumerGroup group, +RequestContext context, +LeaveGroupRequestData request +) throws UnknownMemberIdException { +String groupId = group.groupId(); +List<MemberResponse> memberResponses = new ArrayList<>(); +Set<ConsumerGroupMember> validLeaveGroupMembers = new HashSet<>(); +List records = new ArrayList<>(); + +for (MemberIdentity memberIdentity: request.members()) { +String memberId = memberIdentity.memberId(); +String instanceId = memberIdentity.groupInstanceId(); +String reason = memberIdentity.reason() != null ?
memberIdentity.reason() : "not provided"; + +ConsumerGroupMember member; +try { +if (instanceId == null) { +member = group.getOrMaybeCreateMember(memberId, false); +throwIfMemberDoesNotUseClassicProtocol(member); + +log.info("[Group {}] Dynamic Member {} has left group " + +"through explicit `LeaveGroup` request; client reason: {}", +groupId, memberId, reason); +} else { +member = group.staticMember(instanceId); +throwIfStaticMemberIsUnknown(member, instanceId); +// The LeaveGroup API allows administrative removal of members by GroupInstanceId +// in which case we expect the MemberId to be undefined. +if (!UNKNOWN_MEMBER_ID.equals(memberId)) { +throwIfInstanceIdIsFenced(member, groupId, memberId, instanceId); +} +throwIfMemberDoesNotUseClassicProtocol(member); + +memberId = member.memberId(); +log.info("[Group {}] Static Member {} with instance id {} has left group " + +"through explicit `LeaveGroup` request; client reason: {}", +groupId, memberId, instanceId, reason); +} + +removeMember(records, groupId, memberId); +cancelTimers(groupId, memberId); +memberResponses.add( +new MemberResponse() +.setMemberId(memberId) +.setGroupInstanceId(instanceId) +); +validLeaveGroupMembers.add(member); +} catch (KafkaException e) { +memberResponses.add( +new MemberResponse() +.setMemberId(memberId) +.setGroupInstanceId(instanceId) +.setErrorCode(Errors.forException(e).code()) +); +} +} + +if (!records.isEmpty()) { +// Maybe update the subscription metadata. +Map subscriptionMetadata = group.computeSubscriptionMetadata( +group.computeSubscribedTopicNames(new ArrayList<>(validLeaveGroupMembers)), Review Comment: It was because `validLeaveGroupMembers` is a set, but I feel it may be better to directly make `computeSubscribedTopicNames` take a set.
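The copy being discussed (`new ArrayList<>(validLeaveGroupMembers)`) disappears if the callee accepts a `Collection` rather than a `List`; a generic Java sketch of the idea (hypothetical names, not the actual Kafka coordinator API):

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class CollectionParamSketch {
    record Member(Set<String> subscriptions) {}

    // Accepting Collection (rather than List) means a caller holding a Set
    // can pass it directly instead of wrapping it in new ArrayList<>(set).
    static Set<String> subscribedTopics(Collection<Member> members) {
        Set<String> topics = new TreeSet<>();
        for (Member m : members) {
            topics.addAll(m.subscriptions());
        }
        return topics;
    }

    public static void main(String[] args) {
        Set<Member> members = new HashSet<>();
        members.add(new Member(Set.of("orders", "payments")));
        members.add(new Member(Set.of("orders", "audit")));
        // No intermediate ArrayList copy needed at the call site.
        System.out.println(subscribedTopics(members)); // prints [audit, orders, payments]
    }
}
```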
Re: [I] [Feature] Violin Series [echarts]
2nPlusOne commented on issue #18878: URL: https://github.com/apache/echarts/issues/18878#issuecomment-2130166077 Looking forward to using this feature when it's ready!
[PR] GH-41818: [C++][Parquet] normalize dictionary encoding to use RLE_DICTIONARY [arrow]
mapleFU opened a new pull request, #41819: URL: https://github.com/apache/arrow/pull/41819 ### Rationale for this change There are a few points: 1. https://github.com/apache/arrow/blob/main/cpp/src/parquet/encoding.cc#L444-L445 : the encoding is not passed in the Encoder. 2. But it's RLE in the decoder: https://github.com/apache/arrow/blob/main/cpp/src/parquet/encoding.cc#L1607 It gets detected and normalized in other places, like: 3. https://github.com/apache/arrow/blob/main/cpp/src/parquet/column_reader.cc#L876 We'd better unify them. ### What changes are included in this PR? Unify dict encoding to `RLE_DICTIONARY`. ### Are these changes tested? No need ### Are there any user-facing changes? No
Re: [PR] [DRAFT] Use previous eventTime for lease arbitration of reminder dagActions [gobblin]
arjun4084346 commented on PR #3952: URL: https://github.com/apache/gobblin/pull/3952#issuecomment-2130165776 So how does using the previous dag action's timestamp help MALA understand that the previous lease action is completed and that the reminder should exit?
Re: [PR] Revert "License header check" [incubator-xtable]
the-other-tim-brown merged PR #445: URL: https://github.com/apache/incubator-xtable/pull/445
Re: [PR] Lucene.Net.Util.Constants: Updated to detect .NET Framework 4.8.1 [lucenenet]
NightOwl888 merged PR #939: URL: https://github.com/apache/lucenenet/pull/939
Re: [PR] feat: Adding literals [iceberg-go]
zeroshade commented on code in PR #76: URL: https://github.com/apache/iceberg-go/pull/76#discussion_r1613885354 ## literals.go: ## @@ -0,0 +1,777 @@ +// Licensed to the Apache Software Foundation (ASF) under one +// or more contributor license agreements. See the NOTICE file +// distributed with this work for additional information +// regarding copyright ownership. The ASF licenses this file +// to you under the Apache License, Version 2.0 (the +// "License"); you may not use this file except in compliance +// with the License. You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, +// software distributed under the License is distributed on an +// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY +// KIND, either express or implied. See the License for the +// specific language governing permissions and limitations +// under the License. + +package iceberg + +import ( + "bytes" + "cmp" + "errors" + "fmt" + "math" + "reflect" + "strconv" + "time" + + "github.com/apache/arrow/go/v16/arrow" + "github.com/apache/arrow/go/v16/arrow/decimal128" + "github.com/google/uuid" +) + +// LiteralType is a generic type constraint for the explicit Go types that we allow +// for literal values. This represents the actual primitive types that exist in Iceberg +type LiteralType interface { + bool | int32 | int64 | float32 | float64 | Date | + Time | Timestamp | string | []byte | uuid.UUID | Decimal +} + +// Comparator is a comparison function for specific literal types: +// +// returns 0 if v1 == v2 +// returns <0 if v1 < v2 +// returns >0 if v1 > v2 +type Comparator[T LiteralType] func(v1, v2 T) int + +// Literal is a non-null literal value. It can be casted using To and be checked for +// equality against other literals. 
+type Literal interface { + fmt.Stringer + + Type() Type + To(Type) (Literal, error) + Equals(Literal) bool +} + +// TypedLiteral is a generic interface for Literals so that you can retrieve the value. +// This is based on the physical representative type, which means that FixedLiteral and +// BinaryLiteral will both return []byte, etc. +type TypedLiteral[T LiteralType] interface { + Literal + + Value() T + Comparator() Comparator[T] +} + +// NewLiteral provides a literal based on the type of T +func NewLiteral[T LiteralType](val T) Literal { + switch v := any(val).(type) { + case bool: + return BoolLiteral(v) + case int32: + return Int32Literal(v) + case int64: + return Int64Literal(v) + case float32: + return Float32Literal(v) + case float64: + return Float64Literal(v) + case Date: + return DateLiteral(v) + case Time: + return TimeLiteral(v) + case Timestamp: + return TimestampLiteral(v) + case string: + return StringLiteral(v) + case []byte: + return BinaryLiteral(v) + case uuid.UUID: + return UUIDLiteral(v) + case Decimal: + return DecimalLiteral(v) + } + panic("can't happen due to literal type constraint") +} + +// convenience to avoid repreating this pattern for primitive types +func literalEq[L interface { + comparable + LiteralType +}, T TypedLiteral[L]](lhs T, other Literal) bool { + rhs, ok := other.(T) + if !ok { + return false + } + + return lhs.Value() == rhs.Value() +} + +// AboveMaxLiteral represents values that are above the maximum for their type +// such as values > math.MaxInt32 for an Int32Literal +type AboveMaxLiteral interface { + Literal + + aboveMax() +} + +// BelowMinLiteral represents values that are below the minimum for their type +// such as values < math.MinInt32 for an Int32Literal +type BelowMinLiteral interface { + Literal + + belowMin() +} + +type aboveMaxLiteral[T int32 | int64 | float32 | float64] struct { + value T +} + +func (ab aboveMaxLiteral[T]) aboveMax() {} + +func (ab aboveMaxLiteral[T]) Type() Type { + var z T + switch 
any(z).(type) { + case int32: + return PrimitiveTypes.Int32 + case int64: + return PrimitiveTypes.Int64 + case float32: + return PrimitiveTypes.Float32 + case float64: + return PrimitiveTypes.Float64 + default: + panic("should never happen") + } +} + +func (ab aboveMaxLiteral[T]) To(t Type) (Literal, error) { + if ab.Type().Equals(t) { + return ab, nil + } + return nil, fmt.Errorf("%w: cannot change type of AboveMax%sLiteral", + ErrBadCast, reflect.TypeOf(T(0)).String()) +} +
Re: [PR] Build: support JRE17 for building and sonar check [cloudstack]
JoaoJandre commented on PR #8609: URL: https://github.com/apache/cloudstack/pull/8609#issuecomment-2130161145 > Maybe a good time for 4.20 to drop support for EL7, Ubuntu 18.04 and 20.04 ? cc @sureshanaparti @shwstppr @DaanHoogland @JoaoJandre and others? Many distros will only have JRE17 as the new default, so either with 4.20 or 4.21 we should deprecate support for older distros (both as mgmt/usage server host and KVM host) and move on to using a newer LTS JRE. To add to what @weizhouapache said, I think that dropping support for EL7 and Ubuntu 18.04 should be done in 4.20.0.0. However, since Ubuntu 20.04 is still in LTS, we should not drop support for it; the EOL for Ubuntu 20.04 is Apr 2025.
Re: [PR] feat: Adding literals [iceberg-go]
zeroshade commented on code in PR #76: URL: https://github.com/apache/iceberg-go/pull/76#discussion_r1613884699 ## literals.go: ## @@ -0,0 +1,777 @@ (same quoted context from literals.go as the comment above)
Re: [PR] [MINOR][SQL] Remove outdated `TODO`s from `UnsafeHashedRelation` [spark]
JoshRosen commented on code in PR #46736: URL: https://github.com/apache/spark/pull/46736#discussion_r1613884389 ## sql/core/src/main/scala/org/apache/spark/sql/execution/joins/HashedRelation.scala: ## @@ -409,9 +407,6 @@ private[joins] class UnsafeHashedRelation( val pageSizeBytes = Option(SparkEnv.get).map(_.memoryManager.pageSizeBytes) .getOrElse(new SparkConf().get(BUFFER_PAGESIZE).getOrElse(16L * 1024 * 1024)) -// TODO(josh): We won't need this dummy memory manager after future refactorings; revisit Review Comment: These are ancient TODOs, but my recollection is that they were related to some limitations in storage memory management related to transferring memory between execution and storage: at the time (and even now) I don't think we had a reliable way to transfer task-allocated memory pages into a shared context where they can be used by multiple tasks and counted towards storage memory, hence some of the hacks here. I'm not firmly opposed to removing TODOs like this, but I also don't particularly understand the motivation for doing so: sure, it's a bit of clutter in the code, but if we're not replacing them with a newer comment or making a code change, then it just seems like cleanup for cleanup's sake, which I am generally opposed to for code churn / review bandwidth reasons.
Re: [PR] [FLINK-30687][table] Support pushdown for aggregate filters [flink]
jeyhunkarimov commented on PR #24660: URL: https://github.com/apache/flink/pull/24660#issuecomment-2130159573 Hi @JingGe, I added support for multiple aggregates with the same filter (to push down their filters). Could you please check the PR when you have time? Thanks!
Re: [I] Remove Scorer#getWeight. [lucene]
navneet1v commented on issue #13410: URL: https://github.com/apache/lucene/issues/13410#issuecomment-2130159284 @jpountz what would be the alternative to the `getWeight` function?
Re: [PR] [HUDI-7776] Simplify HoodieStorage instance fetching [hudi]
yihua commented on code in PR #11259: URL: https://github.com/apache/hudi/pull/11259#discussion_r1613880811 ## hudi-common/src/main/java/org/apache/hudi/common/config/HoodieStorageConfig.java: ## @@ -243,7 +243,12 @@ public class HoodieStorageConfig extends HoodieConfig { .withDocumentation("The fully-qualified class name of the factory class to return readers and writers of files used " + "by Hudi. The provided class should implement `org.apache.hudi.io.storage.HoodieIOFactory`."); - + public static final ConfigProperty HOODIE_STORAGE_CLASS = ConfigProperty Review Comment: Created HUDI-7789 for the future effort of moving `HoodieIOFactory` to the `hudi-io` module and keeping only one config.
Re: [PR] Do not retrieve VM's stats on normal VM listing [cloudstack]
JoaoJandre commented on code in PR #8782: URL: https://github.com/apache/cloudstack/pull/8782#discussion_r1613880989

## api/src/main/java/org/apache/cloudstack/query/QueryService.java:
@@ -125,6 +125,10 @@ public interface QueryService {
     static final ConfigKey<Boolean> SharePublicTemplatesWithOtherDomains = new ConfigKey<>("Advanced", Boolean.class, "share.public.templates.with.other.domains", "true", "If false, templates of this domain will not show up in the list templates of other domains.", true, ConfigKey.Scope.Domain);

+    ConfigKey<Boolean> ReturnVmStatsOnVmList = new ConfigKey<>("Advanced", Boolean.class, "return.vm.stats.on.vm.list", "true",

Review Comment: If we are breaking backwards compatibility anyway, why add the configuration at all? If users want to get the VMs' metrics, they can use the `listVirtualMachineMetrics` API, or pass the `stats` detail to this API.
Re: [PR] [SPARK-47257][SQL] Assign names to error classes _LEGACY_ERROR_TEMP_105[3-4] and _LEGACY_ERROR_TEMP_1113 [spark]
wayneguow commented on code in PR #46731: URL: https://github.com/apache/spark/pull/46731#discussion_r1613880314

## sql/core/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveSessionCatalog.scala:
@@ -90,8 +91,8 @@ class ResolveSessionCatalog(val catalogManager: CatalogManager)
         table.schema.findNestedField(Seq(colName), resolver = conf.resolver)
           .map(_._2.dataType)
           .getOrElse {
-            throw QueryCompilationErrors.alterColumnCannotFindColumnInV1TableError(
-              quoteIfNeeded(colName), table)
+            throw QueryCompilationErrors.unresolvedColumnError(

Review Comment: Would it be better if we could provide users with table and context information?
Re: [I] Chrome extension: Remove BpmnPrTest and DmnPrTest e2e tests [incubator-kie-tools]
tiagobento closed issue #2326: Chrome extension: Remove BpmnPrTest and DmnPrTest e2e tests URL: https://github.com/apache/incubator-kie-tools/issues/2326
Re: [PR] NO-ISSUE: Skip `BpmnPrTest` and `DmnPrTest` on Chrome Extension E2E tests [incubator-kie-tools]
tiagobento merged PR #2327: URL: https://github.com/apache/incubator-kie-tools/pull/2327
[PR] TIKA-4261 -- add attachment type metadata filter [tika]
tballison opened a new pull request, #1777: URL: https://github.com/apache/tika/pull/1777

Thanks for your contribution to Apache Tika (https://tika.apache.org/)! Your help is appreciated! Before opening the pull request, please verify that

* there is an open issue on the Tika issue tracker (https://issues.apache.org/jira/projects/TIKA) which describes the problem or the improvement. We cannot accept pull requests without an issue because the change wouldn't be listed in the release notes.
* the issue ID (`TIKA-`)
  - is referenced in the title of the pull request
  - and placed in front of your commit messages surrounded by square brackets (`[TIKA-] Issue or pull request title`)
* commits are squashed into a single one (or a few commits for larger changes)
* Tika is successfully built and unit tests pass by running `mvn clean test`
* there are no conflicts when merging the pull request branch into the *recent* `main` branch. If there are conflicts, please try to rebase the pull request branch on top of a freshly pulled `main` branch
* if you add a new module that downstream users will depend upon, add it to the relevant group in `tika-bom/pom.xml`.

We will be able to integrate your pull request faster if these conditions are met. If you have any questions about how to fix your problem or about using Tika in general, please sign up for the Tika mailing list (http://tika.apache.org/mail-lists.html). Thanks!
Re: [PR] [HUDI-7776] Simplify HoodieStorage instance fetching [hudi]
yihua commented on code in PR #11259: URL: https://github.com/apache/hudi/pull/11259#discussion_r1613879501

## hudi-common/src/main/java/org/apache/hudi/common/config/HoodieStorageConfig.java:
@@ -243,7 +243,12 @@ public class HoodieStorageConfig extends HoodieConfig {
       .withDocumentation("The fully-qualified class name of the factory class to return readers and writers of files used "
           + "by Hudi. The provided class should implement `org.apache.hudi.io.storage.HoodieIOFactory`.");
-
+  public static final ConfigProperty<String> HOODIE_STORAGE_CLASS = ConfigProperty

Review Comment: Rethinking this: the reason we cannot directly use a getter to return the `HoodieIOFactory` instance from the `HoodieStorage` instance is that `HoodieIOFactory` and related classes live in the `hudi-common` module because they use Hudi concepts such as `HoodieRecord`, `BloomFilter`, etc., while `HoodieStorage` is in the `hudi-io` module (`hudi-common` depends on `hudi-io`). For now, the best approach is to keep two configs. For the Hadoop-based implementation, no configs are required, as the defaults are `HoodieHadoopIOFactory` and `HoodieHadoopStorage`.
Re: [PR] kie-issues#1245: Adapt KIE Tools release jobs to sign and upload release candidate artifacts to https://dist.apache.org/repos/dist/dev/incubator/kie (Jenkins only) [incubator-kie-tools]
tiagobento commented on code in PR #2366: URL: https://github.com/apache/incubator-kie-tools/pull/2366#discussion_r1613876156

## .ci/jenkins/Jenkinsfile.release-candidate:
@@ -114,7 +115,17 @@ pipeline {
         steps {
             dir('kie-tools') {
                 script {
-                    buildUtils.pnpmUpdateKogitoVersion(params.RELEASE_CANDIDATE_VERSION, params.RELEASE_CANDIDATE_VERSION)
+                    buildUtils.pnpmUpdateKogitoVersion(params.RELEASE_VERSION, params.RELEASE_VERSION)
+                }
+            }
+        }
+    }
+
+    stage('Update stream name') {
+        steps {
+            dir('kie-tools') {
+                script {
+                    buildUtils.pnpmUpdateStreamName(params.BRANCH_NAME)

Review Comment: params.RELEASE_VERSION

## .ci/jenkins/release-jobs/Jenkinsfile.chrome-extensions:
@@ -350,8 +351,8 @@ pipeline {
     }
     steps {
         script {
-            env.CHROME_EXTENSION_RELEASE_ZIP_FILE = "incubator-kie-${params.RELEASE_VERSION}-business-automation-chrome-extension.zip"
-            env.SWF_CHROME_EXTENSION_RELEASE_ZIP_FILE = "incubator-kie-${params.RELEASE_VERSION}-sonataflow-chrome-extension.zip"
+            env.CHROME_EXTENSION_RELEASE_ZIP_FILE = "incubator-kie-${params.RELEASE_CANDIDATE_VERSION}-business-automation-chrome-extension.zip"
+            env.SWF_CHROME_EXTENSION_RELEASE_ZIP_FILE = "incubator-kie-${params.RELEASE_CANDIDATE_VERSION}-sonataflow-chrome-extension.zip"

Review Comment: Is params.RELEASE_CANDIDATE_VERSION `10.0.0` or `10.0.0-rc1`?
Re: [PR] chore(料): bump python requests-oauthlib 1.3.1 -> 2.0.0 [superset]
mistercrunch merged PR #28681: URL: https://github.com/apache/superset/pull/28681
Re: [PR] [HUDI-7788] Fixing exception handling in AverageRecordSizeUtils [hudi]
hudi-bot commented on PR #11290: URL: https://github.com/apache/hudi/pull/11290#issuecomment-2130151684

## CI report:

* 853d6b5d25c5724250bb8d9d03bef07aa5ed73bb UNKNOWN

Bot commands: @hudi-bot supports the following commands:
- `@hudi-bot run azure` re-run the last Azure build
Re: [PR] add support for ConnectionFactory ProviderFn in JmsIO [beam]
Abacn commented on PR #31264: URL: https://github.com/apache/beam/pull/31264#issuecomment-2130151490 waiting on author
Re: [PR] [HUDI-7788] Fixing exception handling in AverageRecordSizeUtils [hudi]
hudi-bot commented on PR #11289: URL: https://github.com/apache/hudi/pull/11289#issuecomment-2130151416

## CI report:

* 1fa21effcf407ec954ae805c2604e6de2239ae22 UNKNOWN

Bot commands: @hudi-bot supports the following commands:
- `@hudi-bot run azure` re-run the last Azure build
Re: [PR] Pipe: Fixed the bug that BatchActivateTemplateStatement is not handled correctly when some of the timeSeries already exists [iotdb]
SteveYurongSu merged PR #12587: URL: https://github.com/apache/iotdb/pull/12587
[Github-comments] Re: [geany/geany-plugins] Add LSP plugin (PR #1331)
@techee pushed 4 commits.

0e2b0bd8d8008fc4a89b04c9d2e0720636e2de22  Fix runtime warning
86bfa0668fca887f6f277d0a5217a2bb78c84f50  Allow specifying JSON for initialization and formatting directly in the config file
19a425916a20c4ea150216bc91d6ed899d62  Add config option enabling/disabling project LSP use by default
95f44a488774170baaa3d40ab41092d52c8edfd9  Minor config file updates

-- View it on GitHub: https://github.com/geany/geany-plugins/pull/1331/files/7794a3f9a6f2e18293ddc78993a00e54b2afdafc..95f44a488774170baaa3d40ab41092d52c8edfd9
Re: [PR] fix: Only delegate to DataFusion cast when we know that it is compatible with Spark [datafusion-comet]
kazuyukitanimura commented on code in PR #461: URL: https://github.com/apache/datafusion-comet/pull/461#discussion_r1613863992

## core/src/execution/datafusion/expressions/cast.rs:
@@ -622,14 +590,89 @@ impl Cast {
                 self.eval_mode,
                 from_type,
                 to_type,
-            )?
+            )
+        }
+        _ if Self::is_datafusion_spark_compatible(from_type, to_type) => {
+            // use DataFusion cast only when we know that it is compatible with Spark
+            Ok(cast_with_options(&array, to_type, &CAST_OPTIONS)?)
         }
         _ => {
-            // when we have no Spark-specific casting we delegate to DataFusion
-            cast_with_options(&array, to_type, &CAST_OPTIONS)?
+            // we should never reach this code because the Scala code should be checking
+            // for supported cast operations and falling back to Spark for anything that
+            // is not yet supported
+            Err(CometError::Internal(format!(
+                "Native cast invoked for unsupported cast from {from_type:?} to {to_type:?}"
+            )))
         }
     };
-    Ok(spark_cast(cast_result, from_type, to_type))
+    Ok(spark_cast(cast_result?, from_type, to_type))
+}
+
+/// Determines if DataFusion supports the given cast in a way that is
+/// compatible with Spark
+fn is_datafusion_spark_compatible(from_type: &DataType, to_type: &DataType) -> bool {
+    if from_type == to_type {
+        return true;
+    }
+    match from_type {
+        DataType::Boolean => matches!(
+            to_type,
+            DataType::Int8
+                | DataType::Int16
+                | DataType::Int32
+                | DataType::Int64
+                | DataType::Float32
+                | DataType::Float64
+                | DataType::Utf8

Review Comment: So right now, there is no `Int8` to `Decimal128` cast supported, it looks like?

## core/src/execution/datafusion/expressions/cast.rs:
@@ -622,14 +590,89 @@ impl Cast {
                 self.eval_mode,
                 from_type,
                 to_type,
-            )?
+            )
+        }
+        _ if Self::is_datafusion_spark_compatible(from_type, to_type) => {
+            // use DataFusion cast only when we know that it is compatible with Spark
+            Ok(cast_with_options(&array, to_type, &CAST_OPTIONS)?)
         }
         _ => {
-            // when we have no Spark-specific casting we delegate to DataFusion
-            cast_with_options(&array, to_type, &CAST_OPTIONS)?
+            // we should never reach this code because the Scala code should be checking
+            // for supported cast operations and falling back to Spark for anything that
+            // is not yet supported
+            Err(CometError::Internal(format!(
+                "Native cast invoked for unsupported cast from {from_type:?} to {to_type:?}"
+            )))
         }
     };
-    Ok(spark_cast(cast_result, from_type, to_type))
+    Ok(spark_cast(cast_result?, from_type, to_type))
+}
+
+/// Determines if DataFusion supports the given cast in a way that is
+/// compatible with Spark
+fn is_datafusion_spark_compatible(from_type: &DataType, to_type: &DataType) -> bool {
+    if from_type == to_type {
+        return true;
+    }
+    match from_type {
+        DataType::Boolean => matches!(
+            to_type,
+            DataType::Int8
+                | DataType::Int16
+                | DataType::Int32
+                | DataType::Int64
+                | DataType::Float32
+                | DataType::Float64
+                | DataType::Utf8
+        ),
+        DataType::Int8 | DataType::Int16 | DataType::Int32 | DataType::Int64 => matches!(
+            to_type,
+            DataType::Boolean
+                | DataType::Int8
+                | DataType::Int16
+                | DataType::Int32
+                | DataType::Int64
+                | DataType::Float32
+                | DataType::Float64
+                | DataType::Decimal128(_, _)
+                | DataType::Utf8
+        ),
+        DataType::Float32 | DataType::Float64 => matches!(
+            to_type,
+            DataType::Boolean
+                | DataType::Int8
+                | DataType::Int16
+                | DataType::Int32
+                | DataType::Int64
+                | DataType::Float32
+                | DataType::Float64
+        ),
+        DataType::Decimal128(_, _) | DataType::Decimal256(_, _) => matches!(
+            to_type,
+            DataType::Int8
+                | DataType::Int16
Re: [PR] Avoid setting test constants as pytest module attributes [airflow]
potiuk merged PR #39819: URL: https://github.com/apache/airflow/pull/39819
[PR] Bump github/codeql-action from 3.25.5 to 3.25.6 [commons-fileupload]
dependabot[bot] opened a new pull request, #317: URL: https://github.com/apache/commons-fileupload/pull/317

Bumps github/codeql-action (https://github.com/github/codeql-action) from 3.25.5 to 3.25.6.

Changelog, sourced from github/codeql-action's changelog (https://github.com/github/codeql-action/blob/main/CHANGELOG.md):

CodeQL Action Changelog

See the releases page (https://github.com/github/codeql-action/releases) for the relevant changes to the CodeQL CLI and language packs. Note that the only difference between v2 and v3 of the CodeQL Action is the node version they support, with v3 running on node 20 while we continue to release v2 to support running on node 16. For example, 3.22.11 was the first v3 release and is functionally identical to 2.22.11. This approach ensures an easy way to track exactly which features are included in different versions, indicated by the minor and patch version numbers.

[UNRELEASED]
* We are rolling out a feature in May/June 2024 that will reduce the Actions cache usage of the Action by keeping only the newest TRAP cache for each language. #2306

3.25.6 - 20 May 2024
* Update default CodeQL bundle version to 2.17.3. #2295

3.25.5 - 13 May 2024
* Add a compatibility matrix of supported CodeQL Action, CodeQL CLI, and GitHub Enterprise Server versions to the README (https://github.com/github/codeql-action/blob/main/README.md). #2273
* Avoid printing out a warning for a missing `on.push` trigger when the CodeQL Action is triggered via a `workflow_call` event. #2274
* The `tools: latest` input to the `init` Action has been renamed to `tools: linked`. This option specifies that the Action should use the tools shipped at the same time as the Action. The old name will continue to work for backwards compatibility, but we recommend that new workflows use the new name. #2281

3.25.4 - 08 May 2024
* Update default CodeQL bundle version to 2.17.2. #2270

3.25.3 - 25 Apr 2024
* Update default CodeQL bundle version to 2.17.1. #2247
* Workflows running on `macos-latest` using CodeQL CLI versions before v2.15.1 will need to either upgrade their CLI version to v2.15.1 or newer, or change the platform to an Intel MacOS runner, such as `macos-12`. ARM machines with SIP disabled, including the newest `macos-latest` image, are unsupported for CLI versions before 2.15.1. #2261

3.25.2 - 22 Apr 2024
* No user facing changes.

3.25.1 - 17 Apr 2024
* We are rolling out a feature in April/May 2024 that improves the reliability and performance of analyzing code when analyzing a compiled language with the `autobuild` build mode (https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages#codeql-build-modes). #2235
* Fix a bug where the `init` Action would fail if `--overwrite` was specified in `CODEQL_ACTION_EXTRA_OPTIONS`. #2245

3.25.0 - 15 Apr 2024
* The deprecated feature for extracting dependencies for a Python analysis has been removed. #2224 As a result, the following inputs and environment variables are now ignored: the `setup-python-dependencies` input to the `init` Action, and the `CODEQL_ACTION_DISABLE_PYTHON_DEPENDENCY_INSTALLATION` environment variable. We recommend removing any references to these from your workflows. For more information, see the release notes for CodeQL Action v3.23.0 and v2.23.0.
* Automatically overwrite an existing database if found on the filesystem. #2229
* Bump the minimum CodeQL bundle version to 2.12.6. #2232

... (truncated)

Commits
* 9fdb3e4 Merge pull request #2300 from github/update-v3.25.6-63d519c0a (https://github.com/github/codeql-action/commit/9fdb3e49720b44c48891d036bb502feb25684276)
* 00792ab Update changelog for v3.25.6 (https://github.com/github/codeql-action/commit/00792ab1e0a5e45d2ff0c2426424bf7044bb27d0)
* 63d519c (https://github.com/github/codeql-action/commit/63d519c0ae6a4b739e3377a517400c352a7d829b)
[PR] Bump codecov/codecov-action from 4.4.0 to 4.4.1 [commons-fileupload]
dependabot[bot] opened a new pull request, #316: URL: https://github.com/apache/commons-fileupload/pull/316

Bumps codecov/codecov-action (https://github.com/codecov/codecov-action) from 4.4.0 to 4.4.1.

Release notes, sourced from codecov/codecov-action's releases (https://github.com/codecov/codecov-action/releases):

v4.4.1

What's Changed
* build(deps-dev): bump @typescript-eslint/eslint-plugin from 7.8.0 to 7.9.0 by @dependabot in codecov/codecov-action#1427
* fix: prevent xlarge from running on forks by @thomasrockhu-codecov in codecov/codecov-action#1432
* build(deps): bump github/codeql-action from 3.25.4 to 3.25.5 by @dependabot in codecov/codecov-action#1439
* build(deps): bump actions/checkout from 4.1.5 to 4.1.6 by @dependabot in codecov/codecov-action#1438
* fix: isPullRequestFromFork returns false for any PR by @shahar-h in codecov/codecov-action#1437
* chore(release): 4.4.1 by @thomasrockhu-codecov in codecov/codecov-action#1441

New Contributors
* @shahar-h made their first contribution in codecov/codecov-action#1437

Full Changelog: https://github.com/codecov/codecov-action/compare/v4.4.0...v4.4.1

Commits
* 125fc84 chore(release): 4.4.1 (#1441)
* c9dbf6a fix: isPullRequestFromFork returns false for any PR (#1437)
* 59fc46f build(deps): bump actions/checkout from 4.1.5 to 4.1.6 (#1438)
* 3889fdd build(deps): bump github/codeql-action from 3.25.4 to 3.25.5 (#1439)
* d42a336 fix: prevent xlarge from running on forks (#1432)
* fd624e5 build(deps-dev): bump @typescript-eslint/eslint-plugin from 7.8.0 to 7.9.0 (#...)

See full diff: https://github.com/codecov/codecov-action/compare/6d798873df2b1b8e5846dba6fb86631229fbcb17...125fc84a9a348dbcf27191600683ec096ec9021c
Re: [PR] KAFKA-16625: Reverse lookup map from topic partitions to members [kafka]
rreddy-22 commented on code in PR #15974: URL: https://github.com/apache/kafka/pull/15974#discussion_r1613874639

## jmh-benchmarks/src/main/java/org/apache/kafka/jmh/assignor/ServerSideAssignorBenchmark.java:
@@ -29,10 +29,12 @@
 import org.apache.kafka.coordinator.group.assignor.UniformAssignor;
 import org.apache.kafka.coordinator.group.consumer.SubscribedTopicMetadata;
 import org.apache.kafka.coordinator.group.consumer.TopicMetadata;
+

Review Comment: yes
Re: [PR] Fix Google System tests to satisfy MyPy project_id checks [airflow]
potiuk commented on PR #39817: URL: https://github.com/apache/airflow/pull/39817#issuecomment-2130146844 Had to add defaults to satisfy "importable" tests.
Re: [PR] Introduces efSearch as a separate parameter in KNN{Byte:Float}VectorQuery [lucene]
navneet1v commented on code in PR #13407: URL: https://github.com/apache/lucene/pull/13407#discussion_r1613873312

## lucene/core/src/java/org/apache/lucene/search/AbstractKnnVectorQuery.java:
@@ -54,14 +54,20 @@ abstract class AbstractKnnVectorQuery extends Query {

   protected final String field;
   protected final int k;
+  protected final int efSearch;
   private final Query filter;

-  public AbstractKnnVectorQuery(String field, int k, Query filter) {
+  public AbstractKnnVectorQuery(String field, int k, int efSearch, Query filter) {

Review Comment: @shatejas how are we ensuring that BWC is maintained for the new query interface?
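The backward-compatibility concern raised above can be illustrated with a minimal sketch. The class and constructor names below are hypothetical stand-ins, not Lucene's actual `AbstractKnnVectorQuery` API: a common way to keep existing callers source- and binary-compatible is to retain the old constructor and delegate to the new one with a default, e.g. `efSearch` defaulting to `k`.

```java
// Hypothetical sketch of a BWC-preserving constructor overload; the names
// are illustrative only, not the real Lucene classes.
public class KnnQuerySketch {
    final String field;
    final int k;
    final int efSearch;

    // Pre-existing signature: callers compiled against it keep working.
    public KnnQuerySketch(String field, int k) {
        this(field, k, k); // default efSearch to k, preserving old behavior
    }

    // New signature exposing efSearch explicitly.
    public KnnQuerySketch(String field, int k, int efSearch) {
        if (efSearch < k) {
            throw new IllegalArgumentException("efSearch must be >= k");
        }
        this.field = field;
        this.k = k;
        this.efSearch = efSearch;
    }
}
```

Delegating through one canonical constructor keeps the validation in a single place, while removing or changing the old signature would be a breaking change for existing queries.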
Re: [PR] Add try-excepts around data sampler encoding [beam]
github-actions[bot] commented on PR #31396: URL: https://github.com/apache/beam/pull/31396#issuecomment-2130145185

Assigning reviewers. If you would like to opt out of this review, comment `assign to next reviewer`:

R: @riteshghorse for label python.

Available commands:
- `stop reviewer notifications` - opt out of the automated review tooling
- `remind me after tests pass` - tag the comment author after tests pass
- `waiting on author` - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).
Re: [I] Add support for reloading the SPI for KnnVectorsFormat class [lucene]
navneet1v commented on issue #13393: URL: https://github.com/apache/lucene/issues/13393#issuecomment-2130144565

> Thanks for the clarification @navneet1v I didn't know folks were dynamically loading jars for different vector formats.
>
> The idea sounds good to me. I haven't reviewed the PR.
>
> I just wanted to make sure we weren't adding code without a practical and current use.

@benwtrent Sounds good to me. Will wait for your review on the PR.
[PR] KAFKA-15541: Use LongAdder instead of AtomicInteger [kafka]
nicktelford opened a new pull request, #16076: URL: https://github.com/apache/kafka/pull/16076

`LongAdder` performs better than `AtomicInteger` under contention from many threads. Since it's possible that many Interactive Query threads could create a large number of `KeyValueIterator`s, we don't want contention on a metric to become a performance bottleneck. The trade-off is memory: `LongAdder` uses more memory to space independent counters across different cache lines. In practice, I don't expect this to cause too many problems, as we're only constructing one per store.
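The trade-off described above can be sketched with only the JDK. The class and metric names below are hypothetical, not the actual Kafka Streams code: `LongAdder` stripes updates across several internal cells, so many threads incrementing concurrently don't contend on a single cache line the way they would with one `AtomicInteger`, and `sum()` folds the cells into an eventually-consistent total.

```java
import java.util.concurrent.atomic.LongAdder;

// Minimal sketch (hypothetical names): a LongAdder-backed
// "open iterators" counter that many query threads can update.
public class OpenIteratorsMetric {
    private static final LongAdder openIterators = new LongAdder();

    // Hot-path updates: increment/decrement touch a thread-local-ish cell,
    // avoiding a shared compare-and-swap loop under contention.
    public static void onIteratorOpen()  { openIterators.increment(); }
    public static void onIteratorClose() { openIterators.decrement(); }

    // Read by the metrics reporter: a snapshot, not a linearizable read,
    // which is fine for a gauge that is only sampled periodically.
    public static long value() { return openIterators.sum(); }
}
```

The extra memory cost comes from those striped cells, which is why the PR description notes that only one such counter is constructed per store.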
[PR] fix: serialization of decimal [arrow-rs]
yjshen opened a new pull request, #5801: URL: https://github.com/apache/arrow-rs/pull/5801

# Which issue does this PR close?
Closes #.

# Rationale for this change
Fix the issue when decoding to a decimal array fails.

# What changes are included in this PR?

# Are there any user-facing changes?
Re: [PR] KAFKA-15541: Use LongAdder instead of AtomicInteger [kafka]
nicktelford commented on PR #16076: URL: https://github.com/apache/kafka/pull/16076#issuecomment-2130144100 @mjsax -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [I] GCStoS3 Operator [airflow]
hardikdedhia1992 commented on issue #39737: URL: https://github.com/apache/airflow/issues/39737#issuecomment-2130142543 You can try a BashOperator with a command such as `gsutil -m rsync -r gs://gcs_bucket_path s3://sc_bucket_path`, after granting your GCP account access to AWS. For example: `rsync_task = BashOperator(task_id='rsync_task', bash_command='command', dag=dag)`. Hopefully that should work. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@airflow.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
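As a sketch of the suggestion above, the command can be composed in Python and handed to a `BashOperator` as its `bash_command` argument. The bucket paths are the placeholders from the comment, `build_rsync_command` is a hypothetical helper, and the `BashOperator` call is shown only as a comment since it requires an Airflow installation.

```python
def build_rsync_command(gcs_path: str, s3_path: str) -> str:
    """Compose the gsutil sync command from the comment above:
    -m runs transfers in parallel, rsync -r mirrors the tree recursively."""
    return f"gsutil -m rsync -r {gcs_path} {s3_path}"

# In an Airflow DAG this string would become the bash_command, e.g.:
# rsync_task = BashOperator(task_id="rsync_task",
#                           bash_command=build_rsync_command(...), dag=dag)
cmd = build_rsync_command("gs://gcs_bucket_path", "s3://sc_bucket_path")
```

Note that for this to work, the GCP service account running gsutil needs AWS credentials configured, as the original comment points out.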
Re: [PR] remove deprecations mssql [airflow]
vincbeck commented on PR #39734: URL: https://github.com/apache/airflow/pull/39734#issuecomment-2130140968 Please resolve conflict -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@airflow.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Fix common.sql mypy errors [airflow]
potiuk merged PR #39820: URL: https://github.com/apache/airflow/pull/39820 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@airflow.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[I] [C++][Parquet] Unify normalize dictionary encoding handling [arrow]
mapleFU opened a new issue, #41818: URL: https://github.com/apache/arrow/issues/41818 ### Describe the enhancement requested This is mentioned here: https://github.com/apache/arrow/pull/40957#discussion_r1562703901 There are a few points: 1. https://github.com/apache/arrow/blob/main/cpp/src/parquet/encoding.cc#L444-L445: the encoding is not passed into the Encoder. 2. But it's RLE in the decoder: https://github.com/apache/arrow/blob/main/cpp/src/parquet/encoding.cc#L1607. It will be detected and normalized elsewhere, for example: 3. https://github.com/apache/arrow/blob/main/cpp/src/parquet/column_reader.cc#L876 We'd better unify them. ### Component(s) C++, Parquet -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@arrow.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[PR] (chores) multiple small cleanups [camel]
orpiske opened a new pull request, #14239: URL: https://github.com/apache/camel/pull/14239 (no comment) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@camel.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[PR] Bump codecov/codecov-action from 4.4.0 to 4.4.1 [commons-jxpath]
dependabot[bot] opened a new pull request, #151: URL: https://github.com/apache/commons-jxpath/pull/151 Bumps [codecov/codecov-action](https://github.com/codecov/codecov-action) from 4.4.0 to 4.4.1.

Release notes, sourced from codecov/codecov-action's releases (https://github.com/codecov/codecov-action/releases):

v4.4.1: What's Changed
* build(deps-dev): bump @typescript-eslint/eslint-plugin from 7.8.0 to 7.9.0 by @dependabot in codecov/codecov-action#1427
* fix: prevent xlarge from running on forks by @thomasrockhu-codecov in codecov/codecov-action#1432
* build(deps): bump github/codeql-action from 3.25.4 to 3.25.5 by @dependabot in codecov/codecov-action#1439
* build(deps): bump actions/checkout from 4.1.5 to 4.1.6 by @dependabot in codecov/codecov-action#1438
* fix: isPullRequestFromFork returns false for any PR by @shahar-h in codecov/codecov-action#1437
* chore(release): 4.4.1 by @thomasrockhu-codecov in codecov/codecov-action#1441

New Contributors
* @shahar-h made their first contribution in codecov/codecov-action#1437

Full Changelog: https://github.com/codecov/codecov-action/compare/v4.4.0...v4.4.1

Commits
* 125fc84 chore(release): 4.4.1 (#1441)
* c9dbf6a fix: isPullRequestFromFork returns false for any PR (#1437)
* 59fc46f build(deps): bump actions/checkout from 4.1.5 to 4.1.6 (#1438)
* 3889fdd build(deps): bump github/codeql-action from 3.25.4 to 3.25.5 (#1439)
* d42a336 fix: prevent xlarge from running on forks (#1432)
* fd624e5 build(deps-dev): bump @typescript-eslint/eslint-plugin from 7.8.0 to 7.9.0 (#1427)

See full diff in compare view: https://github.com/codecov/codecov-action/compare/6d798873df2b1b8e5846dba6fb86631229fbcb17...125fc84a9a348dbcf27191600683ec096ec9021c
Re: [PR] (chores) multiple small cleanups [camel]
github-actions[bot] commented on PR #14239: URL: https://github.com/apache/camel/pull/14239#issuecomment-2130138732 :star2: Thank you for your contribution to the Apache Camel project! :star2: :robot: CI automation will test this PR automatically. :camel: Apache Camel Committers, please review the following items: * First-time contributors **require MANUAL approval** for the GitHub Actions to run * You can use the command `/component-test (camel-)component-name1 (camel-)component-name2..` to request a test from the test bot. * You can label PRs using `build-all`, `build-dependents`, `skip-tests` and `test-dependents` to fine-tune the checks executed by this PR. * Build and test logs are available in the Summary page. **Only** [Apache Camel committers](https://camel.apache.org/community/team/#committers) have access to the summary. * :warning: Be careful when sharing logs. Review their contents before sharing them publicly. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@camel.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[PR] Bump github/codeql-action from 3.25.5 to 3.25.6 [commons-jxpath]
dependabot[bot] opened a new pull request, #152: URL: https://github.com/apache/commons-jxpath/pull/152 Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3.25.5 to 3.25.6.

Changelog, sourced from github/codeql-action's CHANGELOG.md (https://github.com/github/codeql-action/blob/main/CHANGELOG.md):

CodeQL Action Changelog. See the releases page (https://github.com/github/codeql-action/releases) for the relevant changes to the CodeQL CLI and language packs. Note that the only difference between v2 and v3 of the CodeQL Action is the node version they support, with v3 running on node 20 while we continue to release v2 to support running on node 16. For example 3.22.11 was the first v3 release and is functionally identical to 2.22.11. This approach ensures an easy way to track exactly which features are included in different versions, indicated by the minor and patch version numbers.

[UNRELEASED]
* We are rolling out a feature in May/June 2024 that will reduce the Actions cache usage of the Action by keeping only the newest TRAP cache for each language. #2306

3.25.6 - 20 May 2024
* Update default CodeQL bundle version to 2.17.3. #2295

3.25.5 - 13 May 2024
* Add a compatibility matrix of supported CodeQL Action, CodeQL CLI, and GitHub Enterprise Server versions to the README (https://github.com/github/codeql-action/blob/main/README.md). #2273
* Avoid printing out a warning for a missing on.push trigger when the CodeQL Action is triggered via a workflow_call event. #2274
* The tools: latest input to the init Action has been renamed to tools: linked. This option specifies that the Action should use the tools shipped at the same time as the Action. The old name will continue to work for backwards compatibility, but we recommend that new workflows use the new name. #2281

3.25.4 - 08 May 2024
* Update default CodeQL bundle version to 2.17.2. #2270

3.25.3 - 25 Apr 2024
* Update default CodeQL bundle version to 2.17.1. #2247
* Workflows running on macos-latest using CodeQL CLI versions before v2.15.1 will need to either upgrade their CLI version to v2.15.1 or newer, or change the platform to an Intel MacOS runner, such as macos-12. ARM machines with SIP disabled, including the newest macos-latest image, are unsupported for CLI versions before 2.15.1. #2261

3.25.2 - 22 Apr 2024
* No user facing changes.

3.25.1 - 17 Apr 2024
* We are rolling out a feature in April/May 2024 that improves the reliability and performance of analyzing code when analyzing a compiled language with the autobuild build mode. #2235
* Fix a bug where the init Action would fail if --overwrite was specified in CODEQL_ACTION_EXTRA_OPTIONS. #2245

3.25.0 - 15 Apr 2024
* The deprecated feature for extracting dependencies for a Python analysis has been removed. #2224 As a result, the following inputs and environment variables are now ignored: the setup-python-dependencies input to the init Action and the CODEQL_ACTION_DISABLE_PYTHON_DEPENDENCY_INSTALLATION environment variable. We recommend removing any references to these from your workflows. For more information, see the release notes for CodeQL Action v3.23.0 and v2.23.0.
* Automatically overwrite an existing database if found on the filesystem. #2229
* Bump the minimum CodeQL bundle version to 2.12.6. #2232

... (truncated)

Commits
* 9fdb3e4 Merge pull request #2300 from github/update-v3.25.6-63d519c0a
* 00792ab Update changelog for v3.25.6
* 63d519c Merge
Re: [PR] [YUNIKORN-2633] Unnecessary warning from Partition when adding an application [yunikorn-core]
pbacsko commented on code in PR #872: URL: https://github.com/apache/yunikorn-core/pull/872#discussion_r1613865759 ## pkg/scheduler/partition.go: ## @@ -339,13 +339,18 @@ func (pc *PartitionContext) AddApplication(app *objects.Application) error { return fmt.Errorf("failed to find queue %s for application %s", queueName, appID) } - // set resources based on tags, but only if the queue is dynamic (unmanaged) - if queue.IsManaged() { - log.Log(log.SchedQueue).Warn("Trying to set resources on a queue that is not an unmanaged leaf", - zap.String("queueName", queue.QueuePath)) - } else { - queue.SetResources(app.GetGuaranteedResource(), app.GetMaxResource()) + guaranteedRes := app.GetGuaranteedResource() + maxRes := app.GetMaxResource() + if guaranteedRes != nil || maxRes != nil { + // set resources based on tags, but only if the queue is dynamic (unmanaged) + if queue.IsManaged() { + log.Log(log.SchedQueue).Warn("Trying to set resources on a queue that is not an unmanaged leaf", + zap.String("queueName", queue.QueuePath)) + } else { + queue.SetResources(guaranteedRes, maxRes) Review Comment: Created [YUNIKORN-2642](https://issues.apache.org/jira/browse/YUNIKORN-2642). -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: reviews-unsubscr...@yunikorn.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
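The control flow of the patch above (warn, or set the queue's resources, only when the application actually carries guaranteed/max resources) can be sketched in Python; `add_application_resources` and its arguments are hypothetical stand-ins for the Go code, not yunikorn's API.

```python
def add_application_resources(queue_is_managed, guaranteed, max_res, warnings):
    """Mirror of the yunikorn change above: skip both the warning and
    the update entirely when the application has no resources to set."""
    if guaranteed is None and max_res is None:
        return None  # nothing to set: no warning, no update (the fix)
    if queue_is_managed:
        # Resources from app tags only apply to dynamic (unmanaged) queues.
        warnings.append(
            "Trying to set resources on a queue that is not an unmanaged leaf")
        return None
    return (guaranteed, max_res)
```

The key difference from the old code is the first guard: previously the warning fired for every application added to a managed queue, even when there was nothing to set.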
Re: [PR] GUACAMOLE-600: Add support for setting SSH and SFTP timeouts. [guacamole-server]
jmuehlner commented on code in PR #414: URL: https://github.com/apache/guacamole-server/pull/414#discussion_r1613864540 ## src/common-ssh/ssh.c: ## @@ -453,17 +455,43 @@ guac_common_ssh_session* guac_common_ssh_create_session(guac_client* client, return NULL; } -/* Connect */ -if (connect(fd, current_address->ai_addr, -current_address->ai_addrlen) == 0) { +/* Set socket to non-blocking */ +fcntl(fd, F_SETFL, O_NONBLOCK); + +/* Set up timeout. */ +fd_set fdset; +FD_ZERO(&fdset); +FD_SET(fd, &fdset); + +struct timeval tv; +tv.tv_sec = timeout; /* 10 second timeout */ Review Comment: "10 second timeout"? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@guacamole.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
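The pattern in the diff above (a non-blocking connect followed by a bounded wait for writability) is what keeps the SSH connection from hanging past the timeout. A Python sketch of the same sequence using the standard `socket` and `select` modules follows; the helper name is hypothetical, and the final SO_ERROR check is standard practice for non-blocking connects rather than something visible in the truncated diff.

```python
import select
import socket

def connect_with_timeout(host: str, port: int, timeout: float) -> socket.socket:
    """Same sequence as the C diff above: switch the socket to
    non-blocking, start the connect, then select() on writability with
    a timeout instead of blocking indefinitely in connect()."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setblocking(False)
    try:
        sock.connect((host, port))
    except BlockingIOError:
        pass  # connect in progress (EINPROGRESS); expected in non-blocking mode
    _, writable, _ = select.select([], [sock], [], timeout)
    if not writable:
        sock.close()
        raise TimeoutError(f"connect to {host}:{port} timed out after {timeout}s")
    # A writable socket may still carry a deferred connect error; check SO_ERROR.
    err = sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    if err != 0:
        sock.close()
        raise OSError(err, "connect failed")
    sock.setblocking(True)
    return sock
```

The reviewer's point about the stale "10 second" comment is the reason the timeout is a parameter here rather than a hard-coded constant.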
Re: [PR] [SPARK-48286] Fix analysis and creation of column with exists default expression [spark]
cloud-fan commented on PR #46594: URL: https://github.com/apache/spark/pull/46594#issuecomment-2130135726 Can we add a test? Basically any default column value that is not foldable, like `current_date()`, can trigger this bug. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] chore(🦆): bump python flask-limiter 3.3.1 -> 3.7.0 [superset]
mistercrunch merged PR #28670: URL: https://github.com/apache/superset/pull/28670 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@superset.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] chore: remove ipython from development dependencies [superset]
mistercrunch closed pull request #28703: chore: remove ipython from development dependencies URL: https://github.com/apache/superset/pull/28703 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@superset.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [SPARK-48286] Fix analysis and creation of column with exists default expression [spark]
cloud-fan commented on PR #46594: URL: https://github.com/apache/spark/pull/46594#issuecomment-2130132833 Can you retry the GitHub Actions job? Seems flaky. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] RATIS-2093. Decouple metadata and configuration entries from appendEntries buffer for stateMachineCache' [ratis]
duongkame commented on PR #1096: URL: https://github.com/apache/ratis/pull/1096#issuecomment-2130129609 @szetszwo can you have a look? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@ratis.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] [HUDI-7776] Simplify HoodieStorage instance fetching [hudi]
yihua commented on PR #11259: URL: https://github.com/apache/hudi/pull/11259#issuecomment-2130129532 @hudi-bot run azure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[I] Error on `NULL["field_name"]`: The expression to get an indexed field is only valid for `List`, `Struct`, or `Map` types, got Null [datafusion]
alamb opened a new issue, #10654: URL: https://github.com/apache/datafusion/issues/10654 ### Describe the bug `Expr::field` is broken for `ScalarValue::Null`. After https://github.com/apache/datafusion/pull/10375 merged, `Expr::field` is broken when we try to use it on `ScalarValue::Null` (in addition to https://github.com/apache/datafusion/issues/10565). If you try to use it, you get an error: > The expression to get an indexed field is only valid for `List`, `Struct`, or `Map` types, got Null ### To Reproduce Add this test to expr_fn ```rust #[test] fn test_get_field_null() { evaluate_expr_test( lit(ScalarValue::Null).field("a"), vec![ "++", "| NULL literal", ], ); } ``` Fails with: > called `Result::unwrap()` on an `Err` value: Plan("The expression to get an indexed field is only valid for `List`, `Struct`, or `Map` types, got Null") ### Expected behavior Result should also be a NULL scalar ### Additional context _No response_ -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: github-unsubscr...@datafusion.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
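The expected behavior stated in the issue (field access on a NULL value yields NULL rather than a planning error) is ordinary null propagation. A minimal Python sketch of that rule follows, with a hypothetical `get_field` helper standing in for `Expr::field` and dicts standing in for `Struct`/`Map` values.

```python
def get_field(value, name):
    """Null-propagating field access: NULL["a"] is NULL, matching the
    expected behavior described in the issue. Non-struct-like,
    non-null values still raise, mirroring the error for
    unsupported types."""
    if value is None:
        return None  # NULL propagates instead of raising a plan error
    if isinstance(value, dict):
        return value.get(name)
    raise TypeError(
        f"field access is only valid for struct-like values, "
        f"got {type(value).__name__}")
```

The bug is precisely that the first branch is missing on the Rust side: NULL falls through to the type check and errors.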
Re: [PR] [SPARK-48320][CORE][DOCS] Add external third-party ecosystem access guide to the `scala/java` doc [spark]
gengliangwang commented on code in PR #46634: URL: https://github.com/apache/spark/pull/46634#discussion_r1613860819 ## common/utils/src/main/scala/org/apache/spark/internal/README.md: ## Review Comment: I will wait for @mridulm's response until this weekend. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Aesthetic, Uniformity Changes and Reducing warnings [kafka]
dongnuo123 commented on code in PR #16026: URL: https://github.com/apache/kafka/pull/16026#discussion_r1613860809 ## group-coordinator/src/test/java/org/apache/kafka/coordinator/group/OffsetMetadataManagerTest.java: ## @@ -1013,7 +1013,7 @@ public void testSimpleGroupOffsetCommit() { result.records() ); -// A generic should have been created. +// A Classic group should have been created. Review Comment: nit: `A ClassicGroup` or `A classic group` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Aesthetic, Uniformity Changes and Reducing warnings [kafka]
dongnuo123 commented on code in PR #16026: URL: https://github.com/apache/kafka/pull/16026#discussion_r1613860946 ## group-coordinator/src/main/java/org/apache/kafka/coordinator/group/classic/ClassicGroup.java: ## @@ -869,11 +860,7 @@ public void validateOffsetCommit( } /** - * Validates the OffsetFetch request. - * - * @param memberId The member id. This is not provided for classic groups. - * @param memberEpoch The member epoch for consumer groups. This is not provided for classic groups. Review Comment: Not sure if we want to keep these? `This is not provided for classic groups` is not mentioned in the comment in the Group interface -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] link to trademark responsibilities site [comdev-site]
rbowen merged PR #181: URL: https://github.com/apache/comdev-site/pull/181 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@community.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[PR] link to trademark responsibilities site [comdev-site]
rbowen opened a new pull request, #181: URL: https://github.com/apache/comdev-site/pull/181 (no comment) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@community.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] chore(🦆): bump python marshmallow 3.19.0 -> 3.21.2 [superset]
mistercrunch merged PR #28655: URL: https://github.com/apache/superset/pull/28655 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@superset.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] chore(🦆): bump python bcrypt 4.0.1 -> 4.1.3 [superset]
mistercrunch merged PR #28590: URL: https://github.com/apache/superset/pull/28590 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@superset.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-16832: LeaveGroup API for upgrading ConsumerGroup [kafka]
jeffkbkim commented on code in PR #16057: URL: https://github.com/apache/kafka/pull/16057#discussion_r1613845801 ## group-coordinator/src/test/java/org/apache/kafka/coordinator/group/GroupMetadataManagerTest.java: ## @@ -12801,6 +12801,365 @@ public void testConsumerGroupMemberUsingClassicProtocolFencedWhenJoinTimeout() { ); } +@Test +public void testConsumerGroupMemberUsingClassicProtocolBatchLeaveGroup() { +String groupId = "group-id"; +String memberId1 = Uuid.randomUuid().toString(); +String memberId2 = Uuid.randomUuid().toString(); +String memberId3 = Uuid.randomUuid().toString(); +String instanceId2 = "instance-id-2"; +String instanceId3 = "instance-id-3"; + +Uuid fooTopicId = Uuid.randomUuid(); +String fooTopicName = "foo"; +Uuid barTopicId = Uuid.randomUuid(); +String barTopicName = "bar"; + +List protocol1 = Collections.singletonList( +new ConsumerGroupMemberMetadataValue.ClassicProtocol() +.setName("range") + .setMetadata(Utils.toArray(ConsumerProtocol.serializeSubscription(new ConsumerPartitionAssignor.Subscription( +Arrays.asList(fooTopicName, barTopicName), +null, +Collections.singletonList(new TopicPartition(fooTopicName, 0)) + +); +List protocol2 = Collections.singletonList( +new ConsumerGroupMemberMetadataValue.ClassicProtocol() +.setName("range") + .setMetadata(Utils.toArray(ConsumerProtocol.serializeSubscription(new ConsumerPartitionAssignor.Subscription( +Arrays.asList(fooTopicName, barTopicName), +null, +Collections.singletonList(new TopicPartition(fooTopicName, 1)) + +); + +ConsumerGroupMember member1 = new ConsumerGroupMember.Builder(memberId1) +.setState(MemberState.STABLE) +.setMemberEpoch(10) +.setPreviousMemberEpoch(9) +.setClientId("client") +.setClientHost("localhost/127.0.0.1") +.setSubscribedTopicNames(Arrays.asList("foo", "bar")) +.setServerAssignorName("range") +.setRebalanceTimeoutMs(45000) +.setClassicMemberMetadata( +new ConsumerGroupMemberMetadataValue.ClassicMemberMetadata() +.setSessionTimeoutMs(5000) 
+.setSupportedProtocols(protocol1) +) +.setAssignedPartitions(mkAssignment(mkTopicAssignment(fooTopicId, 0))) +.build(); +ConsumerGroupMember member2 = new ConsumerGroupMember.Builder(memberId2) +.setInstanceId(instanceId2) +.setState(MemberState.STABLE) +.setMemberEpoch(9) +.setPreviousMemberEpoch(8) +.setClientId("client") +.setClientHost("localhost/127.0.0.1") +.setSubscribedTopicNames(Arrays.asList("foo", "bar")) +.setServerAssignorName("range") +.setRebalanceTimeoutMs(45000) +.setClassicMemberMetadata( +new ConsumerGroupMemberMetadataValue.ClassicMemberMetadata() +.setSessionTimeoutMs(5000) +.setSupportedProtocols(protocol2) +) +.setAssignedPartitions(mkAssignment(mkTopicAssignment(fooTopicId, 1))) +.build(); +ConsumerGroupMember member3 = new ConsumerGroupMember.Builder(memberId3) +.setInstanceId(instanceId3) +.setState(MemberState.STABLE) +.setMemberEpoch(10) +.setPreviousMemberEpoch(9) +.setClientId("client") +.setClientHost("localhost/127.0.0.1") +.setSubscribedTopicNames(Arrays.asList("foo", "bar")) +.setServerAssignorName("range") +.setRebalanceTimeoutMs(45000) +.setAssignedPartitions(mkAssignment(mkTopicAssignment(barTopicId, 0))) +.build(); + +// Consumer group with three members. +// Dynamic member 1 uses the classic protocol; +// static member 2 uses the classic protocol; Review Comment: nit: can we change the ";"s to "."s? ## group-coordinator/src/main/java/org/apache/kafka/coordinator/group/GroupMetadataManager.java: ## @@ -4424,14 +4425,128 @@ private ConsumerGroupMember validateConsumerGroupMember( * @param contextThe request context. * @param requestThe actual LeaveGroup request. * + * @return The LeaveGroup response and the records to append. + */ +public CoordinatorResult classicGroupLeave( +RequestContext context, +LeaveGroupRequestData request +) throws UnknownMemberIdException, GroupIdNotFoundException { +Group group = groups.get(request.groupId(),
Re: [PR] chore(🦆): bump python bottleneck 1.3.7 -> 1.3.8 [superset]
mistercrunch merged PR #28657: URL: https://github.com/apache/superset/pull/28657 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@superset.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] chore(🦆): bump python cattrs 23.2.1 -> 23.2.3 [superset]
mistercrunch merged PR #28658: URL: https://github.com/apache/superset/pull/28658 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@superset.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] chore(🦆): bump python typing-extensions 4.11.0 -> 4.12.0 [superset]
mistercrunch merged PR #28659: URL: https://github.com/apache/superset/pull/28659 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: notifications-unsubscr...@superset.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org