Team,

In light of Nandor's finding [1] and an issue Gresock is working on [2],
stemming from a community report last week [3] and carrying into the
weekend, RC3 is cancelled.  I'll get RC4 up as soon as [2] lands.

[1] https://issues.apache.org/jira/browse/NIFI-10574
[2] https://issues.apache.org/jira/browse/NIFI-10572
[3] https://apachenifi.slack.com/archives/C0L9S92JY/p1664285128489789

Thanks

On Sun, Oct 2, 2022 at 4:28 AM Nandor Soma Abonyi
<nsabo...@icloud.com.invalid> wrote:
>
> Hello,
>
> Sorry for ruining the voting party…
>
> -1 (non-binding)
>
> The simplest GenerateFlowFile -> PutAzureDataLakeStorage flow throws an
> exception and fails. I verified that the tests in
> ITPutAzureDataLakeStorage are also failing.
> I think the cause is the Azure SDK upgrade.
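>
> A minimal sketch of the call sequence I believe is failing, based on the
> stack trace below. Only the Azure SDK classes and method signatures are
> real; the helper method and the conditional-header explanation are my
> assumptions:
>
> import com.azure.storage.file.datalake.DataLakeFileClient;
>
> import java.io.ByteArrayInputStream;
> import java.io.IOException;
> import java.io.InputStream;
> import java.nio.charset.StandardCharsets;
>
> class AdlsFlushSketch {
>     // Mirrors what PutAzureDataLakeStorage appears to do per the trace:
>     // create the file, append the content, then flush at the final offset.
>     static void upload(DataLakeFileClient fileClient) throws IOException {
>         byte[] content = "hello".getBytes(StandardCharsets.UTF_8);
>         fileClient.create(); // assuming the target file does not already exist
>         try (InputStream in = new ByteArrayInputStream(content)) {
>             fileClient.append(in, 0, content.length);
>         }
>         // This is the call that now fails with 412 ConditionNotMet; my guess
>         // is that the upgraded SDK sends a conditional header from the
>         // single-argument flush(position) overload.
>         fileClient.flush(content.length);
>         // If so, the two-argument overload would likely avoid it:
>         // fileClient.flush(content.length, true);
>     }
> }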
>
> Stack trace:
> 2022-10-02 12:14:47,389 ERROR [Timer-Driven Process Thread-9] 
> o.a.n.p.a.s.PutAzureDataLakeStorage 
> PutAzureDataLakeStorage[id=23aee885-b94b-3dc3-367c-5506568d5c16] Failed to 
> create file on Azure Data Lake Storage
> com.azure.storage.file.datalake.models.DataLakeStorageException: Status code 
> 412, "{"error":{"code":"ConditionNotMet","message":"The condition specified 
> using HTTP conditional header(s) is not 
> met.\nRequestId:bc39f465-e01f-001b-1b47-d68104000000\nTime:2022-10-02T10:14:47.2730857Z"}}"
>         at 
> java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:627)
>         at 
> com.azure.core.implementation.http.rest.ResponseExceptionConstructorCache.invoke(ResponseExceptionConstructorCache.java:56)
>         at 
> com.azure.core.implementation.http.rest.RestProxyBase.instantiateUnexpectedException(RestProxyBase.java:377)
>         at 
> com.azure.core.implementation.http.rest.AsyncRestProxy.lambda$ensureExpectedStatus$1(AsyncRestProxy.java:117)
>         at 
> reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:113)
>         at 
> reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
>         at 
> reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.request(FluxMapFuseable.java:171)
>         at 
> reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2194)
>         at 
> reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onSubscribe(Operators.java:2068)
>         at 
> reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onSubscribe(FluxMapFuseable.java:96)
>         at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
>         at 
> reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
>         at 
> reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
>         at 
> reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:129)
>         at 
> reactor.core.publisher.FluxHide$SuppressFuseableSubscriber.onNext(FluxHide.java:137)
>         at 
> reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:129)
>         at 
> reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:129)
>         at 
> reactor.core.publisher.FluxHide$SuppressFuseableSubscriber.onNext(FluxHide.java:137)
>         at 
> reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
>         at 
> reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
>         at 
> reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
>         at 
> reactor.core.publisher.FluxDelaySubscription$DelaySubscriptionMainSubscriber.onNext(FluxDelaySubscription.java:189)
>         at 
> reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
>         at 
> reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
>         at 
> reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.onNext(FluxTimeout.java:180)
>         at 
> reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:129)
>         at 
> reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:129)
>         at 
> reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:129)
>         at 
> reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1816)
>         at 
> reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:151)
>         at 
> reactor.core.publisher.SerializedSubscriber.onNext(SerializedSubscriber.java:99)
>         at 
> reactor.core.publisher.FluxRetryWhen$RetryWhenMainSubscriber.onNext(FluxRetryWhen.java:174)
>         at 
> reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
>         at 
> reactor.core.publisher.Operators$MonoInnerProducerBase.complete(Operators.java:2664)
>         at 
> reactor.core.publisher.MonoSingle$SingleSubscriber.onComplete(MonoSingle.java:180)
>         at 
> reactor.core.publisher.MonoFlatMapMany$FlatMapManyInner.onComplete(MonoFlatMapMany.java:260)
>         at 
> reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:144)
>         at 
> reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onComplete(FluxDoFinally.java:128)
>         at 
> reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onComplete(FluxMapFuseable.java:152)
>         at 
> reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1817)
>         at 
> reactor.core.publisher.MonoCollect$CollectSubscriber.onComplete(MonoCollect.java:160)
>         at 
> reactor.core.publisher.FluxHandle$HandleSubscriber.onComplete(FluxHandle.java:220)
>         at 
> reactor.core.publisher.FluxMap$MapConditionalSubscriber.onComplete(FluxMap.java:275)
>         at 
> reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:400)
>         at 
> reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:419)
>         at 
> reactor.netty.channel.ChannelOperations.terminate(ChannelOperations.java:473)
>         at 
> reactor.netty.http.client.HttpClientOperations.onInboundNext(HttpClientOperations.java:703)
>         at 
> reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:93)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>         at 
> io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
>         at 
> io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:336)
>         at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:308)
>         at 
> io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>         at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1373)
>         at 
> io.netty.handler.ssl.SslHandler.decodeNonJdkCompatible(SslHandler.java:1247)
>         at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1287)
>         at 
> io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:519)
>         at 
> io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:458)
>         at 
> io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:280)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
>         at 
> io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
>         at 
> io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
>         at 
> io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
>         at 
> io.netty.channel.kqueue.AbstractKQueueStreamChannel$KQueueStreamUnsafe.readReady(AbstractKQueueStreamChannel.java:544)
>         at 
> io.netty.channel.kqueue.AbstractKQueueChannel$AbstractKQueueUnsafe.readReady(AbstractKQueueChannel.java:383)
>         at 
> io.netty.channel.kqueue.KQueueEventLoop.processReady(KQueueEventLoop.java:213)
>         at 
> io.netty.channel.kqueue.KQueueEventLoop.run(KQueueEventLoop.java:291)
>         at 
> io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
>         at 
> io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>         at 
> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>         at java.lang.Thread.run(Thread.java:750)
>         Suppressed: java.lang.Exception: #block terminated with an error
>                 at 
> reactor.core.publisher.BlockingSingleSubscriber.blockingGet(BlockingSingleSubscriber.java:99)
>                 at reactor.core.publisher.Mono.block(Mono.java:1707)
>                 at 
> com.azure.storage.common.implementation.StorageImplUtils.blockWithOptionalTimeout(StorageImplUtils.java:191)
>                 at 
> com.azure.storage.file.datalake.DataLakeFileClient.flushWithResponse(DataLakeFileClient.java:802)
>                 at 
> com.azure.storage.file.datalake.DataLakeFileClient.flush(DataLakeFileClient.java:754)
>                 at 
> com.azure.storage.file.datalake.DataLakeFileClient.flush(DataLakeFileClient.java:722)
>                 at 
> org.apache.nifi.processors.azure.storage.PutAzureDataLakeStorage.uploadContent(PutAzureDataLakeStorage.java:218)
>                 at 
> org.apache.nifi.processors.azure.storage.PutAzureDataLakeStorage.appendContent(PutAzureDataLakeStorage.java:193)
>                 at 
> org.apache.nifi.processors.azure.storage.PutAzureDataLakeStorage.onTrigger(PutAzureDataLakeStorage.java:145)
>                 at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>                 at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1354)
>                 at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:246)
>                 at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)
>                 at 
> org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
>                 at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>                 at 
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>                 at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>                 at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>                 at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>                 at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>                 ... 1 common frames omitted
>
>
> Br,
> Nandor
>
> > On Sep 29, 2022, at 9:21 PM, Joe Witt <joew...@apache.org> wrote:
> >
> > Hello,
> >
> > I am pleased to be calling this vote for the source release of Apache
> > NiFi 1.18.0.
> >
> > The source zip, including signatures, digests, etc. can be found at:
> > https://repository.apache.org/content/repositories/orgapachenifi-1213
> >
> > The source being voted upon and the convenience binaries can be found at:
> > https://dist.apache.org/repos/dist/dev/nifi/nifi-1.18.0/
> >
> > A helpful reminder on how the release candidate verification process works:
> > https://cwiki.apache.org/confluence/display/NIFI/How+to+help+verify+an+Apache+NiFi+release+candidate
> >
> > The Git tag is nifi-1.18.0-RC3
> > The Git commit ID is 5bc64c812b2c76ee2879d8081ceadf62d5e3c702
> > https://gitbox.apache.org/repos/asf?p=nifi.git;a=commit;h=5bc64c812b2c76ee2879d8081ceadf62d5e3c702
> >
> > Checksums of nifi-1.18.0-source-release.zip:
> > SHA256: bd1b675f17dbf712089a79f7bc043eae2df63bcc2e08b2012a6431641037679f
> > SHA512: 
> > cea43af57089128ff65bb53e76b2fdfa8dec7397e2bf45d41e35b758b731355075839b9c018ee6284cb15e293b105e248d88748148960ad80ae387824139f52b
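> >
> > For anyone verifying the hash locally, a minimal sketch using only the
> > plain JDK (the file path is an assumption; point it at wherever you
> > downloaded the source zip):
> >
> > import java.nio.file.Files;
> > import java.nio.file.Paths;
> > import java.security.MessageDigest;
> >
> > class VerifySourceZip {
> >     public static void main(String[] args) throws Exception {
> >         byte[] data = Files.readAllBytes(Paths.get("nifi-1.18.0-source-release.zip"));
> >         byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
> >         StringBuilder hex = new StringBuilder();
> >         for (byte b : digest) {
> >             hex.append(String.format("%02x", b));
> >         }
> >         // Should match the SHA256 value listed above.
> >         System.out.println(hex);
> >     }
> > }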
> >
> > Release artifacts are signed with the following key:
> > https://people.apache.org/keys/committer/joewitt.asc
> >
> > KEYS file available here:
> > https://dist.apache.org/repos/dist/release/nifi/KEYS
> >
> > 171 issues were closed/resolved for this release:
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12316020&version=12352150
> >
> > Release note highlights can be found here:
> > https://cwiki.apache.org/confluence/display/NIFI/Release+Notes#ReleaseNotes-Version1.18.0
> >
> > The vote will be open for 72 hours.
> > Please download the release candidate and evaluate the necessary items,
> > including checking hashes and signatures, building from source, and
> > testing. Then please vote:
> >
> > [ ] +1 Release this package as nifi-1.18.0
> > [ ] +0 no opinion
> > [ ] -1 Do not release this package because...
>
