Reply: Re: Re: get NoSuchMethodError when using flink flink-sql-connector-hive-2.2.0_2.11-1.14.4.jar

2022-09-01 Thread Liting Liu (litiliu)
Thanks for your suggestion, Xuyang.
I have solved this problem by building a new 
flink-sql-connector-hive-2.2.0_2.11-1.14.4.jar with the conflicting dependency shaded.




From: Xuyang 
Sent: September 1, 2022 0:27
To: user@flink.apache.org 
Cc: luoyu...@alumni.sjtu.edu.cn ; Liting Liu 
(litiliu) 
Subject: Re: Re: get NoSuchMethodError when using flink 
flink-sql-connector-hive-2.2.0_2.11-1.14.4.jar





Re:Re: get NoSuchMethodError when using flink flink-sql-connector-hive-2.2.0_2.11-1.14.4.jar

2022-08-31 Thread Xuyang
Hi, Liu.
It seems that you may be using other jars of your own that contain commons-lang3 
in a different version, which may cause the version conflict.
My suggestion is to shade this dependency in your own jars or in 
'flink-table-planner'; the latter may require you to compile Flink manually.
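For a Maven build, the relocation Xuyang describes could look roughly like the fragment below. This is a sketch only: the `shaded.` target prefix and the plugin placement are illustrative and do not come from this thread.

```xml
<!-- pom.xml (sketch): relocate the bundled commons-lang3 so it cannot
     clash with the copy expected by flink-table-planner. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>org.apache.commons.lang3</pattern>
            <!-- Illustrative target package name. -->
            <shadedPattern>shaded.org.apache.commons.lang3</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

After repackaging, the relocated copy no longer shadows the commons-lang3 version the planner was compiled against.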




--

Best!
Xuyang




On 2022-08-31 20:28:43, "yuxia" wrote:

   




Re: get NoSuchMethodError when using flink flink-sql-connector-hive-2.2.0_2.11-1.14.4.jar

2022-08-31 Thread yuxia
How do you use `flink-sql-connector-hive-2.2.0_2.11-1.14.4.jar`? Do you use the SQL 
client? Do you put it in FLINK_HOME/lib? 
If it's for the SQL client, I think you can remove the jar from FLINK_HOME/lib, 
add it in the Flink SQL client using `add jar 
'flink-sql-connector-hive-2.2.0_2.11-1.14.4.jar'`, and set 
'org.apache.commons.' to parent-first [1]. 

But I think the better way is to relocate the class. 
[1] 
https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#classloader-parent-first-patterns-default
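In configuration form, yuxia's parent-first suggestion might look like the snippet below. Treat it as a sketch: the option name `classloader.parent-first-patterns.additional` is from the linked configuration page, and the value is the package prefix mentioned in this thread.

```yaml
# flink-conf.yaml: always resolve org.apache.commons.* classes through the
# parent (Flink) classloader instead of the connector jar's bundled copy.
classloader.parent-first-patterns.additional: "org.apache.commons."
```

Then, in the SQL client session, the jar stays out of FLINK_HOME/lib and is loaded with `add jar 'flink-sql-connector-hive-2.2.0_2.11-1.14.4.jar'` as described above.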
 

Best regards, 
Yuxia 


From: "Liting Liu (litiliu)" 
To: "User" 
Sent: Wednesday, August 31, 2022 5:14:35 PM 
Subject: get NoSuchMethodError when using flink 
flink-sql-connector-hive-2.2.0_2.11-1.14.4.jar 





get NoSuchMethodError when using flink flink-sql-connector-hive-2.2.0_2.11-1.14.4.jar

2022-08-31 Thread Liting Liu (litiliu)
Hi, I got a NoSuchMethodError when using 
flink-sql-connector-hive-2.2.0_2.11-1.14.4.jar.
Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
Unexpected exception. This is a bug. Please consider filing an issue.
at 
org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
Caused by: java.lang.NoSuchMethodError: 
org.apache.commons.lang3.StringUtils.join([IC)Ljava/lang/String;
at 
org.apache.flink.table.planner.plan.utils.RankProcessStrategy$UpdateFastStrategy.toString(RankProcessStrategy.java:129)
at java.lang.String.valueOf(String.java:2994)
at java.lang.StringBuilder.append(StringBuilder.java:136)
at 
org.apache.flink.table.planner.plan.utils.RelDescriptionWriterImpl.explain(RelDescriptionWriterImpl.java:67)
at 
org.apache.flink.table.planner.plan.utils.RelDescriptionWriterImpl.done(RelDescriptionWriterImpl.java:96)
at 
org.apache.calcite.rel.AbstractRelNode.explain(AbstractRelNode.java:246)
at 
org.apache.flink.table.planner.plan.nodes.FlinkRelNode.getRelDetailedDescription(FlinkRelNode.scala:50)
at 
org.apache.flink.table.planner.plan.nodes.FlinkRelNode.getRelDetailedDescription$(FlinkRelNode.scala:46)
at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamPhysicalRank.getRelDetailedDescription(StreamPhysicalRank.scala:41)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyUpdateKindTraitVisitor.createNewNode(FlinkChangelogModeInferenceProgram.scala:701)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyUpdateKindTraitVisitor.$anonfun$visitRankStrategies$1(FlinkChangelogModeInferenceProgram.scala:738)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyUpdateKindTraitVisitor.$anonfun$visitRankStrategies$1$adapted(FlinkChangelogModeInferenceProgram.scala:730)
at scala.collection.Iterator.foreach(Iterator.scala:937)
at scala.collection.Iterator.foreach$(Iterator.scala:937)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1425)
at scala.collection.IterableLike.foreach(IterableLike.scala:70)
at scala.collection.IterableLike.foreach$(IterableLike.scala:69)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyUpdateKindTraitVisitor.visitRankStrategies(FlinkChangelogModeInferenceProgram.scala:730)
at 
org.apache.flink.table.planner.plan.optimize.program.FlinkChangelogModeInferenceProgram$SatisfyUpdateKindTraitVisitor.visit(FlinkChangelogModeInferenceProgram.scala:489)

It seems there is an embedded StringUtils in 
flink-sql-connector-hive-2.2.0_2.11-1.14.4.jar, which conflicts with another 
version of the class.

What should I do?
Do I have to manually exclude StringUtils.class from 
flink-sql-connector-hive-2.2.0_2.11-1.14.4.jar?
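Before excluding anything, it helps to see which copy of the class actually wins on the classpath. The JDK-only sketch below does that; the class name `WhichJar` and the fallback target are my own additions, and on a Flink SQL client classpath you would pass org.apache.commons.lang3.StringUtils as the argument.

```java
import java.security.CodeSource;

public class WhichJar {
    // Report the code source (jar or directory) a class is loaded from.
    static String locate(String className) {
        try {
            Class<?> c = Class.forName(className);
            CodeSource src = c.getProtectionDomain().getCodeSource();
            // JDK/bootstrap classes report no code source.
            return src == null
                    ? className + " <- bootstrap class loader (JDK)"
                    : className + " <- " + src.getLocation();
        } catch (ClassNotFoundException e) {
            return className + " <- not on the classpath";
        }
    }

    public static void main(String[] args) {
        // Pass org.apache.commons.lang3.StringUtils on a Flink classpath;
        // java.lang.String is the default only so the sketch runs anywhere.
        String target = args.length > 0 ? args[0] : "java.lang.String";
        System.out.println(locate(target));
    }
}
```

If the reported location is the fat connector jar rather than the commons-lang3 jar the planner expects, the conflict described above is confirmed.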



StateFun NoSuchMethodError when deploying to existing Cluster

2022-02-08 Thread Christopher Gustafson
Hi!


I am having continued issues running a StateFun job on an existing Flink 
cluster. My Flink cluster is using Flink version 1.14.3, and the StateFun job is 
using version 3.2.0 of the Java SDK and the StateFun distribution. I get the 
following error:


Caused by: java.lang.NoSuchMethodError: 
org.apache.flink.statefun.sdk.reqreply.generated.TypedValue$Builder.setValue(
Lcom/google/protobuf/ByteString;)Lorg/apache/flink/statefun/sdk/reqreply/generated/TypedValue$Builder;


I found the same error in an earlier thread, which suggested removing the Java 
SDK as a dependency, but I can't do that since I am using it in my StateFun job. 
The error makes sense, since when I look through the StateFun source code I 
can't find any package called org.apache.flink.statefun.sdk.reqreply.generated. 
Is there some step I am missing in configuring Flink or the job to be able to 
run on an existing cluster without the Docker method?


Best,

Christopher


hbase NoSuchMethodError: org.apache.hadoop.hbase.client.HTable.getTableName()[B

2022-01-26 Thread 潘明文
Hi,


The hbase-client package is 2.1.0, and Flink is 1.12.4.
The HBase code (a class extending TableInputFormat<...>) is as follows:

try {
    connection = ConnectionFactory.createConnection(hbaseConf);
    // Table table = connection.getTable(TableName.valueOf(tableName));
    table = (HTable) connection.getTable(TableName.valueOf(tableName));
} catch (IOException e) {
    logger.error("HBase connection error", e.getCause());
    System.out.println("--");
}
System.out.println("--aaa");
scan = new Scan().addFamily(Bytes.toBytes(family));
scan.withStartRow(startRow.getBytes());
scan.withStopRow(endRow.getBytes());
System.out.println("--");

The error is as follows:
 Exception in thread "main" org.apache.flink.util.FlinkException: Failed to 
execute job 'Flink Streaming Job'.
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1918)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1796)
at 
org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:69)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1782)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1765)
at com.example.app.hbasesource.main(hbasesource.java:25)
Caused by: java.lang.RuntimeException: 
org.apache.flink.runtime.client.JobInitializationException: Could not 
instantiate JobManager.
at org.apache.flink.util.ExceptionUtils.rethrow(ExceptionUtils.java:316)
at 
org.apache.flink.util.function.FunctionUtils.lambda$uncheckedFunction$2(FunctionUtils.java:75)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
at 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
at 
java.util.concurrent.CompletableFuture$Completion.exec(CompletableFuture.java:443)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:157)
Caused by: org.apache.flink.runtime.client.JobInitializationException: Could 
not instantiate JobManager.
at 
org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:494)
at 
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.runtime.JobException: Creating the input splits 
caused an error: org.apache.hadoop.hbase.client.HTable.getTableName()[B
at 
org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:260)
at 
org.apache.flink.runtime.executiongraph.ExecutionGraph.attachJobGraph(ExecutionGraph.java:866)
at 
org.apache.flink.runtime.executiongraph.ExecutionGraphBuilder.buildGraph(ExecutionGraphBuilder.java:257)
at 
org.apache.flink.runtime.scheduler.SchedulerBase.createExecutionGraph(SchedulerBase.java:322)
at 
org.apache.flink.runtime.scheduler.SchedulerBase.createAndRestoreExecutionGraph(SchedulerBase.java:276)
at 
org.apache.flink.runtime.scheduler.SchedulerBase.<init>(SchedulerBase.java:249)
at 
org.apache.flink.runtime.scheduler.DefaultScheduler.<init>(DefaultScheduler.java:133)
at 
org.apache.flink.runtime.scheduler.DefaultSchedulerFactory.createInstance(DefaultSchedulerFactory.java:111)
at 
org.apache.flink.runtime.jobmaster.JobMaster.createScheduler(JobMaster.java:342)
at org.apache.flink.runtime.jobmaster.JobMaster.<init>(JobMaster.java:327)
at 
org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:95)
at 
org.apache.flink.runtime.jobmaster.factories.DefaultJobMasterServiceFactory.createJobMasterService(DefaultJobMasterServiceFactory.java:39)
at 
org.apache.flink.runtime.jobmaster.JobManagerRunnerImpl.<init>(JobManagerRunnerImpl.java:162)
at 
org.apache.flink.runtime.dispatcher.DefaultJobManagerRunnerFactory.createJobManagerRunner(DefaultJobManagerRunnerFactory.java:86)
at 
org.apache.flink.runtime.dispatcher.Dispatcher.lambda$createJobManagerRunner$5(Dispatcher.java:478)
... 4 more
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.client.HTable.getTableName()[B
at 
org.apache.flink.addons.hbase.AbstractTableInputFormat.createInputSplits(AbstractTableInputFormat.java:232)
at 
org.apache.flink.addons.hbase.AbstractTableInputFormat.createInputSplits(AbstractTableInputFormat.java:44)
at 
org.apache.flink.runtime.executiongraph.ExecutionJobVertex.<init>(ExecutionJobVertex.java:247)
... 18 more
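The root cause above is a method that existed at compile time but not at run time: HTable.getTableName() appears to have been removed from the HBase 2.x client, while the Flink addon in the trace was built against the 1.x API. A JDK-only sketch of probing for such a method before it fails is below; the class name `MethodProbe` is my own, and on the real classpath you would probe org.apache.hadoop.hbase.client.HTable for getTableName.

```java
public class MethodProbe {
    // True if the named class declares a public no-arg method with this name.
    static boolean hasMethod(String className, String methodName) {
        try {
            Class.forName(className).getMethod(methodName);
            return true;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Stand-ins so the sketch runs without hbase-client on the classpath:
        System.out.println(hasMethod("java.lang.String", "length"));       // prints true
        System.out.println(hasMethod("java.lang.String", "getTableName")); // prints false
    }
}
```

Running this probe against HTable with hbase-client 2.1.0 on the classpath would return false for getTableName, matching the NoSuchMethodError in the trace.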

Re: NoSuchMethodError - getColumnIndexTruncateLength after upgrading Flink from 1.11.2 to 1.12.1

2021-06-30 Thread Matthias Pohl
Depending on the build system used, you could check the dependency tree;
e.g., for Maven it would be `mvn dependency:tree
-Dincludes=org.apache.parquet`.

Matthias

On Wed, Jun 30, 2021 at 8:40 AM Thomas Wang  wrote:

> Thanks Matthias. Could you advise how I can confirm this in my environment?
>
> Thomas
>
> On Tue, Jun 29, 2021 at 1:41 AM Matthias Pohl 
> wrote:
>
>> Hi Rommel, Hi Thomas,
>> Apache Parquet was bumped from 1.10.0 to 1.11.1 for Flink 1.12 in
>> FLINK-19137 [1]. The error you're seeing looks like some dependency issue
>> where you have a version other than 1.11.1
>> of org.apache.parquet:parquet-column:jar on your classpath?
>>
>> Matthias
>>
>> [1] https://issues.apache.org/jira/browse/FLINK-19137
>>
>> On Wed, Jun 23, 2021 at 1:50 AM Rommel Holmes 
>> wrote:
>>
>>> To give more information
>>>
>>> parquet-avro version 1.10.0 with Flink 1.11.2 and it was running fine.
>>>
>>> now Flink 1.12.1, the error msg shows up.
>>>
>>> Thank you for help.
>>>
>>> Rommel
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Jun 22, 2021 at 2:41 PM Thomas Wang  wrote:
>>>
 Hi,

 We recently upgraded our Flink version from 1.11.2 to 1.12.1 and one of
 our jobs that used to run ok, now sees the following error. This error
 doesn't seem to be related to any user code. Can someone help me take a
 look?

 Thanks.

 Thomas

 java.lang.NoSuchMethodError:
 org.apache.parquet.column.ParquetProperties.getColumnIndexTruncateLength()I
 at
 org.apache.parquet.hadoop.ParquetWriter.<init>(ParquetWriter.java:282)
 ~[?:?]
 at
 org.apache.parquet.hadoop.ParquetWriter$Builder.build(ParquetWriter.java:564)
 ~[?:?]
 at
 org.apache.flink.formats.parquet.avro.ParquetAvroWriters.createAvroParquetWriter(ParquetAvroWriters.java:90)
 ~[?:?]
 at
 org.apache.flink.formats.parquet.avro.ParquetAvroWriters.lambda$forGenericRecord$abd75386$1(ParquetAvroWriters.java:65)
 ~[?:?]
 at
 org.apache.flink.formats.parquet.ParquetWriterFactory.create(ParquetWriterFactory.java:56)
 ~[?:?]
 at
 org.apache.flink.streaming.api.functions.sink.filesystem.BulkBucketWriter.openNew(BulkBucketWriter.java:75)
 ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.api.functions.sink.filesystem.OutputStreamBasedPartFileWriter$OutputStreamBasedBucketWriter.openNewInProgressFile(OutputStreamBasedPartFileWriter.java:90)
 ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.api.functions.sink.filesystem.BulkBucketWriter.openNewInProgressFile(BulkBucketWriter.java:36)
 ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.rollPartFile(Bucket.java:243)
 ~[flink-dist_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:220)
 ~[flink-dist_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:305)
 ~[flink-dist_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.onElement(StreamingFileSinkHelper.java:103)
 ~[flink-dist_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.invoke(StreamingFileSink.java:492)
 ~[flink-dist_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:54)
 ~[flink-dist_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71)
 ~[flink-dist_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46)
 ~[flink-dist_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26)
 ~[flink-dist_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.runtime.tasks.BroadcastingOutputCollector.collect(BroadcastingOutputCollector.java:75)
 ~[flink-dist_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.runtime.tasks.BroadcastingOutputCollector.collect(BroadcastingOutputCollector.java:32)
 ~[flink-dist_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50)
 ~[flink-dist_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28)
 ~[flink-dist_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
 ~[flink-dist_2.12-1.12.1.jar:1.12.1]
 at
 org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71)
 

Re: NoSuchMethodError - getColumnIndexTruncateLength after upgrading Flink from 1.11.2 to 1.12.1

2021-06-30 Thread Thomas Wang
Thanks Matthias. Could you advise how I can confirm this in my environment?

Thomas


Re: NoSuchMethodError - getColumnIndexTruncateLength after upgrading Flink from 1.11.2 to 1.12.1

2021-06-29 Thread Matthias Pohl
Hi Rommel, Hi Thomas,
Apache Parquet was bumped from 1.10.0 to 1.11.1 for Flink 1.12 in
FLINK-19137 [1]. The error you're seeing looks like some dependency issue
where you have a version other than 1.11.1
of org.apache.parquet:parquet-column:jar on your classpath?

Matthias

[1] https://issues.apache.org/jira/browse/FLINK-19137
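If a second parquet-column version is indeed on the classpath, one conventional Maven fix is to pin the version. This is a sketch, not taken from the thread; 1.11.1 is the version FLINK-19137 bumped Flink 1.12 to.

```xml
<!-- pom.xml (sketch): force a single parquet-column version. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.parquet</groupId>
      <artifactId>parquet-column</artifactId>
      <version>1.11.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

With the pin in place, `mvn dependency:tree -Dincludes=org.apache.parquet` should show only 1.11.1 everywhere.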

On Wed, Jun 23, 2021 at 1:50 AM Rommel Holmes 
wrote:

> To give more information:
>
> We were using parquet-avro 1.10.0 with Flink 1.11.2 and it was running fine.
>
> Now, with Flink 1.12.1, this error message shows up.
>
> Thank you for your help.
>
> Rommel
>
>
>
>
>
> On Tue, Jun 22, 2021 at 2:41 PM Thomas Wang  wrote:
>
>> Hi,
>>
>> We recently upgraded our Flink version from 1.11.2 to 1.12.1 and one of
>> our jobs that used to run ok, now sees the following error. This error
>> doesn't seem to be related to any user code. Can someone help me take a
>> look?
>>
>> Thanks.
>>
>> Thomas
>>
>> java.lang.NoSuchMethodError:
>> org.apache.parquet.column.ParquetProperties.getColumnIndexTruncateLength()I
>> at org.apache.parquet.hadoop.ParquetWriter.&lt;init&gt;(ParquetWriter.java:282)
>> ~[?:?]
>> at
>> org.apache.parquet.hadoop.ParquetWriter$Builder.build(ParquetWriter.java:564)
>> ~[?:?]
>> at
>> org.apache.flink.formats.parquet.avro.ParquetAvroWriters.createAvroParquetWriter(ParquetAvroWriters.java:90)
>> ~[?:?]
>> at
>> org.apache.flink.formats.parquet.avro.ParquetAvroWriters.lambda$forGenericRecord$abd75386$1(ParquetAvroWriters.java:65)
>> ~[?:?]
>> at
>> org.apache.flink.formats.parquet.ParquetWriterFactory.create(ParquetWriterFactory.java:56)
>> ~[?:?]
>> at
>> org.apache.flink.streaming.api.functions.sink.filesystem.BulkBucketWriter.openNew(BulkBucketWriter.java:75)
>> ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.api.functions.sink.filesystem.OutputStreamBasedPartFileWriter$OutputStreamBasedBucketWriter.openNewInProgressFile(OutputStreamBasedPartFileWriter.java:90)
>> ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.api.functions.sink.filesystem.BulkBucketWriter.openNewInProgressFile(BulkBucketWriter.java:36)
>> ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.rollPartFile(Bucket.java:243)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:220)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:305)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.onElement(StreamingFileSinkHelper.java:103)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.invoke(StreamingFileSink.java:492)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:54)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.runtime.tasks.BroadcastingOutputCollector.collect(BroadcastingOutputCollector.java:75)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.runtime.tasks.BroadcastingOutputCollector.collect(BroadcastingOutputCollector.java:32)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28)
>> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>> at
>> 

Re: NoSuchMethodError - getColumnIndexTruncateLength after upgrading Flink from 1.11.2 to 1.12.1

2021-06-22 Thread Rommel Holmes
To give more information:

We were using parquet-avro 1.10.0 with Flink 1.11.2 and it was running fine.

Now, with Flink 1.12.1, this error message shows up.

Thank you for your help.

Rommel





On Tue, Jun 22, 2021 at 2:41 PM Thomas Wang  wrote:

> Hi,
>
> We recently upgraded our Flink version from 1.11.2 to 1.12.1 and one of
> our jobs that used to run ok, now sees the following error. This error
> doesn't seem to be related to any user code. Can someone help me take a
> look?
>
> Thanks.
>
> Thomas
>
> java.lang.NoSuchMethodError:
> org.apache.parquet.column.ParquetProperties.getColumnIndexTruncateLength()I
> at org.apache.parquet.hadoop.ParquetWriter.&lt;init&gt;(ParquetWriter.java:282)
> ~[?:?]
> at
> org.apache.parquet.hadoop.ParquetWriter$Builder.build(ParquetWriter.java:564)
> ~[?:?]
> at
> org.apache.flink.formats.parquet.avro.ParquetAvroWriters.createAvroParquetWriter(ParquetAvroWriters.java:90)
> ~[?:?]
> at
> org.apache.flink.formats.parquet.avro.ParquetAvroWriters.lambda$forGenericRecord$abd75386$1(ParquetAvroWriters.java:65)
> ~[?:?]
> at
> org.apache.flink.formats.parquet.ParquetWriterFactory.create(ParquetWriterFactory.java:56)
> ~[?:?]
> at
> org.apache.flink.streaming.api.functions.sink.filesystem.BulkBucketWriter.openNew(BulkBucketWriter.java:75)
> ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.api.functions.sink.filesystem.OutputStreamBasedPartFileWriter$OutputStreamBasedBucketWriter.openNewInProgressFile(OutputStreamBasedPartFileWriter.java:90)
> ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.api.functions.sink.filesystem.BulkBucketWriter.openNewInProgressFile(BulkBucketWriter.java:36)
> ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.rollPartFile(Bucket.java:243)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:220)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:305)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.onElement(StreamingFileSinkHelper.java:103)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.invoke(StreamingFileSink.java:492)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:54)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.runtime.tasks.BroadcastingOutputCollector.collect(BroadcastingOutputCollector.java:75)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.runtime.tasks.BroadcastingOutputCollector.collect(BroadcastingOutputCollector.java:32)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.runtime.operators.TimestampsAndWatermarksOperator.processElement(TimestampsAndWatermarksOperator.java:104)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46)
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at
> 

NoSuchMethodError - getColumnIndexTruncateLength after upgrading Flink from 1.11.2 to 1.12.1

2021-06-22 Thread Thomas Wang
Hi,

We recently upgraded our Flink version from 1.11.2 to 1.12.1 and one of our
jobs that used to run ok, now sees the following error. This error doesn't
seem to be related to any user code. Can someone help me take a look?

Thanks.

Thomas

java.lang.NoSuchMethodError:
org.apache.parquet.column.ParquetProperties.getColumnIndexTruncateLength()I
at org.apache.parquet.hadoop.ParquetWriter.&lt;init&gt;(ParquetWriter.java:282)
~[?:?]
at
org.apache.parquet.hadoop.ParquetWriter$Builder.build(ParquetWriter.java:564)
~[?:?]
at
org.apache.flink.formats.parquet.avro.ParquetAvroWriters.createAvroParquetWriter(ParquetAvroWriters.java:90)
~[?:?]
at
org.apache.flink.formats.parquet.avro.ParquetAvroWriters.lambda$forGenericRecord$abd75386$1(ParquetAvroWriters.java:65)
~[?:?]
at
org.apache.flink.formats.parquet.ParquetWriterFactory.create(ParquetWriterFactory.java:56)
~[?:?]
at
org.apache.flink.streaming.api.functions.sink.filesystem.BulkBucketWriter.openNew(BulkBucketWriter.java:75)
~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.functions.sink.filesystem.OutputStreamBasedPartFileWriter$OutputStreamBasedBucketWriter.openNewInProgressFile(OutputStreamBasedPartFileWriter.java:90)
~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.functions.sink.filesystem.BulkBucketWriter.openNewInProgressFile(BulkBucketWriter.java:36)
~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.rollPartFile(Bucket.java:243)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.functions.sink.filesystem.Bucket.write(Bucket.java:220)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.onElement(Buckets.java:305)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSinkHelper.onElement(StreamingFileSinkHelper.java:103)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.invoke(StreamingFileSink.java:492)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:54)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.runtime.tasks.BroadcastingOutputCollector.collect(BroadcastingOutputCollector.java:75)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.runtime.tasks.BroadcastingOutputCollector.collect(BroadcastingOutputCollector.java:32)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:38)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.runtime.operators.TimestampsAndWatermarksOperator.processElement(TimestampsAndWatermarksOperator.java:104)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:71)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at
org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28)
~[flink-dist_2.12-1.12.1.jar:1.12.1]
at

Scala REPL in YARN mode reports NoSuchMethodError setPrintSpaceAfterFullCompletion

2021-01-26 Thread macia kk
 bin/start-scala-shell.sh  yarn


scala> Exception in thread "main" java.lang.NoSuchMethodError:
jline.console.completer.CandidateListCompletionHandler.setPrintSpaceAfterFullCompletion(Z)V
at
scala.tools.nsc.interpreter.jline.JLineConsoleReader.initCompletion(JLineReader.scala:139)
at
scala.tools.nsc.interpreter.jline.InteractiveReader.postInit(JLineReader.scala:54)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$25.apply(ILoop.scala:899)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$25.apply(ILoop.scala:897)
at
scala.tools.nsc.interpreter.SplashReader.postInit(InteractiveReader.scala:130)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$scala$tools$nsc$interpreter$ILoop$$anonfun$$loopPostInit$1$1.apply$mcV$sp(ILoop.scala:926)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$scala$tools$nsc$interpreter$ILoop$$anonfun$$loopPostInit$1$1.apply(ILoop.scala:908)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$scala$tools$nsc$interpreter$ILoop$$anonfun$$loopPostInit$1$1.apply(ILoop.scala:908)
at scala.tools.nsc.interpreter.ILoop$$anonfun$mumly$1.apply(ILoop.scala:189)
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
at scala.tools.nsc.interpreter.ILoop.mumly(ILoop.scala:186)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(ILoop.scala:979)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:990)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:891)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:891)
at
scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:891)
at org.apache.flink.api.scala.FlinkShell$.startShell(FlinkShell.scala:184)
at org.apache.flink.api.scala.FlinkShell$.main(FlinkShell.scala:131)
at org.apache.flink.api.scala.FlinkShell.main(FlinkShell.scala)
Exception in thread "Thread-2" java.lang.InterruptedException
at java.util.concurrent.SynchronousQueue.put(SynchronousQueue.java:879)
at scala.tools.nsc.interpreter.SplashLoop.run(InteractiveReader.scala:77)
at java.lang.Thread.run(Thread.java:748)


Re: Re:Re: Scala REPL in YARN mode reports NoSuchMethodError

2020-12-16 Thread Jacob
It looks like a conflict between jline and some Scala packages. I don't know Scala well, but you could try the following:

1. In pom.xml (or wherever relevant), exclude the transitive jline dependency from the Hadoop dependencies (and from any other dependency that pulls in jline), then add an explicit jline dependency.
The problem I ran into was a version conflict on hadoop-common: one of my dependencies pulled in hadoop-common, so I excluded hadoop-common from that dependency and added it back explicitly, which solved the problem.

2. Change (upgrade) the Scala version.



Thanks!
Jacob



--
Sent from: http://apache-flink.147419.n8.nabble.com/
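Jacob's first suggestion can be sketched in a Maven pom.xml like this; the hadoop-common coordinates and the jline version are illustrative stand-ins, not taken from the thread:

```xml
<!-- Illustrative sketch: exclude the transitive jline that a Hadoop dependency drags in... -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.7.5</version>
  <exclusions>
    <exclusion>
      <groupId>jline</groupId>
      <artifactId>jline</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- ...then pin jline explicitly at the version your tooling expects. -->
<dependency>
  <groupId>jline</groupId>
  <artifactId>jline</artifactId>
  <version>2.14.6</version>
</dependency>
```

The same exclusion pattern applies to any other dependency that `mvn dependency:tree` shows pulling in jline.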

Re:Re: Scala REPL in YARN mode reports NoSuchMethodError

2020-12-16 Thread 卢国庆
Thanks for the reply.

I've already integrated Hadoop via `hadoop classpath`. The current problem occurs in a CDH 5.16.2 + Flink environment.

Adding the information from the missing screenshot:

Running the Scala REPL in YARN mode reports a NoSuchMethodError; the detailed error is as follows:
$ ./bin/start-scala-shell.sh yarn
scala> Exception in thread "main" java.lang.NoSuchMethodError: 
jline.console.completer.CandidateListCompletionHandler.setPrintSpaceAfterFullCompletion(Z)V
at 
scala.tools.nsc.interpreter.jline.JLineConsoleReader.initCompletion(JLineReader.scala:139)
at 
scala.tools.nsc.interpreter.jline.InteractiveReader.postInit(JLineReader.scala:54)
at 
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$25.apply(ILoop.scala:899)
at 
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$25.apply(ILoop.scala:897)
at 
scala.tools.nsc.interpreter.SplashReader.postInit(InteractiveReader.scala:130)
at 
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$scala$tools$nsc$interpreter$ILoop$$anonfun$$loopPostInit$1$1.apply$mcV$sp(ILoop.scala:926)
at 
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$scala$tools$nsc$interpreter$ILoop$$anonfun$$loopPostInit$1$1.apply(ILoop.scala:908)
at 
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$scala$tools$nsc$interpreter$ILoop$$anonfun$$loopPostInit$1$1.apply(ILoop.scala:908)
at scala.tools.nsc.interpreter.ILoop$$anonfun$mumly$1.apply(ILoop.scala:189)
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
at scala.tools.nsc.interpreter.ILoop.mumly(ILoop.scala:186)
at 
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(ILoop.scala:979)
at 
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:990)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:891)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:891)
at 
scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:891)
at org.apache.flink.api.scala.FlinkShell$.startShell(FlinkShell.scala:187)
at org.apache.flink.api.scala.FlinkShell$.main(FlinkShell.scala:131)
at org.apache.flink.api.scala.FlinkShell.main(FlinkShell.scala)







Environment:
CDH 5.16.2
The problem reproduces on both Flink 1.10.2 and 1.11.2.

Analysis so far:
Inspecting the loaded classes with Arthas shows that the CDH dependency is the one being loaded:
$ sc *CandidateListCompletionHandler
jline.console.completer.CandidateListCompletionHandler
Affect(row-cnt:1) cost in 113 ms.
[arthas@23856]$ sc -d jline.console.completer.CandidateListCompletionHandler
 class-info        jline.console.completer.CandidateListCompletionHandler
 code-source       /opt/cloudera/parcels/CDH-5.16.2-1.cdh5.16.2.p0.8/jars/jline-2.11.jar
 name              jline.console.completer.CandidateListCompletionHandler
 isInterface       false
 isAnnotation      false
 isEnum            false
 isAnonymousClass  false
 isArray           false
 isLocalClass      false
 isMemberClass     false
 isPrimitiv

Re: Scala REPL in YARN mode reports NoSuchMethodError

2020-12-16 Thread Jacob
Hi,
Your screenshot doesn't seem to have uploaded. From your description it's probably a NoSuchMethodError of some kind. I ran into a similar problem a few days ago while upgrading the Flink version; the fix was to import the Hadoop classpath (export HADOOP_CLASSPATH=`hadoop classpath`). If that doesn't solve your problem, try putting flink-shaded-hadoop-2-uber*-*.jar under flink/lib.




Thanks!
Jacob



--
Sent from: http://apache-flink.147419.n8.nabble.com/

Scala REPL in YARN mode reports NoSuchMethodError

2020-12-16 Thread 卢国庆
Hi all, could someone help me look into a problem?

Running the Scala REPL in YARN mode reports a NoSuchMethodError; screenshot below:
$ ./bin/start-scala-shell.sh yarn   


Environment:
CDH 5.16.2
The problem reproduces on both Flink 1.10.2 and 1.11.2.

Analysis so far:
Inspecting the loaded classes with Arthas shows that the CDH dependency is the one being loaded.



After deleting the CDH dependency jline-2.11.jar, the NoSuchMethodError no longer occurs. However, Arthas then no longer finds
jline.console.completer.CandidateListCompletionHandler, but instead finds
scala.tools.jline_embedded.console.completer.CandidateListCompletionHandler; see the screenshot below.





Re: Flink SQL Tumble window function reports a NoSuchMethodError functions/AggregateFunction exception

2020-12-03 Thread JasonLee
Hi,

From the error message this looks like a jar conflict. Could you post the relevant dependencies?



-
Best Wishes
JasonLee
--
Sent from: http://apache-flink.147419.n8.nabble.com/


Re: Flink SQL Tumble window function reports a NoSuchMethodError functions/AggregateFunction exception

2020-12-02 Thread hailongwang
Hi,
   Which Flink version are you on? From the error it looks like you are using the legacy planner; try the Blink planner instead.

Best,
Hailong
On 2020-12-03 10:02:08, "18293503878" <18293503...@163.com> wrote:
>When using Flink SQL's tumble window function and converting the result table to a stream, has anyone hit the following exception?
>Exception in thread "main" java.lang.NoSuchMethodError: 
>org.apache.flink.streaming.api.datastream.WindowedStream.aggregate(Lorg/apache/flink/api/common/functions/AggregateFunction;Lorg/apache/flink/streaming/api/functions/windowing/WindowFunction;Lorg/apache/flink/api/common/typeinfo/TypeInformation;Lorg/apache/flink/api/common/typeinfo/TypeInformation;Lorg/apache/flink/api/common/typeinfo/TypeInformation;)Lorg/apache/flink/streaming/api/datastream/SingleOutputStreamOperator;
>at 
>org.apache.flink.table.plan.nodes.datastream.DataStreamGroupWindowAggregate.translateToPlan(DataStreamGroupWindowAggregate.scala:214)


Re: NoSuchMethodError: org.apache.flink.fs.s3.common.AbstractS3FileSystemFactory.&lt;init&gt;(Ljava/lang/String;Lorg/apache/flink/fs/s3presto/common/HadoopConfigLoader

2020-06-09 Thread Guowei Ma
Hi,
In 1.10 there is no
'Lorg/apache/flink/fs/s3presto/common/HadoopConfigLoader' . So I think
there might be a legacy S3FileSystemFactory in your jar. You could check
whether there is a 'org.apache.flink.fs.s3presto.common.HadoopConfigLoader'
in your jar or not. If there is one you could remove the
old S3FileSystemFactory and try again.

Btw, note that you should not copy both flink-s3-fs-hadoop-1.10.0.jar
and flink-s3-fs-presto-1.10.0.jar into the same plugin dir. [1]

[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/filesystems/s3.html#hadooppresto-s3-file-systems-plugins
Best,
Guowei
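The plugin layout implied by [1] can be sketched as follows. The jar names match the ones discussed above; the scratch directory stands in for a real /opt/flink install, and the plugin subdirectory names are illustrative (Flink only requires that each plugin get its own subdirectory under plugins/):

```shell
# Sketch: give each S3 filesystem plugin its OWN directory under plugins/
# rather than a shared one (Flink 1.10+ plugin mechanism).
FLINK_HOME=$(mktemp -d)
mkdir -p "$FLINK_HOME/plugins/s3-fs-hadoop" "$FLINK_HOME/plugins/s3-fs-presto"
# With a real distribution you would then copy the shipped jars across:
#   cp "$FLINK_HOME"/opt/flink-s3-fs-hadoop-1.10.0.jar "$FLINK_HOME"/plugins/s3-fs-hadoop/
#   cp "$FLINK_HOME"/opt/flink-s3-fs-presto-1.10.0.jar "$FLINK_HOME"/plugins/s3-fs-presto/
ls "$FLINK_HOME/plugins"
```

Isolating each plugin in its own directory gives each one its own class loader, which is exactly what avoids this kind of factory/class clash.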


Claude Murad  wrote on Wednesday, June 10, 2020 at 4:06 AM:

> Hello,
>
> I'm trying to upgrade Flink from 1.7 to 1.10 retaining our Hadoop
> integration.  I copied the jar
> file flink-shaded-hadoop-2-uber-2.7.5-10.0.jar into /opt/flink/lib.  I also
> copied the files flink-s3-fs-hadoop-1.10.0.jar and
> flink-s3-fs-presto-1.10.0.jar into /opt/flink/plugins/s3 folder.  The error
> below occurs after deploying and launching docker image 1.10.0-scala_2.11.
> I saw that S3FileSystemFactory.java is now importing
> org.apache.flink.runtime.util.HadoopConfigLoader instead of
> org.apache.flink.fs.s3.common.HadoopConfigLoader which is how it was
> before.  I see the jar file flink-dist_2.11-1.10.0.jar contains
> the org.apache.flink.runtime.util.HadoopConfigLoader and it is under the
> folder /opt/flink/lib.  Any ideas on how to resolve this error?  Any help
> would be greatly appreciated, thank you.
>
>
> ERROR org.apache.flink.core.fs.FileSystem   -
> Failed to load a file system via services
> java.util.ServiceConfigurationError:
> org.apache.flink.core.fs.FileSystemFactory: Provider
> org.apache.flink.fs.s3presto.S3PFileSystemFactory could not be instantiated
> at java.util.ServiceLoader.fail(ServiceLoader.java:232)
> at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
> at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
> at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
> at
> org.apache.flink.core.fs.FileSystem.addAllFactoriesToList(FileSystem.java:1024)
> at
> org.apache.flink.core.fs.FileSystem.loadFileSystemFactories(FileSystem.java:1006)
> at org.apache.flink.core.fs.FileSystem.initialize(FileSystem.java:303)
> at
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.configureFileSystems(ClusterEntrypoint.java:194)
> at
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:164)
> at
> org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:518)
> at
> org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:64)
> Caused by: java.lang.NoSuchMethodError:
> org.apache.flink.fs.s3.common.AbstractS3FileSystemFactory.&lt;init&gt;(Ljava/lang/String;Lorg/apache/flink/fs/s3presto/common/HadoopConfigLoader;)V
> at
> org.apache.flink.fs.s3presto.S3FileSystemFactory.&lt;init&gt;(S3FileSystemFactory.java:50)
> at
> org.apache.flink.fs.s3presto.S3PFileSystemFactory.&lt;init&gt;(S3PFileSystemFactory.java:24)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
>


NoSuchMethodError: org.apache.flink.fs.s3.common.AbstractS3FileSystemFactory.&lt;init&gt;(Ljava/lang/String;Lorg/apache/flink/fs/s3presto/common/HadoopConfigLoader

2020-06-09 Thread Claude Murad
Hello,

I'm trying to upgrade Flink from 1.7 to 1.10 retaining our Hadoop
integration.  I copied the jar
file flink-shaded-hadoop-2-uber-2.7.5-10.0.jar into /opt/flink/lib.  I also
copied the files flink-s3-fs-hadoop-1.10.0.jar and
flink-s3-fs-presto-1.10.0.jar into /opt/flink/plugins/s3 folder.  The error
below occurs after deploying and launching docker image 1.10.0-scala_2.11.
I saw that S3FileSystemFactory.java is now importing
org.apache.flink.runtime.util.HadoopConfigLoader instead of
org.apache.flink.fs.s3.common.HadoopConfigLoader which is how it was
before.  I see the jar file flink-dist_2.11-1.10.0.jar contains
the org.apache.flink.runtime.util.HadoopConfigLoader and it is under the
folder /opt/flink/lib.  Any ideas on how to resolve this error?  Any help
would be greatly appreciated, thank you.


ERROR org.apache.flink.core.fs.FileSystem   -
Failed to load a file system via services
java.util.ServiceConfigurationError:
org.apache.flink.core.fs.FileSystemFactory: Provider
org.apache.flink.fs.s3presto.S3PFileSystemFactory could not be instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:232)
at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at
org.apache.flink.core.fs.FileSystem.addAllFactoriesToList(FileSystem.java:1024)
at
org.apache.flink.core.fs.FileSystem.loadFileSystemFactories(FileSystem.java:1006)
at org.apache.flink.core.fs.FileSystem.initialize(FileSystem.java:303)
at
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.configureFileSystems(ClusterEntrypoint.java:194)
at
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:164)
at
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:518)
at
org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:64)
Caused by: java.lang.NoSuchMethodError:
org.apache.flink.fs.s3.common.AbstractS3FileSystemFactory.&lt;init&gt;(Ljava/lang/String;Lorg/apache/flink/fs/s3presto/common/HadoopConfigLoader;)V
at
org.apache.flink.fs.s3presto.S3FileSystemFactory.&lt;init&gt;(S3FileSystemFactory.java:50)
at
org.apache.flink.fs.s3presto.S3PFileSystemFactory.&lt;init&gt;(S3PFileSystemFactory.java:24)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)


Re: Flink 1.8: Using the RocksDB state backend causes "NoSuchMethodError" when trying to stop a pipeline

2019-08-14 Thread Kaymak, Tobias
You are right, my bad. We had a company-internal Java dependency that was
referring to an older version of RocksDB. I spotted it by running

mvn dependency:tree

while investigating with a colleague.
Thank you!

On Tue, Aug 13, 2019 at 8:01 PM Yun Tang  wrote:

> Hi Tobias
>
> First of all, I think you would not need to ADD the
> flink-statebackend-rocksdb jar package into your docker image's lib folder,
> as the flink-dist jar package within the lib folder already includes all classes
> of flink-statebackend-rocksdb.
>
> I think the root cause is that you might assemble the rocksdbjni jar
> package in your user application jar which was rocksdbjni-5.7.5.jar in
> Flink-1.7. As Flink would load classes first from the user code jar [1],
> however, the method org.rocksdb.ColumnFamilyHandle.getDescriptor() does not
> exist in rocksdbjni-5.7.5.jar, only in rocksdbjni-5.17.2 (or we can say
> frocksdbjni-5.17.2-artisans-1.0 in Flink-1.8). That's why you come across
> this NoSuchMethodError exception.
>
> If not necessary, please do not assemble the rocksdbjni package in your user
> code jar as flink-dist already provide all needed classes. Moreover, adding
> dependency of flink-statebackend-rocksdb_2.11 in your pom.xml should be
> enough as it already includes the dependency of rocksdbjni.
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#classloader-resolve-order
>
> Best
> Yun Tang
>
> --
> *From:* Kaymak, Tobias 
> *Sent:* Tuesday, August 13, 2019 21:20
> *To:* user@flink.apache.org 
> *Subject:* Flink 1.8: Using the RocksDB state backend causes
> "NoSuchMethodError" when trying to stop a pipeline
>
> Hi,
>
> I am using Apache Beam 2.14.0 with Flink 1.8.0 and I have included the
> RocksDb dependency in my projects pom.xml as well as baked it into the
> Dockerfile like this:
>
> FROM flink:1.8.0-scala_2.11
>
> ADD --chown=flink:flink
> http://central.maven.org/maven2/org/apache/flink/flink-statebackend-rocksdb_2.11/1.8.0/flink-statebackend-rocksdb_2.11-1.8.0.jar
> /opt/flink/lib/flink-statebackend-rocksdb_2.11-1.8.0.jar
>
>
> Everything seems to be normal up to the point when I try to stop and
> cleanly shutdown my pipeline. I get the following error:
>
> java.lang.NoSuchMethodError:
> org.rocksdb.ColumnFamilyHandle.getDescriptor()Lorg/rocksdb/ColumnFamilyDescriptor;
> at
> org.apache.flink.contrib.streaming.state.RocksDBOperationUtils.addColumnFamilyOptionsToCloseLater(RocksDBOperationUtils.java:160)
> at
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.dispose(RocksDBKeyedStateBackend.java:331)
> at
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.dispose(AbstractStreamOperator.java:362)
> at
> org.apache.beam.runners.flink.translation.wrappers.streaming.DoFnOperator.dispose(DoFnOperator.java:470)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.tryDisposeAllOperators(StreamTask.java:454)
> at
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:337)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
> at java.lang.Thread.run(Thread.java:748)
>
> I can cancel my pipeline and snapshotting in general works, however. Flink
> 1.7.2 with Beam 2.12.0 did not have any problem, could it be that this is
> caused by the switch to FRocksDb?[0]
>
> Best,
> Tobias
>
> [0]
> https://ci.apache.org/projects/flink/flink-docs-stable/release-notes/flink-1.8.html#rocksdb-version-bump-and-switch-to-frocksdb-flink-10471
>


Re: Flink 1.8: Using the RocksDB state backend causes "NoSuchMethodError" when trying to stop a pipeline

2019-08-13 Thread Yun Tang
Hi Tobias

First of all, I don't think you need to ADD the flink-statebackend-rocksdb 
jar package to your Docker image's lib folder, as the flink-dist jar package 
within the lib folder already includes all classes of flink-statebackend-rocksdb.

I think the root cause is that you might have assembled the rocksdbjni jar 
package in your user application jar, which was rocksdbjni-5.7.5.jar in 
Flink-1.7. As Flink loads classes first from the user code jar [1], and the 
method org.rocksdb.ColumnFamilyHandle.getDescriptor() does not exist in 
rocksdbjni-5.7.5.jar but only in rocksdbjni-5.17.2 (or rather 
frocksdbjni-5.17.2-artisans-1.0 in Flink-1.8), you come across this 
NoSuchMethodError exception.

Unless necessary, please do not assemble the rocksdbjni package in your user 
code jar, as flink-dist already provides all needed classes. Moreover, adding 
a dependency on flink-statebackend-rocksdb_2.11 in your pom.xml should be 
enough, as it already includes the dependency on rocksdbjni.
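Yun Tang's advice can be sketched as a pom.xml fragment like the following (the 1.8.0 version and the provided scope are assumptions, not taken from the thread; provided is suggested here only because the message states the classes already ship with flink-dist):

```
<!-- hedged sketch: depend on the RocksDB state backend for compilation
     without assembling its classes into the user jar -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-statebackend-rocksdb_2.11</artifactId>
    <version>1.8.0</version>
    <!-- assumed scope: per the advice above, flink-dist already provides
         these classes at runtime -->
    <scope>provided</scope>
</dependency>
```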

[1] 
https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#classloader-resolve-order
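The resolve order discussed in [1] can be adjusted in flink-conf.yaml; a minimal sketch (parent-first is shown only to illustrate the option, child-first is the default):

```
# flink-conf.yaml sketch: controls whether classes are loaded from the
# user code jar first (child-first, the default) or from Flink's own
# classpath first (parent-first)
classloader.resolve-order: parent-first
```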

Best
Yun Tang


From: Kaymak, Tobias 
Sent: Tuesday, August 13, 2019 21:20
To: user@flink.apache.org 
Subject: Flink 1.8: Using the RocksDB state backend causes "NoSuchMethodError" 
when trying to stop a pipeline

Hi,

I am using Apache Beam 2.14.0 with Flink 1.8.0, and I have included the RocksDB 
dependency in my project's pom.xml as well as baked it into the Dockerfile like 
this:

FROM flink:1.8.0-scala_2.11

ADD --chown=flink:flink 
http://central.maven.org/maven2/org/apache/flink/flink-statebackend-rocksdb_2.11/1.8.0/flink-statebackend-rocksdb_2.11-1.8.0.jar
 /opt/flink/lib/flink-statebackend-rocksdb_2.11-1.8.0.jar


Everything seems to be normal up to the point when I try to stop and cleanly 
shut down my pipeline. I get the following error:

java.lang.NoSuchMethodError: 
org.rocksdb.ColumnFamilyHandle.getDescriptor()Lorg/rocksdb/ColumnFamilyDescriptor;
at 
org.apache.flink.contrib.streaming.state.RocksDBOperationUtils.addColumnFamilyOptionsToCloseLater(RocksDBOperationUtils.java:160)
at 
org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.dispose(RocksDBKeyedStateBackend.java:331)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.dispose(AbstractStreamOperator.java:362)
at 
org.apache.beam.runners.flink.translation.wrappers.streaming.DoFnOperator.dispose(DoFnOperator.java:470)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.tryDisposeAllOperators(StreamTask.java:454)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:337)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)

I can cancel my pipeline, and snapshotting in general works. However, Flink 
1.7.2 with Beam 2.12.0 did not have any problem; could this be caused by the 
switch to FRocksDB? [0]

Best,
Tobias

[0] 
https://ci.apache.org/projects/flink/flink-docs-stable/release-notes/flink-1.8.html#rocksdb-version-bump-and-switch-to-frocksdb-flink-10471


Flink 1.8: Using the RocksDB state backend causes "NoSuchMethodError" when trying to stop a pipeline

2019-08-13 Thread Kaymak, Tobias
Hi,

I am using Apache Beam 2.14.0 with Flink 1.8.0, and I have included the
RocksDB dependency in my project's pom.xml as well as baked it into the
Dockerfile like this:

FROM flink:1.8.0-scala_2.11

ADD --chown=flink:flink
http://central.maven.org/maven2/org/apache/flink/flink-statebackend-rocksdb_2.11/1.8.0/flink-statebackend-rocksdb_2.11-1.8.0.jar
/opt/flink/lib/flink-statebackend-rocksdb_2.11-1.8.0.jar


Everything seems to be normal up to the point when I try to stop and
cleanly shut down my pipeline. I get the following error:

java.lang.NoSuchMethodError:
org.rocksdb.ColumnFamilyHandle.getDescriptor()Lorg/rocksdb/ColumnFamilyDescriptor;
at
org.apache.flink.contrib.streaming.state.RocksDBOperationUtils.addColumnFamilyOptionsToCloseLater(RocksDBOperationUtils.java:160)
at
org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.dispose(RocksDBKeyedStateBackend.java:331)
at
org.apache.flink.streaming.api.operators.AbstractStreamOperator.dispose(AbstractStreamOperator.java:362)
at
org.apache.beam.runners.flink.translation.wrappers.streaming.DoFnOperator.dispose(DoFnOperator.java:470)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.tryDisposeAllOperators(StreamTask.java:454)
at
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:337)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
at java.lang.Thread.run(Thread.java:748)

I can cancel my pipeline, and snapshotting in general works. However, Flink
1.7.2 with Beam 2.12.0 did not have any problem; could this be caused by the
switch to FRocksDB? [0]

Best,
Tobias

[0]
https://ci.apache.org/projects/flink/flink-docs-stable/release-notes/flink-1.8.html#rocksdb-version-bump-and-switch-to-frocksdb-flink-10471


Re: NoSuchMethodError: org.apache.calcite.tools.FrameworkConfig.getTraitDefs()

2019-07-30 Thread Jark Wu
Hi LakeShen,

Thanks for trying out the blink planner.
First question: are you using blink-1.5.1 or the flink-1.9 table-planner-blink?
We suggest using the latter, because we don't maintain blink-1.5.1; you can
try Flink 1.9 instead.


Best,
Jark
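Jark's suggestion translates to a dependency along these lines (a sketch; the 1.9.0 version and the Scala 2.11 suffix are assumptions, adjust them to your setup):

```
<!-- hedged sketch: the blink planner as shipped with Flink 1.9,
     replacing the unmaintained com.alibaba.blink artifacts -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-table-planner-blink_2.11</artifactId>
    <version>1.9.0</version>
</dependency>
```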


On Tue, 30 Jul 2019 at 17:02, LakeShen  wrote:

> Hi all, when I use the blink flink-sql-parser module, the maven dependency
> is like this:
>
> <dependency>
>     <groupId>com.alibaba.blink</groupId>
>     <artifactId>flink-sql-parser</artifactId>
>     <version>1.5.1</version>
> </dependency>
>
> I also import the flink 1.9 blink-table-planner module, and I
> use FlinkPlannerImpl to parse the sql to get the List. But
> when I run the program, it throws the exception like this:
>
> Exception in thread "main" java.lang.NoSuchMethodError:
> org.apache.calcite.tools.FrameworkConfig.getTraitDefs()Lorg/apache/flink/shaded/calcite/com/google/common/collect/ImmutableList;
> at
> org.apache.flink.sql.parser.plan.FlinkPlannerImpl.<init>(FlinkPlannerImpl.java:93)
> at
> com.youzan.bigdata.allsqldemo.utils.FlinkSqlUtil.getSqlNodeInfos(FlinkSqlUtil.java:33)
> at
> com.youzan.bigdata.allsqldemo.KafkaSrcKafkaSinkSqlDemo.main(KafkaSrcKafkaSinkSqlDemo.java:56)
>
> How can I solve this problem? Thanks for your reply.
>


NoSuchMethodError: org.apache.calcite.tools.FrameworkConfig.getTraitDefs()

2019-07-30 Thread LakeShen
Hi all, when I use the blink flink-sql-parser module, the maven dependency
is like this:

<dependency>
    <groupId>com.alibaba.blink</groupId>
    <artifactId>flink-sql-parser</artifactId>
    <version>1.5.1</version>
</dependency>

I also import the flink 1.9 blink-table-planner module, and I
use FlinkPlannerImpl to parse the sql to get the List. But
when I run the program, it throws the exception like this:



Exception in thread "main" java.lang.NoSuchMethodError:
org.apache.calcite.tools.FrameworkConfig.getTraitDefs()Lorg/apache/flink/shaded/calcite/com/google/common/collect/ImmutableList;
at
org.apache.flink.sql.parser.plan.FlinkPlannerImpl.<init>(FlinkPlannerImpl.java:93)
at
com.youzan.bigdata.allsqldemo.utils.FlinkSqlUtil.getSqlNodeInfos(FlinkSqlUtil.java:33)
at
com.youzan.bigdata.allsqldemo.KafkaSrcKafkaSinkSqlDemo.main(KafkaSrcKafkaSinkSqlDemo.java:56)

How can I solve this problem? Thanks for your reply.


Re: ElasticSearch RestClient throws NoSuchMethodError due to shade mechanism

2019-01-18 Thread Gary Yao
Hi Henry,

Can you share your pom.xml and the full stacktrace with us? It is expected
behavior that org.elasticsearch.client.RestClientBuilder is not shaded. That
class comes from the elasticsearch Java client, and we only shade its
transitive dependencies. Could it be that you have a dependency in your
job's pom.xml on a different version of the elasticsearch client?

Best,
Gary

On Tue, Jan 15, 2019 at 11:39 AM 徐涛  wrote:

> Hi All,
> I use the following code try to build a RestClient
> org.elasticsearch.client.RestClient.builder(  new HttpHost(xxx,
> xxx,"http")  ).build()
> but when in running time, a NoSuchMethodError throws out, I think the
> reason is:
> There are two RestClient classes, one is in the jar I include, the other
> one is in flink-connector-elasticsearch5, but the argument of build method
> in flink-connector-elasticsearch5 is
> org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.apache.http.HttpHost.
> So I want to know why org.elasticsearch.client.RestClientBuilder is not
> shaded, so runtime class conflict could be avoided?
>
>    public static RestClientBuilder
> builder(org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.apache.http.HttpHost...
> hosts) {
>        return new RestClientBuilder(hosts);
>    }
>
> Best
> Henry
>
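One way to act on Gary's hypothesis of a second elasticsearch client on the classpath is to exclude (or align) the transitive copy in the job's pom.xml. A hypothetical sketch: the outer dependency and its version are made up for illustration, and org.elasticsearch.client:rest is assumed to be the coordinates of the 5.x REST client:

```
<!-- hypothetical job dependency that transitively pulls in its own
     elasticsearch client -->
<dependency>
    <groupId>com.example</groupId>
    <artifactId>some-es-using-library</artifactId>
    <version>1.0</version>
    <exclusions>
        <!-- drop the conflicting client so only one version remains -->
        <exclusion>
            <groupId>org.elasticsearch.client</groupId>
            <artifactId>rest</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```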


Re: ElasticSearch RestClient throws NoSuchMethodError due to shade mechanism

2019-01-15 Thread Rong Rong
Hi Henry,

I am not sure if this is the suggested way, but from what I understand of
the pom file in elasticsearch5, you are allowed to change the sub-version
of the org.elasticsearch.client via a manual override using

-Delasticsearch.version=5.x.x

during the maven build process if you are using a different sub-version.
This way you don't need to include 2 jars of the elasticsearch client. Does
this resolve your problem?
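Rong's flag would be applied when building the connector from the Flink sources; a sketch of the invocation (the module path is an assumption about the source layout, and 5.x.x stays a placeholder exactly as in the message above):

```
# hedged sketch: rebuild only the elasticsearch5 connector with the
# client sub-version your job needs (module path assumed)
mvn clean install -pl flink-connectors/flink-connector-elasticsearch5 \
    -Delasticsearch.version=5.x.x
```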

--
Rong

On Tue, Jan 15, 2019 at 2:39 AM 徐涛  wrote:

> Hi All,
> I use the following code try to build a RestClient
> org.elasticsearch.client.RestClient.builder(  new HttpHost(xxx,
> xxx,"http")  ).build()
> but when in running time, a NoSuchMethodError throws out, I think the
> reason is:
> There are two RestClient classes, one is in the jar I include, the other
> one is in flink-connector-elasticsearch5, but the argument of build method
> in flink-connector-elasticsearch5 is
> org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.apache.http.HttpHost.
> So I want to know why org.elasticsearch.client.RestClientBuilder is not
> shaded, so runtime class conflict could be avoided?
>
>    public static RestClientBuilder
> builder(org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.apache.http.HttpHost...
> hosts) {
>        return new RestClientBuilder(hosts);
>    }
>
> Best
> Henry
>


ElasticSearch RestClient throws NoSuchMethodError due to shade mechanism

2019-01-15 Thread 徐涛
Hi All,
I use the following code to try to build a RestClient:

org.elasticsearch.client.RestClient.builder(new HttpHost(xxx, xxx, "http")).build()

but at runtime a NoSuchMethodError is thrown. I think the reason is:
there are two RestClient classes, one in the jar I include and the other in
flink-connector-elasticsearch5, but the argument of the build method in
flink-connector-elasticsearch5 is
org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.apache.http.HttpHost.
So I want to know why org.elasticsearch.client.RestClientBuilder is not
shaded, so that this runtime class conflict could be avoided?

    public static RestClientBuilder
    builder(org.apache.flink.streaming.connectors.elasticsearch5.shaded.org.apache.http.HttpHost...
    hosts) {
        return new RestClientBuilder(hosts);
    }

Best
Henry

NoSuchMethodError when using the Flink Gelly library with Scala

2016-05-06 Thread Adrian Bartnik

Hi,

I am trying to run the code examples from the Gelly documentation, in 
particular this code:


import org.apache.flink.api.scala._
import org.apache.flink.graph.generator.GridGraph

object SampleObject {
  def main(args: Array[String]) {

    val env = ExecutionEnvironment.getExecutionEnvironment

    val graph = new GridGraph(env.getJavaEnv)
      .addDimension(2, true)
      .addDimension(4, true)
      .generate()

    ...
  }
}

I created the project using the maven quickstart script, and compiling
works fine, but when I try to execute the jar on my local machine, the
following error occurs:

java.lang.NoSuchMethodError:
org.apache.flink.api.scala.ExecutionEnvironment.getJavaEnv()Lorg/apache/flink/api/java/ExecutionEnvironment;


My pom.xml looks like this:

<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <flink.version>1.1-SNAPSHOT</flink.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-scala_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-gelly-scala_2.11</artifactId>
        <version>${flink.version}</version>
    </dependency>
</dependencies>

Running scala -version prints: Scala code runner version 2.11.6 -- Copyright
2002-2013, LAMP/EPFL
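A version mismatch between the 1.1-SNAPSHOT dependencies in the pom above and the Flink distribution actually installed on the machine would produce exactly this kind of NoSuchMethodError. A hedged sketch of one check, pinning flink.version to the installed release (the version number below is a placeholder, not taken from this thread):

```
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <!-- placeholder: set this to the exact version of the Flink
         distribution the jar is executed on -->
    <flink.version>1.0.3</flink.version>
</properties>
```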


Thanks,
Adrian


NoSuchMethodError flatMap

2016-03-07 Thread Vishnu Viswanath
Hi All,

After successfully writing the wordcount program, I was trying to create a
streaming application, but am getting the below error when submitting the job
in local mode.

Vishnus-MacBook-Pro:flink vishnu$ flink run
target/scala-2.11/flink-vishnu_2.11-1.0.jar

java.lang.NoSuchMethodError:
org.apache.flink.streaming.api.scala.DataStream.flatMap(Lscala/Function1;Lorg/apache/flink/api/common/typeinfo/TypeInformation;)Lorg/apache/flink/streaming/api/scala/DataStream;

at 
com.vishnu.flink.streaming.FlinkStreamingWordCount$.main(FlinkStreamingWordCount.scala:14)

at 
com.vishnu.flink.streaming.FlinkStreamingWordCount.main(FlinkStreamingWordCount.scala)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:497)

at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:497)

at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:395)

at org.apache.flink.client.program.Client.runBlocking(Client.java:252)

at 
org.apache.flink.client.CliFrontend.executeProgramBlocking(CliFrontend.java:676)

at org.apache.flink.client.CliFrontend.run(CliFrontend.java:326)

at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:978)

at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1028)



The exception above occurred while trying to run your command.

This is my FlinkStreamingWordCount.scala file

package com.vishnu.flink.streaming

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.api.scala._

object FlinkStreamingWordCount {
  def main(args: Array[String]) {
    val sev = StreamExecutionEnvironment.getExecutionEnvironment
    val socTxtStream = sev.socketTextStream("localhost",)
    val counts = socTxtStream.flatMap(line => line.split(" "))
      .map { (_, 1) }
      .keyBy(0)
      .sum(1)
    counts.print()
    sev.execute()
  }
}

This is what my sbt file looks like:

val flink = "org.apache.flink" % "flink-core" % "1.0.0"
val flinkclients = "org.apache.flink" % "flink-clients_2.11" % "1.0.0"
val flinkstreaming = "org.apache.flink" % "flink-streaming-scala_2.11" % "1.0.0"

val main = "com.vishnu.flink.streaming.FlinkStreamingWordCount"

name := "flink-vishnu"
mainClass in (Compile, run) := Some(main)
mainClass in (Compile, packageBin) := Some(main)

lazy val commonSettings = Seq(
  organization := "com.vishnu",
  version := "1.0",
  scalaVersion := "2.11.7"
)

lazy val root = (project in file(".")).
  settings(commonSettings:_*).
  settings(
name := "flink-vishnu",
libraryDependencies += flink,
libraryDependencies += flinkclients,
libraryDependencies += flinkstreaming,
retrieveManaged := true
  )
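A common hedged suggestion for this kind of setup (a sketch, not a confirmed diagnosis of this thread) is to let sbt append the Scala binary suffix automatically via %% and keep the Flink version in one place, so every module resolves against the same version and the same _2.11 suffix:

```
// hypothetical build.sbt sketch: %% appends the _2.11 suffix for you,
// keeping all Flink artifacts on one Scala binary version
val flinkVersion = "1.0.0"

libraryDependencies ++= Seq(
  "org.apache.flink" %  "flink-core"            % flinkVersion, // no Scala suffix
  "org.apache.flink" %% "flink-clients"         % flinkVersion,
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion
)
```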

I'm using Scala 2.11.7, and have downloaded Flink for Scala 2.11.

Any help is appreciated

Thanks,
Vishnu


Re: nosuchmethoderror

2015-09-03 Thread Robert Metzger
I'm sorry that we changed the method name between minor versions.

We'll soon put some infrastructure in place to a) mark the audience of
classes and b) ensure that public APIs are stable.

On Wed, Sep 2, 2015 at 9:04 PM, Ferenc Turi  wrote:

> Ok. As I see only the method name was changed. It was an unnecessary
> modification which caused the incompatibility.
>
> F.
>
> On Wed, Sep 2, 2015 at 8:53 PM, Márton Balassi 
> wrote:
>
>> Dear Ferenc,
>>
>> The Kafka consumer implementations was modified from 0.9.0 to 0.9.1,
>> please use the new code. [1]
>>
>> I suspect that your com.nventdata.kafkaflink.sink.FlinkKafkaTopicWriterSink
>> depends on the way the Flink code used to look in 0.9.0, if you take a
>> closer look Robert changed the function that is missing in your error in
>> [1].
>>
>> [1]
>> https://github.com/apache/flink/commit/940a7c8a667875b8512b63e4a32453b1a2a58785
>>
>> Best,
>>
>> Márton
>>
>> On Wed, Sep 2, 2015 at 8:47 PM, Ferenc Turi  wrote:
>>
>>> Hi,
>>>
>>> I tried to use the latest 0.9.1 release but I got:
>>>
>>> java.lang.NoSuchMethodError:
>>> org.apache.flink.util.NetUtils.ensureCorrectHostnamePort(Ljava/lang/String;)V
>>> at
>>> com.nventdata.kafkaflink.sink.FlinkKafkaTopicWriterSink.(FlinkKafkaTopicWriterSink.java:69)
>>> at
>>> com.nventdata.kafkaflink.sink.FlinkKafkaTopicWriterSink.(FlinkKafkaTopicWriterSink.java:48)
>>> at
>>> com.nventdata.kafkaflink.FlinkKafkaTopicWriterMain.main(FlinkKafkaTopicWriterMain.java:54)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:606)
>>> at
>>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:437)
>>> at
>>> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:353)
>>> at org.apache.flink.client.program.Client.run(Client.java:315)
>>> at
>>> org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:582)
>>> at org.apache.flink.client.CliFrontend.run(CliFrontend.java:288)
>>> at
>>> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:878)
>>> at org.apache.flink.client.CliFrontend.main(CliFrontend.java:920)
>>>
>>>
>>> Thanks,
>>>
>>> Ferenc
>>>
>>
>>
>
>
> --
> Kind Regards,
>
> Ferenc
>
>
>


Re: nosuchmethoderror

2015-09-02 Thread Ferenc Turi
Ok. As far as I can see, only the method name was changed. It was an unnecessary
modification which caused the incompatibility.

F.

On Wed, Sep 2, 2015 at 8:53 PM, Márton Balassi 
wrote:

> Dear Ferenc,
>
> The Kafka consumer implementations was modified from 0.9.0 to 0.9.1,
> please use the new code. [1]
>
> I suspect that your com.nventdata.kafkaflink.sink.FlinkKafkaTopicWriterSink
> depends on the way the Flink code used to look in 0.9.0, if you take a
> closer look Robert changed the function that is missing in your error in
> [1].
>
> [1]
> https://github.com/apache/flink/commit/940a7c8a667875b8512b63e4a32453b1a2a58785
>
> Best,
>
> Márton
>
> On Wed, Sep 2, 2015 at 8:47 PM, Ferenc Turi  wrote:
>
>> Hi,
>>
>> I tried to use the latest 0.9.1 release but I got:
>>
>> java.lang.NoSuchMethodError:
>> org.apache.flink.util.NetUtils.ensureCorrectHostnamePort(Ljava/lang/String;)V
>> at
>> com.nventdata.kafkaflink.sink.FlinkKafkaTopicWriterSink.(FlinkKafkaTopicWriterSink.java:69)
>> at
>> com.nventdata.kafkaflink.sink.FlinkKafkaTopicWriterSink.(FlinkKafkaTopicWriterSink.java:48)
>> at
>> com.nventdata.kafkaflink.FlinkKafkaTopicWriterMain.main(FlinkKafkaTopicWriterMain.java:54)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:606)
>> at
>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:437)
>> at
>> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:353)
>> at org.apache.flink.client.program.Client.run(Client.java:315)
>> at
>> org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:582)
>> at org.apache.flink.client.CliFrontend.run(CliFrontend.java:288)
>> at
>> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:878)
>> at org.apache.flink.client.CliFrontend.main(CliFrontend.java:920)
>>
>>
>> Thanks,
>>
>> Ferenc
>>
>
>


-- 
Kind Regards,

Ferenc


nosuchmethoderror

2015-09-02 Thread Ferenc Turi
Hi,

I tried to use the latest 0.9.1 release but I got:

java.lang.NoSuchMethodError:
org.apache.flink.util.NetUtils.ensureCorrectHostnamePort(Ljava/lang/String;)V
at
com.nventdata.kafkaflink.sink.FlinkKafkaTopicWriterSink.<init>(FlinkKafkaTopicWriterSink.java:69)
at
com.nventdata.kafkaflink.sink.FlinkKafkaTopicWriterSink.<init>(FlinkKafkaTopicWriterSink.java:48)
at
com.nventdata.kafkaflink.FlinkKafkaTopicWriterMain.main(FlinkKafkaTopicWriterMain.java:54)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:437)
at
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:353)
at org.apache.flink.client.program.Client.run(Client.java:315)
at org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:582)
at org.apache.flink.client.CliFrontend.run(CliFrontend.java:288)
at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:878)
at org.apache.flink.client.CliFrontend.main(CliFrontend.java:920)


Thanks,

Ferenc


Re: nosuchmethoderror

2015-09-02 Thread Márton Balassi
Dear Ferenc,

The Kafka consumer implementation was modified from 0.9.0 to 0.9.1; please
use the new code. [1]

I suspect that your com.nventdata.kafkaflink.sink.FlinkKafkaTopicWriterSink
depends on the way the Flink code used to look in 0.9.0; if you take a
closer look, [1] shows that Robert changed the function that is missing in
your error.

[1]
https://github.com/apache/flink/commit/940a7c8a667875b8512b63e4a32453b1a2a58785

Best,

Márton

On Wed, Sep 2, 2015 at 8:47 PM, Ferenc Turi  wrote:

> Hi,
>
> I tried to use the latest 0.9.1 release but I got:
>
> java.lang.NoSuchMethodError:
> org.apache.flink.util.NetUtils.ensureCorrectHostnamePort(Ljava/lang/String;)V
> at
> com.nventdata.kafkaflink.sink.FlinkKafkaTopicWriterSink.(FlinkKafkaTopicWriterSink.java:69)
> at
> com.nventdata.kafkaflink.sink.FlinkKafkaTopicWriterSink.(FlinkKafkaTopicWriterSink.java:48)
> at
> com.nventdata.kafkaflink.FlinkKafkaTopicWriterMain.main(FlinkKafkaTopicWriterMain.java:54)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:437)
> at
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:353)
> at org.apache.flink.client.program.Client.run(Client.java:315)
> at org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:582)
> at org.apache.flink.client.CliFrontend.run(CliFrontend.java:288)
> at
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:878)
> at org.apache.flink.client.CliFrontend.main(CliFrontend.java:920)
>
>
> Thanks,
>
> Ferenc
>