[jira] [Commented] (FLINK-30001) sql-client.sh start failed

2022-11-13 Thread xiaohang.li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17633497#comment-17633497
 ] 

xiaohang.li commented on FLINK-30001:
-

After investigation: by default, Flink's org.apache.flink.table.planner.loader.PlannerModule 
uses the /tmp directory as its temporary working path, and calls the 
java.nio.file.Files class to create that directory. But if /tmp is a symbolic 
link pointing to /mnt/tmp, java.nio.file.Files cannot handle it, and startup 
fails with the reported error. A temporary-path setting needs to be added in sql-client.sh:
  export JVM_ARGS="-Djava.io.tmpdir=/mnt/tmp"
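The underlying JDK behavior can be reproduced without Flink: Files.createDirectories tolerates an existing real directory, but when the existing path is a symbolic link it re-checks it with NOFOLLOW_LINKS and rethrows the underlying FileAlreadyExistsException. A minimal sketch (the temp paths are illustrative stand-ins for /tmp and /mnt/tmp; that Flink reaches this through Files.createDirectories is an assumption based on the truncated trace):

```java
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SymlinkTmpdirDemo {
    public static void main(String[] args) throws Exception {
        // Stand-ins for /mnt/tmp (a real directory) and /tmp (a symlink to it).
        Path real = Files.createTempDirectory("real-tmp");
        Path link = real.resolveSibling("link-tmp-" + System.nanoTime());
        Files.createSymbolicLink(link, real);

        // Creating an already-existing real directory is tolerated.
        Files.createDirectories(real);

        // Creating via the symlink fails: createDirectories checks the
        // existing path with NOFOLLOW_LINKS, so the symlink is not seen
        // as a directory and FileAlreadyExistsException is rethrown.
        try {
            Files.createDirectories(link);
            System.out.println("unexpected: no exception");
        } catch (FileAlreadyExistsException e) {
            System.out.println("FileAlreadyExistsException: " + e.getFile());
        }

        // Resolving the link first (which is effectively what
        // -Djava.io.tmpdir=/mnt/tmp achieves) avoids the problem.
        Files.createDirectories(link.toRealPath());
        System.out.println("resolved path ok");
    }
}
```

This mirrors the FileAlreadyExistsException: /tmp at the bottom of the reported trace.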


[jira] [Created] (FLINK-30001) sql-client.sh start failed

2022-11-11 Thread xiaohang.li (Jira)
xiaohang.li created FLINK-30001:
---

 Summary: sql-client.sh start failed
 Key: FLINK-30001
 URL: https://issues.apache.org/jira/browse/FLINK-30001
 Project: Flink
  Issue Type: Bug
  Components: Command Line Client
Affects Versions: 1.15.2, 1.16.0
Reporter: xiaohang.li


[hadoop@master flink-1.15.0]$ ./bin/sql-client.sh 
Setting HADOOP_CONF_DIR=/etc/hadoop/conf because no HADOOP_CONF_DIR or 
HADOOP_CLASSPATH was set.
Setting HBASE_CONF_DIR=/etc/hbase/conf because no HBASE_CONF_DIR was set.


Exception in thread "main" org.apache.flink.table.client.SqlClientException: 
Unexpected exception. This is a bug. Please consider filing an issue.
        at 
org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:201)
        at org.apache.flink.table.client.SqlClient.main(SqlClient.java:161)
Caused by: org.apache.flink.table.api.TableException: Could not instantiate the 
executor. Make sure a planner module is on the classpath
        at 
org.apache.flink.table.client.gateway.context.ExecutionContext.lookupExecutor(ExecutionContext.java:163)
        at 
org.apache.flink.table.client.gateway.context.ExecutionContext.createTableEnvironment(ExecutionContext.java:111)
        at 
org.apache.flink.table.client.gateway.context.ExecutionContext.<init>(ExecutionContext.java:66)
        at 
org.apache.flink.table.client.gateway.context.SessionContext.create(SessionContext.java:247)
        at 
org.apache.flink.table.client.gateway.local.LocalContextUtils.buildSessionContext(LocalContextUtils.java:87)
        at 
org.apache.flink.table.client.gateway.local.LocalExecutor.openSession(LocalExecutor.java:87)
        at org.apache.flink.table.client.SqlClient.start(SqlClient.java:88)
        at 
org.apache.flink.table.client.SqlClient.startClient(SqlClient.java:187)
        ... 1 more
Caused by: org.apache.flink.table.api.TableException: Unexpected error when 
trying to load service provider for factories.
        at 
org.apache.flink.table.factories.FactoryUtil.lambda$discoverFactories$19(FactoryUtil.java:813)
        at java.util.ArrayList.forEach(ArrayList.java:1259)
        at 
org.apache.flink.table.factories.FactoryUtil.discoverFactories(FactoryUtil.java:799)
        at 
org.apache.flink.table.factories.FactoryUtil.discoverFactory(FactoryUtil.java:517)
        at 
org.apache.flink.table.client.gateway.context.ExecutionContext.lookupExecutor(ExecutionContext.java:154)
        ... 8 more
Caused by: java.util.ServiceConfigurationError: 
org.apache.flink.table.factories.Factory: Provider 
org.apache.flink.table.planner.loader.DelegateExecutorFactory could not be 
instantiated
        at java.util.ServiceLoader.fail(ServiceLoader.java:232)
        at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
        at 
java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
        at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
        at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
        at 
org.apache.flink.table.factories.ServiceLoaderUtil.load(ServiceLoaderUtil.java:42)
        at 
org.apache.flink.table.factories.FactoryUtil.discoverFactories(FactoryUtil.java:798)
        ... 10 more
Caused by: java.lang.ExceptionInInitializerError
        at 
org.apache.flink.table.planner.loader.PlannerModule.getInstance(PlannerModule.java:135)
        at 
org.apache.flink.table.planner.loader.DelegateExecutorFactory.<init>(DelegateExecutorFactory.java:34)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at java.lang.Class.newInstance(Class.java:442)
        at 
java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
        ... 14 more
Caused by: org.apache.flink.table.api.TableException: Could not initialize the 
table planner components loader.
        at 
org.apache.flink.table.planner.loader.PlannerModule.<init>(PlannerModule.java:123)
        at 
org.apache.flink.table.planner.loader.PlannerModule.<init>(PlannerModule.java:52)
        at 
org.apache.flink.table.planner.loader.PlannerModule$PlannerComponentsHolder.<clinit>(PlannerModule.java:131)
        ... 22 more
Caused by: java.nio.file.FileAlreadyExistsException: /tmp
        at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:88)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at 
sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
        at java.nio.file.Files.createDirectory(Files.java:674)
[jira] [Updated] (FLINK-18960) flink sideoutput union

2020-08-14 Thread xiaohang.li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaohang.li updated FLINK-18960:

Affects Version/s: (was: 1.11.1)
   (was: 1.11.0)

> flink sideoutput union
> --
>
> Key: FLINK-18960
> URL: https://issues.apache.org/jira/browse/FLINK-18960
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.10.1
>Reporter: xiaohang.li
>Priority: Minor
>
> Data comes out wrong when union-ing Flink side outputs. When side-output 
> streams split from the main stream are unioned, the printed output is the last 
> unioned stream's element repeated once per union.
>  
> val side = new OutputTag[String]("side")
> val side2 = new OutputTag[String]("side2")
> val side3 = new OutputTag[String]("side3")
> val ds = env.socketTextStream("master", 9001)
> val res = ds.process(new ProcessFunction[String, String] {
>   override def processElement(value: String, ctx: ProcessFunction[String, String]#Context, out: Collector[String]): Unit = {
>     if (value.contains("hello")) {
>       ctx.output(side, value)
>     } else if (value.contains("world")) {
>       ctx.output(side2, value)
>     } else if (value.contains("flink")) {
>       ctx.output(side3, value)
>     }
>     out.collect(value)
>   }
> })
> val res1 = res.getSideOutput(side)
> val res2 = res.getSideOutput(side2)
> val res3 = res.getSideOutput(side3)
> println(">" + res1.getClass)
> println(">" + res2.getClass)
> res1.print("res1")
> res2.print("res2")
> res3.print("res3")
> res2.union(res1).union(res3).print("all")
>  
>  
>  
> Enter the following at the socket port, one line at a time:
> hello
> world
> flink
>
> IDEA prints:
> res1> hello
> res2> world
> res3> flink
> all> flink
> all> flink
> all> flink
>
> So the all stream prints the last unioned side-output stream's element once 
> per union call; the expected output is
> all> flink
> Applying a map operation to each side-output stream before running the code 
> above produces the correct output, but that step should not be necessary.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18960) flink sideoutput union

2020-08-14 Thread xiaohang.li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaohang.li updated FLINK-18960:

Affects Version/s: 1.11.0
   1.11.1





[jira] [Created] (FLINK-18960) flink sideoutput union

2020-08-14 Thread xiaohang.li (Jira)
xiaohang.li created FLINK-18960:
---

 Summary: flink sideoutput union
 Key: FLINK-18960
 URL: https://issues.apache.org/jira/browse/FLINK-18960
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.10.1
Reporter: xiaohang.li


Data comes out wrong when union-ing Flink side outputs. When side-output 
streams split from the main stream are unioned, the printed output is the last 
unioned stream's element repeated once per union.

 


