[GitHub] GJL commented on issue #7575: [FLINK-11418][docs] Fix version of bundler to 1.16.1

2019-01-26 Thread GitBox
GJL commented on issue #7575: [FLINK-11418][docs] Fix version of bundler to 
1.16.1
URL: https://github.com/apache/flink/pull/7575#issuecomment-457814142
 
 
   super!




[GitHub] yanghua commented on issue #7571: [FLINK-10724] Refactor failure handling in check point coordinator

2019-01-26 Thread GitBox
yanghua commented on issue #7571: [FLINK-10724] Refactor failure handling in 
check point coordinator
URL: https://github.com/apache/flink/pull/7571#issuecomment-457814745
 
 
   @lamber-ken to answer your questions:
   
   * Yes: now, when a checkpoint fails, we just send a decline message to 
`CheckpointCoordinator` and let it decide whether or not to fail the job.
   * I think it is not necessary to add a `default` statement, because 
`TaskAcknowledgeResult` is an enum type. It defines just four values, all of 
which are already covered by the switch block (see the sketch below).
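   
   For illustration, a minimal, hedged sketch of that pattern (the enum values 
are assumed to mirror the current `TaskAcknowledgeResult`; this is not the 
actual `CheckpointCoordinator` code). An exhaustive switch over an enum leaves 
nothing for a `default` branch to do:
   
   ```java
   // Hedged sketch, not the actual Flink code.
   public class AckExample {
       enum TaskAcknowledgeResult { SUCCESS, DUPLICATE, UNKNOWN, DISCARDED }
   
       static void onAcknowledge(TaskAcknowledgeResult result) {
           // All four values are handled, so a default branch would be dead code.
           switch (result) {
               case SUCCESS:
                   break; // record the acknowledgement
               case DUPLICATE:
               case UNKNOWN:
                   break; // ignore late or unexpected acknowledgements
               case DISCARDED:
                   break; // the pending checkpoint was already discarded
           }
       }
   }
   ```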




[GitHub] kisimple opened a new pull request #7577: [FLINK-11365] Remove legacy TaskManagerFailureRecoveryITCase

2019-01-26 Thread GitBox
kisimple opened a new pull request #7577: [FLINK-11365] Remove legacy 
TaskManagerFailureRecoveryITCase
URL: https://github.com/apache/flink/pull/7577
 
 
   Given that task manager failure recovery is already covered by the new 
code base's 
`org.apache.flink.test.recovery.AbstractTaskManagerProcessFailureRecoveryTest`, 
the legacy `TaskManagerFailureRecoveryITCase` can now be safely removed.




[jira] [Updated] (FLINK-11365) Check and port TaskManagerFailureRecoveryITCase to new code base if necessary

2019-01-26 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-11365:
---
Labels: pull-request-available  (was: )

> Check and port TaskManagerFailureRecoveryITCase to new code base if necessary
> -
>
> Key: FLINK-11365
> URL: https://issues.apache.org/jira/browse/FLINK-11365
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Till Rohrmann
>Assignee: boshu Zheng
>Priority: Major
>  Labels: pull-request-available
>
> Check and port {{TaskManagerFailureRecoveryITCase}} to new code base if 
> necessary.





[jira] [Created] (FLINK-11434) Add the JSON_LENGTH function

2019-01-26 Thread xuqianjin (JIRA)
xuqianjin created FLINK-11434:
-

 Summary: Add the JSON_LENGTH function
 Key: FLINK-11434
 URL: https://issues.apache.org/jira/browse/FLINK-11434
 Project: Flink
  Issue Type: Improvement
Reporter: xuqianjin
Assignee: xuqianjin


{{JSON_LENGTH(*json_doc*[, *path*\])}}

Returns the length of a JSON document, or, if a _path_ argument is given, the 
length of the value within the document identified by the path. Returns 
{{NULL}} if any argument is {{NULL}} or the _path_ argument does not identify a 
value in the document. An error occurs if the _json_doc_ argument is not a 
valid JSON document or the _path_ argument is not a valid path expression or 
contains a {{*}} or {{**}} wildcard.

The length of a document is determined as follows:
 * The length of a scalar is 1.
 * The length of an array is the number of array elements.
 * The length of an object is the number of object members.
 * The length does not count the length of nested arrays or objects.

{code:sql}
SELECT JSON_LENGTH('[1, 2, {"a": 3}]');
+---------------------------------+
| JSON_LENGTH('[1, 2, {"a": 3}]') |
+---------------------------------+
|                               3 |
+---------------------------------+

SELECT JSON_LENGTH('{"a": 1, "b": {"c": 30}}');
+-----------------------------------------+
| JSON_LENGTH('{"a": 1, "b": {"c": 30}}') |
+-----------------------------------------+
|                                       2 |
+-----------------------------------------+

SELECT JSON_LENGTH('{"a": 1, "b": {"c": 30}}', '$.b');
+------------------------------------------------+
| JSON_LENGTH('{"a": 1, "b": {"c": 30}}', '$.b') |
+------------------------------------------------+
|                                              1 |
+------------------------------------------------+
{code}
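
As a rough illustration only: until a built-in function exists, the documented 
length rule could be prototyped as a user-defined scalar function. The sketch 
below assumes Jackson for parsing and omits the _path_ argument; it is not 
the proposed implementation.

{code:java}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.flink.table.functions.ScalarFunction;

// Hedged prototype of the documented semantics (path support omitted).
public class JsonLength extends ScalarFunction {
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public Integer eval(String jsonDoc) {
        if (jsonDoc == null) {
            return null; // NULL argument -> NULL result
        }
        try {
            JsonNode node = MAPPER.readTree(jsonDoc);
            // size() counts only top-level elements/members, matching
            // "the length does not count nested arrays or objects".
            return node.isContainerNode() ? node.size() : 1;
        } catch (Exception e) {
            throw new RuntimeException("Invalid JSON document", e);
        }
    }
}
{code}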





[jira] [Closed] (FLINK-11434) Add the JSON_LENGTH function

2019-01-26 Thread xuqianjin (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuqianjin closed FLINK-11434.
-
Resolution: Fixed

> Add the JSON_LENGTH function
> 
>
> Key: FLINK-11434
> URL: https://issues.apache.org/jira/browse/FLINK-11434
> Project: Flink
>  Issue Type: Improvement
>Reporter: xuqianjin
>Assignee: xuqianjin
>Priority: Major
>





[jira] [Created] (FLINK-11435) Different jobs same consumer group are treated as separate groups

2019-01-26 Thread Avi Levi (JIRA)
Avi Levi created FLINK-11435:


 Summary: Different jobs same consumer group are treated as 
separate groups
 Key: FLINK-11435
 URL: https://issues.apache.org/jira/browse/FLINK-11435
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector
Affects Versions: 1.7.1
 Environment: kafka consumer
Reporter: Avi Levi


Deploying two jobs with the same consumer group id results in them being 
treated as separate consumer groups. This does not match Kafka's semantics: 
consumers that share a group id should form a single group. Honoring the group 
id would make it possible to deploy more jobs on demand (especially if they 
are stateless), and it is also how normal consumer groups behave.

To reproduce: deploy the same job twice. Both jobs consume every message, even 
though they share the same consumer group id (see the sketch below).
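
A minimal reproduction sketch (the topic name and broker address are 
placeholders; assumes the universal Kafka connector):

{code:java}
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class SharedGroupRepro {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "shared-group"); // same id in every job

        StreamExecutionEnvironment env =
            StreamExecutionEnvironment.getExecutionEnvironment();

        // Submit this job twice: both instances print every message,
        // even though they share the same group.id.
        env.addSource(new FlinkKafkaConsumer<>(
                "events", new SimpleStringSchema(), props)) // placeholder topic
           .print();

        env.execute("consumer-group-repro");
    }
}
{code}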





[jira] [Assigned] (FLINK-11155) HadoopFreeFsFactoryTest fails with ClassCastException

2019-01-26 Thread xueyu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xueyu reassigned FLINK-11155:
-

Assignee: xueyu

> HadoopFreeFsFactoryTest fails with ClassCastException
> -
>
> Key: FLINK-11155
> URL: https://issues.apache.org/jira/browse/FLINK-11155
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.8.0
>Reporter: Gary Yao
>Assignee: xueyu
>Priority: Major
> Fix For: 1.8.0
>
>
> {{HadoopFreeFsFactoryTest#testHadoopFactoryInstantiationWithoutHadoop}} fails 
> with a ClassCastException because we try to cast the result of 
> {{getClassLoader}} to {{URLClassLoader}} (also see 
> https://stackoverflow.com/questions/46694600/java-9-compatability-issue-with-classloader-getsystemclassloader)
> {noformat}
> java.lang.ClassCastException: 
> java.base/jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to 
> java.base/java.net.URLClassLoader
>   at 
> org.apache.flink.runtime.fs.hdfs.HadoopFreeFsFactoryTest.testHadoopFactoryInstantiationWithoutHadoop(HadoopFreeFsFactoryTest.java:46)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:564)
> {noformat}
> One should also look for other instances where we cast to {{URLClassLoader}}.
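
A hedged sketch of the failing pattern and one possible Java 9-safe 
replacement (constructing an explicit {{URLClassLoader}} instead of casting 
the system class loader):

{code:java}
import java.net.URL;
import java.net.URLClassLoader;

public class ClassLoaderExample {
    public static void main(String[] args) {
        // Fails on Java 9+, where the system class loader is no longer
        // a URLClassLoader:
        // URLClassLoader cl = (URLClassLoader) ClassLoader.getSystemClassLoader();

        // Java 9-safe alternative: build an explicit URLClassLoader.
        URLClassLoader isolated =
            new URLClassLoader(new URL[0], ClassLoader.getSystemClassLoader());
        System.out.println(isolated.getParent());
    }
}
{code}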





[jira] [Closed] (FLINK-11316) JarFileCreatorTest fails when run with Java 9

2019-01-26 Thread Gary Yao (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Yao closed FLINK-11316.

Resolution: Fixed

fixed via

6c01cf92b6050b66e5bf1ea9057266c9edaedff4
3fdb2af0ff51420becf99ece15fb02b96bc02f6a

> JarFileCreatorTest fails when run with Java 9
> -
>
> Key: FLINK-11316
> URL: https://issues.apache.org/jira/browse/FLINK-11316
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Affects Versions: 1.8.0
>Reporter: Gary Yao
>Assignee: TisonKun
>Priority: Major
>  Labels: pull-request-available, tests
> Fix For: 1.8.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *Description*
> {{JarFileCreatorTest}} fails when run with Java 9. Because {{JarFileCreator}} 
> is not used in production, the class and its tests can be deleted.
> Stacktrace:
> {code}
> java.lang.IllegalArgumentException
>   at org.apache.flink.shaded.asm5.org.objectweb.asm.ClassReader.<init>(Unknown Source)
>   at org.apache.flink.shaded.asm5.org.objectweb.asm.ClassReader.<init>(Unknown Source)
>   at org.apache.flink.shaded.asm5.org.objectweb.asm.ClassReader.<init>(Unknown Source)
>   at 
> org.apache.flink.runtime.util.JarFileCreator.addDependencies(JarFileCreator.java:146)
>   at 
> org.apache.flink.runtime.util.JarFileCreator.createJarFile(JarFileCreator.java:178)
>   at 
> org.apache.flink.runtime.util.JarFileCreatorTest.TestAnonymousInnerClassTrick1(JarFileCreatorTest.java:57)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:564)
> {code}
> *Steps to reproduce*
> With the Java 9 compiler in the path, run:
> {code}
> mvn test -Pjava9 -DfailIfNoTests=false -Dfast 
> -Dtest=org.apache.flink.runtime.util.JarFileCreatorTest
> {code}
> Alternatively, run the test from IntelliJ with the {{java9}} profile enabled.
> *Acceptance Criteria*
> * {{JarFileCreator}} and {{JarFileCreatorTest}} are deleted
> * Usage of {{JarFileCreator}} in 
> {{ClassLoaderUtilsTest#testWithURLClassLoader}} is replaced with an 
> alternative (one possible form is sketched below)
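
One possible shape for that alternative, as a hedged sketch (assuming the 
test only needs a small but valid jar on disk): write the jar directly with 
{{java.util.jar}} instead of {{JarFileCreator}}.

{code:java}
import java.io.File;
import java.io.FileOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public final class TestJars {

    // Writes a minimal but valid jar file; enough for class-loader tests.
    public static File createTestJar(File dir) throws Exception {
        File jar = new File(dir, "test.jar");
        try (JarOutputStream out =
                 new JarOutputStream(new FileOutputStream(jar), new Manifest())) {
            out.putNextEntry(new JarEntry("marker.txt"));
            out.write("test".getBytes(StandardCharsets.UTF_8));
            out.closeEntry();
        }
        return jar;
    }
}
{code}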





[GitHub] asfgit closed pull request #7486: [FLINK-11316] [tests] Drop JarFileCreator

2019-01-26 Thread GitBox
asfgit closed pull request #7486: [FLINK-11316] [tests] Drop JarFileCreator
URL: https://github.com/apache/flink/pull/7486
 
 
   




[jira] [Commented] (FLINK-11429) Flink fails to authenticate s3a with core-site.xml

2019-01-26 Thread Mario Georgiev (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16753128#comment-16753128
 ] 

 Mario Georgiev commented on FLINK-11429:
-

OK, this has been resolved: if you have the shaded Hadoop jar on the 
classpath (moved from opt/ to lib/), you need to specify your keys in 
flink-conf.yaml. However, I am now getting the following exception:

 

Caused by: java.io.IOException: 
org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider constructor exception.  A 
class specified in fs.s3a.aws.credentials.provider must provide a public 
constructor accepting URI and Configuration, or a public factory method named 
getInstance that accepts no arguments, or a public default constructor.

Any idea?
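
For reference, a hedged sketch of the flink-conf.yaml entries referred to 
above (key names as in the Flink S3 documentation; values are placeholders). 
Flink forwards {{s3.*}} keys to the shaded Hadoop {{fs.s3a.*}} configuration:

{noformat}
s3.access-key: <your-access-key>
s3.secret-key: <your-secret-key>
{noformat}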

> Flink fails to authenticate s3a with core-site.xml
> --
>
> Key: FLINK-11429
> URL: https://issues.apache.org/jira/browse/FLINK-11429
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.7.1
>Reporter:  Mario Georgiev
>Priority: Critical
>
> Hello,
> The problem is: if I put the core-site.xml somewhere in the Flink image and 
> point flink-conf.yaml at its path, it does not get picked up and I get an 
> exception:
> {code:java}
> Caused by: 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.AmazonClientException: No AWS 
> Credentials provided by BasicAWSCredentialsProvider 
> EnvironmentVariableCredentialsProvider InstanceProfileCredentialsProvider : 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.SdkClientException: Unable to 
> load credentials from service endpoint
> at 
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:139)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.getCredentialsFromContext(AmazonHttpClient.java:1164)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.runBeforeRequestHandlers(AmazonHttpClient.java:762)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:724)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4325)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.services.s3.AmazonS3Client.getBucketRegionViaHeadRequest(AmazonS3Client.java:5086)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.services.s3.AmazonS3Client.fetchRegionFromCache(AmazonS3Client.java:5060)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4309)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4272)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1337)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1277)
> at 
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$verifyBucketExists$1(S3AFileSystem.java:373)
> at 
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
> ... 31 more
> Caused by: 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.SdkClientException: Unable to 
> load credentials from service endpoint
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.auth.EC2CredentialsFetcher.handleError(EC2CredentialsFetcher.java:183)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.auth.EC2CredentialsFetcher.fetchCredentials(EC2CredentialsFetcher.java:162)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.auth.EC2CredentialsFetcher.getCredentials(EC2CredentialsFetcher.java:82)
> at 
> org.apache.flink.fs.s3base.shaded.com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:151)
> at 
> org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.AWSCredentialProviderList.getCredentials(AWSCredentialProviderList.java:117)
> ... 48 more
> Caused by: java.net.SocketException: Network unreachable (connect failed)
> at jav

[jira] [Updated] (FLINK-11429) Flink fails to authenticate s3a with core-site.xml

2019-01-26 Thread Mario Georgiev (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-11429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mario Georgiev updated FLINK-11429:
---
Priority: Minor  (was: Critical)

> Flink fails to authenticate s3a with core-site.xml
> --
>
> Key: FLINK-11429
> URL: https://issues.apache.org/jira/browse/FLINK-11429
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.7.1
>Reporter:  Mario Georgiev
>Priority: Minor
>

[jira] [Commented] (FLINK-11429) Flink fails to authenticate s3a with core-site.xml

2019-01-26 Thread Mario Georgiev (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16753219#comment-16753219
 ] 

 Mario Georgiev commented on FLINK-11429:
-

Solved,

Adding hadoop-common with the appropriate version to the Dockerfile helped.
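
For anyone hitting the same problem, a hedged sketch of that Dockerfile 
change (image tag, jar version, and paths are placeholders):

{noformat}
FROM flink:1.7.1
# Put a hadoop-common jar matching the shaded Hadoop version on the classpath.
COPY hadoop-common-<version>.jar /opt/flink/lib/
{noformat}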

> Flink fails to authenticate s3a with core-site.xml
> --
>
> Key: FLINK-11429
> URL: https://issues.apache.org/jira/browse/FLINK-11429
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.7.1
>Reporter:  Mario Georgiev
>Priority: Critical
>

[jira] [Commented] (FLINK-11429) Flink fails to authenticate s3a with core-site.xml

2019-01-26 Thread Mario Georgiev (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-11429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16753220#comment-16753220
 ] 

 Mario Georgiev commented on FLINK-11429:
-

You can close this issue. Thanks.

> Flink fails to authenticate s3a with core-site.xml
> --
>
> Key: FLINK-11429
> URL: https://issues.apache.org/jira/browse/FLINK-11429
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.7.1
>Reporter:  Mario Georgiev
>Priority: Critical
>

[jira] [Commented] (FLINK-10460) DataDog reporter JsonMappingException

2019-01-26 Thread Elias Levy (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-10460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16753231#comment-16753231
 ] 

Elias Levy commented on FLINK-10460:


[~lining] As you can tell from the backtrace, that is not a user metric. 
Rather, it appears to be a Kafka metric gathered in KafkaMetricWrapper.
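
A hedged sketch of the usual mitigation for this class of failure (not 
necessarily the DatadogHttpReporter fix): evaluate gauge values into a plain 
snapshot before handing anything to Jackson, so a live, concurrently mutated 
Kafka collection is never touched during serialization:

{code:java}
import java.util.ArrayList;
import java.util.List;
import org.apache.flink.metrics.Gauge;

// Hedged sketch: serialize stable snapshots, not live metric objects.
public final class GaugeSnapshots {

    public static List<Object> snapshot(List<Gauge<?>> gauges) {
        List<Object> values = new ArrayList<>(gauges.size());
        for (Gauge<?> gauge : gauges) {
            try {
                values.add(gauge.getValue()); // evaluated once, up front
            } catch (RuntimeException e) {
                values.add(null); // a racy gauge fails here, not inside Jackson
            }
        }
        return values;
    }
}
{code}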

> DataDog reporter JsonMappingException
> -
>
> Key: FLINK-10460
> URL: https://issues.apache.org/jira/browse/FLINK-10460
> Project: Flink
>  Issue Type: Improvement
>  Components: Metrics
>Affects Versions: 1.4.2
>Reporter: Elias Levy
>Priority: Minor
> Attachments: image-2019-01-24-16-00-56-280.png
>
>
> Observed the following error in the TM logs this morning:
> {code:java}
> WARN  org.apache.flink.metrics.datadog.DatadogHttpReporter  - Failed 
> reporting metrics to Datadog.
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException:
>  (was java.util.ConcurrentModificationException) (through reference chain: 
> org.apache.flink.metrics.datadog.DSeries["series"]->
> java.util.ArrayList[88]->org.apache.flink.metrics.datadog.DGauge["points"])
>   at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:379)
>at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:339)
>   at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ser.std.StdSerializer.wrapAndThrow(StdSerializer.java:342)
>at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:686)
>at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:157)
>at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serializeContents(IndexedListSerializer.java:119)
>at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serialize(IndexedListSerializer.java:79)
>   at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ser.impl.IndexedListSerializer.serialize(IndexedListSerializer.java:18)
>   at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:672)
>   at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:678)
>   at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:157)
>at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:130)
>at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper._configAndWriteValue(ObjectMapper.java:3631)
>at 
> org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper.writeValueAsString(ObjectMapper.java:2998)
>at 
> org.apache.flink.metrics.datadog.DatadogHttpClient.serialize(DatadogHttpClient.java:90)
>at 
> org.apache.flink.metrics.datadog.DatadogHttpClient.send(DatadogHttpClient.java:79)
>at 
> org.apache.flink.metrics.datadog.DatadogHttpReporter.report(DatadogHttpReporter.java:143)
>   at 
> org.apache.flink.runtime.metrics.MetricRegistryImpl$ReporterTask.run(MetricRegistryImpl.java:417)
>at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
>at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown
>  Source)
>at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
>  Source)
>at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>at java.lang.Thread.run(Unknown Source)
>  Caused by: java.util.ConcurrentModificationException
>at java.util.LinkedHashMap$LinkedHashIterator.nextNode(Unknown Source)
>at java.util.LinkedHashMap$LinkedKeyIterator.next(Unknown Source)
>at java.util.AbstractCollection.addAll(Unknown Source)
>at java.util.HashSet.<init>(Unknown Source)
>at 
> org.apache.kafka.common.internals.PartitionStates.partitionSet(PartitionStates.java:65)
>at 
> org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedPartitions(SubscriptionState.java:298)
>at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator$ConsumerCoordinatorMetri

[jira] [Assigned] (FLINK-7697) Add metrics for Elasticsearch Sink

2019-01-26 Thread xueyu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-7697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xueyu reassigned FLINK-7697:


Assignee: xueyu

> Add metrics for Elasticsearch Sink
> --
>
> Key: FLINK-7697
> URL: https://issues.apache.org/jira/browse/FLINK-7697
> Project: Flink
>  Issue Type: Wish
>  Components: ElasticSearch Connector
>Reporter: Hai Zhou
>Assignee: xueyu
>Priority: Major
> Fix For: 1.8.0
>
>
> We should add metrics to track events written to the ElasticsearchSink (see 
> the sketch after this list), e.g.:
> * number of successful bulk sends
> * number of documents inserted
> * number of documents updated
> * number of documents version conflicts
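
A hedged sketch of how such counters could be registered through Flink's 
metric group API (metric names are made up; the real change would instrument 
{{ElasticsearchSinkBase}}):

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class MeteredElasticsearchSink extends RichSinkFunction<String> {

    private transient Counter numSuccessfulBulkSends;
    private transient Counter numDocsInserted;

    @Override
    public void open(Configuration parameters) {
        numSuccessfulBulkSends =
            getRuntimeContext().getMetricGroup().counter("numSuccessfulBulkSends");
        numDocsInserted =
            getRuntimeContext().getMetricGroup().counter("numDocsInserted");
    }

    @Override
    public void invoke(String value) {
        // ... index the document via the Elasticsearch client, then:
        numDocsInserted.inc();
        // numSuccessfulBulkSends.inc() would follow each successful bulk flush.
    }
}
{code}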





[jira] [Created] (FLINK-11436) Java deserialization failure of the AvroSerializer when used in an old CompositeSerializers

2019-01-26 Thread Igal Shilman (JIRA)
Igal Shilman created FLINK-11436:


 Summary: Java deserialization failure of the AvroSerializer when 
used in an old CompositeSerializers
 Key: FLINK-11436
 URL: https://issues.apache.org/jira/browse/FLINK-11436
 Project: Flink
  Issue Type: Bug
  Components: Type Serialization System
Affects Versions: 1.7.1, 1.7.0
Reporter: Igal Shilman
Assignee: Igal Shilman


During the release of Flink 1.7, the serialVersionUID of the AvroSerializer 
was bumped to 2L (it was 1L before).

Although the AvroSerializer (along with its snapshot class) was migrated to 
the new serialization abstraction (and is hence free of Java serialization), 
some composite serializers were not yet migrated in that release (e.g. the 
StreamElementSerializer) and were still written with Java serialization into 
the checkpoint/savepoint.

If one of their nested serializers was the AvroSerializer, deserialization 
would fail with an exception due to the mismatched serialVersionUID.
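
As a hedged illustration of the failure mode (class name and values made up), 
Java deserialization rejects a class whose serialVersionUID no longer matches 
the one recorded in the stream:

{code:java}
import java.io.Serializable;

// Written into a savepoint by Flink 1.6 with serialVersionUID = 1L,
// read back by Flink 1.7 where the field is now 2L:
public class VersionedPayload implements Serializable {
    private static final long serialVersionUID = 2L; // was 1L
}
// Reading the old bytes then fails with:
// java.io.InvalidClassException: VersionedPayload; local class incompatible:
// stream classdesc serialVersionUID = 1, local class serialVersionUID = 2
{code}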

This issue was initially reported on the user mailing list:

http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Trouble-migrating-state-from-1-6-3-to-1-7-1-td25694.html


