Re: Flink error

2020-05-09 Thread Sivaprasanna S
It is working as expected. If I'm right, the print operator simply
calls `.toString()` on the input element. If you want to visualize your
payload in JSON format, override `toString()` in the `SensorData` class
with code that renders the payload as JSON, using ObjectMapper, the
Lombok plugin, or something similar.
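As a minimal sketch of that suggestion (the actual fields of `SensorData` were not shown in the thread, so `sensorId` and `temperature` here are placeholders; with Jackson on the classpath you could return `new ObjectMapper().writeValueAsString(this)` instead of formatting by hand):

```java
public class SensorData {
    // Hypothetical fields; the real SensorData class was not shown in the thread.
    private final String sensorId;
    private final double temperature;

    public SensorData(String sensorId, double temperature) {
        this.sensorId = sensorId;
        this.temperature = temperature;
    }

    @Override
    public String toString() {
        // Hand-rolled JSON so that print() shows readable output
        // instead of the default ClassName@hashCode form.
        return String.format("{\"sensorId\":\"%s\",\"temperature\":%s}",
                sensorId, temperature);
    }

    public static void main(String[] args) {
        System.out.println(new SensorData("s1", 21.5));
    }
}
```

With this override in place, Flink's print sink would emit the JSON string rather than `sensors.SensorData@49820beb`.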


On Sun, May 10, 2020 at 8:43 AM Aissa Elaffani wrote:

> Hello Guys,
> I hope you are well. I am trying to build a pipeline with Apache Kafka and
> Apache Flink. I am sending some data to a Kafka topic; the data is
> generated in JSON format. Then I try to consume it, so I tried to
> deserialize the message, but I think there is a problem, because when I
> want to print the deserialized results, I get some weird syntax:
> "sensors.SensorData@49820beb". I am going to show you some pictures
> from the project and I hope you can figure it out.
>
>




Re: upgrade flink from 1.9.1 to 1.10.0 on EMR

2020-05-09 Thread aj
Hello Yang,

I have attached my pom file, and I do not see that I am using any Hadoop
dependency. Can you please help me?

On Wed, May 6, 2020 at 1:22 PM Yang Wang  wrote:

> Hi aj,
>
> From the logs you have provided, the hadoop version is still 2.4.1.
> Could you check whether the user jar (i.e. events-processor-1.0-SNAPSHOT.jar)
> contains some hadoop classes? If it does, you need to exclude the hadoop
> dependency.
>
>
> Best,
> Yang
>
> aj wrote on Wed, May 6, 2020 at 3:38 PM:
>
>> Hello,
>>
>> Please help me upgrade to 1.10 in AWS EMR.
>>
>> On Fri, May 1, 2020 at 4:05 PM aj  wrote:
>>
>>> Hi Yang,
>>>
>>> I am attaching the logs for your reference. Please help me figure out
>>> what I am doing wrong.
>>>
>>> Thanks,
>>> Anuj
>>>
>>> On Wed, Apr 29, 2020 at 9:06 AM Yang Wang  wrote:
>>>
 Hi Anuj,

 I think the exception you came across is still because the hadoop version
 is 2.4.1. I have checked the hadoop code; the code lines are exactly the
 same.
 For 2.8.1, I also checked the ruleParser. It should work:

 /**
  * A pattern for parsing a auth_to_local rule.
  */
 private static final Pattern ruleParser =
     Pattern.compile("\\s*((DEFAULT)|(RULE:\\[(\\d*):([^\\]]*)](\\(([^)]*)\\))?" +
         "(s/([^/]*)/([^/]*)/(g)?)?))/?(L)?");


 Could you share the jobmanager logs so that I can check the classpath
 and the hadoop version?

 Best,
 Yang

 aj wrote on Tue, Apr 28, 2020 at 1:01 AM:

> Hello Yang,
> My Hadoop version is Hadoop 3.2.1-amzn-0,
> and I have put flink-shaded-hadoop-2-uber-2.8.3-10.0.jar in flink/lib.
>
> Then I am getting the error below:
>
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/mnt/yarn/usercache/hadoop/appcache/application_1587983834922_0002/filecache/10/slf4j-log4j12-1.7.15.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Exception in thread "main" java.lang.IllegalArgumentException: Invalid rule: /L
>   RULE:[2:$1@$0](.*@)s/@.*///L
>   DEFAULT
> at org.apache.hadoop.security.authentication.util.KerberosName.parseRules(KerberosName.java:321)
> at org.apache.hadoop.security.authentication.util.KerberosName.setRules(KerberosName.java:386)
> at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:75)
> at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:247)
> at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:232)
> at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:718)
> at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:703)
> at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:605)
> at org.apache.flink.yarn.entrypoint.YarnEntrypointUtils.logYarnEnvironmentInformation(YarnEntrypointUtils.java:136)
> at org.apache.flink.yarn.entrypoint.YarnJobClusterEntrypoint.main(YarnJobClusterEntrypoint.java:109)
>
>
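For illustration, the rule string from the "Invalid rule: /L" error can be tested against the 2.8.x `ruleParser` pattern quoted earlier in the thread. This is a standalone sketch, not part of the original exchange; it shows that the newer pattern accepts the optional trailing "/L" suffix that the 2.4.1 parser rejects:

```java
import java.util.regex.Pattern;

public class RuleCheck {
    // The ruleParser pattern from Hadoop 2.8.x, as quoted in the thread.
    // The trailing "(L)?" group is what allows the "/L" (lowercasing) suffix.
    static final Pattern ruleParser =
        Pattern.compile("\\s*((DEFAULT)|(RULE:\\[(\\d*):([^\\]]*)](\\(([^)]*)\\))?"
            + "(s/([^/]*)/([^/]*)/(g)?)?))/?(L)?");

    public static void main(String[] args) {
        // The rule that triggered the IllegalArgumentException under 2.4.1.
        String rule = "RULE:[2:$1@$0](.*@)s/@.*///L";
        System.out.println(ruleParser.matcher(rule).matches()); // prints true
    }
}
```

Under a 2.4.1 `KerberosName`, whose pattern lacks the `(L)?` group, the same string fails to parse, which is consistent with the classpath still carrying the older Hadoop classes.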
> If I remove flink-shaded-hadoop-2-uber-2.8.3-10.0.jar from lib,
> then I get the error below:
>
> 2020-04-27 16:59:37,293 INFO  org.apache.flink.client.cli.CliFrontend -  Classpath: /usr/lib/flink/lib/flink-table-blink_2.11-1.10.0.jar:/usr/lib/flink/lib/flink-table_2.11-1.10.0.jar:/usr/lib/flink/lib/log4j-1.2.17.jar:/usr/lib/flink/lib/slf4j-log4j12-1.7.15.jar:/usr/lib/flink/lib/flink-dist_2.11-1.10.0.jar::/etc/hadoop/conf:/etc/hadoop/conf
> 2020-04-27 16:59:37,293 INFO  org.apache.flink.client.cli.CliFrontend -
> 2020-04-27 16:59:37,300 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: jobmanager.heap.size, 1024m
> 2020-04-27 16:59:37,300 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.memory.process.size, 1568m
> 2020-04-27 16:59:37,300 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: taskmanager.numberOfTaskSlots, 1
> 2020-04-27 16:59:37,300 INFO  org.apache.flink.configuration.GlobalConfiguration - Loading configuration property: parallelism.default, 1
> 2020-04-27 16:59:37,300 INFO