Interesting. Which HBase / Phoenix releases are you using?
The following method has been removed from Put:

   public Put setWriteToWAL(boolean write) {
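
In HBase 1.0 and later, WAL behavior is controlled through setDurability()
instead. If this call were in your own code, the replacement would look
roughly like the sketch below (class name, row key, and column values are
placeholders):

   import org.apache.hadoop.hbase.client.Durability;
   import org.apache.hadoop.hbase.client.Put;
   import org.apache.hadoop.hbase.util.Bytes;

   public class WalDurabilityExample {
       public static void main(String[] args) {
           Put put = new Put(Bytes.toBytes("row1"));
           put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"),
               Bytes.toBytes("v"));
           // Old API, removed: put.setWriteToWAL(false);
           // HBase 1.0+ equivalent: skip the WAL for this mutation.
           put.setDurability(Durability.SKIP_WAL);
       }
   }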

Please make sure the Phoenix release is compatible with your HBase version.
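
Here the failing call is inside Phoenix itself, so the likely culprit is a
phoenix-*.jar built against an older HBase sitting on the classpath next to
a newer hbase-client. To see which jar the Put class is actually loaded
from at runtime, something like this quick diagnostic works (the class name
is a placeholder):

   import org.apache.hadoop.hbase.client.Put;

   public class WhereIsPut {
       public static void main(String[] args) {
           // Print the jar the Put class was loaded from.
           System.out.println(
               Put.class.getProtectionDomain().getCodeSource().getLocation());
           // List the setters this version of Put exposes; on HBase 1.x
           // you should see setDurability but no setWriteToWAL.
           for (java.lang.reflect.Method m : Put.class.getMethods()) {
               if (m.getName().startsWith("set")) {
                   System.out.println(m);
               }
           }
       }
   }

Run it with the same classpath your Spark job uses and it should point at
the mismatched jar.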

Cheers

On Fri, Jan 15, 2016 at 6:20 AM, Siddharth Ubale <siddharth.ub...@syncoms.com> wrote:

> Hi,
>
>
>
>
>
> This is the log from the application:
>
>
>
> 16/01/15 19:23:19 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED (diag message: Shutdown hook called before final status was reported.)
> 16/01/15 19:23:19 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1452763526769_0011
> 16/01/15 19:23:19 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
> 16/01/15 19:23:19 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
> 16/01/15 19:23:19 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
> 16/01/15 19:23:19 INFO util.Utils: Shutdown hook called
> 16/01/15 19:23:19 INFO client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x1523f753f6f0061
> 16/01/15 19:23:19 INFO zookeeper.ClientCnxn: EventThread shut down
> 16/01/15 19:23:19 INFO zookeeper.ZooKeeper: Session: 0x1523f753f6f0061 closed
>
> 16/01/15 19:23:19 ERROR yarn.ApplicationMaster: User class threw exception: java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Put.setWriteToWAL(Z)Lorg/apache/hadoop/hbase/client/Put;
> java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Put.setWriteToWAL(Z)Lorg/apache/hadoop/hbase/client/Put;
>         at org.apache.phoenix.schema.PTableImpl$PRowImpl.newMutations(PTableImpl.java:639)
>         at org.apache.phoenix.schema.PTableImpl$PRowImpl.<init>(PTableImpl.java:632)
>         at org.apache.phoenix.schema.PTableImpl.newRow(PTableImpl.java:557)
>         at org.apache.phoenix.schema.PTableImpl.newRow(PTableImpl.java:573)
>         at org.apache.phoenix.execute.MutationState.addRowMutations(MutationState.java:185)
>         at org.apache.phoenix.execute.MutationState.access$200(MutationState.java:79)
>         at org.apache.phoenix.execute.MutationState$2.init(MutationState.java:258)
>         at org.apache.phoenix.execute.MutationState$2.<init>(MutationState.java:255)
>         at org.apache.phoenix.execute.MutationState.toMutations(MutationState.java:253)
>         at org.apache.phoenix.execute.MutationState.toMutations(MutationState.java:243)
>         at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1840)
>         at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:744)
>         at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:186)
>         at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:303)
>         at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:295)
>         at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>         at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:293)
>         at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1236)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1891)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1860)
>         at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:77)
>         at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1860)
>         at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:162)
>         at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:131)
>         at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:133)
>         at java.sql.DriverManager.getConnection(DriverManager.java:664)
>         at java.sql.DriverManager.getConnection(DriverManager.java:270)
>         at spark.phoenix.PhoenixConnect.getConnection(PhoenixConnect.java:26)
>         at spark.stream.eventStream.startStream(eventStream.java:105)
>         at time.series.wo.agg.InputStreamSpark.main(InputStreamSpark.java:38)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:497)
>         at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:483)
>
> Thanks,
>
> Siddharth
>
>
>
>
>
> *From:* Ted Yu [mailto:yuzhih...@gmail.com]
> *Sent:* Friday, January 15, 2016 7:43 PM
> *To:* Siddharth Ubale <siddharth.ub...@syncoms.com>
> *Cc:* user@spark.apache.org
> *Subject:* Re: Spark App -Yarn-Cluster-Mode ===> Hadoop_conf_**.zip file.
>
>
>
> bq. check application tracking page:
> http://slave1:8088/proxy/application_1452763526769_0011/
> Then, ...
>
>
>
> Have you done the above to see what error was in each attempt?
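>
> For a failed run, you can also pull the aggregated container logs for all
> attempts from the command line, e.g.:
>
>    yarn logs -applicationId application_1452763526769_0011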
>
>
>
> Which Spark / Hadoop releases are you using?
>
>
>
> Thanks
>
>
>
> On Fri, Jan 15, 2016 at 5:58 AM, Siddharth Ubale <siddharth.ub...@syncoms.com> wrote:
>
> Hi,
>
>
>
> I am trying to run a Spark streaming application in yarn-cluster mode.
> However, I am facing an issue where the job ends asking for a particular
> Hadoop_conf_**.zip file in an HDFS location.
>
> Can anyone guide me with this?
>
> The application works fine in local mode, except that it stops abruptly
> for want of memory.
>
>
>
> Below is the error stack trace:
>
>
>
> diagnostics: Application application_1452763526769_0011 failed 2 times due to AM Container for appattempt_1452763526769_0011_000002 exited with exitCode: -1000
> For more detailed output, check application tracking page: http://slave1:8088/proxy/application_1452763526769_0011/ Then, click on links to logs of each attempt.
> Diagnostics: File does not exist: hdfs://slave1:9000/user/hduser/.sparkStaging/application_1452763526769_0011/__hadoop_conf__1057113228186399290.zip
>
> *java.io.FileNotFoundException: File does not exist: hdfs://slave1:9000/user/hduser/.sparkStaging/application_1452763526769_0011/__hadoop_conf__1057113228186399290.zip*
>         at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
>         at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
>         at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
>         at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
>         at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>         at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
>         at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         at java.lang.Thread.run(Thread.java:745)
>
>
>
> Failing this attempt. Failing the application.
>         ApplicationMaster host: N/A
>         ApplicationMaster RPC port: -1
>         queue: default
>         start time: 1452866026622
>         final status: FAILED
>         tracking URL: http://slave1:8088/cluster/app/application_1452763526769_0011
>         user: hduser
>
> Exception in thread "main" org.apache.spark.SparkException: Application application_1452763526769_0011 finished with failed status
>         at org.apache.spark.deploy.yarn.Client.run(Client.scala:841)
>         at org.apache.spark.deploy.yarn.Client$.main(Client.scala:867)
>         at org.apache.spark.deploy.yarn.Client.main(Client.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:497)
>         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:664)
>         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:169)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:192)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:111)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
> 16/01/15 19:23:53 INFO Utils: Shutdown hook called
> 16/01/15 19:23:53 INFO Utils: Deleting directory /tmp/spark-b6ebcb83-efff-432a-9a7a-b4764f482d81
>
> java.lang.UNIXProcess$ProcessPipeOutputStream@7a0a6f73  1
>
>
>
>
>
>
>
> Siddharth Ubale,
> *Synchronized Communications*
> #43, Velankani Tech Park, Block No. II,
> 3rd Floor, Electronic City Phase I,
> Bangalore – 560 100
> Tel: +91 80 3202 4060
> Web: www.syncoms.com
> *London* | *Bangalore* | *Orlando*
>
> *we innovate, plan, execute, and transform the business*
>
