Hi John,

I see this error:

Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL


Can you check whether there is a problem with your Hadoop storage (HDFS),
or an issue with your Linux user, say hduser?
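
As a quick sketch of what I would check first (the paths and the
application id below are taken from your own log output, so adjust them
for your setup):

# is HDFS healthy and out of safe mode?
hdfs dfsadmin -report
hdfs dfsadmin -safemode get

# can your user read/write the warehouse and scratch directories?
hdfs dfs -ls /user/hive/warehouse
hdfs dfs -ls /tmp/hive

# pull the container logs for the failed job to see the real error
yarn logs -applicationId application_1463594979064_0003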

HTH

Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 18 May 2016 at 19:41, JOHN MILLER <jmill...@gmail.com> wrote:

> Greetings
>
> Thanks. Sure can. Below is the output from the Hive CLI:
>
> hive> select count(distinct warctype) from commoncrawl18 where
> warctype='warcinfo';
> Query ID = jmill383_20160518143715_34041e3e-713b-4e35-ae86-a88498192ab1
> Total jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=<number>
> Starting Job = job_1463594979064_0003, Tracking URL =
> http://starchild:8088/proxy/application_1463594979064_0003/
> Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1463594979064_0003
> Hadoop job information for Stage-1: number of mappers: 0; number of
> reducers: 0
> 2016-05-18 14:37:19,794 Stage-1 map = 0%,  reduce = 0%
> Ended Job = job_1463594979064_0003 with errors
> Error during job, obtaining debugging information...
> Job Tracking URL:
> http://starchild:8088/cluster/app/application_1463594979064_0003
> FAILED: Execution Error, return code 2 from
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> MapReduce Jobs Launched:
> Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> hive>
>
>
> This one is from cascading-hive
>
> [jmill383@starchild demo]$ /opt/hadoop/bin/hadoop jar
> build/libs/cascading-hive-demo-1.0.jar cascading.hive.HiveDemo
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/usr/local/hive/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 16/05/18 14:40:20 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> 16/05/18 14:40:20 INFO metastore.HiveMetaStore: 0: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 16/05/18 14:40:20 INFO metastore.ObjectStore: ObjectStore, initialize
> called
> 16/05/18 14:40:21 WARN DataNucleus.General: Plugin (Bundle)
> "org.datanucleus.api.jdo" is already registered. Ensure you dont have
> multiple JAR versions of the same plugin in the classpath. The URL
> "file:/tmp/hadoop-unjar5875988559818865999/lib/datanucleus-api-jdo-3.2.6.jar"
> is already registered, and you are trying to register an identical plugin
> located at URL "file:/usr/local/hive/lib/datanucleus-api-jdo-3.2.6.jar."
> 16/05/18 14:40:21 WARN DataNucleus.General: Plugin (Bundle)
> "org.datanucleus" is already registered. Ensure you dont have multiple JAR
> versions of the same plugin in the classpath. The URL
> "file:/tmp/hadoop-unjar5875988559818865999/lib/datanucleus-core-3.2.10.jar"
> is already registered, and you are trying to register an identical plugin
> located at URL "file:/usr/local/hive/lib/datanucleus-core-3.2.10.jar."
> 16/05/18 14:40:21 WARN DataNucleus.General: Plugin (Bundle)
> "org.datanucleus.store.rdbms" is already registered. Ensure you dont have
> multiple JAR versions of the same plugin in the classpath. The URL
> "file:/tmp/hadoop-unjar5875988559818865999/lib/datanucleus-rdbms-3.2.9.jar"
> is already registered, and you are trying to register an identical plugin
> located at URL "file:/usr/local/hive/lib/datanucleus-rdbms-3.2.9.jar."
> 16/05/18 14:40:21 INFO DataNucleus.Persistence: Property
> datanucleus.cache.level2 unknown - will be ignored
> 16/05/18 14:40:21 INFO DataNucleus.Persistence: Property
> hive.metastore.integral.jdo.pushdown unknown - will be ignored
> 16/05/18 14:40:22 INFO metastore.ObjectStore: Setting MetaStore object pin
> classes with
> hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
> 16/05/18 14:40:23 INFO DataNucleus.Datastore: The class
> "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as
> "embedded-only" so does not have its own datastore table.
> 16/05/18 14:40:23 INFO DataNucleus.Datastore: The class
> "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as
> "embedded-only" so does not have its own datastore table.
> 16/05/18 14:40:23 INFO DataNucleus.Datastore: The class
> "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as
> "embedded-only" so does not have its own datastore table.
> 16/05/18 14:40:23 INFO DataNucleus.Datastore: The class
> "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as
> "embedded-only" so does not have its own datastore table.
> 16/05/18 14:40:24 INFO DataNucleus.Query: Reading in results for query
> "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used
> is closing
> 16/05/18 14:40:24 INFO metastore.MetaStoreDirectSql: Using direct SQL,
> underlying DB is DERBY
> 16/05/18 14:40:24 INFO metastore.ObjectStore: Initialized ObjectStore
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: Added admin role in
> metastore
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: Added public role in
> metastore
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: No user is added in admin
> role, since config is empty
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: get_table : db=default
> tbl=dual
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Shutting down the
> object store...
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Shutting down the object store...
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Metastore shutdown
> complete.
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Metastore shutdown complete.
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: get_table : db=default
> tbl=dual
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 16/05/18 14:40:24 INFO metastore.ObjectStore: ObjectStore, initialize
> called
> 16/05/18 14:40:24 INFO DataNucleus.Query: Reading in results for query
> "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used
> is closing
> 16/05/18 14:40:24 INFO metastore.MetaStoreDirectSql: Using direct SQL,
> underlying DB is DERBY
> 16/05/18 14:40:24 INFO metastore.ObjectStore: Initialized ObjectStore
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Shutting down the
> object store...
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Shutting down the object store...
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Metastore shutdown
> complete.
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Metastore shutdown complete.
> 16/05/18 14:40:24 INFO property.AppProps: using app.id:
> 954F6CFECF794BC191AB3296A6FAC1F5
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: get_table : db=default
> tbl=keyvalue
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 16/05/18 14:40:24 INFO metastore.ObjectStore: ObjectStore, initialize
> called
> 16/05/18 14:40:24 INFO DataNucleus.Query: Reading in results for query
> "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used
> is closing
> 16/05/18 14:40:24 INFO metastore.MetaStoreDirectSql: Using direct SQL,
> underlying DB is DERBY
> 16/05/18 14:40:24 INFO metastore.ObjectStore: Initialized ObjectStore
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Shutting down the
> object store...
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Shutting down the object store...
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Metastore shutdown
> complete.
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Metastore shutdown complete.
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: get_table : db=default
> tbl=keyvalue
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 16/05/18 14:40:24 INFO metastore.ObjectStore: ObjectStore, initialize
> called
> 16/05/18 14:40:24 INFO DataNucleus.Query: Reading in results for query
> "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used
> is closing
> 16/05/18 14:40:24 INFO metastore.MetaStoreDirectSql: Using direct SQL,
> underlying DB is DERBY
> 16/05/18 14:40:24 INFO metastore.ObjectStore: Initialized ObjectStore
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Shutting down the
> object store...
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Shutting down the object store...
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Metastore shutdown
> complete.
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Metastore shutdown complete.
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: get_table : db=default
> tbl=keyvalue2
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue2
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 16/05/18 14:40:24 INFO metastore.ObjectStore: ObjectStore, initialize
> called
> 16/05/18 14:40:24 INFO DataNucleus.Query: Reading in results for query
> "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used
> is closing
> 16/05/18 14:40:24 INFO metastore.MetaStoreDirectSql: Using direct SQL,
> underlying DB is DERBY
> 16/05/18 14:40:24 INFO metastore.ObjectStore: Initialized ObjectStore
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Shutting down the
> object store...
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Shutting down the object store...
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Metastore shutdown
> complete.
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Metastore shutdown complete.
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: get_table : db=default
> tbl=keyvalue2
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue2
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 16/05/18 14:40:24 INFO metastore.ObjectStore: ObjectStore, initialize
> called
> 16/05/18 14:40:24 INFO DataNucleus.Query: Reading in results for query
> "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used
> is closing
> 16/05/18 14:40:24 INFO metastore.MetaStoreDirectSql: Using direct SQL,
> underlying DB is DERBY
> 16/05/18 14:40:24 INFO metastore.ObjectStore: Initialized ObjectStore
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Shutting down the
> object store...
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Shutting down the object store...
> 16/05/18 14:40:24 INFO metastore.HiveMetaStore: 0: Metastore shutdown
> complete.
> 16/05/18 14:40:24 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Metastore shutdown complete.
> 16/05/18 14:40:24 INFO util.Util: resolving application jar from found
> main method on: cascading.hive.HiveDemo
> 16/05/18 14:40:24 INFO planner.HadoopPlanner: using application jar:
> /home/jmill383/cascading-hive/demo/build/libs/cascading-hive-demo-1.0.jar
> 16/05/18 14:40:25 INFO flow.Flow: [uppercase kv -> kv2 ] executed rule
> registry: MapReduceHadoopRuleRegistry, completed as: SUCCESS, in: 00:00.050
> 16/05/18 14:40:25 INFO flow.Flow: [uppercase kv -> kv2 ] rule registry:
> MapReduceHadoopRuleRegistry, supports assembly with steps: 1, nodes: 1
> 16/05/18 14:40:25 INFO flow.Flow: [uppercase kv -> kv2 ] rule registry:
> MapReduceHadoopRuleRegistry, result was selected using: 'default
> comparator: selects plan with fewest steps and fewest nodes'
> 16/05/18 14:40:25 INFO Configuration.deprecation:
> mapred.used.genericoptionsparser is deprecated. Instead, use
> mapreduce.client.genericoptionsparser.used
> 16/05/18 14:40:25 INFO Configuration.deprecation: mapred.working.dir is
> deprecated. Instead, use mapreduce.job.working.dir
> 16/05/18 14:40:25 INFO Configuration.deprecation: mapred.input.dir is
> deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
> 16/05/18 14:40:25 INFO Configuration.deprecation: mapred.output.dir is
> deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
> 16/05/18 14:40:25 INFO Configuration.deprecation: mapred.output.compress
> is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
> 16/05/18 14:40:25 INFO Configuration.deprecation: mapred.output.key.class
> is deprecated. Instead, use mapreduce.job.output.key.class
> 16/05/18 14:40:25 INFO Configuration.deprecation:
> mapred.output.value.class is deprecated. Instead, use
> mapreduce.job.output.value.class
> 16/05/18 14:40:25 INFO util.Version: Concurrent, Inc - Cascading
> 3.1.0-wip-57
> 16/05/18 14:40:25 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]
> starting
> 16/05/18 14:40:25 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]
> parallel execution of flows is enabled: false
> 16/05/18 14:40:25 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]
> executing total flows: 3
> 16/05/18 14:40:25 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]
> allocating management threads: 1
> 16/05/18 14:40:25 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]
> starting flow: load data into dual
> 16/05/18 14:40:25 INFO flow.Flow: [load data into dual] at least one sink
> is marked for delete
> 16/05/18 14:40:25 INFO flow.Flow: [load data into dual] sink oldest
> modified date: Wed Dec 31 18:59:59 EST 1969
> 16/05/18 14:40:25 INFO metastore.HiveMetaStore: 1: get_table : db=default
> tbl=dual
> 16/05/18 14:40:25 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual
> 16/05/18 14:40:25 INFO metastore.HiveMetaStore: 1: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 16/05/18 14:40:25 INFO metastore.ObjectStore: ObjectStore, initialize
> called
> 16/05/18 14:40:25 INFO DataNucleus.Query: Reading in results for query
> "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used
> is closing
> 16/05/18 14:40:25 INFO metastore.MetaStoreDirectSql: Using direct SQL,
> underlying DB is DERBY
> 16/05/18 14:40:25 INFO metastore.ObjectStore: Initialized ObjectStore
> 16/05/18 14:40:25 INFO hive.HiveTap: strict mode: comparing existing hive
> table with table descriptor
> 16/05/18 14:40:25 INFO metastore.HiveMetaStore: 1: Shutting down the
> object store...
> 16/05/18 14:40:25 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Shutting down the object store...
> 16/05/18 14:40:25 INFO metastore.HiveMetaStore: 1: Metastore shutdown
> complete.
> 16/05/18 14:40:25 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Metastore shutdown complete.
> 16/05/18 14:40:25 INFO metastore.HiveMetaStore: 2: get_all_databases
> 16/05/18 14:40:25 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_all_databases
> 16/05/18 14:40:25 INFO metastore.HiveMetaStore: 2: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 16/05/18 14:40:25 INFO metastore.ObjectStore: ObjectStore, initialize
> called
> 16/05/18 14:40:25 INFO DataNucleus.Query: Reading in results for query
> "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used
> is closing
> 16/05/18 14:40:25 INFO metastore.MetaStoreDirectSql: Using direct SQL,
> underlying DB is DERBY
> 16/05/18 14:40:25 INFO metastore.ObjectStore: Initialized ObjectStore
> 16/05/18 14:40:25 INFO metastore.HiveMetaStore: 2: get_functions:
> db=default pat=*
> 16/05/18 14:40:25 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_functions: db=default pat=*
> 16/05/18 14:40:25 INFO DataNucleus.Datastore: The class
> "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as
> "embedded-only" so does not have its own datastore table.
> 16/05/18 14:40:25 INFO session.SessionState: Created local directory:
> /tmp/e9c0df1e-0647-47df-ad9b-ddc1dcdb6054_resources
> 16/05/18 14:40:25 INFO session.SessionState: Created HDFS directory:
> /tmp/hive/jmill383/e9c0df1e-0647-47df-ad9b-ddc1dcdb6054
> 16/05/18 14:40:25 INFO session.SessionState: Created local directory:
> /tmp/jmill383/e9c0df1e-0647-47df-ad9b-ddc1dcdb6054
> 16/05/18 14:40:25 INFO session.SessionState: Created HDFS directory:
> /tmp/hive/jmill383/e9c0df1e-0647-47df-ad9b-ddc1dcdb6054/_tmp_space.db
> 16/05/18 14:40:25 INFO hive.HiveQueryRunner: running hive query: 'load
> data local inpath
> 'file:///home/jmill383/cascading-hive/demo/src/main/resources/data.txt'
> overwrite into table dual'
> 16/05/18 14:40:25 INFO log.PerfLogger: <PERFLOG method=Driver.run
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:25 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:25 INFO log.PerfLogger: <PERFLOG method=compile
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:25 INFO log.PerfLogger: <PERFLOG method=parse
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:25 INFO parse.ParseDriver: Parsing command: load data local
> inpath
> 'file:///home/jmill383/cascading-hive/demo/src/main/resources/data.txt'
> overwrite into table dual
> 16/05/18 14:40:26 INFO parse.ParseDriver: Parse Completed
> 16/05/18 14:40:26 INFO log.PerfLogger: </PERFLOG method=parse
> start=1463596825623 end=1463596826130 duration=507
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO metastore.HiveMetaStore: 2: get_table : db=default
> tbl=dual
> 16/05/18 14:40:26 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual
> 16/05/18 14:40:26 INFO ql.Driver: Semantic Analysis Completed
> 16/05/18 14:40:26 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze
> start=1463596826132 end=1463596826291 duration=159
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO ql.Driver: Returning Hive schema:
> Schema(fieldSchemas:null, properties:null)
> 16/05/18 14:40:26 INFO log.PerfLogger: </PERFLOG method=compile
> start=1463596825602 end=1463596826295 duration=693
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO ql.Driver: Concurrency mode is disabled, not
> creating a lock manager
> 16/05/18 14:40:26 INFO log.PerfLogger: <PERFLOG method=Driver.execute
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO ql.Driver: Starting
> command(queryId=jmill383_20160518144025_dbe97c43-6e94-43a9-bf17-14dd2d88f490):
> load data local inpath
> 'file:///home/jmill383/cascading-hive/demo/src/main/resources/data.txt'
> overwrite into table dual
> 16/05/18 14:40:26 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit
> start=1463596825602 end=1463596826298 duration=696
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO log.PerfLogger: <PERFLOG method=runTasks
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO log.PerfLogger: <PERFLOG method=task.MOVE.Stage-0
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO ql.Driver: Starting task [Stage-0:MOVE] in serial
> mode
> Loading data to table default.dual
> 16/05/18 14:40:26 INFO exec.Task: Loading data to table default.dual from
> file:/home/jmill383/cascading-hive/demo/src/main/resources/data.txt
> 16/05/18 14:40:26 INFO metastore.HiveMetaStore: 2: get_table : db=default
> tbl=dual
> 16/05/18 14:40:26 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual
> 16/05/18 14:40:26 INFO metastore.HiveMetaStore: 2: get_table : db=default
> tbl=dual
> 16/05/18 14:40:26 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual
> 16/05/18 14:40:26 INFO common.FileUtils: deleting
> hdfs://localhost:8025/user/hive/warehouse/dual/data.txt
> 16/05/18 14:40:26 INFO fs.TrashPolicyDefault: Namenode trash
> configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
> 16/05/18 14:40:26 INFO metadata.Hive: Replacing
> src:file:/home/jmill383/cascading-hive/demo/src/main/resources/data.txt,
> dest: hdfs://localhost:8025/user/hive/warehouse/dual/data.txt, Status:true
> 16/05/18 14:40:26 INFO metastore.HiveMetaStore: 2: alter_table: db=default
> tbl=dual newtbl=dual
> 16/05/18 14:40:26 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=alter_table: db=default tbl=dual newtbl=dual
> 16/05/18 14:40:26 INFO hive.log: Updating table stats fast for dual
> 16/05/18 14:40:26 INFO hive.log: Updated size of table dual to 2
> 16/05/18 14:40:26 INFO log.PerfLogger: <PERFLOG method=task.STATS.Stage-1
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO ql.Driver: Starting task [Stage-1:STATS] in serial
> mode
> 16/05/18 14:40:26 INFO exec.StatsTask: Executing stats task
> 16/05/18 14:40:26 INFO metastore.HiveMetaStore: 2: get_table : db=default
> tbl=dual
> 16/05/18 14:40:26 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual
> 16/05/18 14:40:26 INFO metastore.HiveMetaStore: 2: get_table : db=default
> tbl=dual
> 16/05/18 14:40:26 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual
> 16/05/18 14:40:26 INFO metastore.HiveMetaStore: 2: alter_table: db=default
> tbl=dual newtbl=dual
> 16/05/18 14:40:26 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=alter_table: db=default tbl=dual newtbl=dual
> 16/05/18 14:40:26 INFO hive.log: Updating table stats fast for dual
> 16/05/18 14:40:26 INFO hive.log: Updated size of table dual to 2
> Table default.dual stats: [numFiles=1, numRows=0, totalSize=2,
> rawDataSize=0]
> 16/05/18 14:40:26 INFO exec.Task: Table default.dual stats: [numFiles=1,
> numRows=0, totalSize=2, rawDataSize=0]
> 16/05/18 14:40:26 INFO log.PerfLogger: </PERFLOG method=runTasks
> start=1463596826298 end=1463596826726 duration=428
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO log.PerfLogger: </PERFLOG method=Driver.execute
> start=1463596826296 end=1463596826726 duration=430
> from=org.apache.hadoop.hive.ql.Driver>
> OK
> 16/05/18 14:40:26 INFO ql.Driver: OK
> 16/05/18 14:40:26 INFO log.PerfLogger: <PERFLOG method=releaseLocks
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO log.PerfLogger: </PERFLOG method=releaseLocks
> start=1463596826727 end=1463596826727 duration=0
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO log.PerfLogger: </PERFLOG method=Driver.run
> start=1463596825602 end=1463596826727 duration=1125
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO log.PerfLogger: <PERFLOG method=releaseLocks
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO log.PerfLogger: </PERFLOG method=releaseLocks
> start=1463596826728 end=1463596826728 duration=0
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]
> completed flow: load data into dual
> 16/05/18 14:40:26 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]
> starting flow: select data from dual into keyvalue
> 16/05/18 14:40:26 INFO flow.Flow: [select data from dual ...] at least one
> sink is marked for delete
> 16/05/18 14:40:26 INFO flow.Flow: [select data from dual ...] sink oldest
> modified date: Wed Dec 31 18:59:59 EST 1969
> 16/05/18 14:40:26 INFO metastore.HiveMetaStore: 1: get_table : db=default
> tbl=keyvalue
> 16/05/18 14:40:26 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue
> 16/05/18 14:40:26 INFO metastore.HiveMetaStore: 1: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 16/05/18 14:40:26 INFO metastore.ObjectStore: ObjectStore, initialize
> called
> 16/05/18 14:40:26 INFO DataNucleus.Query: Reading in results for query
> "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used
> is closing
> 16/05/18 14:40:26 INFO metastore.MetaStoreDirectSql: Using direct SQL,
> underlying DB is DERBY
> 16/05/18 14:40:26 INFO metastore.ObjectStore: Initialized ObjectStore
> 16/05/18 14:40:26 INFO hive.HiveTap: strict mode: comparing existing hive
> table with table descriptor
> 16/05/18 14:40:26 INFO metastore.HiveMetaStore: 1: Shutting down the
> object store...
> 16/05/18 14:40:26 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Shutting down the object store...
> 16/05/18 14:40:26 INFO metastore.HiveMetaStore: 1: Metastore shutdown
> complete.
> 16/05/18 14:40:26 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=Metastore shutdown complete.
> 16/05/18 14:40:26 INFO session.SessionState: Created local directory:
> /tmp/b51c81d8-23b9-49ea-b012-92f81bc1c0ce_resources
> 16/05/18 14:40:26 INFO session.SessionState: Created HDFS directory:
> /tmp/hive/jmill383/b51c81d8-23b9-49ea-b012-92f81bc1c0ce
> 16/05/18 14:40:26 INFO session.SessionState: Created local directory:
> /tmp/jmill383/b51c81d8-23b9-49ea-b012-92f81bc1c0ce
> 16/05/18 14:40:26 INFO session.SessionState: Created HDFS directory:
> /tmp/hive/jmill383/b51c81d8-23b9-49ea-b012-92f81bc1c0ce/_tmp_space.db
> 16/05/18 14:40:26 INFO hive.HiveQueryRunner: running hive query: 'insert
> overwrite table keyvalue select 'Hello' as key, 'hive!' as value from dual'
> 16/05/18 14:40:26 INFO log.PerfLogger: <PERFLOG method=Driver.run
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO log.PerfLogger: <PERFLOG method=compile
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO log.PerfLogger: <PERFLOG method=parse
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO parse.ParseDriver: Parsing command: insert
> overwrite table keyvalue select 'Hello' as key, 'hive!' as value from dual
> 16/05/18 14:40:26 INFO parse.ParseDriver: Parse Completed
> 16/05/18 14:40:26 INFO log.PerfLogger: </PERFLOG method=parse
> start=1463596826826 end=1463596826833 duration=7
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:26 INFO parse.CalcitePlanner: Starting Semantic Analysis
> 16/05/18 14:40:26 INFO parse.CalcitePlanner: Completed phase 1 of Semantic
> Analysis
> 16/05/18 14:40:26 INFO parse.CalcitePlanner: Get metadata for source tables
> 16/05/18 14:40:26 INFO metastore.HiveMetaStore: 3: get_table : db=default
> tbl=dual
> 16/05/18 14:40:26 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=dual
> 16/05/18 14:40:26 INFO metastore.HiveMetaStore: 3: Opening raw store with
> implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
> 16/05/18 14:40:26 INFO metastore.ObjectStore: ObjectStore, initialize
> called
> 16/05/18 14:40:26 INFO DataNucleus.Query: Reading in results for query
> "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used
> is closing
> 16/05/18 14:40:26 INFO metastore.MetaStoreDirectSql: Using direct SQL,
> underlying DB is DERBY
> 16/05/18 14:40:26 INFO metastore.ObjectStore: Initialized ObjectStore
> 16/05/18 14:40:26 INFO parse.CalcitePlanner: Get metadata for subqueries
> 16/05/18 14:40:26 INFO parse.CalcitePlanner: Get metadata for destination
> tables
> 16/05/18 14:40:26 INFO metastore.HiveMetaStore: 3: get_table : db=default
> tbl=keyvalue
> 16/05/18 14:40:26 INFO HiveMetaStore.audit: ugi=jmill383
> ip=unknown-ip-addr    cmd=get_table : db=default tbl=keyvalue
> 16/05/18 14:40:26 INFO parse.CalcitePlanner: Completed getting MetaData in
> Semantic Analysis
> 16/05/18 14:40:26 INFO parse.BaseSemanticAnalyzer: Not invoking CBO
> because the statement has too few joins
> 16/05/18 14:40:26 INFO common.FileUtils: Creating directory if it doesn't
> exist:
> hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-05-18_14-40-26_826_7796779660082577343-1
> 16/05/18 14:40:26 INFO parse.CalcitePlanner: Set stats collection dir :
> hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-05-18_14-40-26_826_7796779660082577343-1/-ext-10001
> 16/05/18 14:40:27 INFO ppd.OpProcFactory: Processing for FS(2)
> 16/05/18 14:40:27 INFO ppd.OpProcFactory: Processing for SEL(1)
> 16/05/18 14:40:27 INFO ppd.OpProcFactory: Processing for TS(0)
> 16/05/18 14:40:27 INFO log.PerfLogger: <PERFLOG
> method=partition-retrieving
> from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
> 16/05/18 14:40:27 INFO log.PerfLogger: </PERFLOG
> method=partition-retrieving start=1463596827044 end=1463596827044
> duration=0 from=org.apache.hadoop.hive.ql.optimizer.ppr.PartitionPruner>
> 16/05/18 14:40:27 INFO optimizer.GenMRFileSink1: using
> CombineHiveInputformat for the merge job
> 16/05/18 14:40:27 INFO physical.NullScanTaskDispatcher: Looking for table
> scans where optimization is applicable
> 16/05/18 14:40:27 INFO physical.NullScanTaskDispatcher: Found 0 null table
> scans
> 16/05/18 14:40:27 INFO physical.NullScanTaskDispatcher: Looking for table
> scans where optimization is applicable
> 16/05/18 14:40:27 INFO physical.NullScanTaskDispatcher: Found 0 null table
> scans
> 16/05/18 14:40:27 INFO physical.NullScanTaskDispatcher: Looking for table
> scans where optimization is applicable
> 16/05/18 14:40:27 INFO physical.NullScanTaskDispatcher: Found 0 null table
> scans
> 16/05/18 14:40:27 INFO parse.CalcitePlanner: Completed plan generation
> 16/05/18 14:40:27 INFO ql.Driver: Semantic Analysis Completed
> 16/05/18 14:40:27 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze
> start=1463596826833 end=1463596827060 duration=227
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:27 INFO ql.Driver: Returning Hive schema:
> Schema(fieldSchemas:[FieldSchema(name:key, type:string, comment:null),
> FieldSchema(name:value, type:string, comment:null)], properties:null)
> 16/05/18 14:40:27 INFO log.PerfLogger: </PERFLOG method=compile
> start=1463596826825 end=1463596827060 duration=235
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:27 INFO ql.Driver: Concurrency mode is disabled, not
> creating a lock manager
> 16/05/18 14:40:27 INFO log.PerfLogger: <PERFLOG method=Driver.execute
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:27 INFO ql.Driver: Starting
> command(queryId=jmill383_20160518144026_b7cbea99-d34d-41da-8582-3be591e5d282):
> insert overwrite table keyvalue select 'Hello' as key, 'hive!' as value
> from dual
> Query ID = jmill383_20160518144026_b7cbea99-d34d-41da-8582-3be591e5d282
> 16/05/18 14:40:27 INFO ql.Driver: Query ID =
> jmill383_20160518144026_b7cbea99-d34d-41da-8582-3be591e5d282
> Total jobs = 3
> 16/05/18 14:40:27 INFO ql.Driver: Total jobs = 3
> 16/05/18 14:40:27 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit
> start=1463596826825 end=1463596827061 duration=236
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:27 INFO log.PerfLogger: <PERFLOG method=runTasks
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:27 INFO log.PerfLogger: <PERFLOG method=task.MAPRED.Stage-1
> from=org.apache.hadoop.hive.ql.Driver>
> Launching Job 1 out of 3
> 16/05/18 14:40:27 INFO ql.Driver: Launching Job 1 out of 3
> 16/05/18 14:40:27 INFO ql.Driver: Starting task [Stage-1:MAPRED] in serial
> mode
> Number of reduce tasks is set to 0 since there's no reduce operator
> 16/05/18 14:40:27 INFO exec.Task: Number of reduce tasks is set to 0 since
> there's no reduce operator
> 16/05/18 14:40:27 INFO ql.Context: New scratch dir is
> hdfs://localhost:8025/tmp/hive/jmill383/b51c81d8-23b9-49ea-b012-92f81bc1c0ce/hive_2016-05-18_14-40-26_826_7796779660082577343-1
> 16/05/18 14:40:27 INFO mr.ExecDriver: Using
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat
> 16/05/18 14:40:27 INFO exec.Utilities: Processing alias dual
> 16/05/18 14:40:27 INFO exec.Utilities: Adding input file
> hdfs://localhost:8025/user/hive/warehouse/dual
> 16/05/18 14:40:27 INFO exec.Utilities: Content Summary not cached for
> hdfs://localhost:8025/user/hive/warehouse/dual
> 16/05/18 14:40:27 INFO ql.Context: New scratch dir is
> hdfs://localhost:8025/tmp/hive/jmill383/b51c81d8-23b9-49ea-b012-92f81bc1c0ce/hive_2016-05-18_14-40-26_826_7796779660082577343-1
> 16/05/18 14:40:27 INFO log.PerfLogger: <PERFLOG method=serializePlan
> from=org.apache.hadoop.hive.ql.exec.Utilities>
> 16/05/18 14:40:27 INFO exec.Utilities: Serializing MapWork via kryo
> 16/05/18 14:40:27 INFO log.PerfLogger: </PERFLOG method=serializePlan
> start=1463596827125 end=1463596827182 duration=57
> from=org.apache.hadoop.hive.ql.exec.Utilities>
> 16/05/18 14:40:27 INFO Configuration.deprecation:
> mapred.submit.replication is deprecated. Instead, use
> mapreduce.client.submit.file.replication
> 16/05/18 14:40:27 ERROR mr.ExecDriver: yarn
> 16/05/18 14:40:27 INFO client.RMProxy: Connecting to ResourceManager at /
> 0.0.0.0:8032
> 16/05/18 14:40:27 INFO fs.FSStatsPublisher: created :
> hdfs://localhost:8025/user/hive/warehouse/keyvalue/.hive-staging_hive_2016-05-18_14-40-26_826_7796779660082577343-1/-ext-10001
> 16/05/18 14:40:27 INFO client.RMProxy: Connecting to ResourceManager at /
> 0.0.0.0:8032
> 16/05/18 14:40:27 INFO exec.Utilities: PLAN PATH =
> hdfs://localhost:8025/tmp/hive/jmill383/b51c81d8-23b9-49ea-b012-92f81bc1c0ce/hive_2016-05-18_14-40-26_826_7796779660082577343-1/-mr-10004/2243cc23-ce83-4a00-87ba-d1970db29def/map.xml
> 16/05/18 14:40:27 INFO exec.Utilities: PLAN PATH =
> hdfs://localhost:8025/tmp/hive/jmill383/b51c81d8-23b9-49ea-b012-92f81bc1c0ce/hive_2016-05-18_14-40-26_826_7796779660082577343-1/-mr-10004/2243cc23-ce83-4a00-87ba-d1970db29def/reduce.xml
> 16/05/18 14:40:27 INFO exec.Utilities: ***************non-local
> mode***************
> 16/05/18 14:40:27 INFO exec.Utilities: local path =
> hdfs://localhost:8025/tmp/hive/jmill383/b51c81d8-23b9-49ea-b012-92f81bc1c0ce/hive_2016-05-18_14-40-26_826_7796779660082577343-1/-mr-10004/2243cc23-ce83-4a00-87ba-d1970db29def/reduce.xml
> 16/05/18 14:40:27 INFO exec.Utilities: Open file to read in plan:
> hdfs://localhost:8025/tmp/hive/jmill383/b51c81d8-23b9-49ea-b012-92f81bc1c0ce/hive_2016-05-18_14-40-26_826_7796779660082577343-1/-mr-10004/2243cc23-ce83-4a00-87ba-d1970db29def/reduce.xml
> 16/05/18 14:40:27 INFO exec.Utilities: File not found: File does not
> exist:
> /tmp/hive/jmill383/b51c81d8-23b9-49ea-b012-92f81bc1c0ce/hive_2016-05-18_14-40-26_826_7796779660082577343-1/-mr-10004/2243cc23-ce83-4a00-87ba-d1970db29def/reduce.xml
>     at
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:66)
>     at
> org.apache.hadoop.hdfs.server.namenode.INodeFile.valueOf(INodeFile.java:56)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsUpdateTimes(FSNamesystem.java:1891)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocationsInt(FSNamesystem.java:1832)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1812)
>     at
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1784)
>     at
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getBlockLocations(NameNodeRpcServer.java:542)
>     at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getBlockLocations(ClientNamenodeProtocolServerSideTranslatorPB.java:362)
>     at
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
>
> 16/05/18 14:40:27 INFO exec.Utilities: No plan file found:
> hdfs://localhost:8025/tmp/hive/jmill383/b51c81d8-23b9-49ea-b012-92f81bc1c0ce/hive_2016-05-18_14-40-26_826_7796779660082577343-1/-mr-10004/2243cc23-ce83-4a00-87ba-d1970db29def/reduce.xml
> 16/05/18 14:40:27 WARN mapreduce.JobSubmitter: Hadoop command-line option
> parsing not performed. Implement the Tool interface and execute your
> application with ToolRunner to remedy this.
> 16/05/18 14:40:27 INFO log.PerfLogger: <PERFLOG method=getSplits
> from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
> 16/05/18 14:40:27 INFO exec.Utilities: PLAN PATH =
> hdfs://localhost:8025/tmp/hive/jmill383/b51c81d8-23b9-49ea-b012-92f81bc1c0ce/hive_2016-05-18_14-40-26_826_7796779660082577343-1/-mr-10004/2243cc23-ce83-4a00-87ba-d1970db29def/map.xml
> 16/05/18 14:40:27 INFO io.CombineHiveInputFormat: Total number of paths:
> 1, launching 1 threads to check non-combinable ones.
> 16/05/18 14:40:27 INFO io.CombineHiveInputFormat: CombineHiveInputSplit
> creating pool for hdfs://localhost:8025/user/hive/warehouse/dual; using
> filter path hdfs://localhost:8025/user/hive/warehouse/dual
> 16/05/18 14:40:27 INFO input.FileInputFormat: Total input paths to process
> : 1
> 16/05/18 14:40:27 INFO input.CombineFileInputFormat: DEBUG: Terminated
> node allocation with : CompletedNodes: 1, size left: 0
> 16/05/18 14:40:27 INFO io.CombineHiveInputFormat: number of splits 1
> 16/05/18 14:40:27 INFO io.CombineHiveInputFormat: Number of all splits 1
> 16/05/18 14:40:27 INFO log.PerfLogger: </PERFLOG method=getSplits
> start=1463596827579 end=1463596827605 duration=26
> from=org.apache.hadoop.hive.ql.io.CombineHiveInputFormat>
> 16/05/18 14:40:27 INFO mapreduce.JobSubmitter: number of splits:1
> 16/05/18 14:40:27 INFO mapreduce.JobSubmitter: Submitting tokens for job:
> job_1463594979064_0004
> 16/05/18 14:40:27 INFO impl.YarnClientImpl: Submitted application
> application_1463594979064_0004
> 16/05/18 14:40:27 INFO mapreduce.Job: The url to track the job:
> http://starchild:8088/proxy/application_1463594979064_0004/
> Starting Job = job_1463594979064_0004, Tracking URL =
> http://starchild:8088/proxy/application_1463594979064_0004/
> 16/05/18 14:40:27 INFO exec.Task: Starting Job = job_1463594979064_0004,
> Tracking URL = http://starchild:8088/proxy/application_1463594979064_0004/
> Kill Command = /opt/hadoop/bin/hadoop job  -kill job_1463594979064_0004
> 16/05/18 14:40:27 INFO exec.Task: Kill Command = /opt/hadoop/bin/hadoop
> job  -kill job_1463594979064_0004
> Hadoop job information for Stage-1: number of mappers: 0; number of
> reducers: 0
> 16/05/18 14:40:31 INFO exec.Task: Hadoop job information for Stage-1:
> number of mappers: 0; number of reducers: 0
> 16/05/18 14:40:31 WARN mapreduce.Counters: Group
> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
> org.apache.hadoop.mapreduce.TaskCounter instead
> 2016-05-18 14:40:31,962 Stage-1 map = 0%,  reduce = 0%
> 16/05/18 14:40:32 INFO exec.Task: 2016-05-18 14:40:31,962 Stage-1 map =
> 0%,  reduce = 0%
> 16/05/18 14:40:32 WARN mapreduce.Counters: Group
> org.apache.hadoop.mapred.Task$Counter is deprecated. Use
> org.apache.hadoop.mapreduce.TaskCounter instead
> Ended Job = job_1463594979064_0004 with errors
> 16/05/18 14:40:32 ERROR exec.Task: Ended Job = job_1463594979064_0004 with
> errors
> Error during job, obtaining debugging information...
> 16/05/18 14:40:32 ERROR exec.Task: Error during job, obtaining debugging
> information...
> 16/05/18 14:40:32 INFO Configuration.deprecation: mapred.job.tracker is
> deprecated. Instead, use mapreduce.jobtracker.address
> Job Tracking URL:
> http://starchild:8088/cluster/app/application_1463594979064_0004
> 16/05/18 14:40:32 ERROR exec.Task: Job Tracking URL:
> http://starchild:8088/cluster/app/application_1463594979064_0004
> 16/05/18 14:40:32 INFO impl.YarnClientImpl: Killed application
> application_1463594979064_0004
> FAILED: Execution Error, return code 2 from
> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> 16/05/18 14:40:32 ERROR ql.Driver: FAILED: Execution Error, return code 2
> from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> 16/05/18 14:40:32 INFO log.PerfLogger: </PERFLOG method=Driver.execute
> start=1463596827060 end=1463596832133 duration=5073
> from=org.apache.hadoop.hive.ql.Driver>
> MapReduce Jobs Launched:
> 16/05/18 14:40:32 INFO ql.Driver: MapReduce Jobs Launched:
> 16/05/18 14:40:32 WARN mapreduce.Counters: Group FileSystemCounters is
> deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
> Stage-Stage-1:  HDFS Read: 0 HDFS Write: 0 FAIL
> 16/05/18 14:40:32 INFO ql.Driver: Stage-Stage-1:  HDFS Read: 0 HDFS Write:
> 0 FAIL
> Total MapReduce CPU Time Spent: 0 msec
> 16/05/18 14:40:32 INFO ql.Driver: Total MapReduce CPU Time Spent: 0 msec
> 16/05/18 14:40:32 INFO log.PerfLogger: <PERFLOG method=releaseLocks
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:32 INFO log.PerfLogger: </PERFLOG method=releaseLocks
> start=1463596832136 end=1463596832136 duration=0
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:32 INFO log.PerfLogger: <PERFLOG method=releaseLocks
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:32 INFO log.PerfLogger: </PERFLOG method=releaseLocks
> start=1463596832139 end=1463596832139 duration=0
> from=org.apache.hadoop.hive.ql.Driver>
> 16/05/18 14:40:32 WARN cascade.Cascade: [uppercase kv -> kv2 +l...] flow
> failed: select data from dual into keyvalue
> cascading.CascadingException: hive error 'FAILED: Execution Error, return
> code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask' while running
> query insert overwrite table keyvalue select 'Hello' as key, 'hive!' as
> value from dual
>     at cascading.flow.hive.HiveQueryRunner.run(HiveQueryRunner.java:131)
>     at cascading.flow.hive.HiveQueryRunner.call(HiveQueryRunner.java:167)
>     at cascading.flow.hive.HiveQueryRunner.call(HiveQueryRunner.java:41)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> 16/05/18 14:40:32 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]
> stopping all flows
> 16/05/18 14:40:32 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]
> stopping flow: uppercase kv -> kv2
> 16/05/18 14:40:32 INFO flow.Flow: [uppercase kv -> kv2 ] stopping all jobs
> 16/05/18 14:40:32 INFO flow.Flow: [uppercase kv -> kv2 ] stopping: (1/1)
> .../hive/warehouse/keyvalue2
> 16/05/18 14:40:32 INFO flow.Flow: [uppercase kv -> kv2 ] stopped all jobs
> 16/05/18 14:40:32 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]
> stopping flow: select data from dual into keyvalue
> 16/05/18 14:40:32 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]
> stopping flow: load data into dual
> 16/05/18 14:40:32 INFO cascade.Cascade: [uppercase kv -> kv2 +l...]
> stopped all flows
> Exception in thread "main" cascading.cascade.CascadeException: flow
> failed: select data from dual into keyvalue
>     at cascading.cascade.BaseCascade$CascadeJob.call(BaseCascade.java:963)
>     at cascading.cascade.BaseCascade$CascadeJob.call(BaseCascade.java:900)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>     at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: cascading.CascadingException: hive error 'FAILED: Execution
> Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask'
> while running query insert overwrite table keyvalue select 'Hello' as key,
> 'hive!' as value from dual
>     at cascading.flow.hive.HiveQueryRunner.run(HiveQueryRunner.java:131)
>     at cascading.flow.hive.HiveQueryRunner.call(HiveQueryRunner.java:167)
>     at cascading.flow.hive.HiveQueryRunner.call(HiveQueryRunner.java:41)
>     ... 4 more
> [jmill383@starchild demo]$
>
>
>
>
> On Wed, May 18, 2016 at 1:48 PM, Mich Talebzadeh <
> mich.talebza...@gmail.com> wrote:
>
>> Hi John,
>>
>> Can you please start a new thread for your problem, so we can deal with
>> it separately?
>>
>> thanks
>>
>> Dr Mich Talebzadeh
>>
>>
>>
>> LinkedIn:
>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>>
>> On 18 May 2016 at 15:11, JOHN MILLER <jmill...@gmail.com> wrote:
>>
>>> Greetings Mich,
>>>
>>> I have an issue with running MapReduce in Hive: I am getting
>>>
>>> FAILED: Execution Error, return code 2 from
>>> org.apache.hadoop.hive.ql.exec.mr.MapRedTask
>>>
>>> while attempting to execute SELECT DISTINCT(fieldname) FROM TABLE x or
>>> SELECT COUNT(*) FROM TABLE x;. Trying to run cascading-hive gives me the
>>> same problem as well.
>>>
>>> Please advise if you have come across this type of problem or have any
>>> ideas on how to resolve it.
>>>
>>> On Wed, May 18, 2016 at 9:53 AM, Mich Talebzadeh <
>>> mich.talebza...@gmail.com> wrote:
>>>
>>>> Hi Kuldeep,
>>>>
>>>> Have you installed Hive on any of these nodes?
>>>>
>>>> Hive is basically an API. You will also need to install Sqoop if you
>>>> are going to import data from other RDBMSs like Oracle, Sybase etc.
>>>>
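>>>> As a rough illustration only (connection details and table names here
>>>> are made-up placeholders, not from a real setup), a Sqoop import into
>>>> Hive looks something like:
>>>>
>>>> sqoop import \
>>>>   --connect jdbc:oracle:thin:@dbhost:1521:ORCL \
>>>>   --username scott -P \
>>>>   --table EMP \
>>>>   --hive-import --hive-table emp
>>>>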
>>>> Hive has a very small footprint, so my suggestion is to install it on
>>>> all your boxes, with permissions granted to the Hadoop user, say hduser.
>>>>
>>>> Hive will require a metastore in a database of your choice. The default
>>>> is Derby, which I don't use; try to use a proper database. Ours is on
>>>> Oracle.
>>>>
>>>> Now in $HIVE_HOME/conf/hive-site.xml you can set up the details of your
>>>> Hadoop cluster and your metastore. You also need to set up environment
>>>> variables for both Hadoop and Hive in your startup script, like .profile
>>>> or .kshrc.
>>>>
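>>>> As a minimal sketch (hostname, SID and credentials are placeholders,
>>>> and the Oracle JDBC driver jar must be in $HIVE_HOME/lib), the metastore
>>>> properties in hive-site.xml look like:
>>>>
>>>> <property>
>>>>   <name>javax.jdo.option.ConnectionURL</name>
>>>>   <value>jdbc:oracle:thin:@dbhost:1521:ORCL</value>
>>>> </property>
>>>> <property>
>>>>   <name>javax.jdo.option.ConnectionDriverName</name>
>>>>   <value>oracle.jdbc.OracleDriver</value>
>>>> </property>
>>>> <property>
>>>>   <name>javax.jdo.option.ConnectionUserName</name>
>>>>   <value>hiveuser</value>
>>>> </property>
>>>> <property>
>>>>   <name>javax.jdo.option.ConnectionPassword</name>
>>>>   <value>hivepass</value>
>>>> </property>
>>>>
>>>> and the environment variables in .profile, something like:
>>>>
>>>> export HADOOP_HOME=/opt/hadoop
>>>> export HIVE_HOME=/usr/local/hive
>>>> export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin
>>>>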
>>>> Have a look anyway.
>>>>
>>>> HTH
>>>>
>>>> Dr Mich Talebzadeh
>>>>
>>>>
>>>>
>>>> LinkedIn:
>>>> https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>>>>
>>>>
>>>>
>>>> http://talebzadehmich.wordpress.com
>>>>
>>>>
>>>>
>>>> On 18 May 2016 at 13:49, Kuldeep Chitrakar <
>>>> kuldeep.chitra...@synechron.com> wrote:
>>>>
>>>>> I have a very basic question regarding Hadoop & Hive setup. I have 7
>>>>> machines, say M1, M2, M3, M4, M5, M6, M7.
>>>>>
>>>>> Hadoop Cluster Setup:
>>>>>
>>>>> Namenode: M1
>>>>>
>>>>> Secondary Namenode: M2
>>>>>
>>>>> Datanodes: M3,M4,M5
>>>>>
>>>>> Now the question is:
>>>>>
>>>>> Where do I need to install Hive?
>>>>>
>>>>> 1. Should I install HiveServer on M6?
>>>>>
>>>>>    a. If yes, does that machine need the core Hadoop JARs installed?
>>>>>
>>>>>    b. How does this Hive server know where the Hadoop cluster is? What
>>>>>       configuration needs to be done?
>>>>>
>>>>>    c. How can we restrict this machine to being only the Hive server,
>>>>>       and not a datanode of the Hadoop cluster?
>>>>>
>>>>> 2. Where do we install the Hive CLI?
>>>>>
>>>>>    a. If I want to have M7 as the Hive CLI machine, what needs to be
>>>>>       installed on it?
>>>>>
>>>>>    b. Any required configuration?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Kuldeep
