*core-site.xml:*

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master1:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>file:/home/dcos/hdfs/tmp</value>
</property>
<property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
</property>
<property>
  <name>hadoop.proxyuser.dcos.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.dcos.hosts</name>
  <value>*</value>
</property>
<property>
  <name>io.compression.codecs</name>
  <value>
    org.apache.hadoop.io.compress.GzipCodec,
    org.apache.hadoop.io.compress.DefaultCodec,
    org.apache.hadoop.io.compress.BZip2Codec,
    org.apache.hadoop.io.compress.SnappyCodec
  </value>
</property>
*hdfs-site.xml:*

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/dcos/hdfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/dcos/hdfs/data</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>master1:9001</value>
</property>
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>

2016-05-10 14:32 GMT+08:00 kevin <[email protected]>:

> Thanks. What I am using is from Apache, and Hadoop and HBase run in
> cluster mode with one master and three slaves.
>
> 2016-05-10 14:17 GMT+08:00 Gabriel Reid <[email protected]>:
>
>> Hi,
>>
>> It looks like your setup is using a combination of the local
>> filesystem and HDFS at the same time, so this looks to be a general
>> configuration issue.
>>
>> Are you running on a real distributed cluster, or a single-node setup?
>> Is this a vendor-based distribution (e.g. HDP or CDH), or Apache
>> releases of Hadoop and HBase?
>>
>> - Gabriel
>>
>> On Tue, May 10, 2016 at 5:34 AM, kevin <[email protected]> wrote:
>> > Hi, all:
>> > I use Phoenix 4.6.0-hbase0.98 and Hadoop 2.7.1. When I try loading via
>> > MapReduce, I get this error:
>> >
>> > 16/05/10 11:24:00 ERROR mapreduce.MultiHfileOutputFormat: the table logical name is USER
>> > 16/05/10 11:24:00 INFO client.HConnectionManager$HConnectionImplementation: Closing master protocol: MasterService
>> > 16/05/10 11:24:00 INFO client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x354935e154c0053
>> > 16/05/10 11:24:00 INFO zookeeper.ZooKeeper: Session: 0x354935e154c0053 closed
>> > 16/05/10 11:24:00 INFO zookeeper.ClientCnxn: EventThread shut down
>> > 16/05/10 11:24:01 INFO mapreduce.MultiHfileOutputFormat: Configuring 1 reduce partitions to match current region count
>> > 16/05/10 11:24:01 ERROR mapreduce.CsvBulkLoadTool: Error Wrong FS: file:/home/dcos/hdfs/tmp/partitions_2e6483b0-af99-4e8c-9084-1d2c9b1729bb, expected: hdfs://master1:9000 occurred submitting CSVBulkLoad
>> > 16/05/10 11:24:01 INFO client.HConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x154935e0ee60074
>> > 16/05/10 11:24:01 INFO zookeeper.ZooKeeper: Session: 0x154935e0ee60074 closed
>> > 16/05/10 11:24:01 INFO zookeeper.ClientCnxn: EventThread shut down
>> >
>> >
>> > Can anyone help me?
>> >
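
[Editor's note] The "Wrong FS: file:/home/dcos/hdfs/tmp/partitions_..., expected: hdfs://master1:9000" message suggests the partitions file is being resolved under hadoop.tmp.dir, which in the core-site.xml above carries an explicit file: scheme while fs.defaultFS points at HDFS. A plausible fix (an assumption based on the error text, not confirmed in the thread) is to drop the file: scheme so the path is qualified against the default filesystem:

*core-site.xml* (sketch of the change; restart the cluster after editing):

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/dcos/hdfs/tmp</value>
</property>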
