Dear Maxim,
Thank you for your reply to my email!
Sorry, I still can't solve the problem.

1.

You mentioned [0] that I should put the rya.mapreduce jar on HDFS. I have put 
both $RDF_DATA and the rya.mapreduce jar on HDFS, but I still run the jar from 
the local filesystem instead of directly on Hadoop, because I could not find a 
command that runs a jar stored on HDFS. Also, I do not understand what 
"declare" means in your question "your command, but do you really declare this 
envvar?"


I modified the command as follows:
hadoop jar /usr/local/JAR/rya.mapreduce-4.0.0-incubating-SNAPSHOT-shaded.jar 
org.apache.rya.accumulo.mr.tools.RdfFileInputTool -Dac.zk=localhost:2181 
-Dac.instance=accumulo -Dac.username=root -Dac.pwd=111111 
-Drdf.tablePrefix=rya_ -Drdf.format=N-Triples  hdfs://$RDF_DATA
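
If "declare" just means exporting the environment variable in the shell before 
running the job, is something like this what you intended? (The HDFS path 
below is only a placeholder for my own data file.)

export RDF_DATA=/user/root/rdf/data.nt   # placeholder path on my HDFS
echo "hdfs://$RDF_DATA"                  # verify the expansion before running the hadoop jar command above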


And my first problem [1] is still not resolved. I guess it is because of 
"-Dac.zk=localhost:2181"?
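
Also, looking at the log in [1] again, the retries are against the YARN 
ResourceManager at v7:8032 rather than ZooKeeper, so maybe YARN is simply not 
running? Is this a sensible way to check (assuming a standard Hadoop 2.7 
layout)?

jps                                # should list ResourceManager and NodeManager
$HADOOP_HOME/sbin/start-yarn.sh    # start the YARN daemons if they are missing
netstat -tlnp | grep 8032          # confirm the ResourceManager port is listening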


2.
The wiki [0] shows an example of loading data through the web REST endpoint, 
and my code is given at [2]. But I cannot find "web.rya/loadrdf" inside 
web.rya.war, and when I run the code I get the error at [3].
I have no idea what causes this or how to resolve it.
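
To rule out my Java client, would a plain curl call like this be a valid way 
to test the endpoint? (RDF_DATA.nt is a placeholder for my local N-Triples 
file.)

curl -v -X POST -H "Content-Type: text/plain" --data-binary @RDF_DATA.nt "http://localhost:8080/web.rya/loadrdf?format=N-Triples"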

[0] 
https://github.com/apache/incubator-rya/blob/rel/rya-incubating-3.2.12/extras/rya.manual/src/site/markdown/loaddata.md

[1]

Job started: Mon Nov 26 20:25:38 CST 2018
18/11/26 20:25:38 INFO client.RMProxy: Connecting to ResourceManager at 
v7/192.168.122.1:8032
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client 
environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client environment:host.name=v7
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client 
environment:java.version=1.8.0_181
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client 
environment:java.vendor=Oracle Corporation
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client 
environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.181-3.b13.el7_5.x86_64/jre
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client 
environment:java.class.path=/usr/local/hadoop-2.7.4/etc/hadoop:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hadoop-annotations-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hadoop-auth-2.7.4.jar:/usr/local/
hadoop-2.7.4/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-common-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/common/hadoop-nfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/
local/hadoop-2.7.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-api-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-client-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/yarn/hadoop-yarn-registry-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/li
b/paranamer-2.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.4.jar:/usr/local/hadoop-2.7.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.4-tests.jar:/usr/local/hadoop-2.7.4/contrib/capacity-scheduler/*.jar
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client 
environment:java.library.path=/usr/local/hadoop-2.7.4/lib/native
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client 
environment:java.io.tmpdir=/tmp
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client 
environment:java.compiler=<NA>
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client 
environment:os.version=3.10.0-862.11.6.el7.x86_64
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client environment:user.name=root
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/v7
18/11/26 20:25:38 INFO zookeeper.ZooKeeper: Initiating client connection, 
connectString=localhost:2181 sessionTimeout=30000 
watcher=org.apache.accumulo.fate.zookeeper.ZooSession$ZooWatcher@2feab4e2
18/11/26 20:25:38 INFO zookeeper.ClientCnxn: Opening socket connection to 
server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL 
(unknown error)
18/11/26 20:25:38 INFO zookeeper.ClientCnxn: Socket connection established to 
localhost/127.0.0.1:2181, initiating session
18/11/26 20:25:38 INFO zookeeper.ClientCnxn: Session establishment complete on 
server localhost/127.0.0.1:2181, sessionid = 0x1674ac8ad740022, negotiated 
timeout = 30000
18/11/26 20:25:41 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 0 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:25:42 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 1 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:25:43 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 2 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:25:44 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 3 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:25:45 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 4 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:25:46 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 5 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:25:47 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 6 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:25:48 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 7 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:25:49 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 8 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:25:50 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 9 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
......................    repeat ........................
18/11/26 20:38:21 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 0 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:38:22 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 1 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:38:23 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 2 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:38:24 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 3 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:38:25 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 4 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:38:26 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 5 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:38:27 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 6 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:38:28 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 7 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:38:29 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 8 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/11/26 20:38:30 INFO ipc.Client: Retrying connect to server: 
v7/192.168.122.1:8032. Already tried 9 time(s); retry policy is 
RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
java.net.ConnectException: Call From v7/192.168.122.1 to v7:8032 failed on 
connection exception: java.net.ConnectException: Connection refused (拒绝连接); 
For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.GeneratedConstructorAccessor4.newInstance(Unknown Source)
    at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
    at org.apache.hadoop.ipc.Client.call(Client.java:1480)
    at org.apache.hadoop.ipc.Client.call(Client.java:1413)
    at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy15.getNewApplication(Unknown Source)
    at 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getNewApplication(ApplicationClientProtocolPBClientImpl.java:221)
    at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy16.getNewApplication(Unknown Source)
    at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getNewApplication(YarnClientImpl.java:219)
    at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createApplication(YarnClientImpl.java:227)
    at 
org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:187)
    at org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:241)
    at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:153)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at 
org.apache.rya.accumulo.mr.tools.RdfFileInputTool.run(RdfFileInputTool.java:74)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at 
org.apache.rya.accumulo.mr.tools.RdfFileInputTool.main(RdfFileInputTool.java:55)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.ConnectException: Connection refused (拒绝连接)
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:615)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:713)
    at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:376)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
    at org.apache.hadoop.ipc.Client.call(Client.java:1452)
    ... 31 more


[2]
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.URL;
import java.net.URLConnection;

public class LoadDataServletRun {

    public static void main(String[] args) {
        try {
            // Read the N-Triples file "RDF_DATA" from the classpath.
            final InputStream resourceAsStream = Thread.currentThread()
                    .getContextClassLoader().getResourceAsStream("RDF_DATA");

            URL url = new URL("http://localhost:8080/web.rya/loadrdf?format=N-Triples");
            URLConnection urlConnection = url.openConnection();
            urlConnection.setRequestProperty("Content-Type", "text/plain");
            urlConnection.setDoOutput(true);

            // Copy the file into the POST body byte by byte.
            final OutputStream os = urlConnection.getOutputStream();
            // Debug print from an earlier run (it printed "60", i.e. '<'),
            // removed because it also swallowed the first byte of the file:
            // System.out.print(resourceAsStream.read());
            int read;
            while ((read = resourceAsStream.read()) >= 0) {
                os.write(read);
            }
            resourceAsStream.close();
            os.flush();

            // Print the server's response.
            BufferedReader rd = new BufferedReader(new InputStreamReader(
                    urlConnection.getInputStream()));
            String line;
            while ((line = rd.readLine()) != null) {
                System.out.println(line);
            }
            rd.close();
            os.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}


[3]
60
java.io.IOException: Server returned HTTP response code: 500 for URL: 
http://localhost:8080/web.rya/loadrdf?format=N-Triples
    at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1894)
    at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1492)
    at LoadDataServletRun.main(LoadDataServletRun.java:37)




Weiqi
2018.11.26

From: "徐炜淇" <xuwe...@tju.edu.cn>
Date: 2018-11-26 11:24:34
To:  dev@rya.incubator.apache.org
Subject: Rya load data
>Dear Sir or Madam,
>I really do not want to bother you, because these questions may seem "stupid", 
>but they have troubled me for a month and I cannot move on to the next step. I 
>am sorry to send you such a blunt mail; please forgive me, I urgently need 
>your help now.
>My problem is that I cannot load data. Here are the details:
>
>
>1. Bulk load data
>When I executed this step, the command was:
>hadoop jar /usr/local/rya.mapreduce-4.0.0-incubating-SNAPSHOT-shaded.jar 
>org.apache.rya.accumulo.mr.RdfFileInputTool -Dac.zk=localhost:2181 
>-Dac.instance=accumulo -Dac.username=root -Dac.pwd=111111 
>-Drdf.tablePrefix=triplestore_ -Drdf.format=N-Triples hdfs://$RDF_DATA 
>
>it always failed with "Exception in thread "main" 
>java.lang.ClassNotFoundException: 
>org.apache.rya.accumulo.mr.RdfFileInputTool", as shown in the picture.
>
>I found the actual location of RdfFileInputTool, so I added "tools." before 
>"RdfFileInputTool"; the command became:
> hadoop jar /usr/local/rya.mapreduce-4.0.0-incubating-SNAPSHOT-shaded.jar 
> org.apache.rya.accumulo.mr.tools.RdfFileInputTool 
> -Drdf.tablePrefix=triplestore_ -Dcb.username=root -Dcb.pwd=111111 
> -Dcb.instance=accumulo -Dcb.zk=localhost:2181 -Drdf.format=N-Triples 
> hdfs://$RDF_DATA
>But the error was:
>java.lang.NullPointerException: Accumulo instance name [ac.instance] not set.
>    at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
>    at 
> org.apache.rya.accumulo.mr.AbstractAccumuloMRTool.init(AbstractAccumuloMRTool.java:133)
>    at 
> org.apache.rya.accumulo.mr.tools.RdfFileInputTool.run(RdfFileInputTool.java:63)
>    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>    at 
> org.apache.rya.accumulo.mr.tools.RdfFileInputTool.main(RdfFileInputTool.java:55)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>    at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>    at java.lang.reflect.Method.invoke(Method.java:498)
>    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>2. Web REST endpoint
>
>The quick start on the website told me to use this Java code to load data 
>through the REST endpoint. I copied the code into Eclipse (it is shown in the 
>picture), but it failed when I ran it there, so I exported it as a jar and put 
>the jar on HDFS, but I still cannot load the data:
>
>
>
>
>3. The accumulo-site.xml
>
>
>.....
> <property>
>    <name>instance.volumes</name>
>    <value>hdfs://192.168.122.1:8020/accumulo</value>
>    <description>comma separated list of URIs for volumes. example: 
> hdfs://localhost:9000/accumulo</description>
>  </property>
>
>  <property>
>    <name>instance.zookeeper.host</name>
>    <value>192.168.122.1:2181</value>
>    <description>comma separated list of zookeeper servers</description>
>  </property>
>
>  <property>
>    <name>instance.secret</name>
>    <value>PASS1234</value>
>    <description>A secret unique to a given instance that all servers must 
> know in order to communicate with one another.
>      Change it before initialization. To
>      change it later use ./bin/accumulo 
> org.apache.accumulo.server.util.ChangeSecret --old [oldpasswd] --new 
> [newpasswd],
>      and then update this file.
>    </description>
>  </property>
>.....
>
>
>
>
>
>
>4. The accumulo-env.sh
>
>
>#! /usr/bin/env bash
>
># Licensed to the Apache Software Foundation (ASF) under one or more
># contributor license agreements.  See the NOTICE file distributed with
># this work for additional information regarding copyright ownership.
># The ASF licenses this file to You under the Apache License, Version 2.0
># (the "License"); you may not use this file except in compliance with
># the License.  You may obtain a copy of the License at
>#
>#     http://www.apache.org/licenses/LICENSE-2.0
>#
># Unless required by applicable law or agreed to in writing, software
># distributed under the License is distributed on an "AS IS" BASIS,
># WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
># See the License for the specific language governing permissions and
># limitations under the License.
>
>###
>### Configure these environment variables to point to your local installations.
>###
>### The functional tests require conditional values, so keep this style:
>###
>### test -z "$JAVA_HOME" && export JAVA_HOME=/usr/lib/jvm/java
>###
>###
>### Note that the -Xmx -Xms settings below require substantial free memory:
>### you may want to use smaller values, especially when running everything
>### on a single machine.
>### && export HADOOP_PREFIX=/path/to/hadoop
>###
>if [[ -z $HADOOP_HOME ]] ; then
>   test -z "$HADOOP_PREFIX"      && export HADOOP_PREFIX=/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4
>else
>   HADOOP_PREFIX="$HADOOP_HOME"
>   unset HADOOP_HOME
>fi
>
>###&& export HADOOP_CONF_DIR="$HADOOP_PREFIX/etc/hadoop"
># hadoop-2.0:
>test -z "$HADOOP_CONF_DIR"       && export HADOOP_CONF_DIR="/usr/local/hadoop-2.7.4/etc/hadoop"
>
>test -z "$JAVA_HOME"             && export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
>test -z "$ZOOKEEPER_HOME"        && export ZOOKEEPER_HOME=/home/v7/RyaInstall/zookeeper-3.4.10
>test -z "$ACCUMULO_LOG_DIR"      && export ACCUMULO_LOG_DIR=$ACCUMULO_HOME/logs
>if [[ -f ${ACCUMULO_CONF_DIR}/accumulo.policy ]]
>then
>   POLICY="-Djava.security.manager -Djava.security.policy=${ACCUMULO_CONF_DIR}/accumulo.policy"
>fi
>test -z "$ACCUMULO_TSERVER_OPTS" && export ACCUMULO_TSERVER_OPTS="${POLICY} -Xmx384m -Xms384m"
>test -z "$ACCUMULO_MASTER_OPTS"  && export ACCUMULO_MASTER_OPTS="${POLICY} -Xmx128m -Xms128m"
>test -z "$ACCUMULO_MONITOR_OPTS" && export ACCUMULO_MONITOR_OPTS="${POLICY} -Xmx64m -Xms64m"
>test -z "$ACCUMULO_GC_OPTS"      && export ACCUMULO_GC_OPTS="-Xmx64m -Xms64m"
>test -z "$ACCUMULO_SHELL_OPTS"   && export ACCUMULO_SHELL_OPTS="-Xmx128m -Xms64m"
>test -z "$ACCUMULO_GENERAL_OPTS" && export ACCUMULO_GENERAL_OPTS="-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -Djava.net.preferIPv4Stack=true -XX:+CMSClassUnloadingEnabled"
>test -z "$ACCUMULO_OTHER_OPTS"   && export ACCUMULO_OTHER_OPTS="-Xmx128m -Xms64m"
>test -z "${ACCUMULO_PID_DIR}"    && export ACCUMULO_PID_DIR="${ACCUMULO_HOME}/run"
># what do when the JVM runs out of heap memory
>export ACCUMULO_KILL_CMD='kill -9 %p'
>
>### Optionally look for hadoop and accumulo native libraries for your
>### platform in additional directories. (Use DYLD_LIBRARY_PATH on Mac OS X.)
>### May not be necessary for Hadoop 2.x or using an RPM that installs to
>### the correct system library directory.
># export LD_LIBRARY_PATH=${HADOOP_PREFIX}/lib/native/${PLATFORM}:${LD_LIBRARY_PATH}
>
># Should the monitor bind to all network interfaces -- default: false
>export ACCUMULO_MONITOR_BIND_ALL="true"
>
># Should process be automatically restarted
># export ACCUMULO_WATCHER="true"
>
># What settings should we use for the watcher, if enabled
>export UNEXPECTED_TIMESPAN="3600"
>export UNEXPECTED_RETRIES="2"
>
>export OOM_TIMESPAN="3600"
>export OOM_RETRIES="5"
>
>export ZKLOCK_TIMESPAN="600"
>export ZKLOCK_RETRIES="5"
>
># The number of .out and .err files per process to retain
># export ACCUMULO_NUM_OUT_FILES=5
>
>export NUM_TSERVERS=1
>
>### Example for configuring multiple tservers per host. Note that the ACCUMULO_NUMACTL_OPTIONS
>### environment variable is used when NUM_TSERVERS is 1 to preserve backwards compatibility.
>### If NUM_TSERVERS is greater than 2, then the TSERVER_NUMA_OPTIONS array is used if defined.
>### If TSERVER_NUMA_OPTIONS is declared but not the correct size, then the service will not start.
>###
>### export NUM_TSERVERS=2
>### declare -a TSERVER_NUMA_OPTIONS
>### TSERVER_NUMA_OPTIONS[1]="--cpunodebind 0"
>### TSERVER_NUMA_OPTIONS[2]="--cpunodebind 1"
>
>
>
>
>
>
>5. My classpath
>.....
>export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk 
>export HADOOP_HOME=/usr/local/hadoop-2.7.4
>export ZOOKEEPER_HOME=/home/v7/RyaInstall/zookeeper-3.4.10
>export MAVEN_HOME=/usr/share/maven
>export ACCUMULO_HOME=/home/v7/RyaInstall/accumulo-1.9.2
>export ENVIROMENT_PROPERTIES=/usr/local
>export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$MAVEN_HOME/bin:$ENVIROMENT_PROPERTIES
>.....
>
>
>6. environment.properties
>instance.name=accumulo  
>instance.zk=localhost:2181 
>instance.username=root 
>instance.password=111111  
>rya.tableprefix=triplestore_  
>rya.displayqueryplan=true 
>