You don't need to start YARN if you only want to write to HDFS using the C API, and you don't need to restart HDFS either.
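For a quick write test like this, starting just the HDFS daemons is enough (a sketch, assuming the standard sbin layout of the /usr/local/hadoop install from this thread):

/usr/local/hadoop/sbin/start-dfs.sh
jps    # should now list NameNode, DataNode, and SecondaryNameNode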



On Thu, Mar 5, 2015 at 4:58 PM, Alexandru Calin <alexandrucali...@gmail.com>
wrote:

> Now I've also started YARN (just for the sake of trying anything); the
> configs for mapred-site.xml and yarn-site.xml are the ones from the Apache
> website. A *jps* command shows:
>
> 11257 NodeManager
> 11129 ResourceManager
> 11815 Jps
> 10620 NameNode
> 10966 SecondaryNameNode
>
> On Thu, Mar 5, 2015 at 10:48 AM, Azuryy Yu <azury...@gmail.com> wrote:
>
>> Can you share your core-site.xml here?
>>
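>> For reference, on a single-node setup core-site.xml usually just needs
>> fs.defaultFS pointing at the NameNode, something like the sketch below
>> (hdfs://localhost:9000 is an assumption; match it to the address your
>> NameNode actually listens on):
>>
>> <configuration>
>>     <property>
>>         <name>fs.defaultFS</name>
>>         <value>hdfs://localhost:9000</value>
>>     </property>
>> </configuration>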
>>
>> On Thu, Mar 5, 2015 at 4:32 PM, Alexandru Calin <
>> alexandrucali...@gmail.com> wrote:
>>
>>> No change at all; I've added them at both the start and the end of the
>>> CLASSPATH, and either way it still writes the file to the local fs. I've
>>> also restarted Hadoop.
>>>
>>> On Thu, Mar 5, 2015 at 10:22 AM, Azuryy Yu <azury...@gmail.com> wrote:
>>>
>>>> Yes, you should do it :)
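>>>>
>>>> (In practice that means putting the configuration *directory* on the
>>>> classpath rather than the XML file itself, since Java classpath entries
>>>> are directories or jars, e.g.:
>>>>
>>>> export CLASSPATH=$CLASSPATH:/usr/local/hadoop/etc/hadoop
>>>>
>>>> The XML files in that directory are then picked up as resources.)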
>>>>
>>>> On Thu, Mar 5, 2015 at 4:17 PM, Alexandru Calin <
>>>> alexandrucali...@gmail.com> wrote:
>>>>
>>>>> Wow, you are so right! It's on the local filesystem! Do I have to
>>>>> manually specify hdfs-site.xml and core-site.xml in the CLASSPATH
>>>>> variable? Like this:
>>>>> CLASSPATH=$CLASSPATH:/usr/local/hadoop/etc/hadoop/core-site.xml
>>>>> ?
>>>>>
>>>>> On Thu, Mar 5, 2015 at 10:04 AM, Azuryy Yu <azury...@gmail.com> wrote:
>>>>>
>>>>>> You need to include core-site.xml as well, and I think you'll find
>>>>>> '/tmp/testfile.txt' on your local disk instead of on HDFS.
>>>>>>
>>>>>> If so, my guess is right: because you don't include core-site.xml,
>>>>>> your filesystem scheme defaults to file://, not hdfs://.
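>>>>>>
>>>>>> A quick way to confirm which filesystem the client actually bound to
>>>>>> is to print the working directory right after hdfsConnect(): with the
>>>>>> file:// default it comes back as a local path, with hdfs:// as a fully
>>>>>> qualified URI. A minimal sketch using the standard libhdfs call:
>>>>>>
>>>>>> char cwd[1024];
>>>>>> /* fills cwd with the fully qualified working directory,
>>>>>>    e.g. file:/home/you or hdfs://localhost:9000/user/you */
>>>>>> if (hdfsGetWorkingDirectory(fs, cwd, sizeof(cwd)) != NULL) {
>>>>>>     printf("working directory: %s\n", cwd);
>>>>>> }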
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Mar 5, 2015 at 3:52 PM, Alexandru Calin <
>>>>>> alexandrucali...@gmail.com> wrote:
>>>>>>
>>>>>>> I am trying to run the basic libhdfs example; it compiles and runs
>>>>>>> OK, executing the whole program, but I cannot see the file on HDFS.
>>>>>>>
>>>>>>> It is said here <http://hadoop.apache.org/docs/r1.2.1/libhdfs.html>
>>>>>>> that you have to include *the right configuration directory
>>>>>>> containing hdfs-site.xml*.
>>>>>>>
>>>>>>> My hdfs-site.xml:
>>>>>>>
>>>>>>> <configuration>
>>>>>>>     <property>
>>>>>>>         <name>dfs.replication</name>
>>>>>>>         <value>1</value>
>>>>>>>     </property>
>>>>>>>     <property>
>>>>>>>       <name>dfs.namenode.name.dir</name>
>>>>>>>       <value>file:///usr/local/hadoop/hadoop_data/hdfs/namenode</value>
>>>>>>>     </property>
>>>>>>>     <property>
>>>>>>>       <name>dfs.datanode.data.dir</name>
>>>>>>>       <value>file:///usr/local/hadoop/hadoop_store/hdfs/datanode</value>
>>>>>>>       <value>file:///usr/local/hadoop/hadoop_store/hdfs/datanode</value>
>>>>>>>     </property>
>>>>>>> </configuration>
>>>>>>>
>>>>>>> I generate my classpath with this:
>>>>>>>
>>>>>>> #!/bin/bash
>>>>>>> export CLASSPATH=/usr/local/hadoop/
>>>>>>> declare -a subdirs=("hdfs" "tools" "common" "yarn" "mapreduce")
>>>>>>> for subdir in "${subdirs[@]}"
>>>>>>> do
>>>>>>>         for file in $(find /usr/local/hadoop/share/hadoop/$subdir -name '*.jar')
>>>>>>>         do
>>>>>>>                 export CLASSPATH=$CLASSPATH:$file
>>>>>>>         done
>>>>>>> done
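>>>>>>>
>>>>>>> (As an aside: enumerating the jars one by one is needed because
>>>>>>> libhdfs does not expand classpath wildcards itself; on Hadoop 2.6 an
>>>>>>> equivalent expanded list can also be produced with
>>>>>>>
>>>>>>> export CLASSPATH=$(hadoop classpath --glob)
>>>>>>>
>>>>>>> assuming the hadoop command is on the PATH.)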
>>>>>>>
>>>>>>> and I also add export
>>>>>>> CLASSPATH=$CLASSPATH:/usr/local/hadoop/etc/hadoop, where my
>>>>>>> *hdfs-site.xml* resides.
>>>>>>>
>>>>>>> My LD_LIBRARY_PATH =
>>>>>>> /usr/local/hadoop/lib/native:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server
>>>>>>> Code:
>>>>>>>
>>>>>>> #include "hdfs.h"
>>>>>>> #include <stdio.h>
>>>>>>> #include <string.h>
>>>>>>> #include <stdlib.h>
>>>>>>>
>>>>>>> int main(int argc, char **argv) {
>>>>>>>
>>>>>>>     /* connect to the filesystem named by fs.defaultFS */
>>>>>>>     hdfsFS fs = hdfsConnect("default", 0);
>>>>>>>     const char* writePath = "/tmp/testfile.txt";
>>>>>>>     hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
>>>>>>>     if (!writeFile) {
>>>>>>>         printf("Failed to open %s for writing!\n", writePath);
>>>>>>>         exit(-1);
>>>>>>>     }
>>>>>>>     printf("\nfile opened\n");
>>>>>>>     const char* buffer = "Hello, World!";
>>>>>>>     tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer)+1);
>>>>>>>     printf("\nWrote %d bytes\n", (int)num_written_bytes);
>>>>>>>     if (hdfsFlush(fs, writeFile)) {
>>>>>>>         printf("Failed to 'flush' %s\n", writePath);
>>>>>>>         exit(-1);
>>>>>>>     }
>>>>>>>     hdfsCloseFile(fs, writeFile);
>>>>>>>     hdfsDisconnect(fs);
>>>>>>>     return 0;
>>>>>>> }
>>>>>>>
>>>>>>> It compiles and runs without error, but I cannot see the file on
>>>>>>> HDFS.
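>>>>>>>
>>>>>>> A quick way to check both places (assuming the hdfs CLI is on the
>>>>>>> PATH):
>>>>>>>
>>>>>>> hdfs dfs -ls /tmp/testfile.txt     # HDFS
>>>>>>> ls -l /tmp/testfile.txt            # local filesystem, for comparison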
>>>>>>>
>>>>>>> I have Hadoop 2.6.0 on Ubuntu 14.04, 64-bit.
>>>>>>>
>>>>>>> Any ideas on this?
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
