Issues with getting HTrace to work with Hadoop 2.7.3

2016-11-20 Thread Alexandru Calin
Hello, I've successfully used Zipkin with Hadoop HTrace in 2.6.0 (32-bit) on Ubuntu 14.04. Now I want to use it with Hadoop 2.7.3, but I can't even enable HTrace tracing with this Hadoop version. The setup for HTrace in 2.6.0 is different from 2.7.3, as can be seen here (2.6.0)
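One likely culprit is that the HTrace property names changed between these releases, so a 2.6.0-style core-site.xml silently does nothing on 2.7.3. A hedged sketch of both forms (property names and receiver class from my reading of the respective docs; verify against the documentation shipped with your exact version):

```xml
<!-- Hadoop 2.6.0 (HTrace 3.0.x) -- no dot between "span" and "receiver": -->
<property>
  <name>hadoop.htrace.spanreceiver.classes</name>
  <value>org.apache.htrace.impl.ZipkinSpanReceiver</value>
</property>

<!-- Hadoop 2.7.3 (HTrace 3.1.x) -- note the extra dot in the key: -->
<property>
  <name>hadoop.htrace.span.receiver.classes</name>
  <value>org.apache.htrace.impl.ZipkinSpanReceiver</value>
</property>
```

The receiver class must also come from the matching htrace-core jar on the daemon classpath, since the receiver package moved between HTrace releases.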

[no subject]

2016-08-30 Thread Alexandru Calin
Hello, I want to measure the time taken to read/write from HDFS and feed data to the mapper/reducer, versus the actual map/reduce time, for the WordCount example. I have enabled HTrace with

Re: Tracing Hadoop using HTrace with Zipkin

2016-08-27 Thread Alexandru Calin
, Alexandru Calin wrote: > Yes, you are right, sorry, that was a copy-paste mistake from the website, > hadn't noticed it. In reality I am using AlwaysSampler in my > configuration: 3 slaves, 1 namenode, lxc containers. > > On Sat, Aug 27, 2016 at 5:05 PM, Sandeep Khurana > wro

Re: Tracing Hadoop using HTrace with Zipkin

2016-08-27 Thread Alexandru Calin
reading documentation of the links you sent > "NeverSampler: > HTrace is OFF for all spans". Shouldn't you use either ProbabilitySampler > or AlwaysSampler? > > On Sat, Aug 27, 2016 at 7:22 PM, Alexandru Calin < > alexandrucali...@gmail.com> wrote: > >> >>
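As the reply notes, NeverSampler turns tracing off for every span, so nothing reaches Zipkin. Selecting AlwaysSampler in core-site.xml looks roughly like the following (the `hadoop.htrace.sampler` key is the 2.6.0-era name as I recall it from the docs; confirm it for your release):

```xml
<!-- Trace every request; fine for experiments, too noisy for production -->
<property>
  <name>hadoop.htrace.sampler</name>
  <value>AlwaysSampler</value>
</property>
```

For production-like loads, ProbabilitySampler with a small fraction keeps the overhead and the Zipkin volume manageable.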

Tracing Hadoop using HTrace with Zipkin

2016-08-27 Thread Alexandru Calin
I am trying to use HTrace with Hadoop 2.6.0 on Ubuntu 14.04. I have followed Hadoop's HTrace integration tutorial here

Re: I/O time when reading from HDFS in Hadoop

2016-06-11 Thread Alexandru Calin
> writes on Read and Write operations and more. > > Hope this helps… > > Kind regards, Daniel. > > > > Sent from my iPad > On 11 Jun 2016, at 17:22, Alexandru Calin > wrote: > > Hello, > > I would like to measure the time taken for map and reduce when pe

I/O time when reading from HDFS in Hadoop

2016-06-11 Thread Alexandru Calin
Hello, I would like to measure the time taken for map and reduce when performing I/O (reading from HDFS) in Hadoop. I am using YARN with Hadoop 2.6.0. What are the options for that? Thanks

Another 1/1 local-dirs are bad

2015-03-08 Thread Alexandru Calin
I have two Hadoop instances running inside two lxc containers on the same host, a hadoop-master and a hadoop-slave1. While starting YARN & DFS on the master I get this UNHEALTHY state for hadoop-slave1. From what I've found on the web, it must be one of these two possibilities: a. Not enough disk space.
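The "1/1 local-dirs are bad" message usually comes from the NodeManager's disk health checker, which marks a dir bad once its disk passes a utilization cutoff (90% by default) even when some space remains, which is easy to hit with lxc containers sharing one host disk. A hedged sketch of the relevant yarn-site.xml knob (property name per the yarn-default.xml I remember; check your version's defaults):

```xml
<!-- Raise the per-disk utilization cutoff before a local-dir is marked bad -->
<property>
  <name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
  <value>98.5</value>
</property>
```

If the dirs are bad for permission reasons rather than space, the NodeManager log states the failing path and cause explicitly, so checking it first avoids tuning the wrong knob.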

Re: File is not written on HDFS after running libhdfs C API

2015-03-05 Thread Alexandru Calin
pened, anyway... I issued this command: *bin/hdfs namenode -format -clusterId CID-862f3fad-175e-442d-a06b-d65ac57d64b2* And that got it started; the file is written correctly. Thank you very much. On Thu, Mar 5, 2015 at 2:03 PM, Alexandru Calin wrote: > After putting the CLASSPATH initializ

Re: File is not written on HDFS after running libhdfs C API

2015-03-05 Thread Alexandru Calin
to restart HDFS. > > > > On Thu, Mar 5, 2015 at 4:58 PM, Alexandru Calin < > alexandrucali...@gmail.com> wrote: > >> Now I've also started YARN ( just for the sake of trying anything), the >> config for mapred-site.xml and yarn-site.xml are those on

Re: File is not written on HDFS after running libhdfs C API

2015-03-05 Thread Alexandru Calin
48 AM, Azuryy Yu wrote: > Can you share your core-site.xml here? > > > On Thu, Mar 5, 2015 at 4:32 PM, Alexandru Calin < > alexandrucali...@gmail.com> wrote: > >> No change at all, I've added them at the start and end of the CLASSPATH, >> either way it st

Re: File is not written on HDFS after running libhdfs C API

2015-03-05 Thread Alexandru Calin
This is how core-site.xml looks: fs.defaultFS = hdfs://localhost:9000. On Thu, Mar 5, 2015 at 10:32 AM, Alexandru Calin wrote: > No change at all, I've added them at the start and end of the CLASSPATH, > either way it still writes the file on the local fs
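The preview above flattens the XML; the setting being described is the standard default-filesystem entry, which in a well-formed core-site.xml reads:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

This is exactly the file the earlier replies suspect is missing from the CLASSPATH: without it, libhdfs falls back to the file:// scheme and writes to the local disk.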

Re: File is not written on HDFS after running libhdfs C API

2015-03-05 Thread Alexandru Calin
No change at all, I've added them at the start and end of the CLASSPATH, either way it still writes the file on the local fs. I've also restarted hadoop. On Thu, Mar 5, 2015 at 10:22 AM, Azuryy Yu wrote: > Yes, you should do it:) > > On Thu, Mar 5, 2015 at 4:17

Re: File is not written on HDFS after running libhdfs C API

2015-03-05 Thread Alexandru Calin
include core-site.xml as well, and I think you can find > '/tmp/testfile.txt' on your local disk, instead of HDFS. > > If so, my guess is right: because you don't include core-site.xml, > your filesystem schema is file:// by default, not hdfs://. > > > >

File is not written on HDFS after running libhdfs C API

2015-03-04 Thread Alexandru Calin
I am trying to run the basic libhdfs example; it compiles OK, actually runs OK, and executes the whole program, but I cannot see the file on HDFS. It is said here that you have to include *the right configuration directory containing hd
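For context, the basic libhdfs program under discussion is along these lines. This is a sketch only: it assumes a running HDFS, a CLASSPATH that includes the Hadoop configuration directory plus all Hadoop jars (e.g. via `hadoop classpath --glob`), and uses the libhdfs call signatures as I recall them; it cannot be compiled without a Hadoop installation providing hdfs.h.

```c
/* Minimal libhdfs write sketch. "default" makes hdfsConnect honor
 * fs.defaultFS from core-site.xml -- the crux of this whole thread:
 * if core-site.xml is not on the CLASSPATH, the scheme silently
 * falls back to file:// and the file lands on the local disk. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include "hdfs.h"   /* from the Hadoop installation, not the C stdlib */

int main(void) {
    hdfsFS fs = hdfsConnect("default", 0);
    if (!fs) { fprintf(stderr, "hdfsConnect failed\n"); return 1; }

    const char *path = "/tmp/testfile.txt";
    hdfsFile f = hdfsOpenFile(fs, path, O_WRONLY | O_CREAT, 0, 0, 0);
    if (!f) { fprintf(stderr, "hdfsOpenFile failed\n"); return 1; }

    const char *buf = "Hello, HDFS!";
    hdfsWrite(fs, f, (void *)buf, strlen(buf) + 1);
    hdfsFlush(fs, f);
    hdfsCloseFile(fs, f);
    hdfsDisconnect(fs);
    return 0;
}
```

A quick sanity check after running it: `bin/hdfs dfs -ls /tmp` should list the file; if instead it appears under /tmp on the local filesystem, the configuration directory is missing from the CLASSPATH, as the replies above conclude.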