Hey All,

In a project I'm currently involved in, we're about to make design choices
regarding the use of Hadoop as a scalable and distributed data analytics
framework.
The application would basically be the base of a Web Analytics tool, so my
view is that Hadoop would be a fine choice for analyzing the collected data.
The collection of the data, however, is a different issue to consider; some
serious design decisions need to be taken for the data collection
architecture.

What I'd like to have in production is distributed and scalable data
collection. The current situation is that we have multiple servers in 3-4
different locations, each collecting some sort of data.
The basic approach to analyzing this distributed data would be to log it
into structured text files, transfer those files to the Hadoop cluster, and
analyze them with MapReduce jobs.
The process I have in mind is roughly:
- Transfer the log files to the Hadoop master (collectors to master)
- Put the files on the master into HDFS (master to the cluster)
Clearly there's an overhead in transferring the log files, and the big log
files will have to be read in full even if you only need a small portion of
the data.

A better option would be logging directly into a distributed database like
Cassandra or HBase, so that the MapReduce jobs fetch the data from the
database and do the analysis there. The data would also be randomly
accessible and open to queries in real time.
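
For illustration, a direct write from one of the collecting servers into
Cassandra could look roughly like the sketch below. This is only an
assumption on my side (a hypothetical logs.raw table and the DataStax Java
CQL driver), not something we've tried yet:

    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.cql.SimpleStatement;
    import java.net.InetSocketAddress;
    import java.time.Instant;

    public class CassandraLogWriter {
        public static void main(String[] args) {
            // Any node in the cluster can accept this write request
            try (CqlSession session = CqlSession.builder()
                    .addContactPoint(new InetSocketAddress("cassandra-node-1", 9042))
                    .withLocalDatacenter("dc1")
                    .build()) {
                // Hypothetical table:
                //   CREATE TABLE logs.raw (host text, ts timestamp, line text,
                //                          PRIMARY KEY (host, ts));
                session.execute(SimpleStatement.newInstance(
                    "INSERT INTO logs.raw (host, ts, line) VALUES (?, ?, ?)",
                    "web-01", Instant.now(), "GET /index.html 200"));
            }
        }
    }
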
I'm not very familiar with this area of distributed databases, but as far as
I can see:
- If we use Cassandra for storing the logging information, we won't have a
connection bottleneck when writing the data to the Cassandra cluster, since
every node in the cluster can accept incoming write requests. In HBase,
however, I'm afraid we'd need to write through the master only; in that
case there would be a connection overhead on the master, and we could only
scale up to the throughput of the master. From this point of view, logging
to HBase doesn't seem scalable. (A rough HBase write sketch follows after
this list.)
- On the other hand, with a separate Cassandra cluster that isn't directly
part of the Hadoop ecosystem, I'm afraid we'd lose "data locality" when
using the data for analysis in MapReduce jobs, if Cassandra were the choice
for keeping the log data. In the case of HBase we'd be able to use data
locality, since it sits directly on top of HDFS.
- Is there a stable way to integrate Cassandra with Hadoop?
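
To make the HBase side concrete as well, here is the rough write sketch I
mentioned in the first point, assuming a hypothetical "weblogs" table with a
single column family "d" (please correct me if the write path behaves
differently than I described):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseLogWriter {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // The HBase client library decides which server in the cluster
            // actually receives this write
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("weblogs"))) {
                // Row key: host + timestamp (placeholder scheme)
                Put put = new Put(Bytes.toBytes("web-01#" + System.currentTimeMillis()));
                // One column in family "d" holding the raw log line
                put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("line"),
                              Bytes.toBytes("GET /index.html 200"));
                table.put(put);
            }
        }
    }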

So finally, Chukwa seems to be a good choice for this kind of data
collection: each server defined as a source runs an Agent, which transfers
the data to Collectors that sit close to HDFS. After a series of pipelined
processes, the data would be readily available for analysis with MapReduce
jobs.
In this scenario I still see some connection overhead, limited by the
throughput of the master, and the data to be analyzed again ends up in big
files, so analyzing a sample range of the data would require reading the
full files.
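
One idea for softening the "full files for a small sample" problem, purely
an assumption on my side about how we could lay out the data rather than
something Chukwa gives us, would be to partition the files in HDFS by date,
so that a job over a sample range only touches the matching directories:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class SampleRangeJob {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "sample-range-analysis");
            job.setJarByClass(SampleRangeJob.class);

            // Hypothetical layout: /logs/raw/<yyyy>/<MM>/<dd>/...
            // The glob restricts the job to one week instead of the whole history.
            FileInputFormat.addInputPath(job, new Path("/logs/raw/2010/06/0[1-7]"));
            FileOutputFormat.setOutputPath(job, new Path("/logs/out/2010-06-week1"));

            // Mapper/reducer classes omitted; this only shows the input selection.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }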

These are, briefly, the options I've figured out so far. Each decision comes
with some kind of drawback and provides some functionality specific to that
choice compared to the others.

Has anyone on the list solved a similar need before? I'm open to all kinds
of comments and suggestions.

Best Regards,
Utku