Do you mean every 7 minutes?
e.g., [10:07, 10:14),
[10:14, 10:21).
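If that is what you mean, bucketing each event into its 7-minute window could be sketched like this (the window origin and width here are assumptions, aligned to the first event in your sample):

```python
from datetime import datetime, timedelta

def window_start(ts, origin, width=timedelta(minutes=7)):
    # Floor ts to the start of its window; windows are aligned to origin,
    # e.g. origin=10:07 gives [10:07, 10:14), [10:14, 10:21), ...
    return origin + ((ts - origin) // width) * width
```

With origin 10:07, events at 10:07, 10:10 and 10:12 all map to the 10:07 window, and 10:14 starts the next one.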
On 28 January 2013 12:56, Oleg Ruchovets wrote:
> Hi,
> I have such row data structure:
>
> event_id | time
> ===============
> event1 | 10:07
> event2 | 10:10
> event3 | 10:12
>
> event4 | 10:
Use ls -l to check whether the hdfs user has permission to access url.txt.
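For example (a quick sketch; url.txt stands in for the file being copied, and 'hdfs' is the assumed daemon user):

```shell
# Demo: create a sample file, then inspect its permissions.
touch url.txt
ls -l url.txt      # first column: permission bits; third column: owning user
ls -ld .           # the parent directory must also be searchable (x) by hdfs
```

The hdfs user needs at least read access on the file and execute permission on every directory in its path.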
On 9 October 2012 19:40, Bai Shen wrote:
> I have a CDH3 cluster up and running. I'm on the namenode and trying to
> copy a file into HDFS. However, whenever I run copyFromLocal, I get a file
> does not exist error.
>
> [root@node1-0
> to guarantee that each event is processed once and
> only once. You can also store your results into HDFS if you want, perhaps
> through HBASE, if you need to do further processing on the data.
>
> --Bobby Evans
>
> On 5/22/12 5:02 AM, "Zhiwei Lin" wrote:
>
> Hi Ro
> How quickly do you have to get the result out once the new data is added?
> How far back in time do you have to look from the occurrence of
> ? Do you have to do this for all combinations of values or is it just
> a small subset of values?
>
> --Bobby Evans
I have a large volume of streaming log data. Each data record contains a
timestamp, which is very important to the analysis.
For example, the data looks like this:
(1) 20:30:21 01/April/2012
(2) 20:30:51 01/April/2012
(3) 21:30:21 01/April/2012
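A timestamp in that format can be parsed with Python's strptime; the format string below is an assumption based on the three sample lines:

```python
from datetime import datetime

def parse_record_time(text):
    # "20:30:21 01/April/2012" -> datetime(2012, 4, 1, 20, 30, 21)
    # %B matches the full month name ("April"); locale is assumed to be English.
    return datetime.strptime(text, "%H:%M:%S %d/%B/%Y")
```

Once parsed to datetime objects, the records can be sorted, bucketed, or compared directly.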
Hi Ravi,
There is a compiled plugin available.
http://dl.dropbox.com/u/24999702/Apache/hadoop-eclipse-plugin-1.0.0.jar
You can follow this link to see how to use the Eclipse plugin:
http://v-lad.org/Tutorials/Hadoop/13.5%20-%20copy%20hadoop%20plugin.html
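The install step boils down to copying the jar into Eclipse's plugins directory; a sketch (the ECLIPSE_HOME path is an assumption, adjust it to wherever Eclipse is unpacked):

```shell
# Copy the plugin jar into Eclipse's plugins directory, if present.
ECLIPSE_HOME="${ECLIPSE_HOME:-$HOME/eclipse}"
PLUGIN_JAR=hadoop-eclipse-plugin-1.0.0.jar
mkdir -p "$ECLIPSE_HOME/plugins"
if [ -f "$PLUGIN_JAR" ]; then
    cp "$PLUGIN_JAR" "$ECLIPSE_HOME/plugins/"
fi
# Restart Eclipse afterwards so the plugin is loaded.
```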
On 17 May 2012 17:29, Ravi Joshi wrote:
> H