Checksums are computed for every 512 bytes of data (io.bytes.per.checksum), so only the
relevant parts are verified, rather than the whole file, when doing a partial read.
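To illustrate the idea, here is a minimal sketch of how a per-chunk checksum scheme limits verification to the touched byte range. The chunk size matches io.bytes.per.checksum, but the function name and interface are illustrative, not HDFS internals:

```python
CHUNK_SIZE = 512  # bytes covered by one checksum, cf. io.bytes.per.checksum

def chunks_to_verify(offset, length, chunk_size=CHUNK_SIZE):
    """Return the (first, last) checksum-chunk indices that a partial
    read of `length` bytes starting at `offset` must verify."""
    if length <= 0:
        return None
    first = offset // chunk_size
    last = (offset + length - 1) // chunk_size
    return (first, last)

# A 100-byte read at offset 1000 touches only chunks 1 and 2, so at most
# 1024 bytes of checksummed data are re-verified, however large the file is.
```

So the verification cost grows with the size of the read, not the size of the file.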
On Mon, 18 Sep 2017 at 19:23 Ralph Soika <ralph.so...@imixs.com> wrote:
Hi,
I have a question about the read behavior of partial reads in a
I'll publish my solution as well on GitHub.
So once again - thanks a lot for your help.
Ralph
On 04.09.2017 19:03, Ralph Soika wrote:
Hi,
I know that the issue around the small-file problem has been discussed
frequently, not only on this mailing list.
I have also already read some books about Hadoop, and I have started to
work with it. But I still do not really understand whether Hadoop is the
right choice for my goals.
To simplify
files.
HTH
Ravi
On Sun, Jul 30, 2017 at 2:21 PM, Ralph Soika <ralph.so...@imixs.com> wrote:
Hi,
I want to ask: what's the best way to implement a job which imports
files into HDFS?
I have an external system offering data accessible through a Rest API.
My goal is to have a job which periodically (maybe started by cron?)
checks the Rest API for new data.
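One common shape for such a job is sketched below. Everything here is hypothetical: the API URL, the JSON listing format, and the target HDFS directory are assumptions, and the import itself shells out to the standard `hdfs dfs -put` command rather than using any particular client library. A scheduler such as cron would invoke `run_once` periodically.

```python
import json
import subprocess
import urllib.request

API_URL = "https://example.com/api/files"   # hypothetical Rest API
HDFS_DIR = "/data/imports"                  # hypothetical HDFS target dir

def new_files(listing, already_seen):
    """Filter an API listing down to files not yet imported."""
    return [name for name in listing if name not in already_seen]

def download(name):
    """Fetch one file from the (assumed) Rest API to a local temp path."""
    local = "/tmp/" + name
    urllib.request.urlretrieve(API_URL + "/" + name, local)
    return local

def hdfs_put(local_path, hdfs_dir=HDFS_DIR):
    """Copy one local file into HDFS via the standard CLI."""
    subprocess.run(["hdfs", "dfs", "-put", local_path, hdfs_dir],
                   check=True)

def run_once(seen):
    """One polling cycle: list, diff against what we imported, import."""
    with urllib.request.urlopen(API_URL) as resp:
        listing = json.load(resp)  # assumed: a JSON array of file names
    for name in new_files(listing, seen):
        hdfs_put(download(name))
        seen.add(name)
```

A crontab entry such as `*/15 * * * * /usr/bin/python3 /opt/jobs/import_job.py` would run the poll every 15 minutes; in a real setup the `seen` set should be persisted (e.g. to a state file) between runs.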