In addition to the other comments, I think an fsck command like the one below
is the easiest way to find the block locations of a file.
$ hdfs fsck /path/to/the/data -blocks -files -locations
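For example, the block IDs can then be pulled out of the report with a quick grep. The report line below is a fabricated sample for illustration; on a real cluster the text would come from running the fsck command above.

```shell
# Hypothetical sample of one block line from an fsck report; real output
# would come from: hdfs fsck /path/to/the/data -blocks -files -locations
report='0. BP-1-127.0.0.1-1:blk_1073741825_1001 len=134217728 Live_repl=3'

# Extract just the block IDs (blk_<numeric id>) from the report.
echo "$report" | grep -o 'blk_[0-9]*'
```

With -locations each replica's datanode appears on the same report line, so the same kind of filtering works for hosts as well.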
Thanks,
- Takanobu
-----Original Message-----
From: Jim Clampffer [mailto:james.clampf...@gmail.com]
Sent:
For more details, see https://builds.apache.org/job/hadoop-trunk-win/446/
[Apr 22, 2018 3:07:19 PM] (arp) HDFS-13055. Aggregate usage statistics from
datanodes. Contributed by
[Apr 23, 2018 2:49:35 AM] (inigoiri) HDFS-13388. RequestHedgingProxyProvider
calls multiple configured NNs
[Apr 23,
The vote passes with many +1s (12 committers + 5 contributors) and no -1.
Thanks everyone for voting.
On 4/17/18, 5:19 AM, "Jitendra Pandey" wrote:
Hi All,
The community unanimously voted
(https://s.apache.org/HDDSMergeResult) to
If you want to read replicas from a specific DN after determining the block
bounds via getFileBlockLocations, you could abuse the rack-locality
infrastructure: generate a dummy topology script so that the NN orders
replicas such that the client tries to read from the DNs you prefer first.
It's
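A minimal sketch of such a dummy topology script (everything here is made up: the preferred DN address, the rack labels, and the assumption that the script is wired in via net.topology.script.file.name):

```shell
#!/bin/sh
# Sketch of a dummy topology script. Hadoop invokes the configured script
# with one or more datanode IPs/hostnames as arguments and expects one
# rack path per argument on stdout. Reporting the preferred DN as "close"
# makes the NN order its replica first for this client.
PREFERRED_DN="10.0.0.7"   # hypothetical DN we want reads to hit first

rack_for() {
  for host in "$@"; do
    if [ "$host" = "$PREFERRED_DN" ]; then
      echo "/rack-local"    # pretend this DN shares our rack
    else
      echo "/rack-remote"   # everything else looks far away
    fi
  done
}

# A real script would just run the loop over "$@" at top level; the
# function form here is only for demonstration:
rack_for 10.0.0.7 10.0.0.8
```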
[
https://issues.apache.org/jira/browse/HDFS-3653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Todd Lipcon resolved HDFS-3653.
---
Resolution: Won't Fix
> 1.x: Add a retention period for purged edit logs
>
[
https://issues.apache.org/jira/browse/HDFS-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Todd Lipcon resolved HDFS-3069.
---
Resolution: Won't Fix
Target Version/s: (was: )
> If an edits file has more edits in it
Hi,
Perhaps I missed something in the question. Call
FileSystem#getFileBlockLocations, then open the file, seek to the start of
the target block, and read. This will let you read the contents of a
specific block using only public APIs.
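The seek-and-read idea can be sketched from the shell against a local stand-in file (a real client would call FileSystem#getFileBlockLocations and then open/seek/read on the HDFS stream; the 4-byte "blocks" below are invented purely for illustration):

```shell
#!/bin/sh
# Local stand-in for an HDFS file: three pretend "blocks" of 4 bytes each.
f=$(mktemp)
printf 'AAAABBBBCCCC' > "$f"

# Bounds of the second "block", as getFileBlockLocations would report them.
OFFSET=4
LEN=4

# Seek to the block start and read exactly the block's length.
dd if="$f" bs=1 skip="$OFFSET" count="$LEN" 2>/dev/null   # prints BBBB
```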
On 4/23/18, 5:26 PM, "Daniel Templeton" wrote:
I'm not aware of a way to work with blocks using the public APIs. The
easiest way to do it is probably to retrieve the block IDs and then go
grab those blocks from the data nodes' local file systems directly.
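To sketch what grabbing those blocks directly looks like on disk (the directory layout and block ID below are fabricated; on a real datanode you would point find at the actual dfs.datanode.data.dir):

```shell
#!/bin/sh
# Stand-in for a datanode data directory; block replicas live as blk_<id>
# files under current/<block-pool>/current/finalized/...
DATA_DIR=$(mktemp -d)
mkdir -p "$DATA_DIR/current/BP-1/current/finalized/subdir0"
: > "$DATA_DIR/current/BP-1/current/finalized/subdir0/blk_1073741825"

# Locate the on-disk replica file for a given block ID.
find "$DATA_DIR" -name 'blk_1073741825' -type f
```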
Daniel
On 4/23/18 9:05 AM, Thodoris Zois wrote:
Hello list,
I have a file on HDFS
Erik Krogen created HDFS-13493:
--
Summary: Reduce the HttpServer2 thread count on DataNodes
Key: HDFS-13493
URL: https://issues.apache.org/jira/browse/HDFS-13493
Project: Hadoop HDFS
Issue Type:
Wei-Chiu Chuang created HDFS-13492:
--
Summary: Limit httpfs binds to certain IP addresses in branch-2
Key: HDFS-13492
URL: https://issues.apache.org/jira/browse/HDFS-13492
Project: Hadoop HDFS
Hello list,
I have a file on HDFS that is divided into 10 blocks (partitions).
Is there any way to retrieve data from a specific block (e.g., using the
block ID)?
Apart from that, is there any option to write the contents of each block
(or of one block) into separate files?
Thank you very much,
For more details, see
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/760/
[Apr 22, 2018 3:07:19 PM] (arp) HDFS-13055. Aggregate usage statistics from
datanodes. Contributed by
-1 overall
The following subsystems voted -1:
asflicense unit xml
The following subsystems
+1 (non-binding)
On 4/17/18, 5:19 AM, "Jitendra Pandey" wrote:
Hi All,
The community unanimously voted (https://s.apache.org/HDDSMergeResult)
to adopt
HDDS/Ozone as a sub-project of Hadoop, here is the formal vote for code
merge.
Here
Rakesh R created HDFS-13491:
---
Summary: [SPS]: Discuss and implement efficient approach to send a
copy of a block to another datanode
Key: HDFS-13491
URL: https://issues.apache.org/jira/browse/HDFS-13491