[jira] [Created] (HDFS-11658) Ozone: SCM daemon is unable to be started via CLI

2017-04-17 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11658:
--

 Summary: Ozone: SCM daemon is unable to be started via CLI
 Key: HDFS-11658
 URL: https://issues.apache.org/jira/browse/HDFS-11658
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang
Assignee: Weiwei Yang


The SCM daemon can no longer be started via the CLI since the 
{{StorageContainerManager}} class's package was renamed from 
{{org.apache.hadoop.ozone.storage.StorageContainerManager}} to 
{{org.apache.hadoop.ozone.scm.StorageContainerManager}} in HDFS-11184.
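The failure mode can be illustrated with a small sketch: a launcher pinned to the old fully-qualified class name simply fails class resolution. The two Ozone FQCNs are from this issue; the `resolves` helper is illustrative, not the actual CLI launcher code.

```java
// Sketch: a CLI launcher pinned to a stale FQCN fails to resolve the class.
// Class.forName is used here only to demonstrate the lookup; neither Ozone
// class is on a plain JVM classpath, so both Ozone lookups return false.
public class ScmClassCheck {
  static boolean resolves(String fqcn) {
    try {
      Class.forName(fqcn);
      return true;
    } catch (ClassNotFoundException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    // The launcher must be updated to the new package, or this check fails.
    System.out.println(resolves("org.apache.hadoop.ozone.storage.StorageContainerManager"));
    System.out.println(resolves("org.apache.hadoop.ozone.scm.StorageContainerManager"));
  }
}
```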



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11655) Ozone: CLI: Guarantee that users running ozone commands have appropriate permissions

2017-04-13 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11655:
--

 Summary: Ozone: CLI: Guarantee that users running ozone commands have 
appropriate permissions
 Key: HDFS-11655
 URL: https://issues.apache.org/jira/browse/HDFS-11655
 Project: Hadoop HDFS
  Issue Type: Sub-task
Affects Versions: HDFS-7240
Reporter: Weiwei Yang
Assignee: Weiwei Yang


We need to add a permission-check module for the ozone command line utilities, 
to make sure users run commands with proper privileges. For now, all commands 
in the [design doc| 
https://issues.apache.org/jira/secure/attachment/12861478/storage-container-manager-cli-v002.pdf]
 require admin privileges.
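A minimal sketch of such a permission-check module, assuming hypothetical names (this is not the actual Ozone API; in practice the admin list would come from configuration and the user from the security context):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch: admin-only commands verify the calling user before executing.
public class OzoneCliAuth {
  private final Set<String> admins;

  public OzoneCliAuth(String... adminUsers) {
    this.admins = new HashSet<>(Arrays.asList(adminUsers));
  }

  /** Throws if the user may not run an admin-only command. */
  public void checkAdmin(String user) {
    if (!admins.contains(user)) {
      throw new SecurityException(user + " lacks admin privilege");
    }
  }

  public static void main(String[] args) {
    OzoneCliAuth auth = new OzoneCliAuth("hdfs");
    auth.checkAdmin("hdfs");    // allowed
    try {
      auth.checkAdmin("alice"); // rejected before the command runs
    } catch (SecurityException e) {
      System.out.println(e.getMessage());
    }
  }
}
```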






[jira] [Created] (HDFS-11625) Ozone: Replace hard coded datanode data dir in test code with getStorageDir to fix UT failures

2017-04-05 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11625:
--

 Summary: Ozone: Replace hard coded datanode data dir in test code 
with getStorageDir to fix UT failures
 Key: HDFS-11625
 URL: https://issues.apache.org/jira/browse/HDFS-11625
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Weiwei Yang
Assignee: Weiwei Yang


There seem to be some UT regressions after HDFS-11519, such as

* TestDataNodeVolumeFailureToleration
* TestDataNodeVolumeFailureReporting
* TestDiskBalancerCommand
* TestBlockStatsMXBean
* TestDataNodeVolumeMetrics
* TestDFSAdmin
* TestDataNodeHotSwapVolumes
* TestDataNodeVolumeFailure

These tests set up the datanode data dir with hard-coded names, such as 
{code}
  new File(cluster.getDataDirectory(), "data1");
{code}

This no longer works since HDFS-11519 changed the pattern from

{code}
/data/data<2*dnIndex + 1>
/data/data<2*dnIndex + 2>
...
{code}

to 

{code}
/data/dn0_data0
/data/dn0_data1
/data/dn1_data0
/data/dn1_data1
...
{code}
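The two naming schemes can be captured in small helpers so tests derive the directory name instead of hard-coding "data1". The helper names here are standalone illustrations; the actual fix uses the cluster's `getStorageDir` accessor.

```java
// Sketch of the old and new datanode data-dir naming schemes described above.
public class StorageDirNames {
  /** Old pattern: data<2*dnIndex + dirIndex + 1>, with dirIndex in {0, 1}. */
  static String oldName(int dnIndex, int dirIndex) {
    return "data" + (2 * dnIndex + dirIndex + 1);
  }

  /** New pattern after HDFS-11519: dn<dnIndex>_data<dirIndex>. */
  static String newName(int dnIndex, int dirIndex) {
    return "dn" + dnIndex + "_data" + dirIndex;
  }

  public static void main(String[] args) {
    // First data dir of the first datanode under each scheme.
    System.out.println(oldName(0, 0)); // data1
    System.out.println(newName(0, 0)); // dn0_data0
    // A test would then build:
    //   new File(cluster.getDataDirectory(), <derived name>);
  }
}
```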






[jira] [Created] (HDFS-11585) Ozone: Support force update a container

2017-03-27 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11585:
--

 Summary: Ozone: Support force update a container
 Key: HDFS-11585
 URL: https://issues.apache.org/jira/browse/HDFS-11585
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang
Assignee: Weiwei Yang


HDFS-11567 added support for updating a container, but in the following cases

# Container is closed
# Container meta file is falsely removed on disk or corrupted

a container cannot be gracefully updated. It is useful to support a forced 
update when a container gets into such a state, which gives us a chance to 
repair the metadata.
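An illustrative sketch of the force-update semantics (the names are hypothetical, not the actual Ozone container API): a graceful update is rejected in the two cases above, while a forced update proceeds so the metadata can be repaired.

```java
// Sketch: update() refuses a closed or metadata-damaged container unless
// forceUpdate is set.
public class ContainerUpdateSketch {
  boolean open = false;        // container has been closed
  boolean metaReadable = true; // meta file present and parseable

  void update(boolean forceUpdate) {
    if (!forceUpdate && (!open || !metaReadable)) {
      throw new IllegalStateException("container cannot be gracefully updated");
    }
    // ... rewrite the container metadata here ...
  }

  public static void main(String[] args) {
    ContainerUpdateSketch c = new ContainerUpdateSketch();
    try {
      c.update(false);  // graceful update fails on a closed container
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage());
    }
    c.update(true);     // forced update is allowed, enabling metadata repair
  }
}
```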






[jira] [Created] (HDFS-11581) Ozone: Support force delete a container

2017-03-27 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11581:
--

 Summary: Ozone: Support force delete a container
 Key: HDFS-11581
 URL: https://issues.apache.org/jira/browse/HDFS-11581
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang


On some occasions, we may want to forcibly delete a container regardless of 
whether the deletion conditions are satisfied, e.g. that the container is 
empty. This way we can make a best effort to clean up containers. Note, only a 
CLOSED container can be force deleted. 
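The rule above can be sketched as follows (the API names are hypothetical): a normal delete requires an empty container, a forced delete skips the emptiness check, but even a forced delete is refused unless the container is CLOSED.

```java
// Sketch of the force-delete rule: force skips the not-empty check, but only
// for CLOSED containers.
public class ContainerDeleteSketch {
  enum State { OPEN, CLOSED }

  static void delete(State state, int keyCount, boolean force) {
    if (force) {
      if (state != State.CLOSED) {
        throw new IllegalStateException("only a CLOSED container can be force deleted");
      }
      return; // best-effort cleanup, ignoring any remaining keys
    }
    if (keyCount > 0) {
      throw new IllegalStateException("container is not empty");
    }
  }

  public static void main(String[] args) {
    delete(State.CLOSED, 5, true);   // allowed: forced delete of a CLOSED container
    try {
      delete(State.OPEN, 5, true);   // refused: container is not CLOSED
    } catch (IllegalStateException e) {
      System.out.println(e.getMessage());
    }
  }
}
```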






[jira] [Created] (HDFS-11569) Ozone: Implement listKey function for KeyManager

2017-03-23 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11569:
--

 Summary: Ozone: Implement listKey function for KeyManager
 Key: HDFS-11569
 URL: https://issues.apache.org/jira/browse/HDFS-11569
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang
Assignee: Weiwei Yang


List keys by prefix from a container. This doesn't need to support pagination, 
as the number of keys in a single container should be small enough to return in 
a single response.
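A minimal sketch of list-by-prefix over a container's key set. The real KeyManager is backed by a metadata store; the sorted `TreeMap` here is an illustrative stand-in that gives the same ordered prefix scan.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Sketch: scan the sorted key space starting at the prefix and stop at the
// first key that no longer matches. No pagination: one list is returned.
public class ListKeySketch {
  static List<String> listKey(TreeMap<String, byte[]> keys, String prefix) {
    List<String> result = new ArrayList<>();
    // tailMap starts at the first key >= prefix.
    for (String k : keys.tailMap(prefix).keySet()) {
      if (!k.startsWith(prefix)) {
        break; // sorted order: nothing after this can match
      }
      result.add(k);
    }
    return result;
  }

  public static void main(String[] args) {
    TreeMap<String, byte[]> keys = new TreeMap<>();
    keys.put("a/1", new byte[0]);
    keys.put("a/2", new byte[0]);
    keys.put("b/1", new byte[0]);
    System.out.println(listKey(keys, "a/")); // [a/1, a/2]
  }
}
```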






[jira] [Created] (HDFS-11567) Support update container

2017-03-23 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11567:
--

 Summary: Support update container
 Key: HDFS-11567
 URL: https://issues.apache.org/jira/browse/HDFS-11567
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Weiwei Yang
Assignee: Weiwei Yang


Add support to update a container.






[jira] [Created] (HDFS-11550) Ozone: Add a check to prevent removing a container that has keys in it

2017-03-20 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11550:
--

 Summary: Ozone: Add a check to prevent removing a container that 
has keys in it
 Key: HDFS-11550
 URL: https://issues.apache.org/jira/browse/HDFS-11550
 Project: Hadoop HDFS
  Issue Type: Task
  Components: ozone
Reporter: Weiwei Yang
Assignee: Weiwei Yang


The Storage Container remove call must check whether there are keys in the 
container before removing it. If the container is not empty, it should return 
an error, ERROR_CONTAINER_NOT_EMPTY.
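A sketch of the proposed check (the error name is from this issue; the surrounding method and types are illustrative, not the actual container protocol):

```java
// Sketch: refuse to remove a container that still holds keys.
public class RemoveContainerSketch {
  static String removeContainer(int keyCount) {
    if (keyCount > 0) {
      return "ERROR_CONTAINER_NOT_EMPTY"; // reject: container still has keys
    }
    // ... delete container metadata and data here ...
    return "OK";
  }

  public static void main(String[] args) {
    System.out.println(removeContainer(3)); // ERROR_CONTAINER_NOT_EMPTY
    System.out.println(removeContainer(0)); // OK
  }
}
```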






[jira] [Resolved] (HDFS-11413) HDFS fsck command shows health as corrupt for '/'

2017-02-16 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved HDFS-11413.

Resolution: Not A Bug

> HDFS fsck command shows health as corrupt for '/'
> -
>
> Key: HDFS-11413
> URL: https://issues.apache.org/jira/browse/HDFS-11413
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Nishant Verma
>
> I have an open source Hadoop 2.7.3 cluster (2 masters + 3 slaves) 
> installed on AWS EC2 instances. I am using the cluster to integrate it with 
> Kafka Connect. 
> The cluster was set up last month and the Kafka Connect setup was completed 
> last fortnight. Since then, we were able to write Kafka topic records to our 
> HDFS and do various operations.
> Since last afternoon, I find that no Kafka topic is getting committed to 
> the cluster. When I tried to open the older files, I started getting the 
> error below. When I copy a new file to the cluster from local, it comes and 
> gets opened, but after some time it again starts showing a similar IOException:
> ==
> 17/02/14 07:57:55 INFO hdfs.DFSClient: No node available for 
> BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 
> file=/test/inputdata/derby.log
> 17/02/14 07:57:55 INFO hdfs.DFSClient: Could not obtain 
> BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 from any node: 
> java.io.IOException: No live nodes contain block 
> BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 after checking 
> nodes = [], ignoredNodes = null No live nodes contain current block Block 
> locations: Dead nodes: . Will get new block locations from namenode and 
> retry...
> 17/02/14 07:57:55 WARN hdfs.DFSClient: DFS chooseDataNode: got # 1 
> IOException, will wait for 499.3472970548959 msec.
> 17/02/14 07:57:55 INFO hdfs.DFSClient: No node available for 
> BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 
> file=/test/inputdata/derby.log
> 17/02/14 07:57:55 INFO hdfs.DFSClient: Could not obtain 
> BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 from any node: 
> java.io.IOException: No live nodes contain block 
> BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 after checking 
> nodes = [], ignoredNodes = null No live nodes contain current block Block 
> locations: Dead nodes: . Will get new block locations from namenode and 
> retry...
> 17/02/14 07:57:55 WARN hdfs.DFSClient: DFS chooseDataNode: got # 2 
> IOException, will wait for 4988.873277172643 msec.
> 17/02/14 07:58:00 INFO hdfs.DFSClient: No node available for 
> BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 
> file=/test/inputdata/derby.log
> 17/02/14 07:58:00 INFO hdfs.DFSClient: Could not obtain 
> BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 from any node: 
> java.io.IOException: No live nodes contain block 
> BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 after checking 
> nodes = [], ignoredNodes = null No live nodes contain current block Block 
> locations: Dead nodes: . Will get new block locations from namenode and 
> retry...
> 17/02/14 07:58:00 WARN hdfs.DFSClient: DFS chooseDataNode: got # 3 
> IOException, will wait for 8598.311122824263 msec.
> 17/02/14 07:58:09 WARN hdfs.DFSClient: Could not obtain block: 
> BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 
> file=/test/inputdata/derby.log No live nodes contain current block Block 
> locations: Dead nodes: . Throwing a BlockMissingException
> 17/02/14 07:58:09 WARN hdfs.DFSClient: Could not obtain block: 
> BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 
> file=/test/inputdata/derby.log No live nodes contain current block Block 
> locations: Dead nodes: . Throwing a BlockMissingException
> 17/02/14 07:58:09 WARN hdfs.DFSClient: DFS Read
> org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: 
> BP-1831277630-10.16.37.124-1484306078618:blk_1073793876_55013 
> file=/test/inputdata/derby.log
> at 
> org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:983)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:642)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:882)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:934)
> at java.io.DataInputStream.read(DataInputStream.java:100)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:59)
> at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
> at 
> org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:107)
> at 
> org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:10

[jira] [Created] (HDFS-11166) Doc for GET_BLOCK_LOCATIONS is missing in webhdfs API

2016-11-22 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11166:
--

 Summary: Doc for GET_BLOCK_LOCATIONS is missing in webhdfs API
 Key: HDFS-11166
 URL: https://issues.apache.org/jira/browse/HDFS-11166
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, webhdfs
Affects Versions: 2.7.3
Reporter: Weiwei Yang
Assignee: Weiwei Yang


Add GET_BLOCK_LOCATIONS to the webhdfs API docs; currently it is missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Created] (HDFS-11156) WebHDFS GET_BLOCK_LOCATIONS

2016-11-18 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11156:
--

 Summary: WebHDFS GET_BLOCK_LOCATIONS 
 Key: HDFS-11156
 URL: https://issues.apache.org/jira/browse/HDFS-11156
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.7.3
Reporter: Weiwei Yang
Assignee: Weiwei Yang


The following webhdfs REST API

{code}
http://<HOST>:<PORT>/webhdfs/v1/<PATH>?op=GET_BLOCK_LOCATIONS&offset=0&length=1
{code}

will get a response like
{code}
{
  "LocatedBlocks" : {
"fileLength" : 1073741824,
"isLastBlockComplete" : true,
"isUnderConstruction" : false,
"lastLocatedBlock" : { ... },
"locatedBlocks" : [ {...} ]
  }
}
{code}

This represents *o.a.h.h.p.LocatedBlocks*. However, according to the 
*FileSystem* API, 

{code}
public BlockLocation[] getFileBlockLocations(Path p, long start, long len)
{code}

clients would expect an array of BlockLocation. This mismatch should be fixed. 
Marked as an Incompatible change, as this will change the output of the 
GET_BLOCK_LOCATIONS API.
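The shape mismatch can be illustrated with a toy model (field and type names here are abbreviations standing in for the real JSON/Java types, not the actual Hadoop classes): the response wraps a nested array inside a LocatedBlocks object, while `getFileBlockLocations` callers expect the flat array itself.

```java
import java.util.Arrays;

// Sketch: the fix direction is to expose the flat per-block array rather than
// the wrapping LocatedBlocks object.
public class BlockLocationsShape {
  /** Toy stand-in for the nested "LocatedBlocks" response object. */
  static class LocatedBlocks {
    String[] locatedBlocks; // stands in for the nested "locatedBlocks" array
  }

  /** Flatten: return just the per-block array, as FileSystem callers expect. */
  static String[] toBlockLocationArray(LocatedBlocks lb) {
    return lb.locatedBlocks;
  }

  public static void main(String[] args) {
    LocatedBlocks lb = new LocatedBlocks();
    lb.locatedBlocks = new String[] {"blk_1", "blk_2"};
    System.out.println(Arrays.toString(toBlockLocationArray(lb))); // [blk_1, blk_2]
  }
}
```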






[jira] [Created] (HDFS-10588) False alarm in namenode log - ERROR - Disk Balancer is not enabled

2016-06-28 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-10588:
--

 Summary: False alarm in namenode log - ERROR - Disk Balancer is 
not enabled
 Key: HDFS-10588
 URL: https://issues.apache.org/jira/browse/HDFS-10588
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, hdfs
Reporter: Weiwei Yang


Noticed an error message in the namenode log 
{code}
2016-06-28 19:49:12,221 ERROR datanode.DiskBalancer 
(DiskBalancer.java:checkDiskBalancerEnabled(297)) - Disk Balancer is not 
enabled.
{code}
even with the default configuration dfs.disk.balancer.enabled=false. This is 
triggered when accessing the datanode web UI, because 
{{DataNode#getDiskBalancerStatus}} calls the check.
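One possible fix direction, sketched with hypothetical names (this is not the actual DataNode code): let the web UI's status query report the disabled state without going through the check that logs at ERROR, and reserve the throwing check for real balancer operations.

```java
// Sketch: a status query should not produce an ERROR log when the feature is
// merely disabled by configuration.
public class DiskBalancerCheckSketch {
  static boolean enabled = false; // dfs.disk.balancer.enabled default

  /** Status query used by the web UI: report state, no error logging. */
  static String getStatus() {
    return enabled ? "RUNNING_OR_IDLE" : "DISABLED";
  }

  /** Admin operation: being disabled really is an error here. */
  static void checkEnabledForOperation() {
    if (!enabled) {
      throw new IllegalStateException("Disk Balancer is not enabled.");
    }
  }

  public static void main(String[] args) {
    System.out.println(getStatus()); // DISABLED, with no ERROR log emitted
  }
}
```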






[jira] [Created] (HDFS-10583) Add Utilities/conf links to HDFS UI

2016-06-27 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-10583:
--

 Summary: Add Utilities/conf links to HDFS UI
 Key: HDFS-10583
 URL: https://issues.apache.org/jira/browse/HDFS-10583
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs, ui
Reporter: Weiwei Yang


When an admin wants to explore some configuration properties of a node, such as 
a namenode or datanode, it is helpful to provide a UI page to read them. This 
is extremely useful when nodes have different configurations.






[jira] [Created] (HDFS-10581) Redundant table on Datanodes page when there are no nodes under decommissioning

2016-06-27 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-10581:
--

 Summary: Redundant table on Datanodes page when there are no nodes 
under decommissioning
 Key: HDFS-10581
 URL: https://issues.apache.org/jira/browse/HDFS-10581
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs, ui
Reporter: Weiwei Yang
Priority: Trivial


A minor user experience improvement on the namenode UI.






[jira] [Created] (HDFS-10569) A bug causes OutOfIndex error in BlockListAsLongs

2016-06-23 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-10569:
--

 Summary: A bug causes OutOfIndex error in BlockListAsLongs
 Key: HDFS-10569
 URL: https://issues.apache.org/jira/browse/HDFS-10569
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Weiwei Yang
Assignee: Weiwei Yang
Priority: Minor









[jira] [Created] (HDFS-10493) Add links to datanode web UI in namenode datanodes page

2016-06-06 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-10493:
--

 Summary: Add links to datanode web UI in namenode datanodes page
 Key: HDFS-10493
 URL: https://issues.apache.org/jira/browse/HDFS-10493
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode, ui
Reporter: Weiwei Yang


HDFS-10440 makes some improvements to the datanode UI; it would be good to 
provide links from the namenode's datanodes information page to each individual 
datanode UI, so that more datanode information can be checked easily.






[jira] [Created] (HDFS-10440) Add more information to DataNode web UI

2016-05-19 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-10440:
--

 Summary: Add more information to DataNode web UI
 Key: HDFS-10440
 URL: https://issues.apache.org/jira/browse/HDFS-10440
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Affects Versions: 2.7.0, 2.6.0, 2.5.0
Reporter: Weiwei Yang


At present, the datanode web UI doesn't have much information except for the 
node name and port. Propose to add more information, similar to the namenode 
UI, including:

* Static info (version, block pool and cluster ID)
* Running state (active, decommissioning, decommissioned, lost, etc.)
* Summary (blocks, capacity, storage, etc.)
* Utilities (logs)






[jira] [Resolved] (HDFS-10198) File browser web UI should split to pages when files/dirs are too many

2016-03-22 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved HDFS-10198.

   Resolution: Duplicate
Fix Version/s: 2.8.0

> File browser web UI should split to pages when files/dirs are too many
> --
>
> Key: HDFS-10198
> URL: https://issues.apache.org/jira/browse/HDFS-10198
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.2
>Reporter: Weiwei Yang
>  Labels: ui
> Fix For: 2.8.0
>
>
> When there are a large number of files/dirs, the HDFS file browser UI takes 
> too long to load, and it loads all items in one single page, which causes 
> many readability problems. We should split it into pages.





[jira] [Created] (HDFS-10198) File browser web UI should split to pages when files/dirs are too many

2016-03-22 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-10198:
--

 Summary: File browser web UI should split to pages when files/dirs 
are too many
 Key: HDFS-10198
 URL: https://issues.apache.org/jira/browse/HDFS-10198
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.7.2
Reporter: Weiwei Yang


When there are a large number of files/dirs, the HDFS file browser UI takes too 
long to load, and it loads all items in one single page, which causes many 
readability problems. We should split it into pages.
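The paging itself reduces to slicing the listing into fixed-size windows; a minimal sketch (names and page size are illustrative, not the actual file-browser code, which would page in JavaScript or via the listing RPC):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Sketch: return the pageIndex-th window of pageSize items, clamped to the
// list bounds so the last (possibly short) page and out-of-range pages work.
public class ListingPager {
  static <T> List<T> page(List<T> items, int pageSize, int pageIndex) {
    int from = Math.min(pageIndex * pageSize, items.size());
    int to = Math.min(from + pageSize, items.size());
    return items.subList(from, to);
  }

  public static void main(String[] args) {
    List<Integer> files =
        IntStream.range(0, 10).boxed().collect(Collectors.toList());
    System.out.println(page(files, 4, 0)); // [0, 1, 2, 3]
    System.out.println(page(files, 4, 2)); // [8, 9]  (short last page)
  }
}
```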





[jira] [Resolved] (HDFS-9152) Get input/output error while copying 800 small files to NFS Gateway mount point

2016-03-02 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang resolved HDFS-9152.
---
Resolution: Cannot Reproduce

> Get input/output error while copying 800 small files to NFS Gateway mount 
> point 
> 
>
> Key: HDFS-9152
> URL: https://issues.apache.org/jira/browse/HDFS-9152
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: nfsgateway
> Attachments: DNErrors.log, NNErrors.log
>
>
> We have around *800 3-5K* files on the local file system, with the nfs 
> gateway mounted on */hdfs/*. When we tried to copy these files to HDFS with 
> *cp ~/userdata/* /hdfs/user/cqdemo/demo3.data/*
> most of the files failed with 
> cp: writing `/hdfs/user/cqdemo/demo3.data/TRAFF_201408011220.csv': 
> Input/output error
> cp: writing `/hdfs/user/cqdemo/demo3.data/TRAFF_201408011221.csv': 
> Input/output error
> cp: writing `/hdfs/user/cqdemo/demo3.data/TRAFF_201408011222.csv': 
> Input/output error
> For the same set of files, I tried to use the hadoop dfs -put command to do 
> the copy, and it works fine.





[jira] [Created] (HDFS-9682) Fix a typo "aplication" in webhdfs document

2016-01-21 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-9682:
-

 Summary: Fix a typo "aplication" in webhdfs document
 Key: HDFS-9682
 URL: https://issues.apache.org/jira/browse/HDFS-9682
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation, webhdfs
Reporter: Weiwei Yang
Assignee: Weiwei Yang
Priority: Trivial
 Fix For: 2.8.0


This was found while fixing YARN-4605. The webhdfs document says:

The webhdfs client FileSytem implementation can be used to access HttpFS using 
the Hadoop filesystem command (`hadoop fs`) line tool as well as from Java 
*aplications* using the Hadoop FileSystem Java API

Just a typo fix; a trivial change.





[jira] [Created] (HDFS-9653) Expose the number of blocks pending deletion through dfsadmin report command

2016-01-14 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-9653:
-

 Summary: Expose the number of blocks pending deletion through 
dfsadmin report command
 Key: HDFS-9653
 URL: https://issues.apache.org/jira/browse/HDFS-9653
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.7.1
Reporter: Weiwei Yang


HDFS-5986 adds *Number of Blocks Pending Deletion* to the namenode UI and JMX; 
propose to expose this through hdfs dfsadmin -report as well.





[jira] [Created] (HDFS-9152) Get input/output error while copying 800 small files to NFS Gateway mount point

2015-09-27 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-9152:
-

 Summary: Get input/output error while copying 800 small files to 
NFS Gateway mount point 
 Key: HDFS-9152
 URL: https://issues.apache.org/jira/browse/HDFS-9152
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.7.1
Reporter: Weiwei Yang


We have around 800 3-5K files on the local file system, with the nfs gateway 
mounted on /hdfs/. When we tried to copy these files to HDFS with 

cp ~/userdata/* /hdfs/user/cqdemo/demo3.data/

most of the files failed with 

cp: writing `/hdfs/user/cqdemo/demo3.data/TRAFF_201408011220.csv': Input/output 
error
cp: writing `/hdfs/user/cqdemo/demo3.data/TRAFF_201408011221.csv': Input/output 
error
cp: writing `/hdfs/user/cqdemo/demo3.data/TRAFF_201408011222.csv': Input/output 
error

For the same set of files, I tried to use the hadoop dfs -put command to do the 
copy, and it works fine.




