[
https://issues.apache.org/jira/browse/HDFS-3525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tsz Wo (Nicholas), SZE resolved HDFS-3525.
--
Resolution: Duplicate
> TestFileLengthOnClusterRestart may fail
> --
Tsz Wo (Nicholas), SZE created HDFS-3525:
Summary: TestFileLengthOnClusterRestart may fail
Key: HDFS-3525
URL: https://issues.apache.org/jira/browse/HDFS-3525
Project: Hadoop HDFS
Iss
Eli Collins created HDFS-3524:
-
Summary: TestFileLengthOnClusterRestart.testFileLengthWithHSyncAndClusterRestartWithOutDNsRegister failed
Key: HDFS-3524
URL: https://issues.apache.org/jira/browse/HDFS-3524
[
https://issues.apache.org/jira/browse/HDFS-3521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Tsz Wo (Nicholas), SZE resolved HDFS-3521.
--
Resolution: Fixed
Hadoop Flags: Reviewed
> Allow namenode to tolerate
Tsz Wo (Nicholas), SZE created HDFS-3523:
Summary: TestFileCreation fails in branch-1
Key: HDFS-3523
URL: https://issues.apache.org/jira/browse/HDFS-3523
Project: Hadoop HDFS
Issue Ty
Sure. We could also find the current user ID and bake that into the
test as an "acceptable" UID, if that makes sense.
Colin
On Mon, Jun 11, 2012 at 4:12 PM, Alejandro Abdelnur wrote:
> Colin,
>
> Would it be possible, using some kind of cmake config magic, to set a
> macro to the current OS limit?
Colin,
Would it be possible, using some kind of cmake config magic, to set a macro
to the current OS limit? Even if this means detecting the OS version and
assuming its default limit.
thx
On Mon, Jun 11, 2012 at 3:57 PM, Colin McCabe wrote:
> Hi all,
>
> I recently pulled the latest source, and ran
Hi all,
I recently pulled the latest source, and ran a full build. The
command line was this:
mvn compile -Pnative
I was confronted with this:
[INFO] Requested user cmccabe has id 500, which is below the minimum
allowed 1000
[INFO] FAIL: test-container-executor
[INFO] ==