SFTPFileSystem was introduced in HADOOP-5732. I don't see any discussion
there about the getScheme() implementation, so this might not have been an
intentional design choice. I think it's a bug.
Are you interested in contributing a patch?
Chris Nauroth
On Thu, Apr 20, 2023 at 6:00 AM Wenqi Ma
Yes, I expect that will work (for both
yarn.resourcemanager.nodes.exclude-path and
yarn.resourcemanager.nodes.include-path), using the "s3a://..." scheme to
specify a file in an S3 bucket.
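A hypothetical yarn-site.xml sketch of that configuration (the bucket and key
names here are made up):

```xml
<property>
  <name>yarn.resourcemanager.nodes.include-path</name>
  <value>s3a://my-bucket/yarn/nodes.include</value>
</property>
<property>
  <name>yarn.resourcemanager.nodes.exclude-path</name>
  <value>s3a://my-bucket/yarn/nodes.exclude</value>
</property>
```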
Chris Nauroth
On Tue, Jan 3, 2023 at 11:50 PM Dong Ye wrote:
> Hi, All:
>
> F
s do not synchronize the state of include/exclude files.
Chris Nauroth
On Wed, Dec 28, 2022 at 11:08 PM Dong Ye wrote:
> Hi, Chris:
>
> Thank you very much! Yes, I am also concerned with the
> decommissioning of nodemanager in a Resource Manager High Availability
> scenar
with the new ResourceManager.
Separately, if you're concerned about divergence of node include/exclude
files, you can configure them to be stored on a shared file system (e.g.,
your preferred cloud object store) used by all ResourceManager instances.
Chris Nauroth
On Sat, Dec 24, 2022 at 6:27 PM
Chris Nauroth
On Tue, Nov 8, 2022 at 7:56 PM hehaore...@gmail.com
wrote:
> hello
>
> HDFS cluster version 2.7.2, I set a space quota for the directory, but the
> available space is much less than expected, for example, this image has a
> quota of 600T, 31T used space, it should be
https://hadoop.apache.org/mailing_lists.html
As described here, you can unsubscribe by sending an email to
user-unsubscr...@hadoop.apache.org. (That's a general pattern for all ASF
mailing lists.)
Chris Nauroth
On Sat, Oct 29, 2022 at 1:14 AM Vara Prasad Beerakam <
mr.b.varapra...@gmail.
recommend looking into
an upgrade to 2.10.2 in the short term, followed by a plan for getting onto
a currently supported 3.x release.
I hope this helps.
Chris Nauroth
On Mon, Oct 24, 2022 at 11:31 PM hehaore...@gmail.com
wrote:
> I have an HDFS cluster, version 2.7.2, with two nameno
hdfs/tools/GetGroups.java
Chris Nauroth
On Mon, Oct 3, 2022 at 8:35 PM Chris Nauroth wrote:
> I expect you'd be able to fork a separate process and run any hadoop/hdfs
> commands that you'd like.
>
> If you're coding in Java, then I expect you can code to APIs to accomplish
> t
/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tools/GetGroupsBase.java
Chris Nauroth
On Wed, Sep 7, 2022 at 6:15 AM wrote:
> Hello,
> does anyone know if "hdfs" commands can be run with Spark?
> For instance, I would like to run "hdfs groups &quo
HDFS-10860
[3] https://issues.apache.org/jira/browse/HDFS-10423
Chris Nauroth
On Mon, Jan 10, 2022 at 7:26 AM Roman Savchenko wrote:
> Dear Hadoop Developers,
>
> I'm seeing an issue with httpfs server (Cloudera server with Kerberos and
> HTTPFS enabled), when I'm tryin
which will perform an eager checksum validation before calling mlock to pin
the block into physical memory explicitly at the DataNode host.
Chris Nauroth
On Sun, Oct 24, 2021 at 1:42 PM Pratyush Das wrote:
> Hi,
>
> I can successfully load files from HDFS via the C API like -
>
dy implemented ?
>> > Or any there better solution for the problem.
>> >
>> > thanks.
>> >
>>
>> -
>> To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: user-h...@hadoop.apache.org
>>
>>
>
--
Chris Nauroth
following sentence in "HDFS Permissions Guide":
> In contrast to the POSIX model, there are no setuid or setgid bits for
> files as there is no notion of executable files. For directories, there are
> no setuid or setgid bits directory as a simplification.
>
--
Chris Nauroth
to support use of an external store with strong
consistency guarantees for S3A file system metadata. In the interaction you
described, we could consult the consistent metadata store instead of sending a
HEAD request to S3 to determine if the object already exists.
--Chris Nauroth
From: Dave
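The check-the-store-first interaction described above can be sketched as
follows (hypothetical names and toy code, not the actual S3A implementation):

```python
class ConsistentMetadataStore:
    """Toy stand-in for a strongly consistent external metadata store."""
    def __init__(self):
        self._keys = set()

    def record_created(self, key):
        self._keys.add(key)

    def exists(self, key):
        return key in self._keys


def object_exists(store, s3_head_request, key):
    """Consult the consistent store first; only fall back to an S3 HEAD."""
    if store.exists(key):
        return True
    return s3_head_request(key)


store = ConsistentMetadataStore()
store.record_created("bucket/data/part-0000")
# No HEAD request is needed for a key the store already knows about.
print(object_exists(store, lambda key: False, "bucket/data/part-0000"))  # True
```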
d a default ACL,
please refer to the HDFS Permissions Guide documentation.
http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html#ACLs_Access_Control_Lists
--Chris Nauroth
From: Shashi Vishwakarma <shashi.vish...@gmail.com>
Date: Monday, Septe
.
--Chris Nauroth
From: xeon Mailinglist <xeonmailingl...@gmail.com>
Date: Sunday, August 21, 2016 at 3:56 AM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Subject: Improve IdentityMapper code for wordcount
Hi,
I have created a map method that reads the map output of the
/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html#Downgrade
--Chris Nauroth
From: jinxing <jinxing6...@126.com>
Date: Tuesday, August 16, 2016 at 7:44 PM
To: Chris Nauroth <cnaur...@hortonworks.com>, "user@hadoop.apache.org"
<user@hadoop.apache.org>
Subject:
.
Then, the effect on overall job execution might be faster.
--Chris Nauroth
On 8/7/16, 12:12 PM, "Sebastian Nagel" <wastl.na...@googlemail.com> wrote:
Hi,
recently, after upgrading to CDH 5.8.0, I've run into a performance
issue when reading data from
daemons (e.g. just the DataNodes). If you want to proceed with upgrading your
2.5.0 DataNodes to 2.7.2, then I expect you can start a new rolling upgrade and
proceed with the upgrade process on just the subset of DataNodes still running
2.5.0.
--Chris Nauroth
From: jinxing <jinxing6...@
I agree with Rakesh that spaces in JAVA_HOME are likely to be a problem. This
is a known problem tracked in JIRA issue HADOOP-9600.
--Chris Nauroth
From: Rakesh Radhakrishnan <rake...@apache.org>
Date: Monday, August 8, 2016 at 8:03 AM
To: Atri Sharma <atri.j...@gmail.com>
Cc:
that want to keep it, I wouldn’t object.
--Chris Nauroth
On 7/26/16, 11:14 PM, "Vinayakumar B" <vinayakumar...@huawei.com> wrote:
Hi All,
BKJM was Active and made much stable when the NameNode HA was
implemented and there was no QJM implemented.
No
attaching a patch to the
JIRA.
--Chris Nauroth
From: M G <mgbuyst...@gmail.com>
Date: Wednesday, July 13, 2016 at 7:23 AM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
REST API (or HTTPFS). However, it is possible for the FileSystem shell to
reference paths as URIs using the "webhdfs" scheme. For example:
> hadoop fs -cp webhdfs://localhost:9870/hello1 webhdfs://localhost:9870/hello2
> hadoop fs -cat webhdfs://localhost:9870/hello2
hello
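The same paths map onto the WebHDFS REST URL form; a small sketch of building
such a URL (host and port taken from the example above, helper name is made
up):

```python
from urllib.parse import quote, urlencode

def webhdfs_url(host, port, path, op, **params):
    """Build a WebHDFS v1 REST URL (e.g. op=OPEN reads a file)."""
    query = urlencode({"op": op, **params})
    return f"http://{host}:{port}/webhdfs/v1{quote(path)}?{query}"

print(webhdfs_url("localhost", 9870, "/hello2", "OPEN"))
# http://localhost:9870/webhdfs/v1/hello2?op=OPEN
```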
-
lock duration on the
server side for any individual RPC. Admittedly, this is an engineering
trade-off, but the choice has worked well in practice.
--Chris Nauroth
From: ravi teja <raviort...@gmail.com>
Date: Wednesday, June 29, 2016 at 4:55 AM
To: Chris Na
luding any group membership resolution and authorization checks) proceeds
as user "michael". There is a different environment variable named
HADOOP_PROXY_USER that can be set to achieve this.
Does that help?
--Chris Nauroth
From: Aneela Saleem <ane...@platalytics.com>
If it's a problem pattern that spans multiple sub-components, then please file
it as a HADOOP issue. It might end up getting split into related issues
assigned to each sub-project, but HADOOP is a good starting point.
--Chris Nauroth
From: Christopher <ctubb...@apache.org>
-applications-distributedshell), then please file the JIRA against
the YARN project.
--Chris Nauroth
From: Christopher <ctubb...@apache.org>
Date: Monday, June 20, 2016 at 12:48 PM
To: Chris Nauroth <cnaur...@hortonworks.com>
Christopher, thank you for the follow-up information. Please feel free to file
an Apache JIRA in the HADOOP project with your findings.
--Chris Nauroth
From: Christopher <ctubb...@apache.org>
Date: Monday, June 20, 2016 at 12:26 PM
To: Chris Na
reproduce the problem.
Is it possible that this is a problem specific to the Maven version running
your build? The BUILDING.txt file in the root of the source tree has details
on what tools and what versions are required for the build.
--Chris Nauroth
From: Christopher <ctubb...@apache.
failure? That might help us
confirm if it's a bug, and if so, move to tracking it in an Apache JIRA.
--Chris Nauroth
From: Christopher <ctubb...@apache.org>
Date: Thursday, June 16, 2016 at 12:35 PM
To: "user@hadoop.apache.org"
. To mitigate this, I recommend that you start with
a test run in a non-production environment to see how it reacts.
--Chris Nauroth
From: ravi teja <raviort...@gmail.com>
Date: Wednesday, June 15, 2016 at 8:33 PM
To: "user@hadoop.apach
message is harmless.
Apache Hadoop 2.8.0 will ship the fix for HDFS-9572 to prevent this message
from going into the logs.
https://issues.apache.org/jira/browse/HDFS-9572
--Chris Nauroth
Fr
://docs.oracle.com/javase/8/docs/technotes/guides/security/sasl/sasl-refguide.html#DEBUG
--Chris Nauroth
From: Dmitry Goldenberg
<dgoldenberg...@gmail.com>
Date: Wednesday, June 8, 2016 at 4:30 PM
To: "user@hadoop.apache.org"
webhdfs://127.0.0.1:50070/
You could also get an instance of WebHdfsFileSystem by calling FileSystem#get
with a Configuration object that sets fs.defaultFS to a webhdfs: URI, or call
the overload of FileSystem#get that accepts an explicit URI argument.
--Chris Nauroth
From: Vamsi Krishna
Hello Colin,
Judging from the stack trace, I think you've hit a known HDFS bug:
HDFS-8055. A fix for this bug has been committed for the upcoming Apache
Hadoop 2.8.0 release.
https://issues.apache.org/jira/browse/HDFS-8055
--Chris Nauroth
On 6/3/16, 1:21 PM, "Colin Kincaid Williams&q
related to permission
handling in the HDFS permissions guide.
http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html
--Chris Nauroth
From: Gagan Brahmi <gaganbra...@gmail.com>
Date: Friday, June 3, 2016 at 9:18 A
the same file system. If multiple file systems on
different volumes are mounted, and the rename crosses different file systems,
then typically the rename either degrades to a non-atomic copy-delete or the
call simply fails fast.
--Chris Nauroth
From: Kun Ren <ren.h...@gmail.com>
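The degrade-to-copy behavior described above is the same pattern POSIX tools
use for cross-device renames; a sketch in Python (illustrative only, not HDFS
code):

```python
import errno
import os
import shutil

def move(src, dst):
    """Atomic rename when src and dst share a file system; otherwise
    degrade to a non-atomic copy followed by a delete."""
    try:
        os.rename(src, dst)  # atomic within a single file system
    except OSError as e:
        if e.errno != errno.EXDEV:
            raise  # some other failure: fail fast
        shutil.copy2(src, dst)  # crosses file systems: non-atomic copy...
        os.remove(src)          # ...then delete the original
```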
to be copied across
NameNodes, and then HDFS would not be able to satisfy its promise that rename
is atomic. There are more details about this in the ViewFs guide.
http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/ViewFs.html
--Chris Nauroth
From: Kun Ren <ren.h...@gmail.
Hello Kun,
Yes, this command works with federation. The command would copy the file from
NameNode 1/block pool 1 to NameNode 2/block pool 2.
--Chris Nauroth
From: Kun Ren <ren.h...@gmail.com>
Date: Wednesday, May 25, 2016 at 8:57 AM
To: "user@
he mask, because the application
knows to apply that logic regardless of the FileSystem implementation.
[1] http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html
--Chris Nauroth
From: kumar r <kumarc...@gmail.com>
Date: Monday, May 23, 2016 at 10:20 PM
To
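As a simplified model of how a POSIX-style ACL mask filters named-user
entries (toy code, not the HDFS implementation):

```python
def effective_perms(entry_perms: int, mask: int) -> int:
    """Named-user and named-group ACL entries are filtered through the
    mask; permission bits use the usual r=4, w=2, x=1 encoding."""
    return entry_perms & mask

# user:user1 is granted rwx (7), but the mask is r-x (5),
# so user1's effective access is r-x.
print(oct(effective_perms(0o7, 0o5)))  # 0o5
```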
There is also some discussion on that JIRA considering a checksum strategy
independent of block size. I don't think anything was ever implemented
though, and there would be some drawbacks to that approach. Sorry if this
caused confusion.
--Chris Nauroth
On 5/24/16, 9:55 AM, "D
not aware of a more appropriate forum
for this question, so I don't have a recommendation for where to take this.
--Chris Nauroth
From: Deepak Goel <deic...@gmail.com>
Date: Monday, May 23, 2016 at 2:59 PM
To: "user@hadoop.apache.org"
to the user has changed to advise on some potential
workarounds.
--Chris Nauroth
On 5/22/16, 10:31 AM, "Dmitry Sivachenko" <trtrmi...@gmail.com> wrote:
>
>> On 21 May 2016, at 09:34, Dmitry Sivachenko <trtrmi...@gmail.com> wrote:
>>
>>
>>
Something is definitely odd about the UI there. From your second link, can you
try clicking directly on the "Create" button (not the drop-down arrow leading
to "Create Detailed")?
--Chris Nauroth
From: John Lilley <john.lil...@redpoint.net>
Hello Dmitry,
MAPREDUCE-5065 has been included in these branches for a long time. Are
you certain that you passed a dfs.blocksize equal to what was used in the
source files? Did all source files use the same block size?
--Chris Nauroth
On 5/20/16, 3:30 PM, "Dmitry Sivachenko&quo
as corrupt. This condition is normal
and expected, so it's incorrect for fsck to report it as corruption. HDFS-8809
has a fix committed for Apache Hadoop 2.8.0.
https://issues.apache.org/jira/browse/HDFS-8809
--Chris Nauroth
From: Henning Blohm <henning.bl...@zfabrik.de>
Hello John,
You should be able to go to https://issues.apache.org/jira and then get an
option to create a new issue in any Apache project, including HADOOP, HDFS,
YARN or MAPREDUCE, depending on the scope of the issue you want to report.
--Chris Nauroth
From: John Lilley <john.
.
--Chris Nauroth
From: Elliot West <tea...@gmail.com>
Date: Tuesday, May 3, 2016 at 2:50 AM
To: Chris Nauroth <cnaur...@hortonworks.com>
Cc: "user@hadoop.apache.org"
details on how this
works, or if you want to follow any further discussion on this topic, then
please take a look at the comments on HADOOP-13076.
--Chris Nauroth
From: Chris Nauroth <cnaur...@hortonworks.com>
Date: Friday, April 29, 2016 at 9:03 PM
To:
benefits of the
cache are noticeable. Sometimes this is a helpful workaround for specific
applications though.
--Chris Nauroth
From: Cazen Lee <cazen@gmail.com>
Date: Thursday, April 28, 2016 at 5:53 PM
To: "user@hadoop.apache.org"
this
feature in s3a.
--Chris Nauroth
From: Elliot West <tea...@gmail.com>
Date: Thursday, April 28, 2016 at 5:01 AM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Sub
or not the data is encrypted in
transit becomes irrelevant at that point.
--Chris Nauroth
From: Musty Rehmani <musty_rehm...@yahoo.com.INVALID>
Reply-To: "musty_rehm...@yahoo.com" <musty_rehm...@yahoo.com>
are tracked on the file paths
being written, the old client's lease on its temporary file won't block the new
client from writing to a different temporary file.
--Chris Nauroth
From: Ken Huang <dnion...@gmail.com>
Date: Thursday, February 25, 2016 at 5:49
, then perhaps that behavior could be added as
another option in ProcfsBasedProcessTree. Off the top of my head, I can't
think of a reliable way to do this, and I can't research it further
immediately. Do others on the thread have ideas?
--Chris Nauroth
[1] http://linux.die.net/man/2/mlock
On 2
a background thread and refetch them proactively. That way, the
caller wouldn't absorb the latency hit of the extra RPC as part of its read
call. Please feel free to file an HDFS JIRA if this makes sense, or if you
have something else like it in mind.
--Chris Nauroth
From: Dejan Menges <
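The proactive-refetch idea sketched above amounts to refreshing a cached
value off the caller's read path. A hypothetical illustration (not existing
HDFS client code; the class name is made up):

```python
import threading

class PrefetchingCache:
    """Keep a cached value fresh from a background thread so callers
    never pay the refetch latency inline."""
    def __init__(self, fetch):
        self._fetch = fetch
        self._lock = threading.Lock()
        self._value = fetch()

    def get(self):
        # Always served from cache; never blocks on a refetch.
        with self._lock:
            return self._value

    def refresh_async(self):
        def worker():
            fresh = self._fetch()  # the "extra RPC" happens here
            with self._lock:
                self._value = fresh
        t = threading.Thread(target=worker, daemon=True)
        t.start()
        return t
```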
ository cache on disk. After that, you'll be able to do
faster incremental builds by running "mvn compile" or building only a specific
sub-module, and Maven will be able to find hadoop-maven-plugins from the local
repository cache.
--Chris Nauroth
From: Boric Tan <it.news.tr
Yes, certainly, if you only need it in one spot, then -mv is a fast
metadata-only operation. I was under the impression that Gavin really wanted
to achieve 2 distinct copies. Perhaps I was mistaken.
--Chris Nauroth
From: sandeep vura <sandeepv...@gmail.com>
legitimate to run DistCp within
the same cluster.
--Chris Nauroth
From: Gavin Yue <yue.yuany...@gmail.com>
Date: Friday, January 8, 2016 at 4:45 PM
To: "user@hadoop.apache.org" <user@hadoop.apache.org>
n a viable workaround would be to remove those properties.
--Chris Nauroth
From: Anu Engineer <aengin...@hortonworks.com>
Date: Monday, January 4, 2016 at 3:47 PM
To: Marcin Tustin <mtus...@handybook.com>,
Yes, good point about the combination of one local short-circuit read + one
remote read.
--Chris Nauroth
From: Tenghuan He <tenghua...@gmail.com>
Date: Monday, January 4, 2016 at 9:42 AM
To: Chris Nauroth <cnaur...@hortonworks.com>
and performs some CPU-intensive processing as
it reads, then perhaps the NIC is not saturated, and multi-threading could help.
As usual with performance work, the actual outcomes are going to be highly
situational.
I hope this helps.
--Chris Nauroth
From: Tenghuan He <tenghua...@gmail.
if we decide to do some internal
refactoring.
If you can give a high-level description of what you want to achieve, then
perhaps we can suggest a way to do it through the public API.
--Chris Nauroth
From: Tenghuan He <tenghua...@gmail.com>
Date: Wednesday,
on the right track
investigating why this application used the same temp directory. Is the temp
directory something that is controlled by the parameters that you pass to your
script? Do you know how the "055830" gets determined in this example?
--Chris Nauroth
From: Saravanan
he remaining payload. Is it possible that
the client that is trying to perform the write is running a very old version of
the HDFS client code?
--Chris Nauroth
From: yaoxiaohua <yaoxiao...@outlook.com>
Date: Tuesday, December 15, 2015 at 11:16 PM
To: &qu
that the
command is accessible at /bin/ls on your system.
--Chris Nauroth
On 12/11/15, 8:05 AM, "Namikaze Minato" <lloydsen...@gmail.com> wrote:
>My question was, which spark command are you using, and since you
>already did the analysis, which function of Shell.java is
. If the namenode ID is not configured it
is determined automatically by matching the local node's address
with the configured address.
I hope this helps.
--Chris Nauroth
On 12/9/15, 8:07 PM, "F21" <f21.gro...@gmail.com> wrote:
>I am receiving this error when tr
Hello,
Yes, path.toUri().getPath() is a reliable way to get the absolute path without
scheme or authority as a String from a Path instance. The Hadoop codebase
itself uses this same pattern throughout a lot of JUnit tests when we need to
run assertions on an absolute path.
--Chris Nauroth
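The toUri().getPath() idiom strips scheme and authority from a fully
qualified path. The same decomposition can be seen with Python's standard
library (an illustrative analogue, not the Hadoop Path API; the host and path
are made up):

```python
from urllib.parse import urlparse

# A fully qualified Hadoop-style URI
uri = "hdfs://namenode.example.com:8020/user/alice/data.txt"
parsed = urlparse(uri)
print(parsed.scheme)  # hdfs
print(parsed.netloc)  # namenode.example.com:8020
print(parsed.path)    # /user/alice/data.txt
```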
There is one known issue that can impact WebHDFS throughput:
https://issues.apache.org/jira/browse/HDFS-8696
This issue would impact Apache Hadoop 2.7 versions, and the fix is currently
targeted to Apache Hadoop 2.8.0.
I hope this helps.
--Chris Nauroth
From: Krishna Kishore Bonagiri
though compared to that system
I helped build. If I was still in my prior role, I'd be giving Falcon a
serious evaluation as a replacement.
--Chris Nauroth
From: Biren Saini <bsa...@hortonworks.com>
Date: Friday, December 4, 2015 at 6:25 AM
To: prav
to HDP Sandbox, you'll likely get more help
from Hortonworks support forums. (This is generally true for any vendor
product that differentiates from the Apache distro.)
I hope this helps.
--Chris Nauroth
From: praveenesh kumar <praveen...@gmail.com>
Dat
org.apache.hadoop.security.AuthenticationFilterInitializer
There are more details in the HTTP Authentication section of the Apache
Hadoop documentation:
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/HttpAuthentication.html
--Chris Nauroth
On 11/25/15, 7:48 AM, "Francis Dupin" &
/ShortCircuitLocalReads.html
--Chris Nauroth
From: sandeep das <yarnhad...@gmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Wednesday,
o the JIRA, and then
clicking the Submit Patch button to trigger pre-commit checks. No further
testing beyond that is required. Committers will verify it through code review.
--Chris Nauroth
From: Zheng Huang <bup...@gmail.com>
Reply-To: "user@ha
+1
Thank you, Arpit.
--Chris Nauroth
From: Arpit Agarwal <aagar...@hortonworks.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Thursda
ent variable. If you're interested in helping to improve
the documentation, then we'd welcome a patch on the BUILDING.txt file.
I hope this helps.
--Chris Nauroth
From: Zheng Huang <bup...@gmail.com>
Reply-To: "user@hadoop.apache.org"
I'm glad to hear it helped.
This page describes the contribution process.
https://wiki.apache.org/hadoop/HowToContribute
For a documentation-only patch on BUILDING.txt, the process would be
simplified, because a few steps like unit tests won't be applicable.
--Chris Nauroth
From: Zheng Huang
.
--Chris Nauroth
From: MBA <adaryl.wakefi...@hotmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Tuesday, November 3, 2015 at 11:16 AM
space
free for non dfs use.
--Chris Nauroth
From: MBA <adaryl.wakefi...@hotmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Dat
In addition to the standard Hadoop jars available in an Apache Hadoop distro,
Windows also requires the native components for Windows: winutils.exe and
hadoop.dll. This wiki page has more details on how that works:
https://wiki.apache.org/hadoop/WindowsProblems
--Chris Nauroth
From: James
client on your
platform of choice.
http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/WebHDFS.html
--Chris Nauroth
From: Shashi Vishwakarma
<shashi.vish...@gmail.com>
Reply-To: "user@hadoop.apache.org"
ng on port 50070.
curl
'http://127.0.0.1:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'
More details on our compatibility policy are listed here in case you're
interested.
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html
--Chris Nauroth
On 10
there.
--Chris Nauroth
From: Benoy Antony <bant...@gmail.com>
Reply-To: user <user@hadoop.apache.org>
Date: Thursday, October 29, 2015 at 2:15 PM
To: "u...@hive.apache.org"
simplified consistent repro that demonstrates the problem,
then I suggest filing a JIRA for further investigation. Thanks again.
--Chris Nauroth
On 10/15/15, 2:40 PM, "Bourgon, Armel" <armel.bour...@citygridmedia.com>
wrote:
>Hello,
>
>To give you a bit of contex
of
the Configuration. As I said earlier, these will be only primitive types like
String or int. If it's helpful, your setup method can use the primitive values
read from configuration to reconstruct an instance of any class that you want.
I hope this helps.
--Chris Nauroth
From: , Saurav <
--Chris Nauroth
From: , Saurav <sda...@paypal.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Tuesday, October 13, 2015 at 1:04 PM
To: "user@ha
.
--Chris Nauroth
From: Jingfei Hu <jingfei...@hotmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Monday, October 12, 2015 at 6:53 PM
not have
the problem, so it must be the _80 revision that introduced the regression.
The known fixes for this problem are either to keep running with JDK 1.7.0_79
or upgrade to Apache Hadoop 2.6.1 or 2.7.0 to pick up the code change we did on
our side to work around it. I hope this helps.
--Chris
Hello Onder,
The "successfully formatted" line tells me that the HDFS format worked
successfully, so I think all is well. Perhaps that wiki page has some outdated
information.
--Chris Nauroth
From: Onder SEZGIN <ondersez...@gmail.com>
at this in a while, but the prior
work is tracked in JIRA issues HDFS-3296 and HADOOP-11957 if you want to see
the current progress.
--Chris Nauroth
From: Demai Ni <nid...@gmail.com>
Reply-To: "user@hadoop.apache.org"
.
Thank you!
--Chris Nauroth
From: Caesar Samsi <caesarsa...@mac.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Tuesday, July 21, 2015 at 5:10 PM
To: user@hadoop.apache.org
in the community can pick it
up.
I hope this helps.
--Chris Nauroth
On 7/22/15, 12:41 AM, Alexander Striffeler
<a.striffe...@students.unibe.ch> wrote:
Hi all
I'm pretty new to the Hadoop environment and I'm about performing some
micro benchmarks. In particular, I'm struggling with executing NNBench
source tree.
--Chris Nauroth
From: Caesar Samsi <caesarsa...@mac.com>
Date: Wednesday, July 22, 2015 at 11:20 AM
To: Chris Nauroth <cnaur...@hortonworks.com>, user@hadoop.apache.org
user
--set user::rwx,user:user1:---,group::rwx,other::rwx /test1
--Chris Nauroth
From: kumar r <kumarlear...@gmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Wednesday, July 15, 2015 at 12:29 AM
does not incur excessive locking and tie up handler
threads doing recursive operations on a large sub-tree of the file system.
--Chris Nauroth
From: kumar r <kumarlear...@gmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
org.apache.hadoop.security.HadoopKerberosName your Kerberos principal.
I hope this helps.
--Chris Nauroth
On 7/7/15, 2:33 AM, xeonmailinglist <xeonmailingl...@gmail.com> wrote:
Is it possible to get the hadoop working dir using the bash commands?
/HdfsPermissionsGuide.html
I hope this helps.
--Chris Nauroth
From: Pratik Gadiya
<pratik_gad...@persistent.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Tuesday, July 7, 2015 at 9:38 AM
To: user
, on the /mtn directory. I
recommend checking that directory to make sure you have both read and execute
permissions.
--Chris Nauroth
From: Ilker Ozkaymak <iozkay...@gmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
documentation. If you can confirm that this is
what you were looking for, then I'll enter a follow-up jira to make sure this
information gets into our documentation.
I hope this helps.
--Chris Nauroth
From: Ruslan Dautkhanov <dautkha...@gmail.com>
Reply-To: user
Hello Demai,
Apache Bigtop is a project that tests and publishes rpm and deb packages for
Hadoop ecosystem components. They'll have more details on their own site.
http://bigtop.apache.org/
Would this suit your needs?
--Chris Nauroth
From: Demai Ni <nid...@gmail.com>
://issues.apache.org/jira/browse/HADOOP-7139
--Chris Nauroth
From: rab ra <rab...@gmail.com>
Reply-To: "user@hadoop.apache.org" <user@hadoop.apache.org>
Date: Wednesday, May 27, 2015 at 2:19 AM
To: user