You need to pass the -diff
option (works only when -update is active). The newer snapshot name can
also be "." to indicate the current view.
On Sat, Dec 12, 2015 at 12:53 AM Nicolas Seritti wrote:
> Hello all,
>
> It looks like HDFS-8828 implemented a way to utilize the snapshotdiff
> output to feed distcp in some way.
If an Application Master requests N containers, each with 2000 MB of memory,
should all container allocations received by the AM always have a memory
allocation of at least 2000 MB?
We have a case where we requested N 2000 MB containers and received one
container with only 1024 MB. The rest were 204
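For reference, a request of that shape would normally go through the
AMRMClient API. This is a minimal sketch following the standard YARN client
API, with conf and n assumed to be in scope; it is not the poster's actual
code:

    import org.apache.hadoop.yarn.api.records.Priority;
    import org.apache.hadoop.yarn.api.records.Resource;
    import org.apache.hadoop.yarn.client.api.AMRMClient;
    import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

    // Ask the ResourceManager for n containers of 2000 MB / 1 vcore each.
    // The scheduler normalizes memory up to a multiple of
    // yarn.scheduler.minimum-allocation-mb, so a granted container should
    // never be smaller than the requested size.
    AMRMClient<ContainerRequest> amClient = AMRMClient.createAMRMClient();
    amClient.init(conf);  // conf: a YarnConfiguration, assumed in scope
    amClient.start();
    Resource capability = Resource.newInstance(2000, 1);
    Priority priority = Priority.newInstance(0);
    for (int i = 0; i < n; i++) {  // n: requested container count, assumed
      amClient.addContainerRequest(
          new ContainerRequest(capability, null, null, priority));
    }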
Thanks for the information. If you are already aware of the problem,
that is enough for me :)
Best
On 11 December 2015 at 18:49, Chris Nauroth wrote:
> Hello Samuel,
>
> Issue HADOOP-11935 tracks an improvement to re-implement this code path
> using either JNI to OS syscalls or the JDK 7 java.nio.file APIs (probably
> the latter).
> My question was, which spark command are you using, and since you
> already did the analysis, which function of Shell.java is this spark
> code using?
Sorry, I misunderstood you. It was something using RawLocalFileSystem
to load parquet files. The problem seemed to go away after I upgraded
to sp
Hello all,
It looks like HDFS-8828 implemented a way to utilize the snapshotdiff
output to feed distcp in some way. However, I haven't found any
documentation explaining how to do this. Can anyone provide any information
on whether this is possible and, if so, how to appropriately execute a
distcp with t
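In case it helps later readers, the end-to-end flow that HDFS-8828 enables
is driven entirely from the command line; the paths and snapshot names below
are made up for illustration:

    # Both source and target must be snapshottable, and the target must
    # already contain snapshot s1 from a previous synced copy.
    hdfs dfsadmin -allowSnapshot /src
    hdfs dfsadmin -allowSnapshot /dst
    hdfs dfs -createSnapshot /src s2
    # Copy only the changes between s1 and s2 instead of rescanning /src.
    hadoop distcp -update -diff s1 s2 hdfs://nn1/src hdfs://nn2/dst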
Hello Samuel,
Issue HADOOP-11935 tracks an improvement to re-implement this code path
using either JNI to OS syscalls or the JDK 7 java.nio.file APIs (probably
the latter).
https://issues.apache.org/jira/browse/HADOOP-11935
For right now, I don't see a viable workaround besides ensuring that th
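For what it's worth, the JDK 7 route would look roughly like this; a
minimal self-contained sketch using java.nio.file, not the actual
HADOOP-11935 patch:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.attribute.PosixFilePermission;
    import java.nio.file.attribute.PosixFilePermissions;
    import java.util.Set;

    public class PermissionProbe {
      public static void main(String[] args) throws IOException {
        Path p = Paths.get(args[0]);
        // Reads the rwx bits for owner/group/others via direct syscalls,
        // with no fork of /bin/ls and no output parsing.
        Set<PosixFilePermission> perms = Files.getPosixFilePermissions(p);
        System.out.println(PosixFilePermissions.toString(perms)); // e.g. "rwxr-xr--"
      }
    }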
My question was, which spark command are you using, and since you
already did the analysis, which function of Shell.java is this spark
code using?
Regards,
LLoyd
On 11 December 2015 at 15:43, Samuel wrote:
> I am not using hadoop-util directly; it is Spark code that uses it
> (i.e. not directly
I am not using hadoop-util directly; it is Spark code that uses it
(i.e. not directly under my control).
Regarding ls, for my particular use case it is fine if you use "ls"
instead of "/bin/ls".
However, I do agree that using ls to fetch file permissions is
incorrect, so a better solution (in ter
So what you ultimately need is a piece of Java code that lists the rwx
permissions for user, group, and others without using ls internally, is
that correct?
If "RawLocalFileSystem" is not HDFS, do you really need to use
hadoop-util for that?
Can you tell us more about your use case?
Regards,
LLoyd
> Using ls to figure out permissions is a bad design anyway, so I would
> not be surprised if this hardcoded path was reported as a bug.
Of course, I have no idea why it was implemented like this. I assume
it was written at a point in time when Java didn't provide the
needed APIs (?)
Implementing t
Using ls to figure out permissions is a bad design anyway, so I would
not be surprised if this hardcoded path was reported as a bug.
LLoyd
On 11 December 2015 at 09:19, Samuel wrote:
> Hi,
>
> I am experiencing some crashes when using spark over local files (mainly for
> testing). Some operations fail
Hi,
I am experiencing some crashes when using spark over local files (mainly
for testing). Some operations fail with
java.lang.RuntimeException: Error while running command to get file
permissions : java.io.IOException: Cannot run program "/bin/ls": error=2,
No such file or directory
at j
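For readers hitting the same trace: the failing code path forks an external
process and parses its output. A simplified sketch of that general
fork-and-parse approach (not Hadoop's actual Shell.java code) shows why a
missing /bin/ls is fatal:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class LsPermissions {
      // Fork "/bin/ls -ld" and parse the first token of its output,
      // e.g. "drwxr-xr-x". If the binary is not at that hardcoded path,
      // ProcessBuilder throws IOException ("Cannot run program
      // \"/bin/ls\": error=2"), exactly as in the report above.
      static String permissions(String path) throws IOException {
        Process proc = new ProcessBuilder("/bin/ls", "-ld", path).start();
        try (BufferedReader r = new BufferedReader(
            new InputStreamReader(proc.getInputStream()))) {
          String line = r.readLine();
          return line == null ? null : line.split("\\s+")[0];
        }
      }

      public static void main(String[] args) throws IOException {
        System.out.println(permissions(args[0]));
      }
    }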