YARN-2026 has fixed the issue.
On Thu, Feb 25, 2016 at 4:17 AM, Prabhu Joseph
wrote:
> You are right, Hamel. It should get 10 TB / 2. In hadoop-2.7.0 it works
> fine, but in hadoop-2.5.1 it gets only 10 TB / 230. The same
> configuration is used in both versions.
the security update has been released, and it's a doozy!
https://wiki.jenkins-ci.org/display/SECURITY/Security+Advisory+2016-02-24
I will be putting Jenkins into quiet mode at ~7am PST tomorrow morning
for the upgrade, and expect to be back up and building by 9am PST at
the latest.
Have you tried using scp?
scp file i...@people.apache.org
Thanks
On Wed, Feb 24, 2016 at 5:04 PM, Michael Armbrust
wrote:
> Unfortunately I don't think that's sufficient, as they don't seem to support
> sftp in the same way they did before. We'll still need to update
Unfortunately I don't think that's sufficient, as they don't seem to support
sftp in the same way they did before. We'll still need to update our
release scripts.
On Wed, Feb 24, 2016 at 2:09 AM, Yin Yang wrote:
> Looks like access to people.apache.org has been restored.
>
>
Hi,
Will this be resolved in any forthcoming release?
https://issues.apache.org/jira/browse/SPARK-10625
Rgds,
Dushyant.
You are right, Hamel. It should get 10 TB / 2. In hadoop-2.7.0 it works
fine, but in hadoop-2.5.1 it gets only 10 TB / 230. The same
configuration is used in both versions.
So I think a JIRA fix after hadoop-2.5.1 could have resolved the issue.
On Thu, Feb 25, 2016 at 1:28 AM, Hamel Kothari
Hi Spark devs,
I sent an email some time ago about my problem, where I want to merge a
large number of small files with Spark. Currently I am using Hive with
CombineHiveInputFormat, and I can control the size of the output files with
the max split size parameter (which is used for
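The size-based grouping that CombineHiveInputFormat performs can be sketched as follows. This is a simplified model, not the actual Hadoop code: the real CombineFileInputFormat also groups files by node and rack locality before packing them into splits.

```python
def combine_splits(file_sizes, max_split_size):
    """Greedily pack small files into combined splits no larger than
    max_split_size (simplified model of CombineFileInputFormat's
    size-based packing; locality constraints are ignored here)."""
    splits, current, current_size = [], [], 0
    for size in file_sizes:
        # Start a new split once adding this file would exceed the cap.
        if current and current_size + size > max_split_size:
            splits.append(current)
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        splits.append(current)
    return splits

# 10 files of 64 MB each, combined under a 256 MB max split size:
splits = combine_splits([64] * 10, 256)
print(len(splits))            # 3 splits
print([sum(s) for s in splits])  # [256, 256, 128]
```

Raising the max split size produces fewer, larger output splits, which is why that one parameter controls the output file sizes in the Hive job.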
Just want to send a reminder in case people don't know about it. If you are
working on (or with) Spark, consider submitting your work to Spark
Summit, coming up in June in San Francisco.
https://spark-summit.org/2016/call-for-presentations/
Cheers.
Thank you for the suggestions. We looked at the live Spark UI and YARN app
logs and found what we think is the issue: in Spark 1.5.2, the FPGrowth
algorithm doesn't require you to specify the number of partitions in your
input data. Without specifying, FPGrowth puts all of its data into one
The instantaneous fair share is what Queue B should get according to the
code (and my experience). Assuming your queues are all equal it would be
10TB/2.
I can't help much more unless I can see your config files and ideally also
the YARN Scheduler UI to get an idea of what your queues/actual
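The distinction between the two kinds of fair share can be sketched in a simplified model (this is illustrative arithmetic, not the actual FairScheduler code): the steady-state share divides capacity over every configured queue, while the instantaneous share divides it only over queues that currently have running or pending apps.

```python
def steady_fair_share(total, queue_weights):
    """Steady-state fair share: capacity split over every configured
    queue in proportion to its weight."""
    w = sum(queue_weights.values())
    return {q: total * wt / w for q, wt in queue_weights.items()}

def instantaneous_fair_share(total, queue_weights, active_queues):
    """Instantaneous fair share: only queues with running or pending
    apps participate; idle queues count as 0 for preemption purposes."""
    active = {q: wt for q, wt in queue_weights.items() if q in active_queues}
    w = sum(active.values())
    return {q: (total * wt / w if q in active else 0)
            for q, wt in queue_weights.items()}

# 10 TB cluster, two equal-weight queues, both active:
print(instantaneous_fair_share(10, {"A": 1, "B": 1}, {"A", "B"}))
# {'A': 5.0, 'B': 5.0}  -- the 10 TB / 2 expected in the thread
```

With a third configured but idle queue C, the steady-state share of A drops to 10/3 TB while its instantaneous share stays at 5 TB, which is the quantity preemption targets.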
Hi Hamel,
Thanks for looking into the issue. What I am not understanding is:
after preemption, what share does the second queue get if the first
queue holds the entire cluster's resources without releasing them, the
instantaneous fair share or the steady-state fair share?
Queue A and B are
The error is right there. Just read the output more carefully.
On Wed, Feb 24, 2016 at 11:37 AM, Minudika Malshan
wrote:
> [INFO] --- maven-enforcer-plugin:1.4.1:enforce (enforce-versions) @
> spark-parent_2.11 ---
> [WARNING] Rule 0:
Here is the full stack trace.
@Yin: yeah, it seems like a problem with the Maven version. I am going to
update Maven.
@Marcelo: Yes, I couldn't decide what was wrong at first :)
Thanks for your help!
[INFO] Scanning for projects...
[INFO]
Well, did you do what the message instructed and look above the
message you copied for more specific messages about why the
build failed?
On Wed, Feb 24, 2016 at 11:28 AM, Minudika Malshan
wrote:
> Hi,
>
> I am trying to build from spark source code which was
Hi,
I am trying to build from spark source code which was cloned from
https://github.com/apache/spark.git.
But it fails with the following error.
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-enforcer-plugin:1.4.1:enforce
(enforce-versions) on project spark-parent_2.11: Some Enforcer
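For context, the enforce-versions execution runs the rules declared in Spark's parent pom, which typically check the local Maven and JDK versions. A maven-enforcer-plugin configuration of this kind looks roughly like the following sketch (the version numbers here are illustrative assumptions, not Spark's actual minimums):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>1.4.1</version>
  <executions>
    <execution>
      <id>enforce-versions</id>
      <goals><goal>enforce</goal></goals>
      <configuration>
        <rules>
          <!-- The build fails when the local Maven or JDK is older
               than the declared minimum. -->
          <requireMavenVersion>
            <version>3.3.3</version>  <!-- illustrative minimum -->
          </requireMavenVersion>
          <requireJavaVersion>
            <version>1.7</version>    <!-- illustrative minimum -->
          </requireJavaVersion>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Running `mvn -version` shows the local Maven and Java versions to compare against whatever minimums the pom actually declares.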
If all queues are identical, this behavior should not be happening.
Preemption as designed in the fair scheduler (IIRC) takes place based on the
instantaneous fair share, not the steady-state fair share. The fair
scheduler docs
Looks like access to people.apache.org has been restored.
FYI
On Mon, Feb 22, 2016 at 10:07 PM, Luciano Resende
wrote:
>
>
> On Mon, Feb 22, 2016 at 9:08 PM, Michael Armbrust
> wrote:
>
>> An update: people.apache.org has been shut down so the