[ https://issues.apache.org/jira/browse/IGNITE-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15722224#comment-15722224 ]

ASF GitHub Bot commented on IGNITE-4355:
----------------------------------------

GitHub user devozerov opened a pull request:

    https://github.com/apache/ignite/pull/1313

    IGNITE-4355

    

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/gridgain/apache-ignite ignite-4355-1

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/ignite/pull/1313.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1313
    
----
commit abe6a6b679bf728648c30c30de745b2c3e446f6f
Author: devozerov <voze...@gridgain.com>
Date:   2016-12-05T13:05:32Z

    Done.

----


> Hadoop: Eliminate map threads pauses during startup
> ---------------------------------------------------
>
>                 Key: IGNITE-4355
>                 URL: https://issues.apache.org/jira/browse/IGNITE-4355
>             Project: Ignite
>          Issue Type: Sub-task
>          Components: hadoop
>            Reporter: Ivan Veselovsky
>            Assignee: Vladimir Ozerov
>             Fix For: 2.0
>
>
> Pauses in all map threads but one are observed during startup. They are 
> caused by waiting on future.get() in 
> HadoopV2Job.getTaskContext(HadoopTaskInfo):
> {code}
>  at sun.misc.Unsafe.park(boolean, long)
>  at java.util.concurrent.locks.LockSupport.park(Object)
>  at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt()
>  at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(int)
>  at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(int)
>  at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(boolean)
>  at org.apache.ignite.internal.util.future.GridFutureAdapter.get()
>  at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Job.getTaskContext(HadoopTaskInfo)
>  at org.apache.ignite.internal.processors.hadoop.shuffle.HadoopShuffleJob.<init>(Object, IgniteLogger, HadoopJob, GridUnsafeMemory, int, int[], int)
>  at org.apache.ignite.internal.processors.hadoop.shuffle.HadoopShuffle.newJob(HadoopJobId)
>  at org.apache.ignite.internal.processors.hadoop.shuffle.HadoopShuffle.job(HadoopJobId)
>  at org.apache.ignite.internal.processors.hadoop.shuffle.HadoopShuffle.output(HadoopTaskContext)
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopEmbeddedTaskExecutor$1.createOutput(HadoopTaskContext)
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.createOutputInternal(HadoopTaskContext)
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(HadoopPerformanceCounter)
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0()
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call()
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call()
>  at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.runAsJobOwner(Callable)
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call()
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call()
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body()
>  at org.apache.ignite.internal.util.worker.GridWorker.run()
>  at java.lang.Thread.run()
> {code}
> while the working thread initializes the context:
> {code}
> Java Monitor Wait
>  at java.lang.Object.wait(long)
>  at java.lang.Thread.join(long)
>  at java.lang.Thread.join()
>  at org.apache.hadoop.util.Shell.joinThread(Thread)
>  at org.apache.hadoop.util.Shell.runCommand()
>  at org.apache.hadoop.util.Shell.run()
>  at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute()
>  at org.apache.hadoop.util.Shell.isSetsidSupported()
>  at org.apache.hadoop.util.Shell.<clinit>()
>  at org.apache.hadoop.util.StringUtils.<clinit>()
>  at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(Configuration)
>  at org.apache.hadoop.security.UserGroupInformation.initialize(Configuration, boolean)
>  at org.apache.hadoop.security.UserGroupInformation.ensureInitialized()
>  at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(Subject)
>  at org.apache.hadoop.security.UserGroupInformation.getLoginUser()
>  at org.apache.hadoop.security.UserGroupInformation.getCurrentUser()
>  at org.apache.hadoop.mapreduce.task.JobContextImpl.<init>(Configuration, JobID)
>  at org.apache.hadoop.mapred.JobContextImpl.<init>(JobConf, JobID, Progressable)
>  at org.apache.hadoop.mapred.JobContextImpl.<init>(JobConf, JobID)
>  at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.<init>(HadoopTaskInfo, HadoopJob, HadoopJobId, UUID, DataInput)
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Constructor, Object[])
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance(Object[])
>  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Object[])
>  at java.lang.reflect.Constructor.newInstance(Object[])
>  at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Job.getTaskContext(HadoopTaskInfo)
>  at org.apache.ignite.internal.processors.hadoop.shuffle.HadoopShuffleJob.<init>(Object, IgniteLogger, HadoopJob, GridUnsafeMemory, int, int[], int)
>  at org.apache.ignite.internal.processors.hadoop.shuffle.HadoopShuffle.newJob(HadoopJobId)
>  at org.apache.ignite.internal.processors.hadoop.shuffle.HadoopShuffle.job(HadoopJobId)
>  at org.apache.ignite.internal.processors.hadoop.shuffle.HadoopShuffle.output(HadoopTaskContext)
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopEmbeddedTaskExecutor$1.createOutput(HadoopTaskContext)
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.createOutputInternal(HadoopTaskContext)
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(HadoopPerformanceCounter)
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0()
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call()
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call()
>  at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2TaskContext.runAsJobOwner(Callable)
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call()
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call()
>  at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body()
>  at org.apache.ignite.internal.util.worker.GridWorker.run()
>  at java.lang.Thread.run()
> {code}
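> For illustration only, a minimal sketch of the blocking pattern behind these traces (not the actual Ignite code; TaskContext and createContextSlowly() are hypothetical placeholders): the first thread to request a task context creates it, while every other map thread parks on Future.get() until it is ready.
> {code}
> import java.util.concurrent.CompletableFuture;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ConcurrentMap;
>
> class TaskContextCache {
>     private final ConcurrentMap<Integer, CompletableFuture<TaskContext>> ctxs =
>         new ConcurrentHashMap<>();
>
>     TaskContext getTaskContext(int taskId) throws Exception {
>         CompletableFuture<TaskContext> fut = ctxs.get(taskId);
>
>         if (fut == null) {
>             CompletableFuture<TaskContext> newFut = new CompletableFuture<>();
>
>             fut = ctxs.putIfAbsent(taskId, newFut);
>
>             if (fut == null) {
>                 // Only the winning thread pays the initialization cost
>                 // (class loading, XML config parsing, Hadoop static init).
>                 newFut.complete(createContextSlowly(taskId));
>
>                 fut = newFut;
>             }
>         }
>
>         // All other map threads park here, matching the first trace above.
>         return fut.get();
>     }
>
>     private TaskContext createContextSlowly(int taskId) {
>         return new TaskContext(taskId);
>     }
> }
>
> class TaskContext {
>     final int taskId;
>
>     TaskContext(int taskId) {
>         this.taskId = taskId;
>     }
> }
> {code}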
> An experimental solution can be found in branch 
> "ignite-4270-sync-direct-gzip-avoid-setup-pauses". Two changes are made there:
> 1) HadoopShuffle.job() no longer performs "speculative execution";
> 2) task context creation is parallelized in HadoopShuffleJob.<init>() (see the sketch below).
> JFR profiles show that there are no pauses any more; however, a lot of work is 
> still done in each thread: loading classes, parsing XML configs, and 
> initializing Hadoop statics (in the case of UserGroupInformation.getCurrentUser() 
> this even involves executing a shell script).
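> A rough sketch of change 2), assuming per-reducer task contexts can be created independently; the names (ParallelContextInit, TaskCtx, createTaskCtx(), reducerIds) are illustrative and not taken from the patch:
> {code}
> import java.util.ArrayList;
> import java.util.List;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.Future;
>
> class ParallelContextInit {
>     /** Hypothetical task context placeholder. */
>     static class TaskCtx {
>         final int reducerId;
>
>         TaskCtx(int reducerId) {
>             this.reducerId = reducerId;
>         }
>     }
>
>     static List<TaskCtx> createAll(int[] reducerIds) throws Exception {
>         ExecutorService pool =
>             Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
>
>         try {
>             // Submit all context creations at once so that class loading, XML
>             // parsing and Hadoop static initialization overlap across threads.
>             List<Future<TaskCtx>> futs = new ArrayList<>();
>
>             for (int id : reducerIds)
>                 futs.add(pool.submit(() -> createTaskCtx(id)));
>
>             List<TaskCtx> ctxs = new ArrayList<>();
>
>             for (Future<TaskCtx> fut : futs)
>                 ctxs.add(fut.get());
>
>             return ctxs;
>         }
>         finally {
>             pool.shutdown();
>         }
>     }
>
>     private static TaskCtx createTaskCtx(int reducerId) {
>         // The expensive per-context initialization would happen here.
>         return new TaskCtx(reducerId);
>     }
> }
> {code}
> A bounded pool is used here so the extra initialization threads do not compete with the map task threads themselves.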



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
