Re: Happy Diwali to those forum members who celebrate this great festival

2016-10-30 Thread Shivansh Srivastava
Thanks a lot, and the same to you all as well ;)

On 31-Oct-2016 1:35 AM, "Mich Talebzadeh"  wrote:

> I can hear and see plenty of fireworks in this foggy London tonight  :)
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn:
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
> On 30 October 2016 at 18:58, Sreekanth Jella 
> wrote:
>
>> Thank you
>>
>> Thanks,
>> Sreekanth.
>> +1 (571) 376-0714
>>
>>
>


Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread kant kodali
Yes, I did run ps -ef | grep "app_name" and it is root.
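For what it's worth, `ulimit -a` in a fresh shell can differ from the limits the JVM actually runs with; on Linux, `/proc/<pid>/limits` shows the live values for a given process. A minimal sketch (the PID below is a stand-in for the Spark driver/executor PID):

```shell
# Stand-in PID: replace with the JVM's PID from `ps -ef | grep <your app>`
PID=$$
ps -o user= -p "$PID"                        # effective user the process runs as
grep -i 'max processes' "/proc/$PID/limits"  # the nproc limit of that process
```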



On Sun, Oct 30, 2016 at 8:00 PM, Chan Chor Pang 
wrote:

> sorry, the UID
>
> On 10/31/16 11:59 AM, Chan Chor Pang wrote:
>
> actually if the max user processes is not the problem, i have no idea
>
> but i still suspecting the user,
> as the user who run spark-submit is not necessary the pid for the JVM
> process
>
> can u make sure when you "ps -ef | grep {your app id} " the PID is root?
> On 10/31/16 11:21 AM, kant kodali wrote:
>
> The java process is run by the root and it has the same config
>
> sudo -i
>
> ulimit -a
>
> core file size  (blocks, -c) 0
> data seg size   (kbytes, -d) unlimited
> scheduling priority (-e) 0
> file size   (blocks, -f) unlimited
> pending signals (-i) 120242
> max locked memory   (kbytes, -l) 64
> max memory size (kbytes, -m) unlimited
> open files  (-n) 1024
> pipe size(512 bytes, -p) 8
> POSIX message queues (bytes, -q) 819200
> real-time priority  (-r) 0
> stack size  (kbytes, -s) 8192
> cpu time   (seconds, -t) unlimited
> max user processes  (-u) 120242
> virtual memory  (kbytes, -v) unlimited
> file locks  (-x) unlimited
>
>
>
> On Sun, Oct 30, 2016 at 7:01 PM, Chan Chor Pang 
> wrote:
>
>> I have the same Exception before and the problem fix after i change the
>> nproc conf.
>>
>> > max user processes  (-u) 120242
>> ↑this config does looks good.
>> are u sure the user who run ulimit -a is the same user who run the Java
>> process?
>> depend on how u submit the job and your setting, spark job may execute by
>> other user.
>>
>>
>> On 10/31/16 10:38 AM, kant kodali wrote:
>>
>> when I did this
>>
>> cat /proc/sys/kernel/pid_max
>>
>> I got 32768
>>
>> On Sun, Oct 30, 2016 at 6:36 PM, kant kodali  wrote:
>>
>>> I believe for ubuntu it is unlimited but I am not 100% sure (I just read
>>> somewhere online). I ran ulimit -a and this is what I get
>>>
>>> core file size  (blocks, -c) 0
>>> data seg size   (kbytes, -d) unlimited
>>> scheduling priority (-e) 0
>>> file size   (blocks, -f) unlimited
>>> pending signals (-i) 120242
>>> max locked memory   (kbytes, -l) 64
>>> max memory size (kbytes, -m) unlimited
>>> open files  (-n) 1024
>>> pipe size(512 bytes, -p) 8
>>> POSIX message queues (bytes, -q) 819200
>>> real-time priority  (-r) 0
>>> stack size  (kbytes, -s) 8192
>>> cpu time   (seconds, -t) unlimited
>>> max user processes  (-u) 120242
>>> virtual memory  (kbytes, -v) unlimited
>>> file locks  (-x) unlimited
>>>
>>> On Sun, Oct 30, 2016 at 6:15 PM, Chan Chor Pang 
>>> wrote:
>>>
 not sure for ubuntu, but i think you can just create the file by
 yourself
 the syntax will be the same as /etc/security/limits.conf

 nproc.conf not only limit java process but all process by the same user
 so even the jvm process does nothing,  if the corresponding user is
 busy in other way
 the jvm process will still not able to create new thread.

 btw the default limit for centos is 1024


 On 10/31/16 9:51 AM, kant kodali wrote:


 On Sun, Oct 30, 2016 at 5:22 PM, Chan Chor Pang  wrote:

> /etc/security/limits.d/90-nproc.conf
>

 Hi,

 I am using Ubuntu 16.04 LTS. I have this directory
 /etc/security/limits.d/ but I don't have any files underneath it. This
 error happens after running for 4 to 5 hours. I wonder if this is a GC
 issue? And I am thinking if I should use CMS. I have also posted this on SO
 since I havent got much response for this question
http://stackoverflow.com/questions/40315589/dag-scheduler-event-loop-java-lang-outofmemoryerror-unable-to-create-new-native


 Thanks,
 kant


 --
 ---**---*---*---*---
 株式会社INDETAIL
 ニアショア総合サービス事業本部
 ゲームサービス事業部
 陳 楚鵬
 E-mail :chin...@indetail.co.jp
 URL : http://www.indetail.co.jp

 【札幌本社/LABO/LABO2】
 〒060-0042
 札幌市中央区大通西9丁目3番地33
 キタコーセンタービルディング
 (札幌本社/LABO2:2階、LABO:9階)
 TEL:011-206-9235 FAX:011-206-9236

 【東京支店】
 〒108-0014
 東京都港区芝5丁目29番20号 クロスオフィス三田
 TEL:03-6809-6502 FAX:03-6809-6504

 【名古屋サテライト】
 〒460-0002
 愛知県名古屋市中区丸の内3丁目17番24号 NAYUTA BLD
 TEL:052-971-0086




Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread Chan Chor Pang

sorry, the UID




Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread Chan Chor Pang

Actually, if the max user processes limit is not the problem, I have no idea.

But I still suspect the user, as the user who runs spark-submit is not
necessarily the owner of the JVM process.

Can you make sure that when you run "ps -ef | grep {your app id}" the
owner (UID) of the process is root?


Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread kant kodali
The Java process is run by root and it has the same config:

sudo -i

ulimit -a

core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 120242
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 120242
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited





Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread Chan Chor Pang
I had the same exception before, and the problem was fixed after I changed
the nproc conf.

> max user processes  (-u) 120242
↑ this config does look good.
Are you sure the user who ran ulimit -a is the same user who runs the Java
process? Depending on how you submit the job and your settings, the Spark
job may execute as another user.





Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread kant kodali
When I did this:

cat /proc/sys/kernel/pid_max

I got 32768
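Note that kernel.pid_max caps process and thread IDs system-wide (each thread gets its own ID), so the total number of threads across all users cannot exceed it even when the per-user nproc limit is high. A hedged sketch of checking and, if needed, raising it; the values shown are illustrative, not recommendations:

```shell
cat /proc/sys/kernel/pid_max      # system-wide cap on PIDs/TIDs (often 32768)
cat /proc/sys/kernel/threads-max  # separate kernel cap on total threads
# To raise pid_max for the running kernel (requires root):
#   sysctl -w kernel.pid_max=131072
# To persist it across reboots, add to /etc/sysctl.conf:
#   kernel.pid_max = 131072
```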



Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread kant kodali
I believe for Ubuntu it is unlimited, but I am not 100% sure (I just read
that somewhere online). I ran ulimit -a and this is what I get:

core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 120242
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 1024
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 120242
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited



Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread Chan Chor Pang

Not sure for Ubuntu, but I think you can just create the file yourself;
the syntax is the same as /etc/security/limits.conf.

nproc limits not only the Java process but all processes owned by the same
user, so even if the JVM process itself does nothing, the JVM will still be
unable to create new threads if that user is busy in some other way.

BTW, the default limit for CentOS is 1024.
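For illustration, a hypothetical /etc/security/limits.d/90-nproc.conf using the limits.conf syntax (domain, type, item, value); the numbers here are examples only, not recommendations:

```
# <domain>  <type>  <item>  <value>
*           soft    nproc   65536
root        soft    nproc   unlimited
```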




Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread kant kodali
On Sun, Oct 30, 2016 at 5:22 PM, Chan Chor Pang 
wrote:

> /etc/security/limits.d/90-nproc.conf
>

Hi,

I am using Ubuntu 16.04 LTS. I have the directory /etc/security/limits.d/
but I don't have any files underneath it. This error happens after running
for 4 to 5 hours. I wonder if this is a GC issue, and whether I should use
CMS. I have also posted this on SO since I haven't got much response to
this question:
http://stackoverflow.com/questions/40315589/dag-scheduler-event-loop-java-lang-outofmemoryerror-unable-to-create-new-native


Thanks,
kant


Re: java.lang.OutOfMemoryError: unable to create new native thread

2016-10-30 Thread Chan Chor Pang
You may want to check the process limit of the user who is responsible for
starting the JVM.

/etc/security/limits.d/90-nproc.conf


On 10/29/16 4:47 AM, kant kodali wrote:
"dag-scheduler-event-loop" java.lang.OutOfMemoryError: unable to create new native thread

at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at scala.concurrent.forkjoin.ForkJoinPool.tryAddWorker(ForkJoinPool.java:1672)
at scala.concurrent.forkjoin.ForkJoinPool.signalWork(ForkJoinPool.java:1966)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.push(ForkJoinPool.java:1072)
at scala.concurrent.forkjoin.ForkJoinTask.fork(ForkJoinTask.java:654)
at scala.collection.parallel.ForkJoinTasks$WrappedTask$

This is the error produced by the Spark driver program, which runs in
client mode by default, so some people say to just increase the heap size
by passing the --driver-memory 3g flag. However, the message "unable to
create new native thread" really says that the JVM asked the OS to create
a new thread and the OS couldn't allocate one anymore. The number of
threads a JVM can create by requesting the OS is platform dependent, but
it is typically about 32K threads on a 64-bit JVM. So I am wondering why
Spark is even creating so many threads, and how do I control this number?
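To see whether the driver really is accumulating threads before the error hits, one can watch the JVM's live thread count over time; a minimal sketch (the PID is a stand-in, here pointed at the current shell):

```shell
PID=$$                          # stand-in; use the driver JVM's PID instead
ls "/proc/$PID/task" | wc -l    # current number of threads (tasks) in the process
# For a JVM specifically, a thread dump also names each thread:
#   jstack "$PID" | grep -c 'java.lang.Thread.State'
```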




Re: Happy Diwali to those forum members who celebrate this great festival

2016-10-30 Thread Mich Talebzadeh
I can hear and see plenty of fireworks in this foggy London tonight  :)

Dr Mich Talebzadeh








Performance bug in UDAF?

2016-10-30 Thread Spark User
Hi All,

I have a UDAF that seems to perform poorly when its input is skewed. I have
been debugging the UDAF implementation but I don't see any code that is
causing the performance to degrade. More details on the data and the
experiments I have run.

DataSet: assume 3 columns, Column1 being the key.

Column1   Column2   Column3
a         1         x
a         2         x
a         3         x
a         4         x
a         5         x
a         6         z
... (5 million rows for a)
a         100       y
b         9         y
b         9         y
b         10        y
... (3 million rows for b)
... more rows; total rows is 100 million


a has 5 million rows. Column2 for a has 1 million unique values.
b has 3 million rows. Column2 for b has 80 unique values.

Column3 has just hundreds of unique values, not on the order of millions,
for both a and b.

In total there are 100 million rows as the input to the UDAF aggregation,
and the skew in the data is in the keys a and b. All other rows can be
ignored and do not cause any performance issues or hot partitions.

The code does a dataSet.groupBy("Column1").agg(udaf("Column2", "Column3")).

I commented out the UDAF implementation of the update and merge methods, so
essentially the UDAF was doing nothing.

With this code (empty update and merge in the UDAF), the processing time is
16 minutes per micro-batch, each micro-batch containing 100 million rows,
with 5 million rows for a and 1 million unique values of Column2 for a.

But when I pass empty values for Column2, with nothing else changed
(effectively reducing the 1 million unique values for Column2 to just one
unique, empty value), the batch processing time goes down to 4 minutes.

So I am trying to understand why there is such a big performance
difference. What in the UDAF causes the processing time to increase by
orders of magnitude when there is skew in the data, as observed above?

Any insight from spark developers, contributors, or anyone else who has a
deeper understanding of UDAF would be helpful.

Thanks,
Bharath


Re: Happy Diwali to those forum members who celebrate this great festival

2016-10-30 Thread Lefty Leverenz
+1

-- Lefty


On Sun, Oct 30, 2016 at 1:01 PM, Ashok Kumar  wrote:

> You are very kind Sir
>
>
> On Sunday, 30 October 2016, 16:42, Devopam Mittra 
> wrote:
>
>
> +1
> Thanks and regards
> Devopam
>


Re: Happy Diwali to those forum members who celebrate this great festival

2016-10-30 Thread Ashok Kumar
You are very kind Sir 

On Sunday, 30 October 2016, 16:42, Devopam Mittra  wrote:
 

 +1
Thanks and regards
Devopam
On 30 Oct 2016 9:37 pm, "Mich Talebzadeh"  wrote:

Enjoy the festive season.
Regards,
Dr Mich Talebzadeh

Re: Happy Diwali to those forum members who celebrate this great festival

2016-10-30 Thread Sivakumaran S
Thank you Dr Mich :)

Regards

Sivakumaran S

> On 30-Oct-2016, at 4:07 PM, Mich Talebzadeh  wrote:
> 
> Enjoy the festive season.
> 
> Regards,
> 
> Dr Mich Talebzadeh
>



Happy Diwali to those forum members who celebrate this great festival

2016-10-30 Thread Mich Talebzadeh
Enjoy the festive season.

Regards,

Dr Mich Talebzadeh





Re: Spark 2.0 with Hadoop 3.0?

2016-10-30 Thread adam kramer
The version problems are related to using hadoop-aws-2.7.3 alongside
aws-sdk-1.7.4 in hadoop-2.7.3, where DynamoDB functionality is limited
(it may not even operate with deployed versions of the service). I've
stripped usage of DynamoDB out of the driver program in the meantime
(using it in a calling program which reads standard output instead).

I believe anything in the 1.10.x SDK should be fine, including the 1.10.6
included in the 3.0.0-alpha1 release (we were using 1.10.31 elsewhere), so I
don't think the 10.10+ patch is necessary if we try Hadoop 3. I'll let
you know if we end up patching and testing anything from trunk to get it
working.
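In case it helps anyone hitting the same clash on Hadoop 2.7.x: one way to keep a newer DynamoDB client on the classpath is to exclude the transitive aws-java-sdk 1.7.4 from hadoop-aws and pin only the SDK modules you need. A Maven sketch (versions are illustrative; note that forcing a newer SDK can break S3A, which was compiled against the older one, so test carefully):

```xml
<!-- Sketch: drop the old monolithic aws-java-sdk pulled in by
     hadoop-aws, then pin the DynamoDB module explicitly.
     Versions shown are illustrative, not a tested combination. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-aws</artifactId>
  <version>2.7.3</version>
  <exclusions>
    <exclusion>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk-dynamodb</artifactId>
  <version>1.10.31</version>
</dependency>
```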






On Sat, Oct 29, 2016 at 6:08 AM, Steve Loughran  wrote:
>
> On 27 Oct 2016, at 23:04, adam kramer  wrote:
>
> Is the version of Spark built for Hadoop 2.7 and later only for 2.x
> releases?
>
> Is there any reason why Hadoop 3.0 is a non-starter for use with Spark
> 2.0? The version of aws-sdk in 3.0 actually works for DynamoDB which
> would resolve our driver dependency issues.
>
>
> what version problems are you having there?
>
>
> There's a patch to move to AWS SDK 10.10, but that has a jackson 2.6.6+
> dependency; that being something I'd like to do in Hadoop branch-2 as well,
> as it is Time to Move On (HADOOP-12705). FWIW all jackson 1.9
> dependencies have been ripped out, leaving only that 2.x version problem.
>
> https://issues.apache.org/jira/browse/HADOOP-13050
>
> The HADOOP-13345 s3guard work will pull in a (provided) dependency on
> dynamodb; looks like the HADOOP-13449 patch moves to SDK 1.11.0.
>
> I think we are likely to backport that to branch-2 as well, though it'd help
> the dev & test there if you built and tested your code against trunk early
> —not least to find any changes in that transitive dependency set.
>
>
> Thanks,
> Adam
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>
>

-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org