[jira] [Updated] (SPARK-10712) JVM crashes with spark.sql.tungsten.enabled = true

2015-09-19 Thread Mauro Pirrone (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-10712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mauro Pirrone updated SPARK-10712:
--
Description: 
With Tungsten enabled, I get the following error when executing a query/job 
with a few joins. With Tungsten disabled, the error does not appear. Note also 
that Tungsten works for me in other cases.
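
For context, the Tungsten engine can be toggled at runtime in Spark 1.5. A
minimal sketch, assuming an already-constructed `SQLContext` named
`sqlContext` (the name is illustrative):

```scala
// Hedged sketch: toggling the Tungsten execution engine in Spark 1.5.
// Assumes an existing SQLContext named `sqlContext`.
sqlContext.setConf("spark.sql.tungsten.enabled", "false")  // work around the crash
// ... run the join-heavy query ...
sqlContext.setConf("spark.sql.tungsten.enabled", "true")   // restore the default
```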

# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x7ffadaf59200, pid=7598, tid=140710015645440
#
# JRE version: Java(TM) SE Runtime Environment (8.0_45-b14) (build 1.8.0_45-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.45-b02 mixed mode linux-amd64 
compressed oops)
# Problematic frame:
# V  [libjvm.so+0x7eb200]
#
# Core dump written. Default location: //core or core.7598 (max size 100 
kB). To ensure a full core dump, try "ulimit -c unlimited" before starting Java 
again
#
# An error report file with more information is saved as:
# //hs_err_pid7598.log
Compiled method (nm)   44403 10436 n 0   sun.misc.Unsafe::copyMemory 
(native)
 total in heap  [0x7ffac6b49290,0x7ffac6b495f8] = 872
 relocation [0x7ffac6b493b8,0x7ffac6b49400] = 72
 main code  [0x7ffac6b49400,0x7ffac6b495f8] = 504
Compiled method (nm)   44403 10436 n 0   sun.misc.Unsafe::copyMemory 
(native)
 total in heap  [0x7ffac6b49290,0x7ffac6b495f8] = 872
 relocation [0x7ffac6b493b8,0x7ffac6b49400] = 72
 main code  [0x7ffac6b49400,0x7ffac6b495f8] = 504
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#





---  T H R E A D  ---

Current thread (0x7ff7902e7800):  JavaThread "broadcast-hash-join-1" daemon 
[_thread_in_vm, id=16548, stack(0x7ff66bd98000,0x7ff66be99000)]

siginfo: si_signo: 11 (SIGSEGV), si_code: 2 (SEGV_ACCERR), si_addr: 
0x00069f572b10

Registers:
RAX=0x00069f672b08, RBX=0x7ff7902e7800, RCX=0x000394132140, 
RDX=0xfffe0004
RSP=0x7ff66be97048, RBP=0x7ff66be970a0, RSI=0x000394032148, 
RDI=0x00069f572b10
R8 =0x7ff66be970d0, R9 =0x0028, R10=0x7ff79cc0e1e7, 
R11=0x7ff79cc0e198
R12=0x7ff66be970c0, R13=0x7ff66be970d0, R14=0x0028, 
R15=0x30323048
RIP=0x7ff7b0dae200, EFLAGS=0x00010282, CSGSFS=0xe033, 
ERR=0x0004
  TRAPNO=0x000e

Top of Stack: (sp=0x7ff66be97048)
0x7ff66be97048:   7ff7b1042b1a 7ff7902e7800
0x7ff66be97058:   7ff7 7ff7902e7800
0x7ff66be97068:   7ff7902e7800 7ff7ad2846a0
0x7ff66be97078:   7ff7897048d8 
0x7ff66be97088:   7ff66be97110 7ff66be971f0
0x7ff66be97098:   7ff7902e7800 7ff66be970f0
0x7ff66be970a8:   7ff79cc0e261 0010
0x7ff66be970b8:   000390c04048 00066f24fac8
0x7ff66be970c8:   7ff7902e7800 000394032120
0x7ff66be970d8:   7ff7902e7800 7ff66f971af0
0x7ff66be970e8:   7ff7902e7800 7ff66be97198
0x7ff66be970f8:   7ff79c9d4c4d 7ff66a454b10
0x7ff66be97108:   7ff79c9d4c4d 0010
0x7ff66be97118:   7ff7902e5a90 0028
0x7ff66be97128:   7ff79c9d4760 000394032120
0x7ff66be97138:   30323048 7ff66be97160
0x7ff66be97148:   00066f24fac8 000390c04048
0x7ff66be97158:   7ff66be97158 7ff66f978eeb
0x7ff66be97168:   7ff66be971f0 7ff66f9791c8
0x7ff66be97178:   7ff668e90c60 7ff66f978f60
0x7ff66be97188:   7ff66be97110 7ff66be971b8
0x7ff66be97198:   7ff66be97238 7ff79c9d4c4d
0x7ff66be971a8:   0010 
0x7ff66be971b8:   38363130 38363130
0x7ff66be971c8:   0028 7ff66f973388
0x7ff66be971d8:   000394032120 30323048
0x7ff66be971e8:   000665823080 00066f24fac8
0x7ff66be971f8:   7ff66be971f8 7ff66f973357
0x7ff66be97208:   7ff66be97260 7ff66f976fe0
0x7ff66be97218:    7ff66f973388
0x7ff66be97228:   7ff66be971b8 7ff66be97248
0x7ff66be97238:   7ff66be972a8 7ff79c9d4c4d 

Instructions: (pc=0x7ff7b0dae200)
0x7ff7b0dae1e0:   00 00 00 48 8d 4c d6 f8 48 f7 da eb 39 48 8b 74
0x7ff7b0dae1f0:   d0 08 48 89 74 d1 08 48 83 c2 01 75 f0 c3 66 90
0x7ff7b0dae200:   48 8b 74 d0 e8 48 89 74 d1 e8 48 8b 74 d0 f0 48
0x7ff7b0dae210:   89 74 d1 f0 48 8b 74 d0 f8 48 89 74 d1 f8 48 8b 

Register to memory mapping:

RAX=0x00069f672b08 is an unallocated location in the heap
RBX=0x7ff7902e7800 is a thread
RCX=0x000394132140 is pointing into object: 0x000394032120
[B 
 - klass: {type array byte}
 - length: 1886151312
RDX=0xfffe0004 is an unknown value
RSP=0x7ff66be97048 is pointing 

[jira] [Commented] (SPARK-10712) JVM crashes with spark.sql.tungsten.enabled = true

2015-09-19 Thread Mauro Pirrone (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-10712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877223#comment-14877223
 ] 

Mauro Pirrone commented on SPARK-10712:
---

A workaround for this problem is to increase 
spark.sql.autoBroadcastJoinThreshold, or to set it to -1. 
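
The workaround above can be sketched as follows (a hedged example, assuming an
existing `sqlContext`; the threshold value is in bytes, and the 100 MB figure
is purely illustrative):

```scala
// Option 1: disable automatic broadcast joins entirely. A value of -1 turns
// the optimization off, so the broadcast-hash-join path is never taken.
sqlContext.setConf("spark.sql.autoBroadcastJoinThreshold", "-1")

// Option 2: raise the threshold instead (bytes; 100 MB shown as an example).
sqlContext.setConf("spark.sql.autoBroadcastJoinThreshold",
  (100L * 1024 * 1024).toString)
```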

> JVM crashes with spark.sql.tungsten.enabled = true
> --
>
> Key: SPARK-10712
> URL: https://issues.apache.org/jira/browse/SPARK-10712
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.5.0
> Environment: 1 node - Linux, 64GB ram, 8 core
>Reporter: Mauro Pirrone
>Priority: Critical
>
> When turning on tungsten, I get the following error when executing a 
> query/job with a few joins. When tungsten is turned off, the error does not 
> appear. Also note that tungsten works for me in other cases.

[jira] [Comment Edited] (SPARK-10712) JVM crashes with spark.sql.tungsten.enabled = true

2015-09-19 Thread Mauro Pirrone (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-10712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877223#comment-14877223
 ] 

Mauro Pirrone edited comment on SPARK-10712 at 9/19/15 5:40 PM:


A workaround to this problem is to increase 
spark.sql.autoBroadcastJoinThreshold or set the value to -1. 


was (Author: mauro.pirrone):
A workaround to this problem is set increase 
spark.sql.autoBroadcastJoinThreshold or set the value to -1. 

> JVM crashes with spark.sql.tungsten.enabled = true
> --
>
> Key: SPARK-10712
> URL: https://issues.apache.org/jira/browse/SPARK-10712
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.5.0
> Environment: 1 node - Linux, 64GB ram, 8 core
>Reporter: Mauro Pirrone
>Priority: Critical
>
> When turning on tungsten, I get the following error when executing a 
> query/job with a few joins. When tungsten is turned off, the error does not 
> appear. Also note that tungsten works for me in other cases.

[jira] [Created] (SPARK-10712) JVM crashes with spark.sql.tungsten.enabled = true

2015-09-18 Thread Mauro Pirrone (JIRA)
Mauro Pirrone created SPARK-10712:
-

 Summary: JVM crashes with spark.sql.tungsten.enabled = true
 Key: SPARK-10712
 URL: https://issues.apache.org/jira/browse/SPARK-10712
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.5.0
 Environment: 1 node - Linux, 64GB ram, 8 core
Reporter: Mauro Pirrone
Priority: Blocker


With Tungsten enabled, I get the following error when executing a query/job 
with a few joins. With Tungsten disabled, the error does not appear. Note also 
that Tungsten works for me in other cases.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-10712) JVM crashes with spark.sql.tungsten.enabled = true

2015-09-18 Thread Mauro Pirrone (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-10712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mauro Pirrone updated SPARK-10712:
--
Priority: Critical  (was: Blocker)

> JVM crashes with spark.sql.tungsten.enabled = true
> --
>
> Key: SPARK-10712
> URL: https://issues.apache.org/jira/browse/SPARK-10712
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 1.5.0
> Environment: 1 node - Linux, 64GB ram, 8 core
>Reporter: Mauro Pirrone
>Priority: Critical
>
> When turning on tungsten, I get the following error when executing a 
> query/job with a few joins. When tungsten is turned off, the error does not 
> appear. Also note that tungsten works for me in other cases.






[jira] [Closed] (SPARK-5303) applySchema returns NullPointerException

2015-01-26 Thread Mauro Pirrone (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mauro Pirrone closed SPARK-5303.

Resolution: Not a Problem

 applySchema returns NullPointerException
 

 Key: SPARK-5303
 URL: https://issues.apache.org/jira/browse/SPARK-5303
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: Mauro Pirrone

 The following code snippet returns NullPointerException:
 val result = .
   
 val rows = result.take(10)
 val rowRdd = SparkManager.getContext().parallelize(rows, 1)
 val schemaRdd = SparkManager.getSQLContext().applySchema(rowRdd, 
 result.schema)
 java.lang.NullPointerException
   at 
 org.apache.spark.sql.catalyst.expressions.AttributeReference.hashCode(namedExpressions.scala:147)
   at scala.runtime.ScalaRunTime$.hash(ScalaRunTime.scala:210)
   at scala.util.hashing.MurmurHash3.listHash(MurmurHash3.scala:168)
   at scala.util.hashing.MurmurHash3$.seqHash(MurmurHash3.scala:216)
   at scala.collection.LinearSeqLike$class.hashCode(LinearSeqLike.scala:53)
   at scala.collection.immutable.List.hashCode(List.scala:84)
   at scala.runtime.ScalaRunTime$.hash(ScalaRunTime.scala:210)
   at scala.util.hashing.MurmurHash3.productHash(MurmurHash3.scala:63)
   at scala.util.hashing.MurmurHash3$.productHash(MurmurHash3.scala:210)
   at scala.runtime.ScalaRunTime$._hashCode(ScalaRunTime.scala:172)
   at 
 org.apache.spark.sql.execution.LogicalRDD.hashCode(ExistingRDD.scala:58)
   at scala.runtime.ScalaRunTime$.hash(ScalaRunTime.scala:210)
   at 
 scala.collection.mutable.HashTable$HashUtils$class.elemHashCode(HashTable.scala:398)
   at scala.collection.mutable.HashMap.elemHashCode(HashMap.scala:39)
   at 
 scala.collection.mutable.HashTable$class.findEntry(HashTable.scala:130)
   at scala.collection.mutable.HashMap.findEntry(HashMap.scala:39)
   at scala.collection.mutable.HashMap.get(HashMap.scala:69)
   at 
 scala.collection.mutable.MapLike$class.getOrElseUpdate(MapLike.scala:187)
   at scala.collection.mutable.AbstractMap.getOrElseUpdate(Map.scala:91)
   at 
 scala.collection.TraversableLike$$anonfun$groupBy$1.apply(TraversableLike.scala:329)
   at 
 scala.collection.TraversableLike$$anonfun$groupBy$1.apply(TraversableLike.scala:327)
   at 
 scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
   at 
 scala.collection.TraversableLike$class.groupBy(TraversableLike.scala:327)
   at scala.collection.AbstractTraversable.groupBy(Traversable.scala:105)
   at 
 org.apache.spark.sql.catalyst.analysis.NewRelationInstances$.apply(MultiInstanceRelation.scala:44)
   at 
 org.apache.spark.sql.catalyst.analysis.NewRelationInstances$.apply(MultiInstanceRelation.scala:40)
   at 
 org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:61)
   at 
 org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1$$anonfun$apply$2.apply(RuleExecutor.scala:59)
   at 
 scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:51)
   at 
 scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:60)
   at scala.collection.mutable.WrappedArray.foldLeft(WrappedArray.scala:34)
   at 
 org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:59)
   at 
 org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$apply$1.apply(RuleExecutor.scala:51)
   at scala.collection.immutable.List.foreach(List.scala:318)
   at 
 org.apache.spark.sql.catalyst.rules.RuleExecutor.apply(RuleExecutor.scala:51)
   at 
 org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:411)
   at 
 org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:411)
   at org.apache.spark.sql.SchemaRDD.schema$lzycompute(SchemaRDD.scala:135)
   at org.apache.spark.sql.SchemaRDD.schema(SchemaRDD.scala:135)






[jira] [Created] (SPARK-5409) Broken link in documentation

2015-01-26 Thread Mauro Pirrone (JIRA)
Mauro Pirrone created SPARK-5409:


 Summary: Broken link in documentation
 Key: SPARK-5409
 URL: https://issues.apache.org/jira/browse/SPARK-5409
 Project: Spark
  Issue Type: Documentation
Reporter: Mauro Pirrone
Priority: Minor


https://spark.apache.org/docs/1.2.0/streaming-kafka-integration.html

See the API docs and the example.

Link to example is broken.






[jira] [Closed] (SPARK-5304) applySchema returns NullPointerException

2015-01-17 Thread Mauro Pirrone (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mauro Pirrone closed SPARK-5304.

Resolution: Duplicate

 applySchema returns NullPointerException
 

 Key: SPARK-5304
 URL: https://issues.apache.org/jira/browse/SPARK-5304
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: Mauro Pirrone

 The following code snippet returns NullPointerException:
 val result = .
   
 val rows = result.take(10)
 val rowRdd = SparkManager.getContext().parallelize(rows, 1)
 val schemaRdd = SparkManager.getSQLContext().applySchema(rowRdd, 
 result.schema)






[jira] [Created] (SPARK-5304) applySchema returns NullPointerException

2015-01-17 Thread Mauro Pirrone (JIRA)
Mauro Pirrone created SPARK-5304:


 Summary: applySchema returns NullPointerException
 Key: SPARK-5304
 URL: https://issues.apache.org/jira/browse/SPARK-5304
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: Mauro Pirrone


The following code snippet returns NullPointerException:

val result = .
  
val rows = result.take(10)
val rowRdd = SparkManager.getContext().parallelize(rows, 1)
val schemaRdd = SparkManager.getSQLContext().applySchema(rowRdd, result.schema)







[jira] [Created] (SPARK-5303) applySchema returns NullPointerException

2015-01-17 Thread Mauro Pirrone (JIRA)
Mauro Pirrone created SPARK-5303:


 Summary: applySchema returns NullPointerException
 Key: SPARK-5303
 URL: https://issues.apache.org/jira/browse/SPARK-5303
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 1.2.0
Reporter: Mauro Pirrone


The following code snippet returns NullPointerException:

val result = .
  
val rows = result.take(10)
val rowRdd = SparkManager.getContext().parallelize(rows, 1)
val schemaRdd = SparkManager.getSQLContext().applySchema(rowRdd, result.schema)



