I have been struggling with this error for the past three days and have tried
every fix people have suggested on Stack Overflow and here in this group.

I am trying to read a Parquet file using SparkR and convert it into an R data
frame for further use. The file is not that big: ~4 GB, about 250 million
records.

My standalone cluster has more than enough memory and processing power: 24
cores and 128 GB of RAM. I tried this on both Spark 1.4.1 and 1.5.1 and have
attached both stack traces/logs. The Parquet file has 24 partitions. Here is
the configuration, to give an idea:

spark.default.confs <- list(spark.cores.max="24",
                            spark.executor.memory="50g",
                            spark.driver.memory="30g",
                            spark.driver.extraJavaOptions="-Xms5g -Xmx5g -XX:MaxPermSize=1024M")
sc <- sparkR.init(master="local[24]", sparkEnvir=spark.default.confs)
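
One thing I am not sure about: with master="local[24]" the driver and the
executors share a single JVM, so the executor-memory setting should not matter
there, and as far as I understand spark.driver.memory cannot take effect when
passed through sparkEnvir, because the backend JVM has already started by the
time sparkR.init runs. A minimal sketch of setting the heap at launch instead,
assuming SparkR is started through the bin/sparkR script (which forwards these
options to spark-submit):

# launch the shell with the driver heap fixed up front:
#   ./bin/sparkR --driver-memory 30g
# then create the context without any driver-memory property:
sc <- sparkR.init(master="local[24]")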

.......
# ........ reading parquet file and storing in an R data frame
med.Rdf <- collect(mednew.DF)
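
For completeness: collect() pulls all ~250 million rows into the driver JVM
and then into the R process at once. A minimal sketch of the same read limited
to a small sample, to rule out everything except the sheer size of the
collect; it assumes the DataFrame comes from parquetFile() (the actual read
call is elided above), and the 1000-row limit is arbitrary:

sqlContext <- sparkRSQL.init(sc)
mednew.DF <- parquetFile(sqlContext, "/home/rwdna/shared/RA_patient_medication.parquet")
# head() ships back only the requested rows and returns a local R data.frame
med.sample <- head(mednew.DF, 1000)

The log from the failing full collect() follows.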
15/11/06 10:45:18 INFO MemoryStore: ensureFreeSpace(213512) called with 
curMem=89661, maxMem=555755765
15/11/06 10:45:18 INFO MemoryStore: Block broadcast_3 stored as values in 
memory (estimated size 208.5 KB, free 529.7 MB)
15/11/06 10:45:18 INFO MemoryStore: ensureFreeSpace(19788) called with 
curMem=303173, maxMem=555755765
15/11/06 10:45:18 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in 
memory (estimated size 19.3 KB, free 529.7 MB)
15/11/06 10:45:18 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 
localhost:39562 (size: 19.3 KB, free: 530.0 MB)
15/11/06 10:45:18 INFO SparkContext: Created broadcast 3 from dfToCols at 
NativeMethodAccessorImpl.java:-2
15/11/06 10:45:18 INFO ParquetRelation: Reading Parquet file(s) from 
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00005-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00021-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00017-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00006-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00012-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00009-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00015-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00014-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00020-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00008-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00002-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00018-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00000-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00010-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00022-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00013-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00011-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00007-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00001-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00003-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00004-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00016-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet,
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00019-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet
15/11/06 10:45:18 INFO SparkContext: Starting job: dfToCols at 
NativeMethodAccessorImpl.java:-2
15/11/06 10:45:18 INFO DAGScheduler: Got job 2 (dfToCols at 
NativeMethodAccessorImpl.java:-2) with 23 output partitions
15/11/06 10:45:18 INFO DAGScheduler: Final stage: ResultStage 2(dfToCols at 
NativeMethodAccessorImpl.java:-2)
15/11/06 10:45:18 INFO DAGScheduler: Parents of final stage: List()
15/11/06 10:45:18 INFO DAGScheduler: Missing parents: List()
15/11/06 10:45:18 INFO DAGScheduler: Submitting ResultStage 2 
(MapPartitionsRDD[5] at dfToCols at NativeMethodAccessorImpl.java:-2), which 
has no missing parents
15/11/06 10:45:18 INFO MemoryStore: ensureFreeSpace(5392) called with 
curMem=322961, maxMem=555755765
15/11/06 10:45:18 INFO MemoryStore: Block broadcast_4 stored as values in 
memory (estimated size 5.3 KB, free 529.7 MB)
15/11/06 10:45:18 INFO MemoryStore: ensureFreeSpace(3032) called with 
curMem=328353, maxMem=555755765
15/11/06 10:45:18 INFO MemoryStore: Block broadcast_4_piece0 stored as bytes in 
memory (estimated size 3.0 KB, free 529.7 MB)
15/11/06 10:45:18 INFO BlockManagerInfo: Added broadcast_4_piece0 in memory on 
localhost:39562 (size: 3.0 KB, free: 530.0 MB)
15/11/06 10:45:18 INFO SparkContext: Created broadcast 4 from broadcast at 
DAGScheduler.scala:861
15/11/06 10:45:18 INFO DAGScheduler: Submitting 23 missing tasks from 
ResultStage 2 (MapPartitionsRDD[5] at dfToCols at 
NativeMethodAccessorImpl.java:-2)
15/11/06 10:45:18 INFO TaskSchedulerImpl: Adding task set 2.0 with 23 tasks
15/11/06 10:45:18 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 13, 
localhost, PROCESS_LOCAL, 2255 bytes)
15/11/06 10:45:18 INFO TaskSetManager: Starting task 1.0 in stage 2.0 (TID 14, 
localhost, PROCESS_LOCAL, 2254 bytes)
15/11/06 10:45:18 INFO TaskSetManager: Starting task 2.0 in stage 2.0 (TID 15, 
localhost, PROCESS_LOCAL, 2255 bytes)
15/11/06 10:45:18 INFO TaskSetManager: Starting task 3.0 in stage 2.0 (TID 16, 
localhost, PROCESS_LOCAL, 2254 bytes)
15/11/06 10:45:18 INFO TaskSetManager: Starting task 4.0 in stage 2.0 (TID 17, 
localhost, PROCESS_LOCAL, 2256 bytes)
15/11/06 10:45:18 INFO TaskSetManager: Starting task 5.0 in stage 2.0 (TID 18, 
localhost, PROCESS_LOCAL, 2254 bytes)
15/11/06 10:45:18 INFO TaskSetManager: Starting task 6.0 in stage 2.0 (TID 19, 
localhost, PROCESS_LOCAL, 2256 bytes)
15/11/06 10:45:18 INFO TaskSetManager: Starting task 7.0 in stage 2.0 (TID 20, 
localhost, PROCESS_LOCAL, 2256 bytes)
15/11/06 10:45:18 INFO TaskSetManager: Starting task 8.0 in stage 2.0 (TID 21, 
localhost, PROCESS_LOCAL, 2256 bytes)
15/11/06 10:45:18 INFO TaskSetManager: Starting task 9.0 in stage 2.0 (TID 22, 
localhost, PROCESS_LOCAL, 2255 bytes)
15/11/06 10:45:18 INFO TaskSetManager: Starting task 10.0 in stage 2.0 (TID 23, 
localhost, PROCESS_LOCAL, 2254 bytes)
15/11/06 10:45:18 INFO TaskSetManager: Starting task 11.0 in stage 2.0 (TID 24, 
localhost, PROCESS_LOCAL, 2256 bytes)
15/11/06 10:45:18 INFO Executor: Running task 0.0 in stage 2.0 (TID 13)
15/11/06 10:45:18 INFO Executor: Running task 6.0 in stage 2.0 (TID 19)
15/11/06 10:45:18 INFO Executor: Running task 11.0 in stage 2.0 (TID 24)
15/11/06 10:45:18 INFO Executor: Running task 3.0 in stage 2.0 (TID 16)
15/11/06 10:45:18 INFO Executor: Running task 2.0 in stage 2.0 (TID 15)
15/11/06 10:45:18 INFO Executor: Running task 10.0 in stage 2.0 (TID 23)
15/11/06 10:45:18 INFO Executor: Running task 4.0 in stage 2.0 (TID 17)
15/11/06 10:45:18 INFO Executor: Running task 9.0 in stage 2.0 (TID 22)
15/11/06 10:45:18 INFO Executor: Running task 8.0 in stage 2.0 (TID 21)
15/11/06 10:45:18 INFO Executor: Running task 1.0 in stage 2.0 (TID 14)
15/11/06 10:45:18 INFO Executor: Running task 7.0 in stage 2.0 (TID 20)
15/11/06 10:45:18 INFO Executor: Running task 5.0 in stage 2.0 (TID 18)
15/11/06 10:45:18 INFO ParquetRelation$$anonfun$buildScan$1$$anon$1: Input 
split: ParquetInputSplit{part: 
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00015-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet
 start: 0 end: 971287 length: 971287 hosts: []}
15/11/06 10:45:18 INFO ParquetRelation$$anonfun$buildScan$1$$anon$1: Input 
split: ParquetInputSplit{part: 
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00002-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet
 start: 0 end: 972541 length: 972541 hosts: []}
15/11/06 10:45:18 INFO ParquetRelation$$anonfun$buildScan$1$$anon$1: Input 
split: ParquetInputSplit{part: 
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00020-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet
 start: 0 end: 972636 length: 972636 hosts: []}
15/11/06 10:45:18 INFO ParquetRelation$$anonfun$buildScan$1$$anon$1: Input 
split: ParquetInputSplit{part: 
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00018-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet
 start: 0 end: 972447 length: 972447 hosts: []}
15/11/06 10:45:18 INFO ParquetRelation$$anonfun$buildScan$1$$anon$1: Input 
split: ParquetInputSplit{part: 
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00017-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet
 start: 0 end: 972236 length: 972236 hosts: []}
15/11/06 10:45:18 INFO ParquetRelation$$anonfun$buildScan$1$$anon$1: Input 
split: ParquetInputSplit{part: 
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00012-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet
 start: 0 end: 973206 length: 973206 hosts: []}
15/11/06 10:45:18 INFO ParquetRelation$$anonfun$buildScan$1$$anon$1: Input 
split: ParquetInputSplit{part: 
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00008-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet
 start: 0 end: 973230 length: 973230 hosts: []}
15/11/06 10:45:18 INFO ParquetRelation$$anonfun$buildScan$1$$anon$1: Input 
split: ParquetInputSplit{part: 
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00014-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet
 start: 0 end: 972676 length: 972676 hosts: []}
15/11/06 10:45:18 INFO ParquetRelation$$anonfun$buildScan$1$$anon$1: Input 
split: ParquetInputSplit{part: 
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00009-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet
 start: 0 end: 973616 length: 973616 hosts: []}
15/11/06 10:45:18 INFO ParquetRelation$$anonfun$buildScan$1$$anon$1: Input 
split: ParquetInputSplit{part: 
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00005-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet
 start: 0 end: 975135 length: 975135 hosts: []}
15/11/06 10:45:18 INFO ParquetRelation$$anonfun$buildScan$1$$anon$1: Input 
split: ParquetInputSplit{part: 
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00021-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet
 start: 0 end: 972338 length: 972338 hosts: []}
15/11/06 10:45:18 INFO ParquetRelation$$anonfun$buildScan$1$$anon$1: Input 
split: ParquetInputSplit{part: 
file:/home/rwdna/shared/RA_patient_medication.parquet/part-r-00006-2c752ebc-3292-4966-ab17-d8adcd82e058.gz.parquet
 start: 0 end: 973130 length: 973130 hosts: []}
15/11/06 10:45:18 WARN ParquetRecordReader: Can not initialize counter due to 
context is not a instance of TaskInputOutputContext, but is 
org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
[... the same counter warning is logged once per task; duplicates trimmed ...]
15/11/06 10:45:18 INFO CatalystReadSupport: Going to read the following fields 
from the Parquet file:

Parquet form:
message root {
  optional binary pat_id (UTF8);
  optional binary ndc (UTF8);
  optional binary proc_cde (UTF8);
  optional binary dayssup (UTF8);
  optional binary quan (UTF8);
  optional binary srv_unit (UTF8);
  optional binary from_dt (UTF8);
}

Catalyst form:
StructType(StructField(pat_id,StringType,true), 
StructField(ndc,StringType,true), StructField(proc_cde,StringType,true), 
StructField(dayssup,StringType,true), StructField(quan,StringType,true), 
StructField(srv_unit,StringType,true), StructField(from_dt,StringType,true))
[... the same schema message is logged once per task; duplicates trimmed ...]
15/11/06 10:45:18 INFO InternalParquetRecordReader: RecordReader initialized 
will read a total of 121963 records.
15/11/06 10:45:18 INFO InternalParquetRecordReader: RecordReader initialized 
will read a total of 121962 records.
15/11/06 10:45:18 INFO InternalParquetRecordReader: at row 0. reading next block
15/11/06 10:45:18 INFO InternalParquetRecordReader: at row 0. reading next block
15/11/06 10:45:18 INFO InternalParquetRecordReader: RecordReader initialized 
will read a total of 121969 records.
15/11/06 10:45:18 INFO InternalParquetRecordReader: at row 0. reading next block
15/11/06 10:45:18 INFO CodecPool: Got brand-new decompressor [.gz]
15/11/06 10:45:18 INFO CodecPool: Got brand-new decompressor [.gz]
15/11/06 10:45:18 INFO InternalParquetRecordReader: RecordReader initialized 
will read a total of 121962 records.
15/11/06 10:45:18 INFO CodecPool: Got brand-new decompressor [.gz]
15/11/06 10:45:18 INFO InternalParquetRecordReader: block read in memory in 8 
ms. row count = 121962
15/11/06 10:45:18 INFO InternalParquetRecordReader: block read in memory in 7 
ms. row count = 121963
15/11/06 10:45:18 INFO InternalParquetRecordReader: at row 0. reading next block
15/11/06 10:45:18 INFO InternalParquetRecordReader: block read in memory in 11 
ms. row count = 121969
15/11/06 10:45:18 INFO InternalParquetRecordReader: RecordReader initialized 
will read a total of 121970 records.
15/11/06 10:45:18 INFO InternalParquetRecordReader: RecordReader initialized 
will read a total of 121962 records.
15/11/06 10:45:18 INFO InternalParquetRecordReader: RecordReader initialized 
will read a total of 121969 records.
15/11/06 10:45:18 INFO InternalParquetRecordReader: RecordReader initialized 
will read a total of 121972 records.
15/11/06 10:45:18 INFO InternalParquetRecordReader: at row 0. reading next block
15/11/06 10:45:18 INFO InternalParquetRecordReader: RecordReader initialized 
will read a total of 121966 records.
15/11/06 10:45:18 INFO InternalParquetRecordReader: at row 0. reading next block
15/11/06 10:45:18 INFO InternalParquetRecordReader: at row 0. reading next block
15/11/06 10:45:18 INFO InternalParquetRecordReader: at row 0. reading next block
15/11/06 10:45:18 INFO InternalParquetRecordReader: RecordReader initialized 
will read a total of 121969 records.
15/11/06 10:45:18 INFO InternalParquetRecordReader: RecordReader initialized 
will read a total of 121969 records.
15/11/06 10:45:18 INFO InternalParquetRecordReader: at row 0. reading next block
15/11/06 10:45:18 INFO InternalParquetRecordReader: at row 0. reading next block
15/11/06 10:45:18 INFO InternalParquetRecordReader: at row 0. reading next block
15/11/06 10:45:18 INFO InternalParquetRecordReader: RecordReader initialized 
will read a total of 121963 records.
15/11/06 10:45:18 INFO InternalParquetRecordReader: at row 0. reading next block
15/11/06 10:45:18 INFO CodecPool: Got brand-new decompressor [.gz]
15/11/06 10:45:18 INFO InternalParquetRecordReader: block read in memory in 8 
ms. row count = 121962
15/11/06 10:45:18 INFO CodecPool: Got brand-new decompressor [.gz]
15/11/06 10:45:18 INFO InternalParquetRecordReader: block read in memory in 7 
ms. row count = 121970
15/11/06 10:45:18 INFO CodecPool: Got brand-new decompressor [.gz]
15/11/06 10:45:18 INFO CodecPool: Got brand-new decompressor [.gz]
15/11/06 10:45:18 INFO CodecPool: Got brand-new decompressor [.gz]
15/11/06 10:45:18 INFO CodecPool: Got brand-new decompressor [.gz]
15/11/06 10:45:18 INFO InternalParquetRecordReader: block read in memory in 7 
ms. row count = 121969
15/11/06 10:45:18 INFO CodecPool: Got brand-new decompressor [.gz]
15/11/06 10:45:18 INFO CodecPool: Got brand-new decompressor [.gz]
15/11/06 10:45:18 INFO CodecPool: Got brand-new decompressor [.gz]
15/11/06 10:45:18 INFO InternalParquetRecordReader: block read in memory in 7 
ms. row count = 121969
15/11/06 10:45:18 INFO InternalParquetRecordReader: block read in memory in 8 
ms. row count = 121962
15/11/06 10:45:18 INFO InternalParquetRecordReader: block read in memory in 10 
ms. row count = 121966
15/11/06 10:45:18 INFO InternalParquetRecordReader: block read in memory in 10 
ms. row count = 121972
15/11/06 10:45:18 INFO InternalParquetRecordReader: block read in memory in 9 
ms. row count = 121963
15/11/06 10:45:18 INFO InternalParquetRecordReader: block read in memory in 10 
ms. row count = 121969
15/11/06 10:45:20 INFO BlockManagerInfo: Removed broadcast_2_piece0 on 
localhost:39562 in memory (size: 2.4 KB, free: 530.0 MB)
15/11/06 10:45:20 INFO ContextCleaner: Cleaned accumulator 2
15/11/06 10:45:53 WARN ServletHandler: Error for /static/timeline-view.css
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.zip.ZipCoder.toString(ZipCoder.java:49)
        at java.util.zip.ZipFile.getZipEntry(ZipFile.java:567)
        at java.util.zip.ZipFile.access$900(ZipFile.java:61)
        at java.util.zip.ZipFile$ZipEntryIterator.next(ZipFile.java:525)
        at java.util.zip.ZipFile$ZipEntryIterator.nextElement(ZipFile.java:500)
        at java.util.zip.ZipFile$ZipEntryIterator.nextElement(ZipFile.java:481)
        at java.util.jar.JarFile$JarEntryIterator.next(JarFile.java:257)
        at java.util.jar.JarFile$JarEntryIterator.nextElement(JarFile.java:266)
        at java.util.jar.JarFile$JarEntryIterator.nextElement(JarFile.java:247)
        at 
org.spark-project.jetty.util.resource.JarFileResource.exists(JarFileResource.java:189)
        at 
org.spark-project.jetty.servlet.DefaultServlet.getResource(DefaultServlet.java:398)
        at 
org.spark-project.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:476)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
        at 
org.spark-project.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
        at 
org.spark-project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
        at 
org.spark-project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
        at 
org.spark-project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428)
        at 
org.spark-project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
        at 
org.spark-project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
        at 
org.spark-project.jetty.server.handler.GzipHandler.handle(GzipHandler.java:264)
        at 
org.spark-project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
        at 
org.spark-project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
        at org.spark-project.jetty.server.Server.handle(Server.java:370)
        at 
org.spark-project.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
        at 
org.spark-project.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:971)
        at 
org.spark-project.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1033)
        at 
org.spark-project.jetty.http.HttpParser.parseNext(HttpParser.java:644)
        at 
org.spark-project.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
        at 
org.spark-project.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
        at 
org.spark-project.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:667)
        at 
org.spark-project.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
15/11/06 10:45:57 WARN ServletHandler: Error for /static/bootstrap.min.css
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at sun.util.calendar.Gregorian.newCalendarDate(Gregorian.java:85)
        at sun.util.calendar.Gregorian.newCalendarDate(Gregorian.java:37)
        at java.util.Date.<init>(Date.java:254)
        at java.util.zip.ZipUtils.dosToJavaTime(ZipUtils.java:74)
        at java.util.zip.ZipFile.getZipEntry(ZipFile.java:570)
        at java.util.zip.ZipFile.access$900(ZipFile.java:61)
        at java.util.zip.ZipFile$ZipEntryIterator.next(ZipFile.java:525)
        at java.util.zip.ZipFile$ZipEntryIterator.nextElement(ZipFile.java:500)
        at java.util.zip.ZipFile$ZipEntryIterator.nextElement(ZipFile.java:481)
        at java.util.jar.JarFile$JarEntryIterator.next(JarFile.java:257)
        at java.util.jar.JarFile$JarEntryIterator.nextElement(JarFile.java:266)
        at java.util.jar.JarFile$JarEntryIterator.nextElement(JarFile.java:247)
        at 
org.spark-project.jetty.util.resource.JarFileResource.exists(JarFileResource.java:189)
        at 
org.spark-project.jetty.servlet.DefaultServlet.getResource(DefaultServlet.java:398)
        at 
org.spark-project.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:476)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
        at 
org.spark-project.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
        at 
org.spark-project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
        at 
org.spark-project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
        at 
org.spark-project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428)
        at 
org.spark-project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
        at 
org.spark-project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
        at 
org.spark-project.jetty.server.handler.GzipHandler.handle(GzipHandler.java:264)
        at 
org.spark-project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
        at 
org.spark-project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
        at org.spark-project.jetty.server.Server.handle(Server.java:370)
        at 
org.spark-project.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
        at 
org.spark-project.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:971)
        at 
org.spark-project.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1033)
        at 
org.spark-project.jetty.http.HttpParser.parseNext(HttpParser.java:644)
        at 
org.spark-project.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
15/11/06 10:46:03 WARN ServletHandler: Error for /static/sorttable.js
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.zip.ZipCoder.toString(ZipCoder.java:59)
        at java.util.zip.ZipFile.getZipEntry(ZipFile.java:567)
        at java.util.zip.ZipFile.access$900(ZipFile.java:61)
        at java.util.zip.ZipFile$ZipEntryIterator.next(ZipFile.java:525)
        at java.util.zip.ZipFile$ZipEntryIterator.nextElement(ZipFile.java:500)
        at java.util.zip.ZipFile$ZipEntryIterator.nextElement(ZipFile.java:481)
        at java.util.jar.JarFile$JarEntryIterator.next(JarFile.java:257)
        at java.util.jar.JarFile$JarEntryIterator.nextElement(JarFile.java:266)
        at java.util.jar.JarFile$JarEntryIterator.nextElement(JarFile.java:247)
        at 
org.spark-project.jetty.util.resource.JarFileResource.exists(JarFileResource.java:189)
        at 
org.spark-project.jetty.servlet.DefaultServlet.getResource(DefaultServlet.java:398)
        at 
org.spark-project.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:476)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
        at 
org.spark-project.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
        at 
org.spark-project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
        at 
org.spark-project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
        at 
org.spark-project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428)
        at 
org.spark-project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
        at 
org.spark-project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
        at 
org.spark-project.jetty.server.handler.GzipHandler.handle(GzipHandler.java:264)
        at 
org.spark-project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
        at 
org.spark-project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
        at org.spark-project.jetty.server.Server.handle(Server.java:370)
        at 
org.spark-project.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
        at 
org.spark-project.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:971)
        at 
org.spark-project.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1033)
        at 
org.spark-project.jetty.http.HttpParser.parseNext(HttpParser.java:644)
        at 
org.spark-project.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
        at 
org.spark-project.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
        at 
org.spark-project.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:667)
        at 
org.spark-project.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
15/11/06 10:46:10 WARN ServletHandler: Error for /static/vis.min.css
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.zip.ZipFile.getZipEntry(ZipFile.java:558)
        at java.util.zip.ZipFile.access$900(ZipFile.java:61)
        at java.util.zip.ZipFile$ZipEntryIterator.next(ZipFile.java:525)
        at java.util.zip.ZipFile$ZipEntryIterator.nextElement(ZipFile.java:500)
        at java.util.zip.ZipFile$ZipEntryIterator.nextElement(ZipFile.java:481)
        at java.util.jar.JarFile$JarEntryIterator.next(JarFile.java:257)
        at java.util.jar.JarFile$JarEntryIterator.nextElement(JarFile.java:266)
        at java.util.jar.JarFile$JarEntryIterator.nextElement(JarFile.java:247)
        at 
org.spark-project.jetty.util.resource.JarFileResource.exists(JarFileResource.java:189)
        at 
org.spark-project.jetty.servlet.DefaultServlet.getResource(DefaultServlet.java:398)
        at 
org.spark-project.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:476)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:735)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
        at 
org.spark-project.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
        at 
org.spark-project.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:501)
        at 
org.spark-project.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)
        at 
org.spark-project.jetty.servlet.ServletHandler.doScope(ServletHandler.java:428)
        at 
org.spark-project.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)
        at 
org.spark-project.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
        at 
org.spark-project.jetty.server.handler.GzipHandler.handle(GzipHandler.java:264)
        at 
org.spark-project.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
        at 
org.spark-project.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
        at org.spark-project.jetty.server.Server.handle(Server.java:370)
        at 
org.spark-project.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)
        at 
org.spark-project.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:971)
        at 
org.spark-project.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1033)
        at 
org.spark-project.jetty.http.HttpParser.parseNext(HttpParser.java:644)
        at 
org.spark-project.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
        at 
org.spark-project.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
        at 
org.spark-project.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:667)
        at 
org.spark-project.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
        at 
org.spark-project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
15/11/06 10:46:14 WARN ServletHandler: Error for /static/jquery-1.11.1.min.js
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:46:17 WARN ServletHandler: Error for /static/bootstrap-tooltip.js
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:46:19 WARN ServletHandler: Error for /static/vis.min.js
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:46:22 ERROR Utils: Uncaught exception in thread driver-heartbeater
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:46:26 WARN ServletHandler: Error for /static/additional-metrics.js
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:46:27 WARN ServletHandler: Error for /static/webui.css
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:46:30 WARN ServletHandler: Error for /static/timeline-view.js
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:46:32 WARN ServletHandler: Error for /static/initialize-tooltips.js
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:46:35 ERROR Executor: Exception in task 9.0 in stage 2.0 (TID 22)
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "Executor task launch worker-10" 
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:46:40 ERROR Executor: Exception in task 10.0 in stage 2.0 (TID 23)
java.lang.OutOfMemoryError: Java heap space
Exception in thread "Executor task launch worker-1" java.lang.OutOfMemoryError: 
GC overhead limit exceeded
15/11/06 10:46:47 ERROR Executor: Exception in task 6.0 in stage 2.0 (TID 19)
java.lang.OutOfMemoryError: Java heap space
Exception in thread "Executor task launch worker-3" java.lang.OutOfMemoryError: 
GC overhead limit exceeded
15/11/06 10:46:50 ERROR Executor: Exception in task 5.0 in stage 2.0 (TID 18)
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:46:50 ERROR Executor: Exception in task 11.0 in stage 2.0 (TID 24)
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:46:50 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 13)
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "Executor task launch worker-9" java.lang.OutOfMemoryError: 
GC overhead limit exceeded
Exception in thread "Executor task launch worker-8" Exception in thread 
"Executor task launch worker-11" java.lang.OutOfMemoryError: GC overhead limit 
exceeded
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:46:51 ERROR Executor: Exception in task 2.0 in stage 2.0 (TID 15)
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:46:51 ERROR Executor: Exception in task 4.0 in stage 2.0 (TID 17)
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "Executor task launch worker-6" java.lang.OutOfMemoryError: 
GC overhead limit exceeded
Exception in thread "Executor task launch worker-7" java.lang.OutOfMemoryError: 
GC overhead limit exceeded
15/11/06 10:46:51 ERROR Executor: Exception in task 1.0 in stage 2.0 (TID 14)
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:46:51 ERROR Executor: Exception in task 8.0 in stage 2.0 (TID 21)
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "Executor task launch worker-4" java.lang.OutOfMemoryError: 
GC overhead limit exceeded
Exception in thread "Executor task launch worker-2" java.lang.OutOfMemoryError: 
GC overhead limit exceeded
15/11/06 10:46:51 ERROR Executor: Exception in task 3.0 in stage 2.0 (TID 16)
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "Executor task launch worker-0" java.lang.OutOfMemoryError: 
GC overhead limit exceeded
15/11/06 10:46:51 ERROR Executor: Exception in task 7.0 in stage 2.0 (TID 20)
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "Executor task launch worker-5" java.lang.OutOfMemoryError: 
GC overhead limit exceeded
15/11/06 10:48:06 WARN HeartbeatReceiver: Removing executor driver with no 
recent heartbeats: 144917 ms exceeds timeout 120000 ms
15/11/06 10:48:06 ERROR TaskSchedulerImpl: Lost executor driver on localhost: 
Executor heartbeat timed out after 144917 ms
15/11/06 10:48:06 INFO TaskSetManager: Re-queueing tasks for driver from 
TaskSet 2.0
15/11/06 10:48:06 WARN TaskSetManager: Lost task 10.0 in stage 2.0 (TID 23, 
localhost): ExecutorLostFailure (executor driver lost)
15/11/06 10:48:06 ERROR TaskSetManager: Task 10 in stage 2.0 failed 1 times; 
aborting job
15/11/06 10:48:06 WARN TaskSetManager: Lost task 4.0 in stage 2.0 (TID 17, 
localhost): ExecutorLostFailure (executor driver lost)
15/11/06 10:48:06 WARN TaskSetManager: Lost task 7.0 in stage 2.0 (TID 20, 
localhost): ExecutorLostFailure (executor driver lost)
15/11/06 10:48:06 WARN TaskSetManager: Lost task 1.0 in stage 2.0 (TID 14, 
localhost): ExecutorLostFailure (executor driver lost)
15/11/06 10:48:06 WARN TaskSetManager: Lost task 9.0 in stage 2.0 (TID 22, 
localhost): ExecutorLostFailure (executor driver lost)
15/11/06 10:48:06 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 13, 
localhost): ExecutorLostFailure (executor driver lost)
15/11/06 10:48:06 WARN TaskSetManager: Lost task 3.0 in stage 2.0 (TID 16, 
localhost): ExecutorLostFailure (executor driver lost)
15/11/06 10:48:06 WARN TaskSetManager: Lost task 6.0 in stage 2.0 (TID 19, 
localhost): ExecutorLostFailure (executor driver lost)
15/11/06 10:48:06 WARN TaskSetManager: Lost task 5.0 in stage 2.0 (TID 18, 
localhost): ExecutorLostFailure (executor driver lost)
15/11/06 10:48:06 WARN TaskSetManager: Lost task 8.0 in stage 2.0 (TID 21, 
localhost): ExecutorLostFailure (executor driver lost)
15/11/06 10:48:06 WARN TaskSetManager: Lost task 11.0 in stage 2.0 (TID 24, 
localhost): ExecutorLostFailure (executor driver lost)
15/11/06 10:48:06 WARN TaskSetManager: Lost task 2.0 in stage 2.0 (TID 15, 
localhost): ExecutorLostFailure (executor driver lost)
15/11/06 10:48:06 INFO TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have 
all completed, from pool 
15/11/06 10:48:06 INFO TaskSchedulerImpl: Cancelling stage 2
15/11/06 10:48:06 WARN SparkContext: Killing executors is only supported in 
coarse-grained mode
15/11/06 10:48:06 INFO DAGScheduler: ResultStage 2 (dfToCols at 
NativeMethodAccessorImpl.java:-2) failed in 167.915 s
15/11/06 10:48:06 INFO DAGScheduler: Job 2 failed: dfToCols at 
NativeMethodAccessorImpl.java:-2, took 167.934410 s
15/11/06 10:48:06 INFO DAGScheduler: Executor lost: driver (epoch 0)
15/11/06 10:48:06 ERROR RBackendHandler: dfToCols on 
org.apache.spark.sql.api.r.SQLUtils failed
15/11/06 10:48:06 INFO BlockManagerMasterEndpoint: Trying to remove executor 
driver from BlockManagerMaster.
15/11/06 10:48:06 INFO BlockManagerMasterEndpoint: Removing block manager 
BlockManagerId(driver, localhost, 39562)
Error in invokeJava(isStatic = TRUE, className, methodName, ...) : 
  org.apache.spark.SparkException: Job aborted due to stage failure: Task 10 in 
stage 2.0 failed 1 times, most recent failure: Lost task 10.0 in stage 2.0 (TID 
23, localhost): ExecutorLostFailure (executor driver lost)
Driver stacktrace:
        at 
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
        at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at 
org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
        at 
org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DA
15/11/06 10:48:06 INFO BlockManagerMaster: Removed driver successfully in 
removeExecutor
15/11/06 10:48:06 INFO DAGScheduler: Host added was in lost list earlier: 
localhost
[second attached log, from the run against the standalone cluster:]

15/11/06 10:00:01 WARN ReliableDeliverySupervisor: Association with remote 
system [akka.tcp://sparkExecutor@mytestserver:51939] has failed, address is now 
gated for [5000] ms. Reason is: [unread block data].
15/11/06 10:00:01 ERROR TaskSchedulerImpl: Lost executor 0 on mytestserver: 
remote Rpc client disassociated
15/11/06 10:00:01 INFO TaskSetManager: Re-queueing tasks for 0 from TaskSet 2.0
15/11/06 10:00:02 WARN TaskSetManager: Lost task 17.0 in stage 2.0 (TID 41, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 8.0 in stage 2.0 (TID 32, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 20.0 in stage 2.0 (TID 44, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 2.0 in stage 2.0 (TID 26, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 11.0 in stage 2.0 (TID 35, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 14.0 in stage 2.0 (TID 38, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 5.0 in stage 2.0 (TID 29, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 22.0 in stage 2.0 (TID 46, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 16.0 in stage 2.0 (TID 40, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 7.0 in stage 2.0 (TID 31, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 19.0 in stage 2.0 (TID 43, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 1.0 in stage 2.0 (TID 25, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 10.0 in stage 2.0 (TID 34, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 13.0 in stage 2.0 (TID 37, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 4.0 in stage 2.0 (TID 28, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 21.0 in stage 2.0 (TID 45, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 3.0 in stage 2.0 (TID 27, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 12.0 in stage 2.0 (TID 36, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 15.0 in stage 2.0 (TID 39, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 6.0 in stage 2.0 (TID 30, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 18.0 in stage 2.0 (TID 42, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 24, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 WARN TaskSetManager: Lost task 9.0 in stage 2.0 (TID 33, 
mytestserver): ExecutorLostFailure (executor 0 lost)
15/11/06 10:00:02 INFO TaskSetManager: Finished task 1.0 in stage 2.0 (TID 25) 
in 10288 ms on mytestserver (1/23)
15/11/06 10:00:02 INFO DAGScheduler: Executor lost: 0 (epoch 1)
15/11/06 10:00:02 INFO BlockManagerMasterEndpoint: Trying to remove executor 0 
from BlockManagerMaster.
15/11/06 10:00:02 INFO BlockManagerMasterEndpoint: Removing block manager 
BlockManagerId(0, mytestserver, 36050)
15/11/06 10:00:02 INFO BlockManagerMaster: Removed 0 successfully in 
removeExecutor
15/11/06 10:00:02 INFO TaskSetManager: Finished task 21.0 in stage 2.0 (TID 45) 
in 10656 ms on mytestserver (2/23)
15/11/06 10:00:02 INFO AppClient$ClientActor: Executor updated: 
app-20151106095859-0019/0 is now EXITED (Command exited with code 1)
15/11/06 10:00:02 INFO SparkDeploySchedulerBackend: Executor 
app-20151106095859-0019/0 removed: Command exited with code 1
15/11/06 10:00:02 ERROR SparkDeploySchedulerBackend: Asked to remove 
non-existent executor 0
15/11/06 10:00:02 INFO AppClient$ClientActor: Executor added: 
app-20151106095859-0019/1 on worker-20151030084700-mytestserver-37261 
(mytestserver:37261) with 24 cores
15/11/06 10:00:02 INFO SparkDeploySchedulerBackend: Granted executor ID 
app-20151106095859-0019/1 on hostPort mytestserver:37261 with 24 cores, 50.0 GB 
RAM
15/11/06 10:00:02 INFO AppClient$ClientActor: Executor updated: 
app-20151106095859-0019/1 is now LOADING
15/11/06 10:00:02 INFO AppClient$ClientActor: Executor updated: 
app-20151106095859-0019/1 is now RUNNING
15/11/06 10:00:04 INFO TaskSetManager: Finished task 13.0 in stage 2.0 (TID 37) 
in 12510 ms on mytestserver (3/23)
15/11/06 10:00:08 INFO TaskSetManager: Finished task 12.0 in stage 2.0 (TID 36) 
in 17068 ms on mytestserver (4/23)
Exception in thread "qtp909371038-62" java.lang.OutOfMemoryError: GC overhead 
limit exceeded
        at java.util.HashMap$KeySet.iterator(HashMap.java:912)
        at java.util.HashSet.iterator(HashSet.java:172)
        at sun.nio.ch.Util$2.iterator(Util.java:243)
        at 
org.spark-project.jetty.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:600)
        at 
org.spark-project.jetty.io.nio.SelectorManager$1.run(SelectorManager.java:290)
        at 
org.spark-project.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
        at 
org.spark-project.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
        at java.lang.Thread.run(Thread.java:745)
15/11/06 10:00:23 ERROR Utils: Uncaught exception in thread task-result-getter-3
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at 
java.io.ObjectInputStream$HandleTable.grow(ObjectInputStream.java:3467)
        at 
java.io.ObjectInputStream$HandleTable.assign(ObjectInputStream.java:3275)
        at java.io.ObjectInputStream.readString(ObjectInputStream.java:1650)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1342)
        at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1707)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1345)
        at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
        at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
        at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1707)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1345)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
        at 
org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:69)
        at 
org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:89)
        at 
org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:95)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:60)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2.run(TaskResultGetter.scala:50)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
15/11/06 10:00:23 ERROR Utils: Uncaught exception in thread task-result-getter-0
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:68)
        at java.lang.StringBuilder.<init>(StringBuilder.java:89)
        at 
java.io.ObjectInputStream$BlockDataInputStream.readUTFBody(ObjectInputStream.java:3047)
        at 
java.io.ObjectInputStream$BlockDataInputStream.readUTF(ObjectInputStream.java:2867)
        at java.io.ObjectInputStream.readString(ObjectInputStream.java:1639)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1342)
        at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1707)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1345)
        at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
        at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
        at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1707)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1345)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
        at 
org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:69)
        at 
org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:89)
        at 
org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:95)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:60)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2.run(TaskResultGetter.scala:50)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Exception in thread "task-result-getter-3" java.lang.OutOfMemoryError: GC 
overhead limit exceeded
        at 
java.io.ObjectInputStream$HandleTable.grow(ObjectInputStream.java:3467)
        at 
java.io.ObjectInputStream$HandleTable.assign(ObjectInputStream.java:3275)
        at java.io.ObjectInputStream.readString(ObjectInputStream.java:1650)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1342)
        at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1707)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1345)
        at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
        at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
        at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1707)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1345)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
        at 
org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:69)
        at 
org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:89)
        at 
org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:95)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:60)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2.run(TaskResultGetter.scala:50)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Exception in thread "task-result-getter-0" java.lang.OutOfMemoryError: GC 
overhead limit exceeded
        at java.lang.AbstractStringBuilder.<init>(AbstractStringBuilder.java:68)
        at java.lang.StringBuilder.<init>(StringBuilder.java:89)
        at 
java.io.ObjectInputStream$BlockDataInputStream.readUTFBody(ObjectInputStream.java:3047)
        at 
java.io.ObjectInputStream$BlockDataInputStream.readUTF(ObjectInputStream.java:2867)
        at java.io.ObjectInputStream.readString(ObjectInputStream.java:1639)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1342)
        at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1707)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1345)
        at 
java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1993)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1918)
        at 
java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
        at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1707)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1345)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
        at 
org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:69)
        at 
org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:89)
        at 
org.apache.spark.scheduler.DirectTaskResult.value(TaskResult.scala:95)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply$mcV$sp(TaskResultGetter.scala:60)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2$$anonfun$run$1.apply(TaskResultGetter.scala:51)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1772)
        at 
org.apache.spark.scheduler.TaskResultGetter$$anon$2.run(TaskResultGetter.scala:50)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
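
From what I can tell, these task-result-getter OOMs happen on the driver: those threads deserialize the task results that collect() ships back, so it seems to be the driver heap, not executor memory, that fills up here. A minimal sketch of pulling only a bounded subset instead of all ~250 mil rows — df stands in for whatever DataFrame is being collected, and the path, row count, and fraction are arbitrary placeholders:

    # Sketch only: df, the path, and the sizes below are placeholders.
    df <- read.df(sqlContext, "/path/to/file.parquet", source = "parquet")
    first.rows <- head(df, 100000L)                  # head() collects just the first N rows
    sampled <- collect(sample(df, FALSE, 0.001))     # or a ~0.1% random sample
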
15/11/06 10:00:23 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@mytestserver:60899/user/Executor#-1112865574]) with ID 1
15/11/06 10:00:23 INFO TaskSetManager: Starting task 9.1 in stage 2.0 (TID 47, mytestserver, PROCESS_LOCAL, 1745 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 0.1 in stage 2.0 (TID 48, mytestserver, PROCESS_LOCAL, 1744 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 18.1 in stage 2.0 (TID 49, mytestserver, PROCESS_LOCAL, 1744 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 6.1 in stage 2.0 (TID 50, mytestserver, PROCESS_LOCAL, 1745 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 15.1 in stage 2.0 (TID 51, mytestserver, PROCESS_LOCAL, 1746 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 3.1 in stage 2.0 (TID 52, mytestserver, PROCESS_LOCAL, 1744 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 4.1 in stage 2.0 (TID 53, mytestserver, PROCESS_LOCAL, 1746 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 10.1 in stage 2.0 (TID 54, mytestserver, PROCESS_LOCAL, 1744 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 19.1 in stage 2.0 (TID 55, mytestserver, PROCESS_LOCAL, 1744 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 7.1 in stage 2.0 (TID 56, mytestserver, PROCESS_LOCAL, 1746 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 16.1 in stage 2.0 (TID 57, mytestserver, PROCESS_LOCAL, 1744 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 22.1 in stage 2.0 (TID 58, mytestserver, PROCESS_LOCAL, 1746 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 5.1 in stage 2.0 (TID 59, mytestserver, PROCESS_LOCAL, 1742 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 14.1 in stage 2.0 (TID 60, mytestserver, PROCESS_LOCAL, 1745 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 11.1 in stage 2.0 (TID 61, mytestserver, PROCESS_LOCAL, 1746 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 2.1 in stage 2.0 (TID 62, mytestserver, PROCESS_LOCAL, 1745 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 20.1 in stage 2.0 (TID 63, mytestserver, PROCESS_LOCAL, 1743 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 8.1 in stage 2.0 (TID 64, mytestserver, PROCESS_LOCAL, 1744 bytes)
15/11/06 10:00:23 INFO TaskSetManager: Starting task 17.1 in stage 2.0 (TID 65, mytestserver, PROCESS_LOCAL, 1744 bytes)
15/11/06 10:00:24 INFO BlockManagerMasterEndpoint: Registering block manager mytestserver:37591 with 25.9 GB RAM, BlockManagerId(1, mytestserver, 37591)
Exception in thread "qtp909371038-56" java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "qtp909371038-58" java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:00:29 ERROR TransportRequestHandler: Error sending result RpcResponse{requestId=7191816906635149410, response=[B@c227b88} to /mytestserver:56293; closing connection
15/11/06 10:00:48 ERROR Utils: Uncaught exception in thread task-result-getter-1
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:00:48 WARN TransportChannelHandler: Exception in connection from /mytestserver:56293
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "task-result-getter-1" java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:00:48 ERROR Utils: Uncaught exception in thread task-result-getter-2
java.lang.OutOfMemoryError: GC overhead limit exceeded
Exception in thread "task-result-getter-2" java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:00:48 ERROR ErrorMonitor: Uncaught fatal error from thread [sparkDriver-scheduler-1] shutting down ActorSystem [sparkDriver]
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:00:48 ERROR ActorSystemImpl: Uncaught fatal error from thread [sparkDriver-scheduler-1] shutting down ActorSystem [sparkDriver]
java.lang.OutOfMemoryError: GC overhead limit exceeded
15/11/06 10:00:48 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/11/06 10:00:48 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/11/06 10:00:48 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
[ERROR] [11/06/2015 10:00:53.646] [sparkDriver-akka.actor.default-dispatcher-18] [ActorSystem(sparkDriver)] Failed to run termination callback, due to [Futures timed out after [5000 milliseconds]]
java.util.concurrent.TimeoutException: Futures timed out after [5000 milliseconds]
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
        at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread$$anon$3.block(ThreadPoolBuilder.scala:169)
        at scala.concurrent.forkjoin.ForkJoinPool.managedBlock(ForkJoinPool.java:3640)
        at akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread.blockOn(ThreadPoolBuilder.scala:167)
        at akka.dispatch.BatchingExecutor$Batch.blockOn(BatchingExecutor.scala:101)
        at scala.concurrent.Await$.result(package.scala:107)
        at akka.actor.LightArrayRevolverScheduler.close(Scheduler.scala:280)
        at akka.actor.ActorSystemImpl.stopScheduler(ActorSystem.scala:687)
        at akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply$mcV$sp(ActorSystem.scala:616)
        at akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply(ActorSystem.scala:616)
        at akka.actor.ActorSystemImpl$$anonfun$liftedTree2$1$1.apply(ActorSystem.scala:616)
        at akka.actor.ActorSystemImpl$$anon$3.run(ActorSystem.scala:640)
        at akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.runNext$1(ActorSystem.scala:807)
        at akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply$mcV$sp(ActorSystem.scala:810)
        at akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply(ActorSystem.scala:803)
        at akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$run$1.apply(ActorSystem.scala:803)
        at akka.util.ReentrantGuard.withGuard(LockUtil.scala:15)
        at akka.actor.ActorSystemImpl$TerminationCallbacks.run(ActorSystem.scala:803)
        at akka.actor.ActorSystemImpl$$anonfun$terminationCallbacks$1.apply(ActorSystem.scala:637)
        at akka.actor.ActorSystemImpl$$anonfun$terminationCallbacks$1.apply(ActorSystem.scala:637)
        at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
        at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
        at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
        at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
        at akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
        at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
        at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
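
At this point the driver's ActorSystem has been killed by the GC-overhead OOMs, so everything below (the ContextCleaner timeouts and AskTimeoutException retries) looks like fallout from the dead driver rather than a separate problem. One thing I'm also keeping in mind: in local mode the executors live inside the single driver JVM, so the heap really has to be sized when that JVM is launched (e.g. bin/sparkR --driver-memory 96g); I'm not sure heap settings passed in after the JVM is already up take effect. A sketch of capping the result size instead, so an oversized collect() fails fast with a clear error rather than grinding into a GC-overhead OOM — the "8g" value is just an example:

    # Sketch only: spark.driver.maxResultSize caps the total serialized size of
    # task results pulled back to the driver; the 8g figure is an arbitrary example.
    library(SparkR)
    sc <- sparkR.init(master = "local[24]",
                      sparkEnvir = list(spark.driver.maxResultSize = "8g"))
    sqlContext <- sparkRSQL.init(sc)
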

15/11/06 10:01:55 ERROR ContextCleaner: Error cleaning broadcast 2
java.util.concurrent.TimeoutException: Futures timed out after [120 seconds]
        at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
        at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
        at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
        at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
        at scala.concurrent.Await$.result(package.scala:107)
        at org.apache.spark.storage.BlockManagerMaster.removeBroadcast(BlockManagerMaster.scala:135)
        at org.apache.spark.broadcast.TorrentBroadcast$.unpersist(TorrentBroadcast.scala:228)
        at org.apache.spark.broadcast.TorrentBroadcastFactory.unbroadcast(TorrentBroadcastFactory.scala:45)
        at org.apache.spark.broadcast.BroadcastManager.unbroadcast(BroadcastManager.scala:66)
        at org.apache.spark.ContextCleaner.doCleanupBroadcast(ContextCleaner.scala:214)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:170)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:161)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:161)
        at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1215)
        at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:154)
        at org.apache.spark.ContextCleaner$$anon$3.run(ContextCleaner.scala:67)
15/11/06 10:01:55 WARN AkkaRpcEndpointRef: Error sending message [message = RemoveBroadcast(1,true)] in 1 attempts
akka.pattern.AskTimeoutException: Recipient[Actor[akka://sparkDriver/user/BlockManagerMaster#-1015002418]] had already been terminated.
        at akka.pattern.AskableActorRef$.ask$extension(AskSupport.scala:132)
        at org.apache.spark.rpc.akka.AkkaRpcEndpointRef.ask(AkkaRpcEnv.scala:299)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
        at org.apache.spark.storage.BlockManagerMaster.removeBroadcast(BlockManagerMaster.scala:127)
        at org.apache.spark.broadcast.TorrentBroadcast$.unpersist(TorrentBroadcast.scala:228)
        at org.apache.spark.broadcast.TorrentBroadcastFactory.unbroadcast(TorrentBroadcastFactory.scala:45)
        at org.apache.spark.broadcast.BroadcastManager.unbroadcast(BroadcastManager.scala:66)
        at org.apache.spark.ContextCleaner.doCleanupBroadcast(ContextCleaner.scala:214)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:170)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:161)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:161)
        at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1215)
        at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:154)
        at org.apache.spark.ContextCleaner$$anon$3.run(ContextCleaner.scala:67)
15/11/06 10:01:58 WARN AkkaRpcEndpointRef: Error sending message [message = RemoveBroadcast(1,true)] in 2 attempts
akka.pattern.AskTimeoutException: Recipient[Actor[akka://sparkDriver/user/BlockManagerMaster#-1015002418]] had already been terminated.
        [stack trace identical to the previous AskTimeoutException, elided]
15/11/06 10:02:01 WARN AkkaRpcEndpointRef: Error sending message [message = RemoveBroadcast(1,true)] in 3 attempts
akka.pattern.AskTimeoutException: Recipient[Actor[akka://sparkDriver/user/BlockManagerMaster#-1015002418]] had already been terminated.
        [stack trace identical to the previous AskTimeoutException, elided]
15/11/06 10:02:04 ERROR ContextCleaner: Error cleaning broadcast 1
org.apache.spark.SparkException: Error sending message [message = RemoveBroadcast(1,true)]
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:116)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
        at org.apache.spark.storage.BlockManagerMaster.removeBroadcast(BlockManagerMaster.scala:127)
        at org.apache.spark.broadcast.TorrentBroadcast$.unpersist(TorrentBroadcast.scala:228)
        at org.apache.spark.broadcast.TorrentBroadcastFactory.unbroadcast(TorrentBroadcastFactory.scala:45)
        at org.apache.spark.broadcast.BroadcastManager.unbroadcast(BroadcastManager.scala:66)
        at org.apache.spark.ContextCleaner.doCleanupBroadcast(ContextCleaner.scala:214)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:170)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:161)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:161)
        at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1215)
        at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:154)
        at org.apache.spark.ContextCleaner$$anon$3.run(ContextCleaner.scala:67)
Caused by: akka.pattern.AskTimeoutException: Recipient[Actor[akka://sparkDriver/user/BlockManagerMaster#-1015002418]] had already been terminated.
        at akka.pattern.AskableActorRef$.ask$extension(AskSupport.scala:132)
        at org.apache.spark.rpc.akka.AkkaRpcEndpointRef.ask(AkkaRpcEnv.scala:299)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
        ... 13 more
15/11/06 10:02:04 WARN AkkaRpcEndpointRef: Error sending message [message = RemoveShuffle(0)] in 1 attempts
akka.pattern.AskTimeoutException: Recipient[Actor[akka://sparkDriver/user/BlockManagerMaster#-1015002418]] had already been terminated.
        at akka.pattern.AskableActorRef$.ask$extension(AskSupport.scala:132)
        at org.apache.spark.rpc.akka.AkkaRpcEndpointRef.ask(AkkaRpcEnv.scala:299)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
        at org.apache.spark.storage.BlockManagerMaster.removeShuffle(BlockManagerMaster.scala:115)
        at org.apache.spark.ContextCleaner.doCleanupShuffle(ContextCleaner.scala:202)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:168)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:161)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:161)
        at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1215)
        at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:154)
        at org.apache.spark.ContextCleaner$$anon$3.run(ContextCleaner.scala:67)
15/11/06 10:02:07 WARN AkkaRpcEndpointRef: Error sending message [message = RemoveShuffle(0)] in 2 attempts
akka.pattern.AskTimeoutException: Recipient[Actor[akka://sparkDriver/user/BlockManagerMaster#-1015002418]] had already been terminated.
        [stack trace identical to the previous AskTimeoutException, elided]
15/11/06 10:02:10 WARN AkkaRpcEndpointRef: Error sending message [message = RemoveShuffle(0)] in 3 attempts
akka.pattern.AskTimeoutException: Recipient[Actor[akka://sparkDriver/user/BlockManagerMaster#-1015002418]] had already been terminated.
        [stack trace identical to the previous AskTimeoutException, elided]
15/11/06 10:02:13 ERROR ContextCleaner: Error cleaning shuffle 0
org.apache.spark.SparkException: Error sending message [message = RemoveShuffle(0)]
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:116)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
        at org.apache.spark.storage.BlockManagerMaster.removeShuffle(BlockManagerMaster.scala:115)
        at org.apache.spark.ContextCleaner.doCleanupShuffle(ContextCleaner.scala:202)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:168)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:161)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:161)
        at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1215)
        at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:154)
        at org.apache.spark.ContextCleaner$$anon$3.run(ContextCleaner.scala:67)
Caused by: akka.pattern.AskTimeoutException: Recipient[Actor[akka://sparkDriver/user/BlockManagerMaster#-1015002418]] had already been terminated.
        at akka.pattern.AskableActorRef$.ask$extension(AskSupport.scala:132)
        at org.apache.spark.rpc.akka.AkkaRpcEndpointRef.ask(AkkaRpcEnv.scala:299)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
        ... 10 more
15/11/06 10:02:13 WARN AkkaRpcEndpointRef: Error sending message [message = RemoveBroadcast(0,true)] in 1 attempts
akka.pattern.AskTimeoutException: Recipient[Actor[akka://sparkDriver/user/BlockManagerMaster#-1015002418]] had already been terminated.
        [stack trace identical to the RemoveBroadcast(1,true) AskTimeoutException above, elided]
15/11/06 10:02:16 WARN AkkaRpcEndpointRef: Error sending message [message = RemoveBroadcast(0,true)] in 2 attempts
akka.pattern.AskTimeoutException: Recipient[Actor[akka://sparkDriver/user/BlockManagerMaster#-1015002418]] had already been terminated.
        [stack trace identical, elided]
15/11/06 10:02:19 WARN AkkaRpcEndpointRef: Error sending message [message = RemoveBroadcast(0,true)] in 3 attempts
akka.pattern.AskTimeoutException: Recipient[Actor[akka://sparkDriver/user/BlockManagerMaster#-1015002418]] had already been terminated.
        [stack trace identical, elided]
15/11/06 10:02:22 ERROR ContextCleaner: Error cleaning broadcast 0
org.apache.spark.SparkException: Error sending message [message = RemoveBroadcast(0,true)]
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:116)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:78)
        at org.apache.spark.storage.BlockManagerMaster.removeBroadcast(BlockManagerMaster.scala:127)
        at org.apache.spark.broadcast.TorrentBroadcast$.unpersist(TorrentBroadcast.scala:228)
        at org.apache.spark.broadcast.TorrentBroadcastFactory.unbroadcast(TorrentBroadcastFactory.scala:45)
        at org.apache.spark.broadcast.BroadcastManager.unbroadcast(BroadcastManager.scala:66)
        at org.apache.spark.ContextCleaner.doCleanupBroadcast(ContextCleaner.scala:214)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:170)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1$$anonfun$apply$mcV$sp$2.apply(ContextCleaner.scala:161)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:161)
        at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1215)
        at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:154)
        at org.apache.spark.ContextCleaner$$anon$3.run(ContextCleaner.scala:67)
Caused by: akka.pattern.AskTimeoutException: Recipient[Actor[akka://sparkDriver/user/BlockManagerMaster#-1015002418]] had already been terminated.
        at akka.pattern.AskableActorRef$.ask$extension(AskSupport.scala:132)
        at org.apache.spark.rpc.akka.AkkaRpcEndpointRef.ask(AkkaRpcEnv.scala:299)
        at org.apache.spark.rpc.RpcEndpointRef.askWithRetry(RpcEndpointRef.scala:101)
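
If the full 250 mil rows don't actually need to live in an R data.frame, another option I'm considering is doing the heavy work in Spark and collecting only the (small) result, along these lines — the path and the grouping column are placeholders for my actual data:

    # Sketch only: the path and "patient_id" column are placeholders.
    df <- read.df(sqlContext, "/path/to/file.parquet", source = "parquet")
    per.group <- count(groupBy(df, "patient_id"))  # aggregate runs distributed
    result <- collect(per.group)                   # only the summary crosses into R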