[jira] [Commented] (HIVE-8722) Enhance InputSplitShims to extend InputSplitWithLocationInfo [Spark Branch]

2014-12-22 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14255668#comment-14255668
 ] 

Rui Li commented on HIVE-8722:
--

Hi [~brocknoland], I suppose this will be a little tricky for 
{{CombineHiveInputFormat}}, because after combination it's possible that only 
part of a split's blocks are in the cache. Maybe that's why 
{{CombineFileSplit}} doesn't implement {{InputSplitWithLocationInfo}}. Any 
ideas on this?
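
To illustrate the ambiguity, here's a rough sketch (purely hypothetical, the 
class and helper names are made up, not proposed code) of how per-block 
location info might be collapsed for a combined split. A host's in-memory flag 
only clearly makes sense when it caches every block it serves for the split:
{code}
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.mapred.SplitLocationInfo;

// Hypothetical helper, not actual Hive code: collapse per-block location
// info of a combined split into one SplitLocationInfo per host. Here the
// in-memory flag is set only when the host caches *every* block it serves
// for this split, which shows why the semantics get fuzzy once only part
// of the blocks are cached.
public class CombinedLocationInfoSketch {
  public static SplitLocationInfo[] merge(List<SplitLocationInfo[]> perBlockInfos) {
    Map<String, Boolean> allCachedOnHost = new LinkedHashMap<>();
    for (SplitLocationInfo[] blockInfos : perBlockInfos) {
      for (SplitLocationInfo info : blockInfos) {
        allCachedOnHost.merge(info.getLocation(), info.isInMemory(),
            (prev, cur) -> prev && cur);
      }
    }
    List<SplitLocationInfo> merged = new ArrayList<>();
    for (Map.Entry<String, Boolean> e : allCachedOnHost.entrySet()) {
      merged.add(new SplitLocationInfo(e.getKey(), e.getValue()));
    }
    return merged.toArray(new SplitLocationInfo[0]);
  }
}
{code}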

> Enhance InputSplitShims to extend InputSplitWithLocationInfo [Spark Branch]
> ---
>
> Key: HIVE-8722
> URL: https://issues.apache.org/jira/browse/HIVE-8722
> Project: Hive
> Issue Type: Sub-task
> Reporter: Jimmy Xiang
>
> We got the following exception in hive.log:
> {noformat}
> 2014-11-03 11:45:49,865 DEBUG rdd.HadoopRDD
> (Logging.scala:logDebug(84)) - Failed to use InputSplitWithLocations.
> java.lang.ClassCastException: Cannot cast
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat$CombineHiveInputSplit
> to org.apache.hadoop.mapred.InputSplitWithLocationInfo
> at java.lang.Class.cast(Class.java:3094)
> at 
> org.apache.spark.rdd.HadoopRDD.getPreferredLocations(HadoopRDD.scala:278)
> at 
> org.apache.spark.rdd.RDD$$anonfun$preferredLocations$2.apply(RDD.scala:216)
> at 
> org.apache.spark.rdd.RDD$$anonfun$preferredLocations$2.apply(RDD.scala:216)
> at scala.Option.getOrElse(Option.scala:120)
> at org.apache.spark.rdd.RDD.preferredLocations(RDD.scala:215)
> at 
> org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal(DAGScheduler.scala:1303)
> at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$2.apply$mcVI$sp(DAGScheduler.scala:1313)
> at 
> org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$getPreferredLocsInternal$2$$anonfun$apply$2.apply(DAGScheduler.scala:1312)
> {noformat}
> My understanding is that the split location info helps Spark execute tasks 
> more efficiently. It could help other execution engines too, so we should 
> consider enhancing InputSplitShim to implement InputSplitWithLocationInfo if 
> possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-8722) Enhance InputSplitShims to extend InputSplitWithLocationInfo [Spark Branch]

2014-12-20 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14254841#comment-14254841
 ] 

Brock Noland commented on HIVE-8722:


MAPREDUCE-5896 introduced {{InputSplitWithLocationInfo}} so that tasks can be 
scheduled where blocks are located in memory via HDFS caching. It seems we 
should implement this.
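
As a minimal sketch of what the interface adds (illustrative only, assuming 
Hadoop 2.5+ where the interface exists; the class name is made up): 
{{getLocations}} stays as before, while {{getLocationInfo}} additionally says 
whether a replica is cached in memory on each host.
{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.mapred.InputSplitWithLocationInfo;
import org.apache.hadoop.mapred.SplitLocationInfo;

// Hypothetical minimal split illustrating the MAPREDUCE-5896 contract.
public class CachedAwareSplitSketch implements InputSplitWithLocationInfo {
  private final String[] hosts;     // replica hosts
  private final boolean[] inMemory; // per-host HDFS cache flag

  public CachedAwareSplitSketch(String[] hosts, boolean[] inMemory) {
    this.hosts = hosts;
    this.inMemory = inMemory;
  }

  @Override
  public long getLength() {
    return 0L; // length omitted in this sketch
  }

  @Override
  public String[] getLocations() throws IOException {
    return hosts;
  }

  @Override
  public SplitLocationInfo[] getLocationInfo() throws IOException {
    SplitLocationInfo[] infos = new SplitLocationInfo[hosts.length];
    for (int i = 0; i < hosts.length; i++) {
      infos[i] = new SplitLocationInfo(hosts[i], inMemory[i]);
    }
    return infos;
  }

  @Override
  public void write(DataOutput out) throws IOException {
    // serialization omitted in this sketch
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    // deserialization omitted in this sketch
  }
}
{code}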

Brock 



[jira] [Commented] (HIVE-8722) Enhance InputSplitShims to extend InputSplitWithLocationInfo [Spark Branch]

2014-12-18 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14252859#comment-14252859
 ] 

Rui Li commented on HIVE-8722:
--

Hi [~jxiang], yes, I think data locality can have a dramatic impact on 
performance. I saw a nearly 2.5x difference in previous work (SPARK-1937).
But I don't think we have to make {{CombineHiveInputSplit}} an 
{{InputSplitWithLocationInfo}} to get location info: 
{{CombineHiveInputSplit.getLocations}} already gives us what we need. 
{{InputSplitWithLocationInfo}} is only an enhancement that makes cached 
replicas appear first in the location list (SPARK-1767).
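
To put it another way (a hypothetical illustration, not Spark's actual code), 
the location info only changes the preference order of the hosts, not the set 
of hosts itself:
{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.mapred.SplitLocationInfo;

// Hypothetical illustration: getLocations() gives an unordered list of
// hosts, while the richer location info lets a scheduler list hosts with
// in-memory (HDFS-cached) replicas ahead of the others.
public class PreferredHostsSketch {
  public static List<String> preferCached(SplitLocationInfo[] infos) {
    List<String> cachedFirst = new ArrayList<>();
    List<String> rest = new ArrayList<>();
    for (SplitLocationInfo info : infos) {
      (info.isInMemory() ? cachedFirst : rest).add(info.getLocation());
    }
    cachedFirst.addAll(rest);
    return cachedFirst;
  }
}
{code}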



[jira] [Commented] (HIVE-8722) Enhance InputSplitShims to extend InputSplitWithLocationInfo [Spark Branch]

2014-12-18 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14252845#comment-14252845
 ] 

Jimmy Xiang commented on HIVE-8722:
---

Without the location info, it works too. I was wondering whether the location 
info helps Spark work more efficiently?



[jira] [Commented] (HIVE-8722) Enhance InputSplitShims to extend InputSplitWithLocationInfo [Spark Branch]

2014-12-18 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14252804#comment-14252804
 ] 

Rui Li commented on HIVE-8722:
--

I think Spark doesn't require the input split to be an 
{{InputSplitWithLocationInfo}}:
{code}
val locs: Option[Seq[String]] = HadoopRDD.SPLIT_INFO_REFLECTIONS match {
  case Some(c) =>
    try {
      val lsplit = c.inputSplitWithLocationInfo.cast(hsplit)
      val infos = c.getLocationInfo.invoke(lsplit).asInstanceOf[Array[AnyRef]]
      Some(HadoopRDD.convertSplitLocationInfo(infos))
    } catch {
      case e: Exception =>
        logDebug("Failed to use InputSplitWithLocations.", e)
        None
    }
  case None => None
}
locs.getOrElse(hsplit.getLocations.filter(_ != "localhost"))
{code}
If it fails to use {{InputSplitWithLocationInfo}}, it falls back to calling 
the {{getLocations}} method, and {{CombineHiveInputSplit}} delegates that to 
{{CombineFileSplit.getLocations}}.
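
For reference, a minimal sketch of that fallback path (an illustrative wrapper 
only, not the actual Hive source): a combined split can expose the union of 
its blocks' hosts just by forwarding to the wrapped {{CombineFileSplit}}:
{code}
import java.io.IOException;

import org.apache.hadoop.mapred.lib.CombineFileSplit;

// Hypothetical wrapper: no in-memory/cached information here, just the
// host names of all files packed into the combined split, which is the
// path Spark falls back to when the split does not implement
// InputSplitWithLocationInfo.
public class CombinedSplitLocations {
  private final CombineFileSplit delegate;

  public CombinedSplitLocations(CombineFileSplit delegate) {
    this.delegate = delegate;
  }

  public String[] getLocations() throws IOException {
    return delegate.getLocations();
  }
}
{code}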



[jira] [Commented] (HIVE-8722) Enhance InputSplitShims to extend InputSplitWithLocationInfo [Spark Branch]

2014-12-18 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14251643#comment-14251643
 ] 

Rui Li commented on HIVE-8722:
--

Never mind my last comment. That's because I was using hadoop-2.4, which 
doesn't have that class.
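
In case others hit the same thing, a quick way to check whether the class is 
on the classpath (a hypothetical helper, mirroring Spark's reflection-based 
check):
{code}
// Hypothetical availability check: InputSplitWithLocationInfo only exists
// since Hadoop 2.5.0 (MAPREDUCE-5896), so on a hadoop-2.4 classpath the
// lookup fails and callers must fall back to the plain getLocations() path.
public class SplitLocationInfoAvailability {
  public static boolean isAvailable() {
    try {
      Class.forName("org.apache.hadoop.mapred.InputSplitWithLocationInfo");
      return true;
    } catch (ClassNotFoundException e) {
      return false;
    }
  }
}
{code}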



[jira] [Commented] (HIVE-8722) Enhance InputSplitShims to extend InputSplitWithLocationInfo [Spark Branch]

2014-12-18 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14251557#comment-14251557
 ] 

Rui Li commented on HIVE-8722:
--

I got this exception which also seems related:
{noformat}
2014-12-18 12:25:18,399 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) - 14/12/18 12:25:18 DEBUG rdd.HadoopRDD: 
SplitLocationInfo and other new Hadoop classes are unavailable. Using the older 
Hadoop location info code.
2014-12-18 12:25:18,399 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) - java.lang.ClassNotFoundException: 
org.apache.hadoop.mapred.InputSplitWithLocationInfo
2014-12-18 12:25:18,399 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
java.net.URLClassLoader$1.run(URLClassLoader.java:366)
2014-12-18 12:25:18,399 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
java.net.URLClassLoader$1.run(URLClassLoader.java:355)
2014-12-18 12:25:18,399 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
java.security.AccessController.doPrivileged(Native Method)
2014-12-18 12:25:18,399 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
java.net.URLClassLoader.findClass(URLClassLoader.java:354)
2014-12-18 12:25:18,399 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
java.lang.ClassLoader.loadClass(ClassLoader.java:425)
2014-12-18 12:25:18,399 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
2014-12-18 12:25:18,399 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
java.lang.ClassLoader.loadClass(ClassLoader.java:358)
2014-12-18 12:25:18,399 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at java.lang.Class.forName0(Native 
Method)
2014-12-18 12:25:18,399 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
java.lang.Class.forName(Class.java:190)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
org.apache.spark.rdd.HadoopRDD$SplitInfoReflections.<init>(HadoopRDD.scala:381)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
org.apache.spark.rdd.HadoopRDD$.liftedTree1$1(HadoopRDD.scala:391)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
org.apache.spark.rdd.HadoopRDD$.<init>(HadoopRDD.scala:390)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
org.apache.spark.rdd.HadoopRDD$.<clinit>(HadoopRDD.scala)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
org.apache.spark.rdd.HadoopRDD.getInputFormat(HadoopRDD.scala:179)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:197)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:206)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
scala.Option.getOrElse(Option.scala:120)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
org.apache.spark.rdd.RDD.partitions(RDD.scala:204)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:206)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
scala.Option.getOrElse(Option.scala:120)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(435)) -at 
org.apache.spark.rdd.RDD.partitions(RDD.scala:204)
2014-12-18 12:25:18,400 INFO  [stderr-redir-1]: client.Spa