I think it's really a mortal blow to Livy for the REPL use case. What I can do, I think, is to monitor Spark metrics: when the driver's memory usage reaches a high level, I will isolate the session.
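The monitoring idea above could be sketched roughly as follows. The object name, the 85% threshold, and the polling logic are assumptions for illustration, not anything from this thread; the isolation step uses Livy's `DELETE /sessions/{id}` REST call.

```scala
// Hypothetical watchdog sketch: decide from driver heap usage whether a
// Livy session looks unhealthy, and if so delete it via Livy's REST API.
// The threshold value is an assumption; tune it for your workload.
object DriverMemoryWatchdog {
  // Fraction of max heap above which the session is considered unhealthy.
  val MemoryThreshold = 0.85

  // Pure decision helper: should this session be isolated?
  def shouldIsolate(usedBytes: Long, maxBytes: Long): Boolean =
    maxBytes > 0 && usedBytes.toDouble / maxBytes >= MemoryThreshold

  // Isolation step: Livy exposes DELETE /sessions/{id} to kill a session.
  def isolateSession(livyUrl: String, sessionId: Int): Unit = {
    val conn = new java.net.URL(s"$livyUrl/sessions/$sessionId")
      .openConnection().asInstanceOf[java.net.HttpURLConnection]
    conn.setRequestMethod("DELETE")
    println(s"DELETE /sessions/$sessionId -> ${conn.getResponseCode}")
  }
}
```

The memory figures themselves would come from whatever metrics sink the cluster already publishes driver JVM heap usage to; only the threshold decision and the session delete are sketched here.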
2018-11-14
lk_hadoop
From: "Harsch, Tim"
Sent: 2018-11-14 05:52
Subject: Re
While it's true that LIVY-424 creates a session leak due to a REPL leak in Scala, it's not the only thing that can. I've run hundreds of simple Scala commands and the leak is only mild/moderate. However, some Scala commands can be really problematic. For instance:
import org.apache.spark.sql._
ru
83082023_0636/container_e39_1541483082023_0636_01_01/stderr
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
2018-11-12
lk_hadoop
From: "Rabe, Jens"
Sent: 2018-11-12 14:55
Subject: RE: about LIV
If you're hitting the problem of LIVY-424, this is exactly a Scala REPL issue, not a Livy issue, and it is hard to fix on the Livy side.
lk_hadoop wrote on Mon, Nov 12, 2018, at 9:37 AM:
> hi,all:
> I meet this issue https://issues.apache.org/jira/browse/LIVY-424
> , anybody know how to resolve it?
> 2018-
Do you run Spark in local mode or on a cluster? If on a cluster, try increasing
executor memory.
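As a concrete sketch of that advice (the values are placeholders, not tuned recommendations): Livy accepts `driverMemory` and `executorMemory` in the session-creation request, so a session with more executor memory could be requested with a `POST /sessions` body like:

```json
{
  "kind": "spark",
  "driverMemory": "4g",
  "executorMemory": "8g",
  "numExecutors": 4
}
```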
From: lk_hadoop
Sent: Monday, November 12, 2018 7:53 AM
To: user ; lk_hadoop
Subject: Re: about LIVY-424
I'm using livy-0.5.0 with Spark 2.3.0. I started a session with 4 GB of memory for the driver, and I ran this code several times:
var tmp1 = spark.sql("use tpcds_bin_partitioned_orc_2")
var tmp2 = spark.sql("select count(1) from tpcds_bin_partitioned_orc_2.store_sales").show
The table has 5,760,749 rows of data.