+1 for not starting that war without data on usage.
Paul
> On 25 May 2017, at 10:56, Partridge, Lucas (GE Aviation)
> wrote:
>
> Some great ideas here.
>
> I’m just a bit concerned about the most popular interpreters being listed as
> Spark, JDBC and Python. Is that based on real usage data?
>
> <https://github.com/apache/zeppelin/blob/v0.7.1/python/src/main/resources/python/zeppelin_python.py#L58>
> [2] https://issues.apache.org/jira/browse/ZEPPELIN
> <https://issues.apache.org/jira/browse/ZEPPELIN>
>
> On Fri, Apr 21, 2017 at 5:10 AM Paul-Armand Verhaegen wrote:
As a workaround, I now issue z.max_result = 2000 to increase the size of the
returned CSV, and that works fine.
Thanks,
Paul
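
The workaround above can be sketched outside a notebook. Note that `ZeppelinContextSketch` below is a hypothetical stand-in for Zeppelin's real PyZeppelinContext (whose `max_result` default is set in zeppelin_python.py, linked later in the thread), written only to illustrate how raising the cap changes what gets rendered:

```python
# Minimal sketch of a row cap like Zeppelin's z.max_result.
# ZeppelinContextSketch is NOT the real Zeppelin class; it only
# mimics the truncation behaviour discussed in this thread.
class ZeppelinContextSketch:
    def __init__(self, max_result=1000):
        # 1000 mirrors the default cap the original poster ran into
        self.max_result = max_result

    def show(self, rows):
        # the display layer only emits the first max_result rows
        return rows[: self.max_result]


z = ZeppelinContextSketch()
data = list(range(1500))
print(len(z.show(data)))  # 1000 -- truncated at the default cap
z.max_result = 2000       # the workaround from this email
print(len(z.show(data)))  # 1500 -- all rows now fit under the cap
```

In a real notebook the equivalent is simply running `z.max_result = 2000` in a `%python` or `%spark.pyspark` paragraph before rendering the dataframe.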
> On 21 Apr 2017, at 13:48, Paul-Armand Verhaegen
> wrote:
>
>
> Thanks for your reply. Based on your suggestions I've edited
> conf/zeppelin-env
are properly sourced into the shell, and they are.
Paul
> On 20 Apr 2017, at 23:28, So good <33146...@qq.com> wrote:
>
> The zeppelin configuration file has settings for the maximum number of rows
> and the maximum size of the file.
>
> ------ Original Message ------
Hi,
I have problems getting zeppelin 0.7.1 (in %python or %spark.pyspark) to return
more than the default 1000 rows (from a pandas dataframe) in a visualisation or
CSV download.
I tried increasing the values of all maxResults settings in interpreter.json,
but to no avail (and restarted zeppelin).
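
For reference, the maxResult knobs mentioned here sit inside each interpreter's properties block in conf/interpreter.json. The fragment below is a sketch of the 0.7.x layout, not a verbatim copy of any particular file, so the surrounding structure and values may differ in your installation; Zeppelin must be restarted after editing it:

```json
{
  "properties": {
    "zeppelin.spark.maxResult": "1000",
    "zeppelin.python.maxResult": "1000"
  }
}
```

These defaults match the 1000-row truncation described above; as the thread notes, setting z.max_result in a paragraph is a per-session alternative to editing the file.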