Re: PostgreSQL JDBC Connections

2017-01-05 Thread t p
YMMV and I don’t think my approach will work for your use case.

Here is a suggestion based on what I’ve done. In the first paragraph, you can 
register tables with code like the following:

%spark
val example = sqlContext.read.format("jdbc").options(
  Map("url" -> "jdbc:postgresql://localhost:5432/db_name",
      "driver" -> "org.postgresql.Driver",
      "user" -> "user_name",
      "password" -> "password",
      "dbtable" -> "example")).load()

example.createOrReplaceTempView("example")


Subsequently, you can query the temp view with %spark.sql, which is limited to 
the SQL syntax/semantics supported by Spark. If you rely on Postgres-specific 
features or advanced SQL functions/syntax, this approach may not work for you.

%spark.sql
select * from example

The rest of the workflow (reports, visuals, exports to CSV) will look identical.
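Registering temp tables this way also lets SQL results drive dynamic forms, as 
mentioned earlier in the thread. A minimal sketch, assuming Zeppelin’s 
ZeppelinContext (`z`) is available in the %spark paragraph, and assuming the 
`example` view above has a hypothetical `category` column:

```scala
%spark
// Illustrative sketch only: populate a drop-down from query results,
// then use the selection to drive the next query.
val categories = sqlContext.sql("select distinct category from example")
  .collect()
  .map(r => (r.getString(0), r.getString(0)))
  .toSeq

// z.select renders a drop-down in the paragraph; the chosen value
// parameterizes the follow-up query.
val chosen = z.select("Category", categories).toString
val filtered = sqlContext.sql(s"select * from example where category = '$chosen'")
z.show(filtered)
```

This is the "use SQL results to create a new drop down to drive the next page" 
pattern; column and form names here are placeholders.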



> On Jan 5, 2017, at 5:30 PM, Benjamin Kim  wrote:
> 
> We are using the JDBC interpreter. The business analysts only know SQL and 
> run ad-hoc queries for their report exports to CSV.
> 
> Cheers,
> Ben
> 
> 
>> On Jan 5, 2017, at 2:21 PM, t p  wrote:
>> 
>> Are you using JDBC or the PSQL interpreter? I had encountered something 
>> similar while using the PSQL interpreter and I had to restart Zeppelin. 
>> 
>> My experience using PSQL (Postgresql, HAWQ) was not as good as using 
>> spark/scala wrappers (JDBC data source) to connect via JDBC and then 
>> register temp tables. This approach allowed me to work with dynamic forms in 
>> a more meaningful way e.g. use SQL results to create a new drop down to 
>> drive the next page etc…
>> 
>> 
>> 
>>> On Jan 5, 2017, at 12:57 PM, Benjamin Kim  wrote:
>>> 
>>> We are getting “out of shared memory” errors when multiple users are 
>>> running SQL queries against our PostgreSQL DB either simultaneously or 
>>> throughout the day. When this happens, Zeppelin 0.6.0 becomes unresponsive 
>>> for any more SQL queries. It looks like this is being caused by too many 
>>> locks being taken and not released, transactions never closing, and/or 
>>> connections never closing. Has anyone encountered such an issue with 
>>> Zeppelin 0.6.0? If so, is there a solution for it?
>>> 
>>> Thanks,
>>> Ben
>> 
> 



Re: PostgreSQL JDBC Connections

2017-01-05 Thread Benjamin Kim
We are using the JDBC interpreter. The business analysts only know SQL and run 
ad-hoc queries for their report exports to CSV.

Cheers,
Ben


> On Jan 5, 2017, at 2:21 PM, t p  wrote:
> 
> Are you using JDBC or the PSQL interpreter? I had encountered something 
> similar while using the PSQL interpreter and I had to restart Zeppelin. 
> 
> My experience using PSQL (Postgresql, HAWQ) was not as good as using 
> spark/scala wrappers (JDBC data source) to connect via JDBC and then register 
> temp tables. This approach allowed me to work with dynamic forms in a more 
> meaningful way e.g. use SQL results to create a new drop down to drive the 
> next page etc…
> 
> 
> 
>> On Jan 5, 2017, at 12:57 PM, Benjamin Kim  wrote:
>> 
>> We are getting “out of shared memory” errors when multiple users are running 
>> SQL queries against our PostgreSQL DB either simultaneously or throughout 
>> the day. When this happens, Zeppelin 0.6.0 becomes unresponsive for any more 
>> SQL queries. It looks like this is being caused by too many locks being 
>> taken and not released, transactions never closing, and/or connections never 
>> closing. Has anyone encountered such an issue with Zeppelin 0.6.0? If so, 
>> is there a solution for it?
>> 
>> Thanks,
>> Ben
> 



Re: PostgreSQL JDBC Connections

2017-01-05 Thread t p
Are you using JDBC or the PSQL interpreter? I had encountered something similar 
while using the PSQL interpreter and I had to restart Zeppelin. 

My experience using PSQL (Postgresql, HAWQ) was not as good as using 
spark/scala wrappers (JDBC data source) to connect via JDBC and then register 
temp tables. This approach allowed me to work with dynamic forms in a more 
meaningful way e.g. use SQL results to create a new drop down to drive the next 
page etc…



> On Jan 5, 2017, at 12:57 PM, Benjamin Kim  wrote:
> 
> We are getting “out of shared memory” errors when multiple users are running 
> SQL queries against our PostgreSQL DB either simultaneously or throughout the 
> day. When this happens, Zeppelin 0.6.0 becomes unresponsive for any more SQL 
> queries. It looks like this is being caused by too many locks being taken and 
> not released, transactions never closing, and/or connections never closing. 
> Has anyone encountered such an issue with Zeppelin 0.6.0? If so, is there 
> a solution for it?
> 
> Thanks,
> Ben



Re: JDBC Connections

2016-10-18 Thread Hyung Sung Shim
Hello.
AFAIK the connections are not closed until the JDBC interpreter is restarted,
so https://github.com/apache/zeppelin/pull/1396 uses a connection pool to
control sessions.
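The idea behind pooling is to cap and recycle connections instead of leaking 
one per session. A minimal sketch of the pattern using Apache Commons DBCP; 
this is illustrative only, not the exact code in the PR, and the URL and 
credentials are placeholders:

```scala
import org.apache.commons.dbcp2.BasicDataSource

// A shared pool that caps concurrent Postgres connections and
// recycles idle ones, rather than opening a new connection per
// query and never closing it.
val pool = new BasicDataSource()
pool.setDriverClassName("org.postgresql.Driver")
pool.setUrl("jdbc:postgresql://localhost:5432/db_name")
pool.setUsername("user_name")
pool.setPassword("password")
pool.setMaxTotal(10)             // hard cap on open connections
pool.setMaxIdle(2)               // shrink the pool when idle

val conn = pool.getConnection()  // borrow a connection from the pool
try {
  val rs = conn.createStatement().executeQuery("select 1")
  while (rs.next()) println(rs.getInt(1))
} finally {
  conn.close()                   // returns the connection to the pool
}
```

With a pool, `close()` hands the connection back instead of tearing it down, 
so the database never sees more than `maxTotal` concurrent sessions.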


2016-10-19 2:43 GMT+09:00 Benjamin Kim :

> We are using Zeppelin 0.6.0 as a self-service for our clients to query our
> PostgreSQL databases. We are noticing that the connections are not closing
> after each one of them is done. What is the normal operating procedure to
> have these connections close when idle? Our scope for the JDBC interpreter
> is “shared”, which I thought would make 1 connection for all notebooks. It
> would seem that I am wrong. Anyone have any ideas on what would help?
>
> Thanks,
> Ben
>
>


JDBC Connections

2016-10-18 Thread Benjamin Kim
We are using Zeppelin 0.6.0 as a self-service for our clients to query our 
PostgreSQL databases. We are noticing that the connections are not closing 
after each one of them is done. What is the normal operating procedure to have 
these connections close when idle? Our scope for the JDBC interpreter is 
“shared”, which I thought would make 1 connection for all notebooks. It would 
seem that I am wrong. Anyone have any ideas on what would help?

Thanks,
Ben
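While tracking this down, PostgreSQL’s own catalog views can show which 
backends and locks are accumulating. For example (standard PostgreSQL views; 
some columns require superuser or same-role privileges):

```sql
-- Who is connected, what state are they in, and since when?
select pid, usename, state, query_start, state_change
from pg_stat_activity
order by state_change;

-- How many locks is each backend holding?
select pid, count(*) as locks_held
from pg_locks
group by pid
order by locks_held desc;
```

A pile of sessions stuck in "idle in transaction", or one pid holding many 
locks, would point at connections/transactions that are never closed.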