The existing callback does take care of it: within the DAGScheduler there is
a finally block to ensure the callbacks are executed.

      try {
        val result = job.func(taskContext, rdd.iterator(split, taskContext))
        job.listener.taskSucceeded(0, result)
      } finally {
        taskContext.executeOnCompleteCallbacks()
      }
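
To illustrate that guarantee in miniature, here is a simplified stand-in (not the actual DAGScheduler/TaskContext code; the names mirror the real API but the classes below are hypothetical): callbacks registered on the context run in a finally block, so cleanup fires whether the task body succeeds or throws.

```scala
import scala.collection.mutable.ArrayBuffer

// Simplified model of TaskContext's completion callbacks (illustrative only;
// this is not the Spark implementation).
class MiniTaskContext {
  private val callbacks = ArrayBuffer[() => Unit]()
  def addOnCompleteCallback(f: () => Unit): Unit = callbacks += f
  def executeOnCompleteCallbacks(): Unit = callbacks.foreach(_.apply())
}

// Mirrors the pattern quoted above: the finally block guarantees the
// callbacks run even when body throws.
def runTask[T](ctx: MiniTaskContext)(body: => T): T = {
  try {
    body
  } finally {
    ctx.executeOnCompleteCallbacks()
  }
}
```

So a callback like `() => closeIfNeeded()` registered on the context runs even when `job.func` itself throws, which is why separate exception trapping around the statement creation is redundant.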

So I have removed that exception-handling code from the PR and updated the
JIRA.
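
For reference, a corrected close sequence for the check discussed below would presumably test each resource's own state. This is a hedged sketch only, using fake stand-ins for the `java.sql` types so it is self-contained; it is not the actual JdbcRDD patch.

```scala
// Minimal fakes standing in for java.sql.Statement/Connection so the sketch
// is self-contained (the real code uses the JDBC interfaces).
class FakeStatement  { var closed = false; def isClosed: Boolean = closed; def close(): Unit = closed = true }
class FakeConnection { var closed = false; def isClosed: Boolean = closed; def close(): Unit = closed = true }

// Hypothetical corrected cleanup: each resource is closed based on its own
// state, so an already-closed statement no longer prevents closing the
// connection (the quoted code below checks stmt.isClosed before conn.close).
def closeIfNeeded(stmt: FakeStatement, conn: FakeConnection): Unit = {
  try {
    if (null != stmt && !stmt.isClosed) stmt.close()
  } catch {
    case e: Exception => println(s"Exception closing statement: $e")
  }
  try {
    if (null != conn && !conn.isClosed) conn.close()
  } catch {
    case e: Exception => println(s"Exception closing connection: $e")
  }
}
```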


2014-08-05 14:01 GMT-07:00 Reynold Xin <r...@databricks.com>:

> Yes it is. I actually commented on it:
> https://github.com/apache/spark/pull/1792/files#r15840899
>
>
>
> On Tue, Aug 5, 2014 at 1:58 PM, Cody Koeninger <c...@koeninger.org> wrote:
>
>> The stmt.isClosed just looks like stupidity on my part, no secret
>> motivation :)  Thanks for noticing it.
>>
>> As for the leaking in the case of malformed statements, isn't that
>> addressed by
>>
>> context.addOnCompleteCallback{ () => closeIfNeeded() }
>>
>> or am I misunderstanding?
>>
>>
>> On Tue, Aug 5, 2014 at 3:15 PM, Reynold Xin <r...@databricks.com> wrote:
>>
>> > Thanks. Those are definitely great problems to fix!
>> >
>> >
>> >
>> > On Tue, Aug 5, 2014 at 1:11 PM, Stephen Boesch <java...@gmail.com>
>> wrote:
>> >
>> > > Thanks Reynold, Ted Yu did mention it offline and I put in a JIRA
>> > > already. Another small concern: there appears to be no exception
>> > > handling from the creation of the prepared statement (line 74) through
>> > > to the executeQuery (line 86). In case of error/exception it would
>> > > seem to be leaking connections (/statements). If that were the case
>> > > then I would include a small patch for the exception trapping in that
>> > > section of code as well. BTW, I was looking at this code for another
>> > > reason, not intending to be a bother ;)
>> > >
>> > >
>> > >
>> > >
>> > > 2014-08-05 13:03 GMT-07:00 Reynold Xin <r...@databricks.com>:
>> > >
>> > > I'm pretty sure it is an oversight. Would you like to submit a pull
>> > >> request to fix that?
>> > >>
>> > >>
>> > >>
>> > >> On Tue, Aug 5, 2014 at 12:14 PM, Stephen Boesch <java...@gmail.com>
>> > >> wrote:
>> > >>
>> > >>> Within its compute.close method, the JdbcRDD class has this
>> > >>> interesting logic for closing the JDBC connection:
>> > >>>
>> > >>>
>> > >>>       try {
>> > >>>         if (null != conn && ! stmt.isClosed()) conn.close()
>> > >>>         logInfo("closed connection")
>> > >>>       } catch {
>> > >>>         case e: Exception => logWarning("Exception closing connection", e)
>> > >>>       }
>> > >>>
>> > >>> Notice that the second check is on stmt having been closed, not on
>> > >>> the connection.
>> > >>>
>> > >>> I would wager this was not a simple oversight and that there was
>> > >>> some motivation for this logic; curious if anyone would be able to
>> > >>> shed some light? My particular interest is that I have written
>> > >>> custom ORMs in JDBC since the late '90s and never did it this way.
>> > >>>
>> > >>
>> > >>
>> > >
>> >
>>
>
>
