Re: Spark interpreter Repl injection

2021-03-09 Thread Carlos Diogo
Thanks
I created the issue
Regards
Carlos

On Tue 9. Mar 2021 at 19:02, moon soo Lee  wrote:

> The PySpark interpreter has an 'intp' variable exposed in its repl
> environment (for internal use), and we can resolve a reference to the Spark
> interpreter from that 'intp' variable. However, the scala repl environment
> in the Spark interpreter doesn't expose any variable that is useful for
> finding the Spark interpreter itself, so I had to find a way in from the
> pyspark interpreter.
>
> z.interpret() doesn't look like it would cause any problems, in my opinion.
>
> Thanks,
> moon
>
>
>
>
> On Tue, Mar 9, 2021 at 8:54 AM Carlos Diogo  wrote:
>
>> Looks good Moon
>> Is there a specific reason why you needed the pyspark interpreter to
>> access the spark interpreter? Could the spark interpreter not
>> programmatically access itself (and the same for the pyspark interpreter)?
>>
>> Would the issue be to expose the z.interpret() method?
>>
>> Best regards
>> Carlos
>>
>> On Tue, Mar 9, 2021 at 5:10 PM moon soo Lee  wrote:
>>
>>> I see. If you want to specify a file, precode might not be the best
>>> option. I found a hacky way to do it: accessing the SparkInterpreter
>>> instance object from the PySparkInterpreter.
>>>
>>> %pyspark
>>> sparkIntpField = intp.getClass().getDeclaredField("sparkInterpreter")
>>> sparkIntpField.setAccessible(True)
>>> sparkIntp = sparkIntpField.get(intp)
>>> # run my scala code
>>> sparkIntp.interpret("val a=10", z.getInterpreterContext())
>>>
>>>
>>> See attached screenshot.
>>>
>>> [image: image.png]
>>>
>>> This is accessing internal variables outside the official API. So it may
>>> break at any time.
>>>
>>> I think it's better to expose the interpret() method through
>>> 'ZeppelinContext', so that inside a note,
>>>
>>> z.interpret(any_string)
>>>
>>> can work without accessing this method in a hacky way.
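If exposed, usage inside a note could look like this (hypothetical sketch of the proposed API; z.interpret is not an official method yet, and the file path is a placeholder):

```
%pyspark
# hypothetical: read common Scala functions from a file and run them
# through the proposed ZeppelinContext method
code = open("/path/to/common_functions.scala").read()
z.interpret(code)
```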
>>> Please feel free to file an issue.
>>>
>>> Thanks,
>>> moon
>>>
>>>
>>>
>>>
>>> On Mon, Mar 8, 2021 at 10:23 PM Carlos Diogo  wrote:
>>>
 Are you able to specify a file in the precode?
 For now my workaround is, from within the note and with the REST API,
 to add a paragraph with the code I want to inject (which can come from a
 file).
 It works OK, but with run-all or a schedule the code gets updated in the
 note while the old code still executes. Only on the next run does it take
 effect.
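A minimal sketch of that REST-API workaround, assuming Zeppelin's notebook endpoint for appending a paragraph (POST /api/notebook/{noteId}/paragraph); the base URL and note id below are placeholders:

```python
import json
from urllib import request

def add_paragraph(base_url, note_id, code):
    """Build a request that appends a new paragraph containing `code`
    to the note. The caller sends it with request.urlopen(req)."""
    body = json.dumps({"title": "injected code", "text": code}).encode()
    return request.Request(
        f"{base_url}/api/notebook/{note_id}/paragraph",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: inject a Scala snippet (it could equally be read from a file)
req = add_paragraph("http://localhost:8080", "2ABCDEFGH",
                    "%spark\nval a = 10")
```

Note the caveat from the thread: a paragraph added this way during run-all only takes effect on the next run.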

 On Mon 8. Mar 2021 at 22:48, moon soo Lee  wrote:

> Hi,
>
> How about precode?
> The "zeppelin.SparkInterpreter.precode" setting
> can run scala code.
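For illustration, the property holds a snippet of Scala that runs once when the interpreter starts; the value below is an invented example, not from the thread:

```
zeppelin.SparkInterpreter.precode = import org.apache.spark.sql.functions._; val appEnv = "dev"
```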
>
> Thanks,
> moon
>
>
> On Sat, Mar 6, 2021 at 4:51 AM Carlos Diogo  wrote:
>
>> That does not work if you want to have Scala code in a file (common
>> functions) which you want to invoke in the note.
>> The alternative is to compile the code and then add the jar, which
>> would be normal for an application.
>> But Zeppelin is about scripting, so this is a request I get very often
>> from the users, especially because z.run does not work properly most of
>> the time.
>> Carlos
>>
>> On Sat 6. Mar 2021 at 11:36, Jeff Zhang  wrote:
>>
>>> Why not copy the scala code into zeppelin and run the notebook
>>> directly?
>>>
>>> Carlos Diogo wrote on Sat, Mar 6, 2021 at 3:51 PM:
>>>
 Dear all
 I have been trying to find a way to inject Scala code (from a
 String) into the spark interpreter.
 In pyspark it is easy with the exec function.
 It should not be very difficult to access the note's scala repl
 interpreter, but I could not find a way. I was even able to create a
 new repl session, but then I could not bind the objects.
 Any tips?
 Thanks
 Thanks
 --
 Os meus cumprimentos / Best regards /  Mit freundlichen Grüße
 Carlos Diogo

>>>
>>>
>>> --
>>> Best Regards
>>>
>>> Jeff Zhang
>>>
>> --
>> Os meus cumprimentos / Best regards /  Mit freundlichen Grüße
>> Carlos Diogo
>>
> --
 Os meus cumprimentos / Best regards /  Mit freundlichen Grüße
 Carlos Diogo

>>>
>>
>> --
>> Os meus cumprimentos / Best regards /  Mit freundlichen Grüße
>> Carlos Diogo
>>
> --
Os meus cumprimentos / Best regards /  Mit freundlichen Grüße
Carlos Diogo

