Hi Tim,

On Fri, Dec 28, 2018 at 4:49 PM Tim Mackinnon <tim@testit.works> wrote:

> Hi Jan - reading through your docs, this looks very promising as I hadn’t
> got as far as using the api gateway - I was just connecting to the internal
> Alexa service.
>
> One thing I didn’t quite understand - you mention specifying a HANDLER as
> an env variable, but your example description doesn’t seem to show where
> that is set? Is it in the bootstrap file (also is that bootstrap.sh or just
> simply bootstrap?
>

Until now I have configured my Lambda functions mostly via the AWS
Console. When you create your function you must upload your code, specify
the type of runtime and give the name of the handler. For a Java runtime
this is the name of the main class, so for the Smalltalk implementation I
chose something similar. If you want, you can leave this item empty or set
it to "Provided" and just hardcode the startup class in the image or in
the bootstrap file.
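
For what it's worth: the file must be called simply "bootstrap", without
an extension, and it needs the executable bit set, otherwise Lambda
refuses to start the custom runtime. Lambda also passes whatever you enter
in the handler field to the bootstrap process in the _HANDLER environment
variable. A minimal sketch of such a bootstrap file, assuming the runtime
layer unpacks the VM under /opt/pharo-vm and the function zip contains the
image (the VM path and the image name are just placeholders):

  #!/bin/sh
  # Layers are unpacked under /opt; adjust the VM path to however your
  # layer zip is laid out.
  cd "$LAMBDA_TASK_ROOT"
  # Hand the handler name configured in the console (exposed by Lambda
  # as _HANDLER) to the image as a plain command line argument.
  exec /opt/pharo-vm/pharo pharo-lambda.image "$_HANDLER"

The image can then pick the handler class up from the command line
arguments or from the environment, whichever feels more natural.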

> And are there any file attributes to set on that file?). It strikes me that
> rather than using an env variable - why don’t you specify the handler class
> when you invoke the image as the last command line option? (That is what I
> did in my example - or is it faster to query an env variable? Personally I
> find it easier to see it more explicitly on the command line).
>
> I also notice you include NeoJson - did you need that? I found that the
> default STON reader was more than adequate for reading the JSON that was
> sent over (and so it’s one less pre-req).
>
With NeoJSON you can create mappings to serialize/deserialize custom
Smalltalk classes to and from JSON. I use this functionality in the
CloudWatch-Logs interface to handle the request and response objects. I
don't think this is possible with STON.
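
As a quick illustration, a mapping can be as simple as the sketch below
(the LambdaResponse class and its instance variables are placeholders, not
the actual toolbox classes):

  | response json copy |
  response := LambdaResponse new.
  "Write the object as a JSON string by mapping its instance variables."
  json := String streamContents: [ :stream |
      (NeoJSONWriter on: stream)
          mapInstVarsFor: LambdaResponse;
          nextPut: response ].
  "Read it back as a LambdaResponse instead of a plain Dictionary."
  copy := (NeoJSONReader on: json readStream)
      mapInstVarsFor: LambdaResponse;
      nextAs: LambdaResponse.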


> I haven’t yet fully understood the runtime layer - is this simply a zip
> file with the vm + files and without the image? Previously I had to add all
> of that in the zip I uploaded for each function, but this sounds like it
> simplifies things a lot. Do you have a script you used to create that - I
> ask as I found that trimming down the size of that made a difference to
> load times for Alexa (eg there are lots of sound and graphics dll’s you can
> remove - which I have in my script, and possibly, I could add to what you
> have done).
>

Yes, the layers are a great new feature. My layer just contains a standard
64-bit Pharo VM without any additions or removals. I made this one by
hand; a script would be a better idea! You can remove the shared libraries
that you don't need. I think the most important thing is that the image
does not load/initialize any unnecessary libraries.
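
Something along these lines could be a starting point for such a script
(an untested sketch; the zeroconf URL fetches the 64-bit Pharo 7 VM and
the layer name is just an example):

  #!/bin/sh
  # Build a runtime layer zip that contains only the Pharo VM.
  set -e
  rm -rf layer && mkdir layer && cd layer
  # Download the 64-bit Pharo 7 VM with the zeroconf script.
  curl -L https://get.pharo.org/64/vm70 | bash
  # This is the spot to delete the sound/graphics plugins Tim mentions;
  # the exact file names depend on the VM build, so I leave it as a comment.
  zip -r ../pharo-runtime.zip .
  cd ..
  aws lambda publish-layer-version \
      --layer-name pharo-vm \
      --zip-file fileb://pharo-runtime.zip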

>
> Equally, I also figured out the command line fu for uploading and
> registering your Lambda so it works in a CI - this might also be worthy of
> inclusion.
>
Yes, a CI job that can build a 'deployment' image and that can upload this
image to AWS Lambda would be a great feature!
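
Something like this in the CI job should do the trick, assuming the build
produces the image and the bootstrap file and the function already exists
("my-pharo-function" and the ARNs below are placeholders):

  # Package the bootstrap file and the image and push the new code.
  zip -j function.zip bootstrap pharo-lambda.image
  aws lambda update-function-code \
      --function-name my-pharo-function \
      --zip-file fileb://function.zip

The one-time registration against the custom runtime and the VM layer
would look roughly like this:

  aws lambda create-function \
      --function-name my-pharo-function \
      --runtime provided \
      --handler MyHandlerClass \
      --role arn:aws:iam::123456789012:role/lambda-basic-execution \
      --layers arn:aws:lambda:eu-west-1:123456789012:layer:pharo-vm:1 \
      --zip-file fileb://function.zip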


> Anyway, I'll give it a go and see how the results compare - it was
> surprisingly fast using the js shim - but this seems like a much better
> solution.
>
> Thanks for sharing - it’s an executing world.
>
> Tim
>
Thanks for your comments!
Jan.


>
>
>
> Sent from my iPhone
> On Fri, 28 Dec 2018, at 11:35 AM, Jan van de Sandt wrote:
>
> Hi Tim,
>
> Yes, I read that you got Pharo working via the Javascript runtime. It
> should now be much easier and faster.
>
> I still have to figure out the best way to create a deployment image. With
> the new bootstrap/modular setup of Pharo 7 it should be possible to create
> a lean-and-mean runtime image that can run in the cheapest 128 MB max RAM
> configuration.
>
> Jan.
>
> On Thu, Dec 27, 2018 at 2:18 PM Tim Mackinnon <tim@testit.works> wrote:
>
> Cool - I was using a JS shim and had asked AWS many times why they
> couldn’t open it up wider...
>
> Now I’m back from my travels I’ll reincarnate my previous work and see how
> it works with this. I was looking forward to doing more with Lambda, so
> this is great timing.
>
>  Tim
>
> Sent from my iPhone
>
> On 27 Dec 2018, at 10:32, Jan van de Sandt <jvdsa...@gmail.com> wrote:
>
> Hi,
>
> Last month Amazon extended their serverless runtime platform AWS Lambda
> with support for custom runtimes. I created a Pharo Lambda Runtime so now
> we can implement Lambda functions in Smalltalk and easily deploy them on
> the Lambda platform. Lambda has quite a large "free-tier", more than enough
> to do some experiments and to host small applications for free.
>
> See the GitHub project for more details
> https://github.com/jvdsandt/pharo-aws-toolbox
>
> Cheers,
> Jan.
>
>
>
