2019-07-24 05:46:49 UTC - Adam Varsano: Hi,

What if I upload the Go code without compiling it? Then we can add prewarm actions - should I do that?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563947209100900?thread_ts=1563895719.099200&cid=C3TPCAQG1
----
2019-07-24 05:49:33 UTC - chetanm: Can you drop a mail to 
<mailto:dev@openwhisk.apache.org|dev@openwhisk.apache.org> with details? Then 
others can also share their thoughts on possible solutions
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563947373101100?thread_ts=1563895719.099200&cid=C3TPCAQG1
----
2019-07-24 10:06:45 UTC - Adam Varsano: sure
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563962805101500?thread_ts=1563895719.099200&cid=C3TPCAQG1
----
2019-07-24 11:15:10 UTC - Roberto Santiago: Seeing some intermittent problems 
on my k8s install.  I looked into the logs of the pods and found this in the 
CouchDB logs.  We are running CouchDB inside k8s rather than externally as is 
suggested.  Nonetheless, I am trying to understand what this error means:
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563966910103100
----
2019-07-24 11:15:12 UTC - Roberto Santiago: ```[error] 2019-07-24T11:07:10.454399Z couchdb@couchdb0 <0.263.0> -------- Could not get design docs for <<"shards/a0000000-bfffffff/verifytestdb.1561574453">> error:{database_does_not_exist,[{mem3_shards,load_shards_from_db,"verifytestdb",[{file,"src/mem3_shards.erl"},{line,395}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,for_db,2,[{file,"src/mem3_shards.erl"},{line,54}]},{fabric_view_all_docs,go,5,[{file,"src/fabric_view_all_docs.erl"},{line,24}]},{couch_db,'-get_design_docs/1-fun-0-',1,[{file,"src/couch_db.erl"},{line,627}]}]}
[error] 2019-07-24T11:07:10.454609Z couchdb@couchdb0 emulator -------- Error in process <0.22511.384> on node couchdb@couchdb0 with exit value: {database_does_not_exist,[{mem3_shards,load_shards_from_db,"verifytestdb",[{file,"src/mem3_shards.erl"},{line,395}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,370}]},{mem3_shards,for_db,2,[{file,"src/mem3_shards.erl"},{line,54}]},{fabric_view_all_docs,go,5,[{file,"src/fabric_view_all_docs.erl"},{line,24}]},{couch_db,'-get_design_docs/1-fun-0-',1,[{file,"src/couch_db.erl"},{line,627}]}]}```
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563966912103300?thread_ts=1563966912.103300&cid=C3TPCAQG1
----
2019-07-24 11:20:22 UTC - Satwik Kolhe: I had issues with CouchDB, where the PV 
was provisioned using Rook. Did you check if there is disk space available for 
the couchdb pod?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563967222103400?thread_ts=1563966912.103300&cid=C3TPCAQG1
----
2019-07-24 11:24:52 UTC - Roberto Santiago: How would I check that?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563967492103600?thread_ts=1563966912.103300&cid=C3TPCAQG1
----
2019-07-24 11:27:08 UTC - Satwik Kolhe: `kubectl exec -it 
<ow-couchdb-pod-name> -n <ow-namespace> -- df -hT`
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563967628103800?thread_ts=1563966912.103300&cid=C3TPCAQG1
----
2019-07-24 11:27:45 UTC - Roberto Santiago: ```Filesystem      Size  Used Avail 
Use% Mounted on
overlay         102G   70G   32G  70% /
tmpfs            68M     0   68M   0% /dev
tmpfs           4.0G     0  4.0G   0% /sys/fs/cgroup
/dev/sda1       102G   70G   32G  70% /etc/hosts
shm              68M     0   68M   0% /dev/shm
/dev/sdc        136G   48G   88G  35% /opt/couchdb/data
tmpfs           4.0G   13k  4.0G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           4.0G     0  4.0G   0% /proc/acpi
tmpfs           4.0G     0  4.0G   0% /proc/scsi
tmpfs           4.0G     0  4.0G   0% /sys/firmware```
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563967665104000?thread_ts=1563966912.103300&cid=C3TPCAQG1
----
2019-07-24 11:29:27 UTC - Satwik Kolhe: Ok - Disk space is not an issue!
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563967767104200?thread_ts=1563966912.103300&cid=C3TPCAQG1
----
2019-07-24 11:34:21 UTC - Roberto Santiago: Along the same lines as my previous 
message, I am seeing this in the logs from the controller:
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563968061105000
----
2019-07-24 11:34:24 UTC - Roberto Santiago: ```[2019-07-24T11:23:40.288Z] [INFO] [#tid_sid_invokerHealth] [InvokerPool] invoker status changed to 0 -> Offline, 1 -> Offline, 2 -> Offline, 3 -> Offline, 4 -> Offline, 5 -> Offline, 6 -> Offline, 7 -> Offline, 8 -> Offline, 9 -> Offline, 10 -> Offline, 11 -> Offline, 12 -> Offline, 13 -> Offline, 14 -> Offline, 15 -> Healthy, 16 -> Unresponsive, 17 -> Healthy```
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563968064105200?thread_ts=1563968064.105200&cid=C3TPCAQG1
----
2019-07-24 11:35:01 UTC - Roberto Santiago: Is this normal?  Does offline 
simply mean that it is in a cold state?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563968101105800
----
2019-07-24 11:35:12 UTC - Roberto Santiago: How many invokers do people 
typically use?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563968112106100
----
2019-07-24 11:36:23 UTC - chetanm: offline means that no ping was received from 
the Invoker
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563968183106200?thread_ts=1563968064.105200&cid=C3TPCAQG1
----
2019-07-24 11:36:38 UTC - chetanm: So either no invoker pod exists
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563968198106400?thread_ts=1563968064.105200&cid=C3TPCAQG1
----
2019-07-24 11:37:52 UTC - Roberto Santiago: I only have 3 invoker pods, which I 
guess are 15, 16 and 17.  I was just confused by the presence of 0 through 14.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563968272106600?thread_ts=1563968064.105200&cid=C3TPCAQG1
----
2019-07-24 11:38:14 UTC - Roberto Santiago: Should it be like that, or is that a 
symptom of something not working correctly?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563968294106800?thread_ts=1563968064.105200&cid=C3TPCAQG1
----
2019-07-24 11:39:14 UTC - chetanm: I think if any invoker pod gets killed then 
k8s starts a new invoker pod, and for some reason it's giving it a newer index 
instead of reusing the index of an older pod
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563968354107000?thread_ts=1563968064.105200&cid=C3TPCAQG1
----
2019-07-24 11:40:42 UTC - Roberto Santiago: ah!  ok.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563968442107200?thread_ts=1563968064.105200&cid=C3TPCAQG1
----
2019-07-24 11:42:42 UTC - Roberto Santiago: So I deleted the controller pod, and 
when it came back it's still using those last three slots
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563968562107400?thread_ts=1563968064.105200&cid=C3TPCAQG1
----
2019-07-24 11:43:53 UTC - Roberto Santiago: btw, how many invokers do you use?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563968633107600?thread_ts=1563968064.105200&cid=C3TPCAQG1
----
2019-07-24 11:57:32 UTC - chetanm: We are not using k8s but Mesos, so there it's 
a different approach
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563969452107800?thread_ts=1563968064.105200&cid=C3TPCAQG1
----
2019-07-24 12:02:00 UTC - Roberto Santiago: Do you like it?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563969720108000?thread_ts=1563968064.105200&cid=C3TPCAQG1
----
2019-07-24 12:46:13 UTC - Sven Lange-Last: when invokers run on kube, they are 
supposed to ask zookeeper for an index. the expected behaviour is that this 
approach keeps the index stable because it is always using the same key 
over and over again when asking zookeeper for an index.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563972373110100
----
2019-07-24 12:46:21 UTC - Sven Lange-Last: ^^ @Roberto Santiago
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563972381110400
----
2019-07-24 12:48:00 UTC - Sven Lange-Last: when an invoker sends a ping message 
to the controller, it includes its index. so the controller checks whether it 
already knows this index. if not, it extends the invoker array and fills the 
gaps with offline invokers, with the expectation that the invokers with unknown 
indices will send their ping messages soon.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563972480112300
----
2019-07-24 12:48:43 UTC - Sven Lange-Last: the current load balancer design 
expects that the invoker array has no gaps and uses the indices sent by 
invokers.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563972523113200
----
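A minimal sketch of the gap-filling behaviour described in the two messages above - with made-up names (`InvokerEntry`, `registerPing`) rather than the real `InvokerPool` code - which reproduces the "0 -> Offline ... 15 -> Healthy" shape seen in Roberto's log:
```
// Sketch only: how a controller could extend its invoker list when a ping
// arrives with an index it has not seen yet, padding the gap with Offline
// placeholders. Names here are illustrative, not OpenWhisk's real classes.
object InvokerPoolSketch {
  sealed trait InvokerState
  case object Offline extends InvokerState
  case object Unresponsive extends InvokerState
  case object Healthy extends InvokerState

  final case class InvokerEntry(index: Int, state: InvokerState)

  /** Returns the invoker list after receiving a ping from `pingedIndex`. */
  def registerPing(invokers: IndexedSeq[InvokerEntry], pingedIndex: Int): IndexedSeq[InvokerEntry] = {
    // If the index is beyond the current array, pad the gap with Offline entries
    // (those invokers are expected to send their own pings soon).
    val padded =
      if (pingedIndex < invokers.size) invokers
      else invokers ++ (invokers.size to pingedIndex).map(i => InvokerEntry(i, Offline))
    // Mark the pinging invoker as Healthy (the real code runs a health protocol instead).
    padded.updated(pingedIndex, InvokerEntry(pingedIndex, Healthy))
  }

  def main(args: Array[String]): Unit = {
    val pool = registerPing(IndexedSeq.empty, 15) // first ping comes from invoker15
    println(pool.map(e => s"${e.index} -> ${e.state}").mkString(", "))
    // 0 -> Offline, 1 -> Offline, ..., 14 -> Offline, 15 -> Healthy
  }
}
```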
2019-07-24 12:49:06 UTC - Sven Lange-Last: btw - large providers use hundreds 
of invokers.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563972546113600
----
2019-07-24 12:49:58 UTC - Sven Lange-Last: offline means that the controller 
did not receive a ping message from an invoker with that index yet or recently.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563972598114600
----
2019-07-24 12:50:44 UTC - Sven Lange-Last: unresponsive means that out of the 
10 most recent invocations processed by the invoker, 3 did not complete in time.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563972644115300
----
2019-07-24 12:51:28 UTC - Sven Lange-Last: in time means 2 * max(action time 
limit, 60 seconds) + 1 minute.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563972688116000
----
2019-07-24 12:52:22 UTC - Sven Lange-Last: 
<https://github.com/apache/incubator-openwhisk/blob/4a09c73bb391fcc8b3cf0ecd76ac7abad7fe849d/core/controller/src/main/scala/org/apache/openwhisk/core/loadBalancer/CommonLoadBalancer.scala#L103>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563972742116800
----
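Taken together, the two definitions above can be spelled out in a few lines. This is only an illustrative sketch with made-up names (`completionTimeout`, `isUnresponsive`); the controller's actual logic is in the CommonLoadBalancer.scala file linked in the previous message:
```
import scala.concurrent.duration._

// Sketch of the two rules described above, not the controller's real code.
object InvokerHealthSketch {
  // "in time" threshold: 2 * max(action time limit, 60 seconds) + 1 minute.
  def completionTimeout(actionTimeLimit: FiniteDuration): FiniteDuration =
    actionTimeLimit.max(60.seconds) * 2 + 1.minute

  // "unresponsive": at least 3 of the 10 most recent invocations did not complete in time.
  def isUnresponsive(lastTenCompletedInTime: Seq[Boolean]): Boolean =
    lastTenCompletedInTime.count(ok => !ok) >= 3

  def main(args: Array[String]): Unit = {
    println(completionTimeout(3.seconds)) // 180 seconds (= 3 minutes)
    println(completionTimeout(5.minutes)) // 11 minutes
    println(isUnresponsive(Seq(true, true, false, true, false, true, true, false, true, true))) // true
  }
}
```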
2019-07-24 12:53:30 UTC - Roberto Santiago: Ok.  This makes a lot of sense.  I 
wonder why zookeeper keeps starting with such a high index.  About 30 minutes 
ago I deleted all pods, and when everything came back to life I still ended up 
with indexes 15, 16 and 17 for my three invokers.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563972810117800
----
2019-07-24 12:54:24 UTC - Roberto Santiago: Thanks so much for the information.  
I've been slowly making my way through all the code to learn all the pieces 
really well.  So I appreciate all these details.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563972864119300
----
2019-07-24 12:54:54 UTC - Sven Lange-Last: if you do not remove the zookeeper pod, 
or if zookeeper uses a persistent volume for its data, it will keep its data. the 
question is which data is used in your kube cluster as the key for zookeeper when 
assigning the id.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563972894119800
----
2019-07-24 12:55:08 UTC - Sven Lange-Last: search for messages from 
<https://github.com/apache/incubator-openwhisk/blob/master/core/invoker/src/main/scala/org/apache/openwhisk/core/invoker/InstanceIdAssigner.scala>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563972908120200
----
2019-07-24 12:57:39 UTC - Sven Lange-Last: @Roberto Santiago check this part: 
<https://github.com/apache/incubator-openwhisk/blob/4a09c73bb391fcc8b3cf0ecd76ac7abad7fe849d/core/invoker/src/main/scala/org/apache/openwhisk/core/invoker/Invoker.scala#L107-L142>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563973059120800
----
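For readers following the links, a rough sketch of the ZooKeeper-based assignment idea: look up a mapping znode keyed by the invoker's unique name and reuse the stored id if present, otherwise bump a shared counter and record the result. The counter path and error handling here are assumptions for illustration; the mapping path mirrors the one mentioned later in this thread, and the real logic lives in `InstanceIdAssigner.scala` / `Invoker.scala` linked above:
```
import java.nio.charset.StandardCharsets.UTF_8

import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.framework.recipes.shared.SharedCount
import org.apache.curator.retry.ExponentialBackoffRetry

// Sketch only: a stable key (e.g. the pod name) always resolves to the same id,
// which is why restarting invoker pods keep indexes 15, 16, 17 in this thread.
object IdAssignerSketch {
  def assignId(zkConnect: String, uniqueName: String): Int = {
    val client = CuratorFrameworkFactory.newClient(zkConnect, new ExponentialBackoffRetry(1000, 3))
    client.start()
    client.blockUntilConnected()
    try {
      val mappingPath = s"/invokers/idAssignment/mapping/$uniqueName"
      if (client.checkExists().forPath(mappingPath) != null) {
        // A node with this name already registered: reuse its stored id.
        new String(client.getData().forPath(mappingPath), UTF_8).toInt
      } else {
        // First time this name shows up: take the next value from a shared counter.
        val counter = new SharedCount(client, "/invokers/idAssignment/counter", 0) // counter path is an assumption
        counter.start()
        try {
          var assigned = -1
          while (assigned < 0) {
            val current = counter.getVersionedValue
            if (counter.trySetCount(current, current.getValue + 1)) assigned = current.getValue
          }
          client.create().creatingParentsIfNeeded()
            .forPath(mappingPath, assigned.toString.getBytes(UTF_8))
          assigned
        } finally counter.close()
      }
    } finally client.close()
  }
}
```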
2019-07-24 12:59:23 UTC - Roberto Santiago: very cool!
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563973163121100
----
2019-07-24 12:59:36 UTC - Roberto Santiago: Any insights about the couchDB 
error I posted above?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563973176121400
----
2019-07-24 13:01:49 UTC - Sven Lange-Last: @Roberto Santiago sorry, i’m no 
couchdb expert. i’m working at ibm and usually use ibm’s cloudant service.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563973309122200
----
2019-07-24 13:03:16 UTC - Roberto Santiago: No worries.  This has been of great 
help!  I hope to be useful enough to be contributing code soon.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563973396122700
----
2019-07-24 13:04:24 UTC - Sven Lange-Last: i’m looking forward to your 
contributions :slightly_smiling_face: do you already have anything in mind? any 
areas where you need changes?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563973464123700
----
2019-07-24 13:07:31 UTC - chetanm: Thanks @Sven Lange-Last for the details. Somehow 
I was under the impression that for the kube case with the Docker CF we were using 
the ordinal number of the DaemonSet as the id. But it looks like we use Zookeeper to 
determine the id and the pod name as the unique name
+1 : Sven Lange-Last
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563973651125400
----
2019-07-24 13:20:21 UTC - Roberto Santiago: @Sven Lange-Last I have a colleague 
who is a kafka genius and very interested in openwhisk.  We both share a 
"vision" of stream-based computing.  When he took a look at how kafka is being 
used, he speculated that there was much more OW could be getting from kafka, 
especially for integrating with existing services that are kafka based.  So no 
particular feature request there - just getting enough knowledge to analyze 
intelligently.  My particular focus is around scaling AI and compositional 
architectures.  Right now my most immediate focus is on very practical things 
with OW (e.g. measuring/increasing performance, measuring/increasing 
reliability, satisfying High Availability requirements, etc.).  So right now, 
I'm just focusing on being a student of OW before proposing anything.  If 
anything, my hope is to know the code well enough to fix bugs in the next few weeks.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563974421133000
----
2019-07-24 13:47:40 UTC - chetanm: @Roberto Santiago You can get the zookeeper 
state via `kubectl exec <release>-zookeeper-0 zkCli.sh ls 
/invokers/idAssignment/mapping`
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563976060133700
----
2019-07-24 13:48:13 UTC - chetanm: That should show the children under the 
mapping path which would have node names
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563976093134400
----
2019-07-24 14:07:06 UTC - Matt Rutkowski: REMINDER: ~ 1 HOUR until the Apache 
OpenWhisk Tech. Interchange project meeting,
* Day-Time: Every other Wednesday, 11AM EDT (Eastern US), 5PM CEST (Central 
Europe), 3PM GMT, 11PM (Beijing)
* Zoom: <https://zoom.us/my/asfopenwhisk>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563977226134800
----
2019-07-24 14:33:49 UTC - chetanm: How does the api gateway support work in OpenWhisk?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563978829135300
----
2019-07-24 14:34:15 UTC - chetanm: I see under route management 3 actions for 
create/delete/get
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563978855135800
----
2019-07-24 14:35:03 UTC - chetanm: and there is an apigateway docker image 
which probably needs Redis (and possibly minio for object storage)
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563978903136600
----
2019-07-24 14:37:58 UTC - chetanm: So when I do `wsk api create /hello /world 
get hello` what would the action do? From the code it appears there are a couple 
of lua scripts which support the CRUD api, and there is a datastore abstraction 
backed by Redis.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563979078138200
----
2019-07-24 14:38:18 UTC - chetanm: So upon api create it would possibly record 
that info in redis?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563979098138800
----
2019-07-24 14:38:41 UTC - chetanm: How does it use that info for actual HTTP 
calls?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563979121139300?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:10:13 UTC - Matt Rutkowski: 
<https://thenewstack.io/the-openwhisk-serverless-platform-now-an-apache-top-level-project-builds-on-kubernetes-mesos/>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563984613139700
----
2019-07-24 16:10:23 UTC - Matt Rutkowski: 
<https://devops.com/asf-moves-openwhisk-serverless-computing-forward/>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563984623140000
----
2019-07-24 16:10:32 UTC - Matt Rutkowski: 
<https://it.toolbox.com/blogs/shrutiumathe/apache-openwhisk-community-graduates-to-become-an-apache-software-foundation-asf-top-level-project-072419>
partyparrot : Michael Schmidt
upvotepartyparrot : Michael Schmidt
congapartyparrot : Michael Schmidt
openwhisk : Michael Schmidt
whisking : Michael Schmidt
yay : Michael Schmidt
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563984632140300?thread_ts=1563984632.140300&cid=C3TPCAQG1
----
2019-07-24 16:10:40 UTC - Matt Rutkowski: 
<https://www.cbronline.com/news/apache-openwhisk-ibm>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563984640140600
----
2019-07-24 16:19:14 UTC - Dragos Dascalita Haut: when the request comes in, it maps 
GET `/hello/world` to the `hello` action using the data stored in Redis. 
<https://github.com/apache/incubator-openwhisk-apigateway/blob/master/scripts/lua/routing.lua>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985154140800?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
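A minimal sketch of that lookup-then-forward idea, with a plain in-memory `Map` standing in for Redis and all names/URLs invented for illustration (the real routing is done by the lua scripts linked above):
```
// Sketch of the gateway's routing idea: `wsk api create /hello /world get hello`
// conceptually records a mapping from (method, base path, relative path) to a
// backend action URL; each incoming request looks that mapping up and is proxied.
object ApiGwRoutingSketch {
  final case class Route(method: String, basePath: String, relativePath: String)

  // A plain Map stands in for Redis; the target URL is purely illustrative.
  val routes: Map[Route, String] = Map(
    Route("GET", "/hello", "/world") -> "https://controller/api/v1/web/guest/default/hello"
  )

  def resolve(method: String, path: String): Option[String] =
    routes.collectFirst {
      case (Route(m, base, rel), target) if m == method && path == base + rel => target
    }

  def main(args: Array[String]): Unit = {
    println(resolve("GET", "/hello/world")) // Some(...) -> proxy to the controller
    println(resolve("GET", "/hello/there")) // None -> gateway would return 404
  }
}
```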
2019-07-24 16:20:57 UTC - chetanm: So upon each request it reads from redis 
and then handles/adapts the request as per the swagger defn stored in redis
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985257141000?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:21:36 UTC - Dragos Dascalita Haut: yes, but on the swagger part, 
I don't think that's stored in Redis.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985296141300?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:22:08 UTC - chetanm: so in real prod, is redis to be backed by 
some persistent store like mysql?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985328141500?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:22:19 UTC - Dragos Dascalita Haut: no
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985339141700?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:22:20 UTC - chetanm: as all data in redis is ephemeral
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985340141900?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:22:55 UTC - chetanm: so how is the data retained post restart?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985375142100?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:23:08 UTC - Dragos Dascalita Haut: post Redis restart?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985388142300?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:23:22 UTC - chetanm: yup
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985402142500?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:23:24 UTC - Dragos Dascalita Haut: well, for prod scenario Redis 
should be clustered
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985404142700?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:24:27 UTC - chetanm: okie so we treat redis as long-term 
storage here.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985467142900?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:24:35 UTC - Dragos Dascalita Haut: correct
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985475143100?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:25:11 UTC - chetanm: and if we run multiple separate clusters 
then we need to think of some other approach
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985511143300?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:25:28 UTC - chetanm: like you had with the minio-based deployment, 
to use some object storage?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985528143500?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:25:35 UTC - Dragos Dascalita Haut: our team has an alternate impl 
which uses static NGINX configuration files which are generated from the 
swagger file provided by developers
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985535143700?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:25:43 UTC - chetanm: (minio as in dev tools.)
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985543143900?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:26:20 UTC - chetanm: so if I need to just have minimal api 
gateway support in standalone
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985580144100?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:26:42 UTC - chetanm: I just need to start the apigw docker, redis 
and configure the edge urls properly
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985602144300?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:26:52 UTC - chetanm: would nginx be needed?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985612144500?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:27:47 UTC - chetanm: i am missing the config where the api gw docker 
routes to the controller
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563985667144700?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:42:16 UTC - Dragos Dascalita Haut: looking
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563986536144900?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:42:39 UTC - Dragos Dascalita Haut: @chetanm are you looking to 
start this docker from the _launchpad_?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563986559145100?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:44:02 UTC - chetanm: From within the standalone jar
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563986642145300?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:45:37 UTC - Dragos Dascalita Haut: this is the config file that 
has the managed APIs 
<https://github.com/apache/incubator-openwhisk-apigateway/blob/master/conf.d/managed_endpoints.conf#L57-L95>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563986737145500?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:46:40 UTC - Dragos Dascalita Haut: it's not the one that routes 
to controller. I'm looking for that one
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563986800145700?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:47:04 UTC - chetanm: Oh so it's hardwired to https 443
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563986824145900?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:47:22 UTC - Dragos Dascalita Haut: there's one in ansible: 
<https://github.com/apache/incubator-openwhisk/blob/master/ansible/roles/nginx/templates/nginx.conf.j2>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563986842146100?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:48:01 UTC - Dragos Dascalita Haut: there's also one in 
docker-compose: 
<https://github.com/apache/incubator-openwhisk-devtools/blob/master/docker-compose/apigateway/generated-conf.d/whisk-docker-compose.conf>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563986881146300?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:49:21 UTC - chetanm: Okie
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563986961146500?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:49:26 UTC - Dragos Dascalita Haut: BTW, @chetanm I could also 
help you with a minimalist set of configs to get the API GW up and running. you 
can worry about the docker run command from the standalone jar, and I can help 
with the configs
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563986966146700?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:49:39 UTC - chetanm: Cool
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563986979146900?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:50:05 UTC - chetanm: Let me come up with a base pr
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987005147100?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:50:12 UTC - Dragos Dascalita Haut: we'd also have to include in 
that jar a set of resources like some nginx config files (the target would be 1 
config file to keep it simple)
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987012147300?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:50:33 UTC - chetanm: Okie
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987033147500?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:50:38 UTC - Dragos Dascalita Haut: I would also guess that we can 
skip SSL/`443` and go with another port?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987038147700?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:50:44 UTC - chetanm: Yes
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987044147900?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:50:47 UTC - chetanm: No ssl
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987047148100?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:51:08 UTC - chetanm: 3233 is the default port
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987068148300?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:52:28 UTC - Dragos Dascalita Haut: if that's the case then in 
src/main/resources you can have a simple config file:
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987148148500?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:52:35 UTC - Dragos Dascalita Haut: ```upstream whisk_controller {
  server whisk.controller:8888;
  keepalive 16;
}

server {
  listen 3233 default;

  proxy_http_version 1.1;
  proxy_set_header Connection "";

  client_max_body_size 50m;     # allow bigger functions to be deployed

  location /docs {
    proxy_pass http://whisk_controller;
  }

  location /api-docs {
    proxy_pass http://whisk_controller;
  }

  location /api/v1 {
    proxy_pass http://whisk_controller;
    proxy_read_timeout 70s; # 60+10 additional seconds to allow controller to terminate request
  }

}```
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987155148700?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:53:01 UTC - Dragos Dascalita Haut: replace 
`whisk.controller:8888` with something else to reach the controller
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987181148900?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:54:49 UTC - chetanm: Okie
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987289149100?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:54:58 UTC - Dragos Dascalita Haut: say you have a file in 
`src/main/resources/openwhisk-nginx.conf`; then when executing `docker run` from 
the jar, mount this file like this: `-v 
openwhisk-nginx.conf:/etc/api-gateway/conf.d/openwhisk-nginx.conf`
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987298149300?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
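A sketch of what that could look like from the standalone jar's side: copy the bundled config out of the classpath so docker can mount a real file, then shell out to `docker run` with the `-v` mount. The image name, port and resource path are assumptions for illustration, not the standalone launcher's actual code:
```
import java.nio.file.{Files, Path, StandardCopyOption}
import scala.sys.process._

// Sketch only: launch the api-gateway container with the bundled nginx config
// mounted into conf.d, as described in the message above.
object ApiGwLauncherSketch {
  def start(): Int = {
    // Assumes an openwhisk-nginx.conf is packaged in src/main/resources.
    val conf: Path = Files.createTempFile("openwhisk-nginx", ".conf")
    val in = getClass.getResourceAsStream("/openwhisk-nginx.conf")
    Files.copy(in, conf, StandardCopyOption.REPLACE_EXISTING)
    in.close()

    val cmd = Seq(
      "docker", "run", "-d",
      "-p", "3233:3233", // 3233 is the standalone default port mentioned above
      "-v", s"$conf:/etc/api-gateway/conf.d/openwhisk-nginx.conf",
      "openwhisk/apigateway:latest") // image name assumed for illustration
    cmd.! // returns the docker CLI's exit code
  }
}
```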
2019-07-24 16:55:30 UTC - Dragos Dascalita Haut: it should be automatically 
included by NGINX. see 
<https://github.com/apache/incubator-openwhisk-apigateway/blob/master/api-gateway.conf#L72>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987330149600?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:56:57 UTC - chetanm: So no explicit handling of 
'^/api/([a-zA-Z0-9\-]+)/([a-zA-Z0-9'
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987417149800?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:57:21 UTC - Dragos Dascalita Haut: we/I can work on that once 
this first part works
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987441150000?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:58:11 UTC - chetanm: Okie
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987491150200?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:58:13 UTC - Dragos Dascalita Haut: I'd assume that the definition 
for that location would be automatically included, as the files are in `conf.d`
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987493150400?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 16:58:17 UTC - Dragos Dascalita Haut: 
<https://github.com/apache/incubator-openwhisk-apigateway/tree/master/conf.d>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987497150600?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 17:03:14 UTC - chetanm: Okie
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987794150800?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 17:03:42 UTC - chetanm: What about the urls which are as per some 
swagger, so not /api/v1?
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987822151000?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 17:04:09 UTC - chetanm: Not sure if I know that part ...let me play 
with that
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987849151200?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 17:04:43 UTC - chetanm: I am a bit confused about the api where the 
gateway actually manipulates the incoming request and converts it to some URL 
for the controller
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563987883151400?thread_ts=1563979121.139300&cid=C3TPCAQG1
----
2019-07-24 18:33:55 UTC - Michael Schmidt: woot
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1563993235152400?thread_ts=1563984632.140300&cid=C3TPCAQG1
----
2019-07-24 20:26:45 UTC - Sam Hjelmfelt: This may help with Kubernetes 
deployments. The container-per-second throughput of the YuniKorn scheduler is 
much higher than that of other options. Batch processing requires large numbers of 
“ephemeral” containers. 
<https://blog.cloudera.com/blog/2019/07/yunikorn-a-universal-resource-scheduler/>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1564000005153900
----
2019-07-24 20:27:08 UTC - Sam Hjelmfelt: 
<https://github.com/cloudera/yunikorn-core>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1564000028154200
----
