Thanks Akila and Imesh.

After updating, I redid the Kubernetes setup. Now I can see that all three
nodes start up properly in VirtualBox, and I was also able to successfully
deploy the php cartridge app on Kubernetes using Stratos.

A few things I noticed while trying this out:

1. When the sample app was deployed on Stratos, the deployment sometimes
ended up with the following error. It looks like a similar issue was already
reported in http://markmail.org/message/geuinnqjameg7c5f, but I had the Docker
image already downloaded on both minions, so I'm not sure why this popped up a
few times. Could it be due to a network issue or a delay in starting the
container? (The checks I have been running are listed after the stack trace
below.)

TID: [0] [STRATOS] [2015-07-11 23:15:18,006]  INFO {org.apache.stratos.cloud.controller.iaases.kubernetes.KubernetesIaas} - Waiting pod status to be changed to running: [application] single-cartridge-app [cartridge] php [member] single-cartridge-app.my-php.php.domain91f5d7e0-6ce0-45eb-9e83-ab40d32d63eb [pod] pod-1

TID: [0] [STRATOS] [2015-07-11 23:15:18,006] ERROR {org.apache.stratos.cloud.controller.iaases.kubernetes.KubernetesIaas} - Pod status did not change to running within 60 sec: [application] single-cartridge-app [cartridge] php [member] single-cartridge-app.my-php.php.domain91f5d7e0-6ce0-45eb-9e83-ab40d32d63eb [pod] pod-1

TID: [0] [STRATOS] [2015-07-11 23:15:18,007] ERROR {org.apache.stratos.cloud.controller.iaases.kubernetes.KubernetesIaas} - Could not start container: [application] single-cartridge-app [cartridge] php [member] single-cartridge-app.my-php.php.domain91f5d7e0-6ce0-45eb-9e83-ab40d32d63eb

java.lang.RuntimeException: Pod status did not change to running within 60 sec: [application] single-cartridge-app [cartridge] php [member] single-cartridge-app.my-php.php.domain91f5d7e0-6ce0-45eb-9e83-ab40d32d63eb [pod] pod-1
    at org.apache.stratos.cloud.controller.iaases.kubernetes.KubernetesIaas.waitForPodToBeActivated(KubernetesIaas.java:341)
    at org.apache.stratos.cloud.controller.iaases.kubernetes.KubernetesIaas.startContainer(KubernetesIaas.java:226)
    at org.apache.stratos.cloud.controller.iaases.kubernetes.KubernetesIaas.startInstance(KubernetesIaas.java:125)
    at org.apache.stratos.cloud.controller.services.impl.InstanceCreator.startInstance(InstanceCreator.java:109)
    at org.apache.stratos.cloud.controller.services.impl.InstanceCreator.run(InstanceCreator.java:68)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
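
For what it's worth, these are the checks I have been running when this error
appears. This is only a rough sketch; the pod name is the one from the log
above, and the exact kubectl sub-commands available may differ on the
Kubernetes version bundled with this setup:

# On the master: see the pod state and any scheduling / image-pull events
kubectl get pods
kubectl describe pod pod-1
kubectl get events

# On each minion: confirm the php image is present and watch the Docker daemon
docker images | grep php
docker ps -a
journalctl -u docker -f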


2. If the app is undeployed, shouldn't the containers (the "php" service in
this case) also be stopped and removed from the minions? I noticed that the
containers were still up and running after the app was undeployed. (The
commands I used to verify and clean this up are below.)
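
What I did to verify (and clean up for now) was roughly the following; just a
sketch, the actual replication controller name is whatever Stratos created for
the php cartridge, so <rc-name> below is a placeholder:

# On the master: check whether the Kubernetes objects for the cartridge still exist
kubectl get replicationcontrollers
kubectl get pods

# On each minion: check for leftover containers
docker ps | grep php

# Manual cleanup for now, until undeployment removes these itself
kubectl delete replicationcontrollers <rc-name>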

3. I sometimes observed the following error during app undeployment using the
script file. Because of this, the app could not be deployed again, and it was
the only app deployed on Stratos at the time. A similar issue was reported in
STRATOS-1281. (The checks I ran before retrying are listed after the stack
trace below.)

TID: [0] [STRATOS] [2015-07-12 01:53:18,956] ERROR {org.apache.stratos.rest.endpoint.api.StratosApiV41Utils} - Cannot remove cartridge : [cartridge-type] php since it is used in another cartridge group or an application

TID: [0] [STRATOS] [2015-07-12 01:53:18,958] ERROR {org.apache.stratos.rest.endpoint.handlers.CustomExceptionMapper} - Cannot remove cartridge : [cartridge-type] php since it is used in another cartridge group or an application

org.apache.stratos.rest.endpoint.exception.RestAPIException: Cannot remove cartridge : [cartridge-type] php since it is used in another cartridge group or an application
    at org.apache.stratos.rest.endpoint.api.StratosApiV41Utils.removeCartridge(StratosApiV41Utils.java:248)
    at org.apache.stratos.rest.endpoint.api.StratosApiV41.removeCartridge(StratosApiV41.java:431)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.cxf.service.invoker.AbstractInvoker.performInvocation(AbstractInvoker.java:180)
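
Before retrying the deployment, I also checked on the Stratos side whether the
application was really gone, since the error says the cartridge is still
referenced. Roughly like this (assuming the default admin:admin credentials
and the default 9443 HTTPS port of my setup, and the endpoint paths as I
understand the v4.1 REST API; adjust host/port as needed):

# List applications still known to Stratos - single-cartridge-app should not appear here
curl -k -u admin:admin https://localhost:9443/api/applications

# List cartridges to confirm the php cartridge definition is still registered
curl -k -u admin:admin https://localhost:9443/api/cartridges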




On Fri, Jul 10, 2015 at 8:40 PM, Akila Ravihansa Perera <raviha...@wso2.com>
wrote:

> Adding one more point to it.
>
> There was a problem in one of the 'system' commands in the Vagrantfile that
> was causing connection timeouts.
>
> "The problem is that each of the system statements in the Vagrantfile is
> executed in a separate subshell. This means that the environment variables
> are not being exported between calls, hence kubectl does not know the
> $KUBERNETES_MASTER address."
>
> This has been fixed in the upstream repo with [1].
>
> [1]
> https://github.com/pires/kubernetes-vagrant-coreos-cluster/pull/122/files
>
> On Fri, Jul 10, 2015 at 6:06 PM, Imesh Gunaratne <im...@apache.org> wrote:
>
>> Thanks Akila! I have now merged it.
>>
>> For others' information, the problem was that when the internet connection
>> is slow, the Kubernetes installation process gets restarted after reaching
>> the timeout. Akila has now increased the timeout to 400 seconds.
>>
>> On Fri, Jul 10, 2015 at 10:32 AM, Akila Ravihansa Perera <
>> raviha...@wso2.com> wrote:
>>
>>> Hi Kishanthan,
>>>
>>> Actually, your master node has not been configured properly. See the log
>>> lines starting at:
>>>
>>> *==> master: Waiting for Kubernetes master to become ready...*
>>>
>>> The connection to the server localhost:8080 was refused - did you
>>> specify the right host or port?
>>>
>>>
>>> Also, please check whether you are running the latest Vagrant version.
>>>
>>> @Imesh: this is a known issue and has been fixed in the upstream repo.
>>> I've sent you a PR with the fix at [1].
>>>
>>> [1] https://github.com/imesh/kubernetes-vagrant-setup/pull/1
>>>
>>> Thanks.
>>>
>>>
>>> On Fri, Jul 10, 2015 at 9:29 AM, Imesh Gunaratne <im...@apache.org>
>>> wrote:
>>>
>>>> Hi Kishanthan,
>>>>
>>>> You could try to SSH into the master and node-01 and run journalctl -f
>>>> to view the logs. The Kubernetes installation may have failed on one of
>>>> the hosts.
>>>>
>>>> On Fri, Jul 10, 2015 at 9:07 AM, Kishanthan Thangarajah <
>>>> kshanth2...@gmail.com> wrote:
>>>>
>>>>> Hi Devs,
>>>>>
>>>>> I was trying out the steps provided in :
>>>>> https://gist.github.com/imesh/b8f81fac8de39183a504
>>>>>
>>>>> I couldn't get vagrant up and running with the minions; it fails with
>>>>> the error at the bottom.
>>>>>
>>>>> I assume node-01 here is one of the minions. From the logs, I think the
>>>>> master node is configured correctly, even with the version constraint,
>>>>> but node-01 failed on the version constraint while adding the CoreOS box.
>>>>> Is this a known issue? Do I have to change/remove the version constraint
>>>>> in some config file?
>>>>>
>>>>> kubernetes-vagrant-setup kishanthan$ vagrant up
>>>>>
>>>>> Bringing machine 'master' up with 'virtualbox' provider...
>>>>>
>>>>> Bringing machine 'node-01' up with 'virtualbox' provider...
>>>>>
>>>>> Bringing machine 'node-02' up with 'virtualbox' provider...
>>>>>
>>>>> *==> master: Running triggers before up...*
>>>>>
>>>>> *==> master: Setting Kubernetes version 0.18.0*
>>>>>
>>>>> *==> master: Configuring Kubernetes cluster DNS...*
>>>>>
>>>>> *==> master: Box 'coreos-alpha' could not be found. Attempting to find
>>>>> and install...*
>>>>>
>>>>>     master: Box Provider: virtualbox
>>>>>
>>>>>     master: Box Version: >= 738.1.0
>>>>>
>>>>> *==> master: Loading metadata for box
>>>>> 'http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json'*
>>>>>
>>>>>     master: URL:
>>>>> http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json
>>>>>
>>>>> *==> master: Adding box 'coreos-alpha' (v738.1.0) for provider:
>>>>> virtualbox*
>>>>>
>>>>>     master: Downloading:
>>>>> http://alpha.release.core-os.net/amd64-usr/738.1.0/coreos_production_vagrant.box
>>>>>
>>>>>     master: Calculating and comparing box checksum...
>>>>>
>>>>> *==> master: Successfully added box 'coreos-alpha' (v738.1.0) for
>>>>> 'virtualbox'!*
>>>>>
>>>>>
>>>>> *-----------------------------------------------------------------------------------------------------------------------*
>>>>>
>>>>> *==> master: Waiting for Kubernetes master to become ready...*
>>>>>
>>>>> The connection to the server localhost:8080 was refused - did you
>>>>> specify the right host or port?
>>>>>
>>>>> The connection to the server localhost:8080 was refused - did you
>>>>> specify the right host or port?
>>>>>
>>>>> *==> node-01: Box 'coreos-alpha' could not be found. Attempting to
>>>>> find and install...*
>>>>>
>>>>>     node-01: Box Provider: virtualbox
>>>>>
>>>>>     node-01: Box Version: >= 738.1.0
>>>>>
>>>>> You specified a box version constraint with a direct box file
>>>>>
>>>>> path. Box version constraints only work with boxes from Vagrant
>>>>>
>>>>> Cloud or a custom box host. Please remove the version constraint
>>>>>
>>>>> and try again.
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Imesh Gunaratne
>>>>
>>>> Senior Technical Lead, WSO2
>>>> Committer & PMC Member, Apache Stratos
>>>>
>>>
>>>
>>>
>>> --
>>> Akila Ravihansa Perera
>>> Software Engineer, WSO2
>>>
>>> Blog: http://ravihansa3000.blogspot.com
>>>
>>
>>
>>
>> --
>> Imesh Gunaratne
>>
>> Senior Technical Lead, WSO2
>> Committer & PMC Member, Apache Stratos
>>
>
>
>
> --
> Akila Ravihansa Perera
> Software Engineer, WSO2
>
> Blog: http://ravihansa3000.blogspot.com
>
