Re: What actually is required for DNS and Origin?

2016-07-21 Thread Aleksandar Kostadinov

Josh Berkus wrote on 07/22/16 00:21:

On 07/21/2016 02:07 PM, Aleksandar Kostadinov wrote:


Then use plain IPs for nodes and masters. Then use xip.io for automatically
generated DNS names pointing at your NAT router. Make sure the NAT router
forwards ports 80 and 443 to ports 80 and 443 of the working router node(s)
in the OpenShift cluster.


Thanks for that.  I didn't know about xip.io before.


Btw, running the app DNS in OpenShift is not exactly a catch-22. If you
know the subdomain name beforehand (which is easy), then you use that
subdomain in the OpenShift configuration while installing. Then you start a
DNS pod (you'll have to use the node ports feature to expose it to the
outside world) to serve that subdomain.


I might need to set this up, just because I need the cluster to work
even if it has no internet.


The router subdomain is not mandatory. You just get faux DNS names when 
you create routes (i.e. expose services). You can still create routes 
with custom DNS names. The main point here is to make it easy for client 
machines to access the exposed services.
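As a minimal sketch (the service name and hostname here are illustrative),
a route with a custom DNS name can be created like this:

# expose an existing service under a custom hostname instead of the faux name
oc expose service frontend --hostname=www.example.com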


That means you can add 'hosts' entries on the client machine for specific 
routes, or point the client machine at a custom DNS server that resolves 
things as usual but also resolves your special subdomain. Those are the two 
options I can think of off the top of my head... and xip.io of course (or 
some other external public DNS service under your control where you can 
dynamically create domains during environment provisioning; but then you 
need internet again).
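For the second option, a minimal sketch with dnsmasq as the custom DNS
server (the subdomain and IP are illustrative): a single configuration line
resolves the whole routes subdomain.

# /etc/dnsmasq.conf: answer anything under the routes subdomain with the router node's IP
address=/apps.example.com/10.0.5.122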




Re: What actually is required for DNS and Origin?

2016-07-21 Thread Jonathan Yu
This might help: https://github.com/peterhellberg/xip.name

I can't vouch for its quality as I haven't tried it yet, though.

On Thu, Jul 21, 2016 at 2:21 PM, Josh Berkus  wrote:

> On 07/21/2016 02:07 PM, Aleksandar Kostadinov wrote:
>
> > Then use plain IPs for nodes and masters. Then use xip.io for automatically
> > generated DNS names pointing at your NAT router. Make sure the NAT router
> > forwards ports 80 and 443 to ports 80 and 443 of the working router node(s)
> > in the OpenShift cluster.
>
> Thanks for that.  I didn't know about xip.io before.
>
> > Btw, running the app DNS in OpenShift is not exactly a catch-22. If you
> > know the subdomain name beforehand (which is easy), then you use that
> > subdomain in the OpenShift configuration while installing. Then you start a
> > DNS pod (you'll have to use the node ports feature to expose it to the
> > outside world) to serve that subdomain.
>
> I might need to set this up, just because I need the cluster to work
> even if it has no internet.
>
> --
> --
> Josh Berkus
> Project Atomic
> Red Hat OSAS
>



-- 
Jonathan Yu, P.Eng. / Software Engineer, OpenShift by Red Hat / Twitter
(@jawnsy) is the quickest way to my heart 

*“A master in the art of living draws no sharp distinction between his work
and his play; his labor and his leisure; his mind and his body; his
education and his recreation. He hardly knows which is which. He simply
pursues his vision of excellence through whatever he is doing, and leaves
others to determine whether he is working or playing. To himself, he always
appears to be doing both.”* — L. P. Jacks, Education through Recreation
(1932), p. 1


Re: What actually is required for DNS and Origin?

2016-07-21 Thread Aleksandar Kostadinov

Josh Berkus wrote on 07/21/16 23:54:

On 07/21/2016 01:40 PM, Alex Wauck wrote:



On Thu, Jul 21, 2016 at 3:29 PM, Josh Berkus wrote:

There is no external DNS server, here.  I'm talking about a portable
microcluster, a stack of microboard computers, self-contained.  The idea
would be to run some kind of local DNS server so that, on directly
connected machines, we could point to that in DNS and it would expose
the services.

I suppose I can just bootstrap that, maybe as a system container ...


If it's a bunch of microboard computers, I'd be tempted to just stick
one more in there and run BIND on it.  Are you running a DHCP server, or
are all IP addresses statically assigned?


There's a DHCP server, but it's a cheap router, so it can't do DNS.
Mind you, I've configured the router to assign specific addresses to all
the cards.

I'd rather not add another card to the stack, though, they're $200 each
with the accessories.


I'm pretty sure using plain IPs will also work. The question, as I
understand it, is where to put the automatic routes subdomain.


Right.


If you have only one router node (which might be ok in your case), you
can use xip.io and configure the subdomain to something like:
apps.10.0.5.122.xip.io

That would be easiest unless your local network blocks private IP
responses from external DNS servers.


Well, the network is self-contained, pretty much.  Everything is behind
a NAT router, so I can do whatever I want, I just need to build it.


Then use plain IPs for nodes and masters. Then use xip.io for automatically 
generated DNS names pointing at your NAT router. Make sure the NAT router 
forwards ports 80 and 443 to ports 80 and 443 of the working router node(s) 
in the OpenShift cluster.

The above has the highest chance of working nicely out of the box.
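A rough sketch of the forwarding rules (assuming a Linux-based NAT router;
the external interface eth0 and the router node IP 10.0.5.122 are
illustrative):

# forward inbound 80/443 on the NAT router to the OpenShift router node
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j DNAT --to-destination 10.0.5.122:80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.5.122:443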

Alternatively, buy a router that can run OpenWrt, or run DNS in a 
container as you pointed out earlier.


Btw, running the app DNS in OpenShift is not exactly a catch-22. If you 
know the subdomain name beforehand (which is easy), then you use that 
subdomain in the OpenShift configuration while installing. Then you start a 
DNS pod (you'll have to use the node ports feature to expose it to the 
outside world) to serve that subdomain.
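A minimal sketch of exposing such a DNS pod via a node port (the names,
label, and node port number are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: app-dns
spec:
  type: NodePort
  selector:
    app: app-dns
  ports:
  - name: dns
    protocol: UDP
    port: 53
    targetPort: 53
    nodePort: 30053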


But using xip.io is better, as it will not require DNS reconfiguration on 
the client computers.


HTH



Re: What actually is required for DNS and Origin?

2016-07-21 Thread Josh Berkus
On 07/21/2016 01:40 PM, Alex Wauck wrote:
> 
> 
> On Thu, Jul 21, 2016 at 3:29 PM, Josh Berkus wrote:
> 
> There is no external DNS server, here.  I'm talking about a portable
> microcluster, a stack of microboard computers, self-contained.  The idea
> would be to run some kind of local DNS server so that, on directly
> connected machines, we could point to that in DNS and it would expose
> the services.
> 
> I suppose I can just bootstrap that, maybe as a system container ...
> 
> 
> If it's a bunch of microboard computers, I'd be tempted to just stick
> one more in there and run BIND on it.  Are you running a DHCP server, or
> are all IP addresses statically assigned?

There's a DHCP server, but it's a cheap router, so it can't do DNS.
Mind you, I've configured the router to assign specific addresses to all
the cards.

I'd rather not add another card to the stack, though, they're $200 each
with the accessories.

> I'm pretty sure using plain IPs will also work. The question, as I
> understand it, is where to put the automatic routes subdomain.

Right.

> If you have only one router node (which might be ok in your case), you
> can use xip.io and configure the subdomain to something like:
> apps.10.0.5.122.xip.io
>
> That would be easiest unless your local network blocks private IP
> responses from external DNS servers.

Well, the network is self-contained, pretty much.  Everything is behind
a NAT router, so I can do whatever I want, I just need to build it.


-- 
--
Josh Berkus
Project Atomic
Red Hat OSAS



Re: What actually is required for DNS and Origin?

2016-07-21 Thread Alex Wauck
On Thu, Jul 21, 2016 at 3:29 PM, Josh Berkus  wrote:

> There is no external DNS server, here.  I'm talking about a portable
> microcluster, a stack of microboard computers, self-contained.  The idea
> would be to run some kind of local DNS server so that, on directly
> connected machines, we could point to that in DNS and it would expose
> the services.
>
> I suppose I can just bootstrap that, maybe as a system container ...
>

If it's a bunch of microboard computers, I'd be tempted to just stick one
more in there and run BIND on it.  Are you running a DHCP server, or are
all IP addresses statically assigned?

-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.


Re: What actually is required for DNS and Origin?

2016-07-21 Thread Josh Berkus
On 07/21/2016 02:07 PM, Aleksandar Kostadinov wrote:

> Then use plain IPs for nodes and masters. Then use xip.io for automatically
> generated DNS names pointing at your NAT router. Make sure the NAT router
> forwards ports 80 and 443 to ports 80 and 443 of the working router node(s)
> in the OpenShift cluster.

Thanks for that.  I didn't know about xip.io before.

> Btw, running the app DNS in OpenShift is not exactly a catch-22. If you
> know the subdomain name beforehand (which is easy), then you use that
> subdomain in the OpenShift configuration while installing. Then you start a
> DNS pod (you'll have to use the node ports feature to expose it to the
> outside world) to serve that subdomain.

I might need to set this up, just because I need the cluster to work
even if it has no internet.

-- 
--
Josh Berkus
Project Atomic
Red Hat OSAS



Re: Modifying an example image for deployment from source

2016-07-21 Thread Tony Saxon
Ok, that makes sense. The database I'm giving it information for is not
running as a service in OpenShift. I was passing those environment
variables with the '-e' option instead of the '-p' option, which is probably
why it wasn't giving me any error. However, it still was not setting those
as environment variables. Since both would come back blank, that
explains the error from the migrate command.

I edited the template and added parameters for the HOST and PORT, then
updated the Django database file to use the new environment variables, and
it deployed properly. A few more tweaks and I should have it done. Thanks
for your help!
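A minimal sketch of that kind of template change (the parameter and
variable names here are illustrative): the parameters are declared in the
template, referenced from the container's environment in the deployment
config, and supplied at creation time with '-p'.

"parameters": [
  { "name": "DATABASE_HOST", "description": "Database host", "required": true },
  { "name": "DATABASE_PORT", "description": "Database port", "value": "3306" }
]

"env": [
  { "name": "DATABASE_HOST", "value": "${DATABASE_HOST}" },
  { "name": "DATABASE_PORT", "value": "${DATABASE_PORT}" }
]

oc new-app -f template.json -p DATABASE_HOST=db.internal -p DATABASE_PORT=3306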

On Thu, Jul 21, 2016 at 4:21 PM, Ben Parees  wrote:

>
>
> On Thu, Jul 21, 2016 at 3:55 PM, Tony Saxon  wrote:
>
>> Thanks for the help. I'm very close to getting this working I believe.
>> Now the build finishes and pushes to the internal repository. The
>> deployment starts and the django app errors:
>>
>> ---> Migrating database ...
>> Traceback (most recent call last):
>>   File "manage.py", line 10, in 
>> execute_from_command_line(sys.argv)
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/__init__.py",
>> line 338, in execute_from_command_line
>> utility.execute()
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/__init__.py",
>> line 330, in execute
>> self.fetch_command(subcommand).run_from_argv(self.argv)
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/base.py",
>> line 393, in run_from_argv
>> self.execute(*args, **cmd_options)
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/base.py",
>> line 443, in execute
>> self.check()
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/base.py",
>> line 481, in check
>> include_deployment_checks=include_deployment_checks,
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/checks/registry.py",
>> line 72, in run_checks
>> new_errors = check(app_configs=app_configs)
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/checks/model_checks.py",
>> line 28, in check_all_models
>> errors.extend(model.check(**kwargs))
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/base.py",
>> line 1205, in check
>> errors.extend(cls._check_fields(**kwargs))
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/base.py",
>> line 1282, in _check_fields
>> errors.extend(field.check(**kwargs))
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py",
>> line 934, in check
>> errors = super(AutoField, self).check(**kwargs)
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py",
>> line 207, in check
>> errors.extend(self._check_backend_specific_checks(**kwargs))
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py",
>> line 306, in _check_backend_specific_checks
>> return connection.validation.check_field(self, **kwargs)
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/mysql/validation.py",
>> line 18, in check_field
>> field_type = field.db_type(connection)
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py",
>> line 614, in db_type
>> return connection.data_types[self.get_internal_type()] % data
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/__init__.py",
>> line 36, in __getattr__
>> return getattr(connections[DEFAULT_DB_ALIAS], item)
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/utils/functional.py",
>> line 60, in __get__
>> res = instance.__dict__[self.name] = self.func(instance)
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/mysql/base.py",
>> line 196, in data_types
>> if self.features.supports_microsecond_precision:
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/utils/functional.py",
>> line 60, in __get__
>> res = instance.__dict__[self.name] = self.func(instance)
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/mysql/features.py",
>> line 52, in supports_microsecond_precision
>> return self.connection.mysql_version >= (5, 6, 4) and
>> Database.version_info >= (1, 2, 5)
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/utils/functional.py",
>> line 60, in __get__
>> res = instance.__dict__[self.name] = self.func(instance)
>>   File
>> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/mysql/base.py",
>> line 371, in mysql_version
>> with 

Re: What actually is required for DNS and Origin?

2016-07-21 Thread Aleksandar Kostadinov

Alex Wauck wrote on 07/21/16 23:40:



On Thu, Jul 21, 2016 at 3:29 PM, Josh Berkus wrote:

There is no external DNS server, here.  I'm talking about a portable
microcluster, a stack of microboard computers, self-contained.  The idea
would be to run some kind of local DNS server so that, on directly
connected machines, we could point to that in DNS and it would expose
the services.

I suppose I can just bootstrap that, maybe as a system container ...


If it's a bunch of microboard computers, I'd be tempted to just stick
one more in there and run BIND on it.  Are you running a DHCP server, or
are all IP addresses statically assigned?


I'm pretty sure using plain IPs will also work. The question, as I 
understand it, is where to put the automatic routes subdomain.


If you have only one router node (which might be ok in your case), you 
can use xip.io and configure the subdomain to something like:

apps.10.0.5.122.xip.io

That would be easiest unless your local network blocks private IP 
responses from external DNS servers.
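A minimal sketch of where that subdomain goes (assuming the stock
master-config.yaml layout; the path may vary by install):

# /etc/origin/master/master-config.yaml
routingConfig:
  subdomain: apps.10.0.5.122.xip.io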


Otherwise you'd need a custom DNS server and to point client machines at it.



Re: What actually is required for DNS and Origin?

2016-07-21 Thread Josh Berkus
On 07/21/2016 01:03 PM, Aleksandar Kostadinov wrote:
> Could you explain what kind of DNS you are talking about here? For the
> exposed services?
> Presently you just create wildcard A records pointing at your "router"
> nodes and put the subdomain name in the configuration. That you can do once
> (or whenever nodes are added/removed) in your general DNS infrastructure,
> whatever it is.

There is no external DNS server, here.  I'm talking about a portable
microcluster, a stack of microboard computers, self-contained.  The idea
would be to run some kind of local DNS server so that, on directly
connected machines, we could point to that in DNS and it would expose
the services.

I suppose I can just bootstrap that, maybe as a system container ...

-- 
--
Josh Berkus
Project Atomic
Red Hat OSAS



Re: Modifying an example image for deployment from source

2016-07-21 Thread Ben Parees
On Thu, Jul 21, 2016 at 3:55 PM, Tony Saxon  wrote:

> Thanks for the help. I'm very close to getting this working I believe. Now
> the build finishes and pushes to the internal repository. The deployment
> starts and the django app errors:
>
> ---> Migrating database ...
> Traceback (most recent call last):
>   File "manage.py", line 10, in 
> execute_from_command_line(sys.argv)
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/__init__.py",
> line 338, in execute_from_command_line
> utility.execute()
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/__init__.py",
> line 330, in execute
> self.fetch_command(subcommand).run_from_argv(self.argv)
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/base.py",
> line 393, in run_from_argv
> self.execute(*args, **cmd_options)
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/base.py",
> line 443, in execute
> self.check()
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/base.py",
> line 481, in check
> include_deployment_checks=include_deployment_checks,
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/checks/registry.py",
> line 72, in run_checks
> new_errors = check(app_configs=app_configs)
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/checks/model_checks.py",
> line 28, in check_all_models
> errors.extend(model.check(**kwargs))
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/base.py",
> line 1205, in check
> errors.extend(cls._check_fields(**kwargs))
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/base.py",
> line 1282, in _check_fields
> errors.extend(field.check(**kwargs))
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py",
> line 934, in check
> errors = super(AutoField, self).check(**kwargs)
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py",
> line 207, in check
> errors.extend(self._check_backend_specific_checks(**kwargs))
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py",
> line 306, in _check_backend_specific_checks
> return connection.validation.check_field(self, **kwargs)
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/mysql/validation.py",
> line 18, in check_field
> field_type = field.db_type(connection)
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py",
> line 614, in db_type
> return connection.data_types[self.get_internal_type()] % data
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/__init__.py",
> line 36, in __getattr__
> return getattr(connections[DEFAULT_DB_ALIAS], item)
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/utils/functional.py",
> line 60, in __get__
> res = instance.__dict__[self.name] = self.func(instance)
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/mysql/base.py",
> line 196, in data_types
> if self.features.supports_microsecond_precision:
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/utils/functional.py",
> line 60, in __get__
> res = instance.__dict__[self.name] = self.func(instance)
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/mysql/features.py",
> line 52, in supports_microsecond_precision
> return self.connection.mysql_version >= (5, 6, 4) and
> Database.version_info >= (1, 2, 5)
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/utils/functional.py",
> line 60, in __get__
> res = instance.__dict__[self.name] = self.func(instance)
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/mysql/base.py",
> line 371, in mysql_version
> with self.temporary_connection():
>   File "/opt/rh/python27/root/usr/lib64/python2.7/contextlib.py", line 17,
> in __enter__
> return self.gen.next()
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/base/base.py",
> line 462, in temporary_connection
> cursor = self.cursor()
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/base/base.py",
> line 162, in cursor
> cursor = self.make_debug_cursor(self._cursor())
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/base/base.py",
> line 135, in _cursor
> self.ensure_connection()
>   File
> "/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/base/base.py",
> line 130, in ensure_connection
> self.connect()
>   File
> 

Re: How to set an proxy in the openshift origin to pull the image

2016-07-21 Thread aleks

Hi.

On 21-07-2016 09:33, 周华康 wrote:


Hi
When I try to deploy the example apps, the log shows that I need 
to set a proxy, but how?

log:
"API error (500): Get 
https://registry-1.docker.io/v2/library/dancer-example/manifests/latest: 
Get 
https://auth.docker.io/token?scope=repository%3Alibrary%2Fdancer-example%3Apull=registry.docker.io: 
dial tcp: lookup auth.docker.io on 10.202.72.116:53: read udp 
10.161.67.132:57753->10.202.72.116:53: i/o timeout\n"


Maybe this can help

https://docs.openshift.org/latest/install_config/http_proxies.html
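(The short version of that doc: put the proxy settings in Docker's
sysconfig on each node and restart docker. The proxy host below is
illustrative, and the NO_PROXY list should cover your internal cluster
addresses.)

# /etc/sysconfig/docker
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=172.30.0.0/16,.cluster.local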

BR aleks



Re: What actually is required for DNS and Origin?

2016-07-21 Thread Aleksandar Kostadinov

Josh Berkus wrote on 07/21/16 22:59:
...

Just testing, for now, so the AWS DNS will work.

I'll have to give some thought as to how I'll handle DNS on the hardware
microcluster.  Anyone have suggestions for a minimalist solution?  I'd
love to just run BIND on a container, but there's a bit of a catch-22 there.



Could you explain what kind of DNS you are talking about here? For the 
exposed services?
Presently you just create wildcard A records pointing at your "router" 
nodes and put the subdomain name in the configuration. That you can do once 
(or whenever nodes are added/removed) in your general DNS infrastructure, 
whatever it is.
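Such a wildcard record, in BIND zone-file syntax, is a one-liner (the names
and IP are illustrative):

; route the whole apps subdomain to the router node
*.apps.example.com.   IN  A   10.0.5.122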


There might be other solutions possible in the future, though. You can 
also create "routes" with custom DNS names instead of auto-generated ones, 
which might be preferable in many cases.




Re: What actually is required for DNS and Origin?

2016-07-21 Thread Josh Berkus
On 07/21/2016 12:46 PM, Alex Wauck wrote:
> 
> On Thu, Jul 21, 2016 at 2:32 PM, Aleksandar Kostadinov wrote:
> 
> Two things, as listed in the doc. One is to have the hostnames of masters
> and slaves resolvable via the configured DNS servers.
> 
> 
> If you're on AWS, this is taken care of for you.  Your masters and
> slaves and whatnot will all be referred to by their internal DNS names
> (e.g. ip-172-31-33-101.us-west-1.compute.internal), so this aspect will
> just work, even if you set up the EC2 instances yourself and use the BYO
> playbooks.
>  
> 
> The other thing, listed as "optional", is having wildcard record(s)
> for the routes that expose services in OpenShift. This subdomain also
> needs to be configured in the master's config file.
> 
> 
> I highly recommend this.  It makes it very quick and easy to set up new
> services with valid DNS records.  Also, get a wildcard SSL certificate
> if you can afford it.  You can configure the router to automatically use
> that certificate for any service that doesn't specify one.

Just testing, for now, so the AWS DNS will work.

I'll have to give some thought as to how I'll handle DNS on the hardware
microcluster.  Anyone have suggestions for a minimalist solution?  I'd
love to just run BIND on a container, but there's a bit of a catch-22 there.

-- 
--
Josh Berkus
Project Atomic
Red Hat OSAS



Re: Modifying an example image for deployment from source

2016-07-21 Thread Tony Saxon
Thanks for the help. I'm very close to getting this working I believe. Now
the build finishes and pushes to the internal repository. The deployment
starts and the django app errors:

---> Migrating database ...
Traceback (most recent call last):
  File "manage.py", line 10, in 
execute_from_command_line(sys.argv)
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/__init__.py",
line 338, in execute_from_command_line
utility.execute()
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/__init__.py",
line 330, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/base.py",
line 393, in run_from_argv
self.execute(*args, **cmd_options)
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/base.py",
line 443, in execute
self.check()
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/management/base.py",
line 481, in check
include_deployment_checks=include_deployment_checks,
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/checks/registry.py",
line 72, in run_checks
new_errors = check(app_configs=app_configs)
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/core/checks/model_checks.py",
line 28, in check_all_models
errors.extend(model.check(**kwargs))
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/base.py",
line 1205, in check
errors.extend(cls._check_fields(**kwargs))
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/base.py",
line 1282, in _check_fields
errors.extend(field.check(**kwargs))
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py",
line 934, in check
errors = super(AutoField, self).check(**kwargs)
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py",
line 207, in check
errors.extend(self._check_backend_specific_checks(**kwargs))
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py",
line 306, in _check_backend_specific_checks
return connection.validation.check_field(self, **kwargs)
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/mysql/validation.py",
line 18, in check_field
field_type = field.db_type(connection)
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/models/fields/__init__.py",
line 614, in db_type
return connection.data_types[self.get_internal_type()] % data
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/__init__.py",
line 36, in __getattr__
return getattr(connections[DEFAULT_DB_ALIAS], item)
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/utils/functional.py",
line 60, in __get__
res = instance.__dict__[self.name] = self.func(instance)
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/mysql/base.py",
line 196, in data_types
if self.features.supports_microsecond_precision:
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/utils/functional.py",
line 60, in __get__
res = instance.__dict__[self.name] = self.func(instance)
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/mysql/features.py",
line 52, in supports_microsecond_precision
return self.connection.mysql_version >= (5, 6, 4) and
Database.version_info >= (1, 2, 5)
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/utils/functional.py",
line 60, in __get__
res = instance.__dict__[self.name] = self.func(instance)
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/mysql/base.py",
line 371, in mysql_version
with self.temporary_connection():
  File "/opt/rh/python27/root/usr/lib64/python2.7/contextlib.py", line 17,
in __enter__
return self.gen.next()
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/base/base.py",
line 462, in temporary_connection
cursor = self.cursor()
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/base/base.py",
line 162, in cursor
cursor = self.make_debug_cursor(self._cursor())
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/base/base.py",
line 135, in _cursor
self.ensure_connection()
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/base/base.py",
line 130, in ensure_connection
self.connect()
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/base/base.py",
line 118, in connect
conn_params = self.get_connection_params()
  File
"/opt/app-root/src/.local/lib/python2.7/site-packages/django/db/backends/mysql/base.py",
line 263, in get_connection_params
if settings_dict['HOST'].startswith('/'):

Re: What actually is required for DNS and Origin?

2016-07-21 Thread Alex Wauck
On Thu, Jul 21, 2016 at 2:32 PM, Aleksandar Kostadinov 
wrote:

> Two things, as listed in the doc. One is to have the hostnames of masters
> and slaves resolvable via the configured DNS servers.
>

If you're on AWS, this is taken care of for you.  Your masters and slaves
and whatnot will all be referred to by their internal DNS names (e.g.
ip-172-31-33-101.us-west-1.compute.internal), so this aspect will just
work, even if you set up the EC2 instances yourself and use the BYO
playbooks.


> The other thing, listed as "optional", is having wildcard record(s) for
> the routes that expose services in OpenShift. This subdomain also needs to
> be configured in the master's config file.
>

I highly recommend this.  It makes it very quick and easy to set up new
services with valid DNS records.  Also, get a wildcard SSL certificate if
you can afford it.  You can configure the router to automatically use that
certificate for any service that doesn't specify one.
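A sketch of wiring that up when creating the router (the certificate file
name is illustrative; the PEM should contain the certificate, key, and any
CA chain):

oadm router --default-cert=wildcard.pem --service-account=router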

-- 

Alex Wauck // DevOps Engineer

*E X O S I T E*
*www.exosite.com *

Making Machines More Human.


Re: What actually is required for DNS and Origin?

2016-07-21 Thread Aleksandar Kostadinov

Josh Berkus wrote on 07/21/16 22:17:

Folks:

https://docs.openshift.org/latest/install_config/install/prerequisites.html#install-config-install-prerequisites

This goes on a bit about DNS requirements, but what's *actually*
required is a bit unclear.  Do I just need DNS support for the
hostnames?  Or do I need external DNS which supports routing for containers?

Can anyone clarify?


Two things, as listed in the doc. One is to have the hostnames of masters and 
slaves resolvable via the configured DNS servers.


The other thing, listed as "optional", is having wildcard record(s) for 
the routes that expose services in OpenShift. This subdomain also needs 
to be configured in the master's config file.


HTH



What actually is required for DNS and Origin?

2016-07-21 Thread Josh Berkus
Folks:

https://docs.openshift.org/latest/install_config/install/prerequisites.html#install-config-install-prerequisites

This goes on a bit about DNS requirements, but what's *actually*
required is a bit unclear.  Do I just need DNS support for the
hostnames?  Or do I need external DNS which supports routing for containers?

Can anyone clarify?

-- 
--
Josh Berkus
Project Atomic
Red Hat OSAS



Re: Manifest v2 schema error when pulling images from Docker Hub

2016-07-21 Thread Guilherme Macedo
Nice to know! Please let me know if you need any help in testing this; 
I have a test environment available.


Best regards.

Guilherme Macedo | guilhe...@gmacedo.com
Information Security Consultant
www.gmacedo.com

On 2016-07-21 10:55, Clayton Coleman wrote:

The CentOS, Fedora, and RHEL versions of Docker 1.10 have a patch that
will allow you to push from Docker 1.10 to a schema2 enabled registry
using schema1 (which is what we're using today in our CI).  I believe
it's --skip-schema2-push to the daemon and will be enabled soon in
testing.

On Jul 21, 2016, at 3:20 AM, Guilherme Macedo  
wrote:


Hi Clayton.

Thanks for answering.
So I will wait for v1.3.0 release then, and will continue to push 
images manually.


Best regards.

Guilherme Macedo | guilhe...@gmacedo.com
Information Security Consultant
www.gmacedo.com

On Wed, 20 Jul 2016 19:06:48 -0400
Clayton Coleman  wrote:


Accept schema2 is not supported in 1.2.0

On Jul 20, 2016, at 6:33 PM, Guilherme Macedo 
 wrote:


Hi.

I'm running an Origin v1.2.0 cluster and docker v1.10.3-44 on our 
clients.
Origin fails to pull images from Docker Hub when I build a project 
with a Dockerfile, due to the manifest v2 schema change (manifest unknown 
error).
I've tried to set the variable 
REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ACCEPTSCHEMA2 to true as 
documented here [1], but Origin continues to fail with the same 
error.
To work around it I'm pulling images locally and pushing them to an 
Origin registry that I exposed to our users.
Does anyone know how to resolve this, or do we need to wait for Origin 
v1.3.0?


[1] 
https://docs.openshift.org/latest/install_config/install/docker_registry.html#docker-registry-configuration-reference-middleware


Thanks in advance and best regards.

Guilherme Macedo | guilhe...@gmacedo.com
Information Security Consultant
www.gmacedo.com
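For reference, that registry variable is typically set on the registry's
deployment config, e.g.:

oc env dc/docker-registry REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ACCEPTSCHEMA2=true

(though, as noted above, accepting schema2 is not supported in 1.2.0).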



Re: Manifest v2 schema error when pulling images from Docker Hub

2016-07-21 Thread Clayton Coleman
The CentOS, Fedora, and RHEL versions of Docker 1.10 have a patch that
will allow you to push from Docker 1.10 to a schema2 enabled registry
using schema1 (which is what we're using today in our CI).  I believe
it's --skip-schema2-push to the daemon and will be enabled soon in
testing.

> On Jul 21, 2016, at 3:20 AM, Guilherme Macedo  wrote:
>
> Hi Clayton.
>
> Thanks for answering.
> So I will wait for v1.3.0 release then, and will continue to push images 
> manually.
>
> Best regards.
>
> Guilherme Macedo | guilhe...@gmacedo.com
> Information Security Consultant
> www.gmacedo.com
>
> On Wed, 20 Jul 2016 19:06:48 -0400
> Clayton Coleman  wrote:
>
>> Accept schema2 is not supported in 1.2.0
>>
>>> On Jul 20, 2016, at 6:33 PM, Guilherme Macedo  wrote:
>>>
>>> Hi.
>>>
>>> I'm running an Origin v1.2.0 cluster and docker v1.10.3-44 on our clients.
>>> Origin fails to pull images from Docker Hub when I build a project with 
>>> a Dockerfile, due to the manifest v2 schema change (manifest unknown error).
>>> I've tried to set the variable 
>>> REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ACCEPTSCHEMA2 to true as 
>>> documented here [1], but Origin continues to fail with the same error.
>>> To work around it I'm pulling images locally and pushing them to an Origin 
>>> registry that I exposed to our users.
>>> Does anyone know how to resolve this, or do we need to wait for Origin v1.3.0?
>>>
>>> [1] 
>>> https://docs.openshift.org/latest/install_config/install/docker_registry.html#docker-registry-configuration-reference-middleware
>>>
>>> Thanks in advance and best regards.
>>>
>>> Guilherme Macedo | guilhe...@gmacedo.com
>>> Information Security Consultant
>>> www.gmacedo.com
>>>


Re: Modifying an example image for deployment from source

2016-07-21 Thread Ben Parees
On Thu, Jul 21, 2016 at 9:15 AM, Tony Saxon  wrote:

> Thanks. Looks like that puts me back on the right track. The django:3.5
> was a mistype on my part. I was referring to the python:3.5 image that was
> referred to in the build strategy in
> https://github.com/openshift/django-ex/blob/master/openshift/templates/django.json
> .
>
> Now it builds with no errors and the pod is deploying. 'oc status' shows
> that it's deployed and 'oc get pods' shows the deploy pod ready and running,
> but the application pod is running with '0/1' ready. When I look at the logs
> for both the deploy pod and the application pod, I don't see any errors,
> but the application pod definitely keeps restarting.
>

that template defines a readiness check for the application pod:
https://github.com/openshift/django-ex/blob/master/openshift/templates/django.json#L173-L188

is your application still serving traffic at the "/" path on port 8080?  If
not, it's going to fail the readiness check.

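The check in that template looks roughly like this (abbreviated from the
linked django.json):

"readinessProbe": {
  "httpGet": { "path": "/", "port": 8080 },
  "initialDelaySeconds": 3,
  "timeoutSeconds": 3
}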


>
> logs from application pod:
>
> Synchronizing apps without migrations:
>   Creating tables...
> Creating table stats_fagroup
> Creating table stats_metrics
> Creating table stats_host
> Creating table stats_hostperfstatus
> Creating table stats_statistics
> Creating table stats_defaultthreshold
> Creating table stats_threshold
> Creating table stats_pmsignoff
> Running deferred SQL...
>   Installing custom SQL...
> Running migrations:
>   Rendering model states... DONE
>   Applying contenttypes.0001_initial... OK
>   Applying auth.0001_initial... OK
>   Applying admin.0001_initial... OK
>   Applying contenttypes.0002_remove_content_type_name... OK
>   Applying auth.0002_alter_permission_name_max_length... OK
>   Applying auth.0003_alter_user_email_max_length... OK
>   Applying auth.0004_alter_user_username_opts... OK
>   Applying auth.0005_alter_user_last_login_null... OK
>   Applying auth.0006_require_contenttypes_0002... OK
> System check identified some issues:
>
> WARNINGS:
> stats.HostPerfStatus.date: (fields.W161) Fixed default value provided.
> HINT: It seems you set a fixed date / time / datetime value as
> default for this field. This may not be what you want. If you want to have
> the current date as default, use `django.utils.timezone.now`
>   Applying sessions.0001_initial... OK
> ---> Serving application with 'manage.py runserver' ...
> WARNING: this is NOT a recommended way to run you application in
> production!
> Consider using gunicorn or some other production web server.
>
> Logs from the deploy pod:
>
> [root@oso-master pmweb]# oc logs -f pmweb-1-deploy
> I0721 13:08:55.477158   1 deployer.go:200] Deploying test/pmweb-1 for
> the first time (replicas: 1)
> I0721 13:08:55.478057   1 recreate.go:126] Scaling test/pmweb-1 to 1
> before performing acceptance check
> I0721 13:08:57.518253   1 recreate.go:131] Performing acceptance check
> of test/pmweb-1
> I0721 13:08:57.518333   1 lifecycle.go:445] Waiting 600 seconds for
> pods owned by deployment "test/pmweb-1" to become ready (checking every 1
> seconds; 0 pods previously accepted)
>
> Any idea if there's another place to look for logs for what's going wrong?
>




>
>
> On Wed, Jul 20, 2016 at 8:45 PM, Ben Parees  wrote:
>
>>
>>
>> On Wed, Jul 20, 2016 at 7:53 PM, Tony Saxon  wrote:
>>
>>> I'm trying to take an existing Django application that we have running
>>> on a system and make it so that I can deploy into a lab origin environment
>>> that I have set up. I started by going through the example Django
>>> application: https://github.com/openshift/django-ex
>>>
>>> I didn't have any major problems with deploying that. I then tried to
>>> adapt our existing application based on the example; I added the
>>> requirements text file, made some small label modifications to the template
>>> file and attempted to deploy our application from our private git
>>> repository. It is bombing out while building the application due to the
>>> fact that it is unable to install one of the items listed in the
>>> requirements.txt file. I built another docker container and narrowed it
>>> down to needing the libffi-devel package.
>>>
>>
>> I'm not familiar with the package, but if you think it's a common
>> package people will need, consider opening an issue against the python repo
>> requesting it be added to the python s2i builder image:
>> https://github.com/sclorg/s2i-python-container
>>
>>
>>>
>>> After poring over the documentation, I'm having trouble figuring out
>>> the proper way to make a source image based on the openshift/django:3.5
>>> image that has the included package. I've gone over the documentation for
>>> building s2i images and such, but don't quite grasp the procedure for
>>> building something generic that does not have any application source code
>>> included and pushing that to an internal repository to be included in a
>>> 

Re: Modifying an example image for deployment from source

2016-07-21 Thread Tony Saxon
Thanks. Looks like that puts me back on the right track. The django:3.5 was
a mistype on my part. I was referring to the python:3.5 image that was
referred to in the build strategy in
https://github.com/openshift/django-ex/blob/master/openshift/templates/django.json
.

Now it builds with no errors and the pod is deploying. 'oc status' shows
that it's deployed and 'oc get pods' shows the deploy pod ready and running,
but the application pod is running with '0/1' ready. When I look at the logs
for both the deploy pod and the application pod, I don't see any errors,
but the application pod definitely keeps restarting.

logs from application pod:

Synchronizing apps without migrations:
  Creating tables...
Creating table stats_fagroup
Creating table stats_metrics
Creating table stats_host
Creating table stats_hostperfstatus
Creating table stats_statistics
Creating table stats_defaultthreshold
Creating table stats_threshold
Creating table stats_pmsignoff
Running deferred SQL...
  Installing custom SQL...
Running migrations:
  Rendering model states... DONE
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
System check identified some issues:

WARNINGS:
stats.HostPerfStatus.date: (fields.W161) Fixed default value provided.
HINT: It seems you set a fixed date / time / datetime value as
default for this field. This may not be what you want. If you want to have
the current date as default, use `django.utils.timezone.now`
  Applying sessions.0001_initial... OK
---> Serving application with 'manage.py runserver' ...
WARNING: this is NOT a recommended way to run you application in production!
Consider using gunicorn or some other production web server.

Logs from the deploy pod:

[root@oso-master pmweb]# oc logs -f pmweb-1-deploy
I0721 13:08:55.477158   1 deployer.go:200] Deploying test/pmweb-1 for
the first time (replicas: 1)
I0721 13:08:55.478057   1 recreate.go:126] Scaling test/pmweb-1 to 1
before performing acceptance check
I0721 13:08:57.518253   1 recreate.go:131] Performing acceptance check
of test/pmweb-1
I0721 13:08:57.518333   1 lifecycle.go:445] Waiting 600 seconds for
pods owned by deployment "test/pmweb-1" to become ready (checking every 1
seconds; 0 pods previously accepted)

Any idea if there's another place to look for logs for what's going wrong?


On Wed, Jul 20, 2016 at 8:45 PM, Ben Parees  wrote:

>
>
> On Wed, Jul 20, 2016 at 7:53 PM, Tony Saxon  wrote:
>
>> I'm trying to take an existing Django application that we have running on
>> a system and make it so that I can deploy into a lab origin environment
>> that I have set up. I started by going through the example Django
>> application: https://github.com/openshift/django-ex
>>
>> I didn't have any major problems with deploying that. I then tried to
>> adapt our existing application based on the example; I added the
>> requirements text file, made some small label modifications to the template
>> file and attempted to deploy our application from our private git
>> repository. It is bombing out while building the application due to the
>> fact that it is unable to install one of the items listed in the
>> requirements.txt file. I built another docker container and narrowed it
>> down to needing the libffi-devel package.
>>
>
> I'm not familiar with the package, but if you think it's a common package
> people will need, consider opening an issue against the python repo
> requesting it be added to the python s2i builder image:
> https://github.com/sclorg/s2i-python-container
>
>
>>
>> After poring over the documentation, I'm having trouble figuring out the
>> proper way to make a source image based on the openshift/django:3.5 image
>> that has the included package. I've gone over the documentation for
>> building s2i images and such, but don't quite grasp the procedure for
>> building something generic that does not have any application source code
>> included and pushing that to an internal repository to be included in a
>> configuration file and be deployed with the new-app command. Any help would
>> be greatly appreciated, thanks.
>>
>
> Not sure what "openshift/django:3.5" is, but assuming you mean the python
> image, what you need to do is write a Dockerfile like:
>
> FROM centos/python-35-centos7
> USER root
> RUN yum install -y libffi-devel
> USER 1001  # must set user back to a non-root user
>
> then docker build that dockerfile (you can't build it on openshift online
> since we don't allow Docker builds, but if you have your own cluster, you
> 

Re: Networking

2016-07-21 Thread Miloslav Vlach
I have a problem: building the image is very slow (30 sec on AWS,
3 min on our VM). The CPU power is not different. It looks like the build
process stalls after some time. Second, when I run the container, JDBC
cannot connect to the database...

I'm trying to add more utilities to my s2i image and run a console.

Thanks for helping me…

Mila


On 21 July 2016 at 15:10:00, Andy Goldstein (agold...@redhat.com)
wrote:

The line you've highlighted about not being able to get network stats from
the container is harmless. The docker container has stopped but there is
still a cgroup for it, and the system is attempting to gather stats. It's a
bug that we're looking into, but it's unrelated to any connectivity issues
you might be having.

What specific issues are you running in to?

Andy

On Thursday, July 21, 2016, Miloslav Vlach  wrote:

> Hi,
>
> can somebody please help me identify what is wrong ? It looks like the
> network connection is broken from the docker container…
>
> Thanks Mila
>
> Jul 21 14:57:21 rohlik-jdev01 systemd: Stopped docker container
> 8adf6ebdd893ea3c7b3ede0fbd2e80795380058a363b9509649ff684a28b3d37.
>
> Jul 21 14:57:21 rohlik-jdev01 systemd: Stopping docker container
> 8adf6ebdd893ea3c7b3ede0fbd2e80795380058a363b9509649ff684a28b3d37.
>
> Jul 21 14:57:21 rohlik-jdev01 origin-node: I0721 14:57:21.4360463561
> server.go:1100] GET
> /containerLogs/testo/wapi-12-build/sti-build?follow=true=10485760=1000:
> (17.80430948s) 200 [[Go-http-client/1.1] 10.10.121.132:33154]
>
> Jul 21 14:57:21 rohlik-jdev01 origin-node: I0721 14:57:21.7457633561
> helpers.go:101] *Unable to get network stats from pid 6521: couldn't read
> network stats: failure opening /proc/6521/net/dev: open /proc/6521/net/dev:
> no such file or directory*
>
> Jul 21 14:57:22 rohlik-jdev01 origin-node: I0721 14:57:22.3423453561
> kubelet.go:2430] SyncLoop (PLEG):
> "wapi-12-build_testo(9dd31dfb-4f42-11e6-bf98-52cb629706b3)", event:
> {ID:"9dd31dfb-4f42-11e6-bf98-52cb629706b3",
> Type:"ContainerDied",
> Data:"8adf6ebdd893ea3c7b3ede0fbd2e80795380058a363b9509649ff684a28b3d37"}
>
> Jul 21 14:57:22 rohlik-jdev01 ovs-vsctl: ovs|1|vsctl|INFO|Called as
> ovs-vsctl --if-exists del-port veth740cf17
>
> Jul 21 14:57:22 rohlik-jdev01 kernel: device veth740cf17 left promiscuous
> mode
>
> Jul 21 14:57:22 rohlik-jdev01 origin-node: I0721 14:57:22.6382183561
> manager.go:1368] Killing container
> "63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c
> testo/wapi-12-build" with 30 second grace period
>
> Jul 21 14:57:22 rohlik-jdev01 journal:
> time="2016-07-21T14:57:22.639053363+02:00" level=info msg="{Action=stop,
> ID=63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c,
> LoginUID=4294967295, PID=3561}"
>
> Jul 21 14:57:22 rohlik-jdev01 systemd: Stopped docker container
> 63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c.
>
> Jul 21 14:57:22 rohlik-jdev01 systemd: Stopping docker container
> 63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c.
>
> Jul 21 14:57:22 rohlik-jdev01 systemd-machined: Machine
> 63f3c7aded229b170b5ce620493cb254 terminated.
>
> Jul 21 14:57:22 rohlik-jdev01 oci-register-machine[6669]: 2016/07/21
> 14:57:22 Register machine: poststop
> 63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c 0
> /var/lib/docker/overlay/cbd4df1c6602213c15795ceb61291e908c2820779b5a5affc997787521c829f9/merged
>
> Jul 21 14:57:22 rohlik-jdev01 oci-register-machine[6669]: 2016/07/21
> 14:57:22 TerminateMachine failed: No machine
> '63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c' known
>
> Jul 21 14:57:22 rohlik-jdev01 NetworkManager[1021]:   (veth740cf17):
> link disconnected
>
> Jul 21 14:57:22 rohlik-jdev01 NetworkManager[1021]:   (veth649d055):
> failed to find device 19 'veth649d055' with udev
>
> Jul 21 14:57:22 rohlik-jdev01 NetworkManager[1021]:   (veth649d055):
> new Veth device (carrier: OFF, driver: 'veth', ifindex: 19)
>
> Jul 21 14:57:22 rohlik-jdev01 avahi-daemon[1002]: Withdrawing workstation
> service for veth649d055.
>
> Jul 21 14:57:22 rohlik-jdev01 avahi-daemon[1002]: Withdrawing workstation
> service for veth740cf17.
>
> Jul 21 14:57:22 rohlik-jdev01 NetworkManager[1021]:   (veth649d055):
> failed to disable userspace IPv6LL address handling
>
> Jul 21 14:57:22 rohlik-jdev01 NetworkManager[1021]:   (veth740cf17):
> failed to disable userspace IPv6LL address handling
>
> Jul 21 14:57:22 rohlik-jdev01 origin-node: I0721 14:57:22.7631753561
> manager.go:1400] Container
> "63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c
> testo/wapi-12-build" exited after 124.889561ms
>
> Jul 21 14:57:23 rohlik-jdev01 origin-node: I0721 14:57:23.0815023561
> kubelet.go:2245] Killing unwanted pod "wapi-12-build"
>
> Jul 21 14:57:23 rohlik-jdev01 origin-node: E0721 14:57:23.3276233561
> manager.go:1297] Failed to teardown network for pod
> "9dd31dfb-4f42-11e6-bf98-52cb629706b3" 

Re: How to set an proxy in the openshift origin to pull the image

2016-07-21 Thread Mateus Caruccio
Hi.

You could try to use a Chinese mirror. The following article shows how to
do it (I haven't tried it myself):
http://rzhw.me/blog/2015/12/faster-docker-pulls-in-china-with-daocloud/
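The gist is pointing the Docker daemon at a registry mirror, e.g. (the
mirror URL here is illustrative):

# /etc/sysconfig/docker (append to the existing daemon OPTIONS)
OPTIONS='--registry-mirror=https://mirror.example.com'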


--
Mateus Caruccio / Master of Puppets
GetupCloud.com - Eliminamos a Gravidade

On Thu, Jul 21, 2016 at 4:33 AM, 周华康  wrote:

> Hi
> When I try to deploy the example apps, the log shows that I need
> to set a proxy, but how?
> log:
> "API error (500): Get
> https://registry-1.docker.io/v2/library/dancer-example/manifests/latest:
> Get
> https://auth.docker.io/token?scope=repository%3Alibrary%2Fdancer-example%3Apull=registry.docker.io:
> dial tcp: lookup auth.docker.io on 10.202.72.116:53: read udp
> 10.161.67.132:57753->10.202.72.116:53: i/o timeout\n"
>


Networking

2016-07-21 Thread Miloslav Vlach
Hi,

can somebody please help me identify what is wrong ? It looks like the
network connection is broken from the docker container…

Thanks Mila

Jul 21 14:57:21 rohlik-jdev01 systemd: Stopped docker container
8adf6ebdd893ea3c7b3ede0fbd2e80795380058a363b9509649ff684a28b3d37.

Jul 21 14:57:21 rohlik-jdev01 systemd: Stopping docker container
8adf6ebdd893ea3c7b3ede0fbd2e80795380058a363b9509649ff684a28b3d37.

Jul 21 14:57:21 rohlik-jdev01 origin-node: I0721 14:57:21.4360463561
server.go:1100] GET
/containerLogs/testo/wapi-12-build/sti-build?follow=true=10485760=1000:
(17.80430948s) 200 [[Go-http-client/1.1] 10.10.121.132:33154]

Jul 21 14:57:21 rohlik-jdev01 origin-node: I0721 14:57:21.7457633561
helpers.go:101] *Unable to get network stats from pid 6521: couldn't read
network stats: failure opening /proc/6521/net/dev: open /proc/6521/net/dev:
no such file or directory*

Jul 21 14:57:22 rohlik-jdev01 origin-node: I0721 14:57:22.3423453561
kubelet.go:2430] SyncLoop (PLEG):
"wapi-12-build_testo(9dd31dfb-4f42-11e6-bf98-52cb629706b3)", event:
{ID:"9dd31dfb-4f42-11e6-bf98-52cb629706b3",
Type:"ContainerDied",
Data:"8adf6ebdd893ea3c7b3ede0fbd2e80795380058a363b9509649ff684a28b3d37"}

Jul 21 14:57:22 rohlik-jdev01 ovs-vsctl: ovs|1|vsctl|INFO|Called as
ovs-vsctl --if-exists del-port veth740cf17

Jul 21 14:57:22 rohlik-jdev01 kernel: device veth740cf17 left promiscuous
mode

Jul 21 14:57:22 rohlik-jdev01 origin-node: I0721 14:57:22.6382183561
manager.go:1368] Killing container
"63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c
testo/wapi-12-build" with 30 second grace period

Jul 21 14:57:22 rohlik-jdev01 journal:
time="2016-07-21T14:57:22.639053363+02:00" level=info msg="{Action=stop,
ID=63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c,
LoginUID=4294967295, PID=3561}"

Jul 21 14:57:22 rohlik-jdev01 systemd: Stopped docker container
63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c.

Jul 21 14:57:22 rohlik-jdev01 systemd: Stopping docker container
63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c.

Jul 21 14:57:22 rohlik-jdev01 systemd-machined: Machine
63f3c7aded229b170b5ce620493cb254 terminated.

Jul 21 14:57:22 rohlik-jdev01 oci-register-machine[6669]: 2016/07/21
14:57:22 Register machine: poststop
63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c 0
/var/lib/docker/overlay/cbd4df1c6602213c15795ceb61291e908c2820779b5a5affc997787521c829f9/merged

Jul 21 14:57:22 rohlik-jdev01 oci-register-machine[6669]: 2016/07/21
14:57:22 TerminateMachine failed: No machine
'63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c' known

Jul 21 14:57:22 rohlik-jdev01 NetworkManager[1021]:   (veth740cf17):
link disconnected

Jul 21 14:57:22 rohlik-jdev01 NetworkManager[1021]:   (veth649d055):
failed to find device 19 'veth649d055' with udev

Jul 21 14:57:22 rohlik-jdev01 NetworkManager[1021]:   (veth649d055):
new Veth device (carrier: OFF, driver: 'veth', ifindex: 19)

Jul 21 14:57:22 rohlik-jdev01 avahi-daemon[1002]: Withdrawing workstation
service for veth649d055.

Jul 21 14:57:22 rohlik-jdev01 avahi-daemon[1002]: Withdrawing workstation
service for veth740cf17.

Jul 21 14:57:22 rohlik-jdev01 NetworkManager[1021]:   (veth649d055):
failed to disable userspace IPv6LL address handling

Jul 21 14:57:22 rohlik-jdev01 NetworkManager[1021]:   (veth740cf17):
failed to disable userspace IPv6LL address handling

Jul 21 14:57:22 rohlik-jdev01 origin-node: I0721 14:57:22.7631753561
manager.go:1400] Container
"63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c
testo/wapi-12-build" exited after 124.889561ms

Jul 21 14:57:23 rohlik-jdev01 origin-node: I0721 14:57:23.0815023561
kubelet.go:2245] Killing unwanted pod "wapi-12-build"

Jul 21 14:57:23 rohlik-jdev01 origin-node: E0721 14:57:23.3276233561
manager.go:1297] Failed to teardown network for pod
"9dd31dfb-4f42-11e6-bf98-52cb629706b3" using network plugins
"redhat/openshift-ovs-subnet": exit status 1

Jul 21 14:57:23 rohlik-jdev01 origin-node: I0721 14:57:23.3292143561
manager.go:1368] Killing container
"63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c /" with
30 second grace period

Jul 21 14:57:23 rohlik-jdev01 journal:
time="2016-07-21T14:57:23.329852522+02:00" level=info msg="{Action=stop,
ID=63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c,
LoginUID=4294967295, PID=3561}"

Jul 21 14:57:23 rohlik-jdev01 journal:
time="2016-07-21T14:57:23.330123594+02:00" level=error msg="Handler for
POST
/containers/63f3c7aded229b170b5ce620493cb254c1bb745a450fea413373919ddc0b492c/stop
returned error: Container already stopped"

Jul 21 14:57:23 rohlik-jdev01 origin-node: E0721 14:57:23.3307613561
kubelet.go:2248] Failed killing the pod "wapi-12-build": failed to
"TeardownNetwork" for "wapi-12-build_testo" with TeardownNetworkError:
"Failed to teardown network for pod
\"9dd31dfb-4f42-11e6-bf98-52cb629706b3\" using network plugins

How to set an proxy in the openshift origin to pull the image

2016-07-21 Thread 周华康
Hi
When I try to deploy the example apps, the log shows that I need to 
set a proxy, but how?
log:
"API error (500): Get 
https://registry-1.docker.io/v2/library/dancer-example/manifests/latest: Get 
https://auth.docker.io/token?scope=repository%3Alibrary%2Fdancer-example%3Apull=registry.docker.io:
 dial tcp: lookup auth.docker.io on 10.202.72.116:53: read udp 
10.161.67.132:57753->10.202.72.116:53: i/o timeout\n"