[no subject]

2019-02-04 Thread Marcello Lorenzi
Hi All,
we have noticed a global cluster issue after a system upgrade on our Origin
v3.10 cluster. The upgraded master nodes repeatedly log this error in the
journal:

ocp-devmaster01.test.local origin-node[6432]: W0204 10:54:36.577259    6432
status_manager.go:498] Failed to update status for pod
"master-controllers-ocp-devmaster01.test.local_kube-system(75934eb3-2861-11e9-8714-005056802561)":
failed to patch status
"{\"status\":{\"conditions\":[{\"lastProbeTime\":null,\"lastTransitionTime\":\"2019-02-04T09:44:26Z\",\"status\":\"True\",\"type\":\"Initialized\"},{\"lastProbeTime\":null,\"lastTransitionTime\":\"2019-02-04T09:46:12Z\",\"status\":\"True\",\"type\":\"Ready\"},{\"lastProbeTime\":null,\"lastTransitionTime\":null,\"status\":\"True\",\"type\":\"ContainersReady\"},{\"lastProbeTime\":null,\"lastTransitionTime\":\"2019-02-04T09:44:26Z\",\"status\":\"True\",\"type\":\"PodScheduled\"}],\"containerStatuses\":[{\"containerID\":\"docker://fec09ea31a3d523ed2bce797db2e6b76b6c80e03bd6eb337019f20173f99e048\",\"image\":\"
docker.io/openshift/origin-control-plane:v3.10.0\
",\"imageID\":\"docker-pullable://
docker.io/openshift/origin-control-plane@sha256:a2c9a4739e3dcb8124dbca8b743b32bd7a37b5ff5066c8b800cbf2b56747a59d\",\"lastState\":{\"terminated\":{\"containerID\":\"docker://1b2f82c725acc91170de540d72a45f4be66206c4d444247d5136c566c1eb2320\",\"exitCode\":2,\"finishedAt\":\"2019-02-04T09:41:46Z\",\"reason\":\"Error\",\"startedAt\":\"2019-02-04T09:24:34Z\"}},\"name\":\"controllers\",\"ready\":true,\"restartCount\":6,\"state\":{\"running\":{\"startedAt\":\"2019-02-04T09:46:01Z\"}}}],\"hostIP\":\"192.168.1.100\",\"phase\":\"Running\",\"podIP\":\"192.168.1.100\",\"startTime\":\"2019-02-04T09:44:26Z\"}}"
for pod "kube-system"/"master-controllers-ocp-devmaster01.test.local": pods
"master-controllers-ocp-devmaster01.test.local" is forbidden: User
"system:node:ocp-devmaster01.test.local" cannot patch pods/status in the
namespace "kube-system": User "system:node:ocp-devmaster01.test.local"
cannot "patch" "pods/status" with name
"master-controllers-ocp-devmaster01.test.local" in project "kube-system"

We noticed that the docker.io/openshift/origin-pod image was updated from
v3.10.0 to v3.11.0 on the upgraded nodes, and only those nodes are
currently failing.

Could this be a permission issue caused by the kubelet version mismatch?
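
For reference, a quick way to compare what the authorizers grant the node
user is to impersonate it from a cluster-admin session (the user name is
taken from the log above; --as-group matters because node identity is keyed
off the system:nodes group):

   $ oc auth can-i patch pods/status -n kube-system \
       --as=system:node:ocp-devmaster01.test.local --as-group=system:nodes

Running the same check against a not-yet-upgraded master should show whether
the upgrade changed the policy rather than the kubelet.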

Regards,
Marcello


[no subject]

2018-04-30 Thread Tien Hung Nguyen
Hi,

I'm trying to start OpenShift Origin on Docker Toolbox (Boot2Docker),
which uses VirtualBox to run Docker.

However, I'm getting the following error:

$ oc cluster up
Starting OpenShift using openshift/origin:v3.6.1 ...
-- Checking OpenShift client ... OK
-- Checking Docker client ... OK
-- Checking Docker version ... OK
-- Checking for existing OpenShift container ... OK
-- Checking for openshift/origin:v3.6.1 image ... OK
-- Checking Docker daemon configuration ... FAIL
   Error: did not detect an --insecure-registry argument on the Docker daemon
   Solution:

     Ensure that the Docker daemon is running with the following argument:
        --insecure-registry 172.30.0.0/16

     You can run this command with --create-machine to create a machine with
     the right argument.


I have already added the insecure registry with the following commands:

   - docker-machine create openshift --engine-insecure-registry 172.30.0.0/16
   - oc cluster up
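
For what it's worth, a way to double-check that the flag actually reached
the daemon is to point the Docker client at the machine and inspect the
engine configuration (this assumes the machine is named openshift, as above):

   $ eval $(docker-machine env openshift)
   $ docker info | grep -i -A 3 "insecure registries"

Note that oc cluster up inspects whichever daemon the current DOCKER_*
environment variables point at, so the eval step matters before re-running it.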



When I run the command 'docker-machine inspect openshift', I get the
following output:

{
  "ConfigVersion": 3,
  "Driver": {
    "IPAddress": "192.168.99.103",
    "MachineName": "openshift",
    "SSHUser": "docker",
    "SSHPort": 54818,
    "SSHKeyPath": "d:\\Profiles\\username\\.docker\\machine\\machines\\openshift\\id_rsa",
    "StorePath": "d:\\Profiles\\username\\.docker\\machine",
    "SwarmMaster": false,
    "SwarmHost": "tcp://0.0.0.0:3376",
    "SwarmDiscovery": "",
    "VBoxManager": {},
    "HostInterfaces": {},
    "CPU": 1,
    "Memory": 1024,
    "DiskSize": 2,
    "NatNicType": "82540EM",
    "Boot2DockerURL": "",
    "Boot2DockerImportVM": "",
    "HostDNSResolver": false,
    "HostOnlyCIDR": "192.168.99.1/24",
    "HostOnlyNicType": "82540EM",
    "HostOnlyPromiscMode": "deny",
    "UIType": "headless",
    "HostOnlyNoDHCP": false,
    "NoShare": false,
    "DNSProxy": true,
    "NoVTXCheck": false,
    "ShareFolder": ""
  },
  "DriverName": "virtualbox",
  "HostOptions": {
    "Driver": "",
    "Memory": 0,
    "Disk": 0,
    "EngineOptions": {
      "ArbitraryFlags": [],
      "Dns": null,
      "GraphDir": "",
      "Env": [],
      "Ipv6": false,
      "InsecureRegistry": [
        "172.30.0.0/16"
      ],
      "Labels": [],
      "LogLevel": "",
      "StorageDriver": "",
      "SelinuxEnabled": false,
      "TlsVerify": true,
      "RegistryMirror": [],
      "InstallURL": "https://get.docker.com"
    },
    "SwarmOptions": {
      "IsSwarm": false,
      "Address": "",
      "Discovery": "",
      "Agent": false,
      "Master": false,
      "Host": "tcp://0.0.0.0:3376",
      "Image": "swarm:latest",
      "Strategy": "spread",
      "Heartbeat": 0,
      "Overcommit": 0,
      "ArbitraryFlags": [],
      "ArbitraryJoinFlags": [],
      "Env": null,
      "IsExperimental": false
    },
    "AuthOptions": {
      "CertDir": "d:\\Profiles\\username\\.docker\\machine\\certs",
      "CaCertPath": "d:\\Profiles\\username\\.docker\\machine\\certs\\ca.pem",
      "CaPrivateKeyPath": "d:\\Profiles\\username\\.docker\\machine\\certs\\ca-key.pem",
      "CaCertRemotePath": "",
      "ServerCertPath": "d:\\Profiles\\username\\.docker\\machine\\machines\\openshift\\server.pem",
      "ServerKeyPath": "d:\\Profiles\\username\\.docker\\machine\\machines\\openshift\\server-key.pem",
      "ClientKeyPath": "d:\\Profiles\\username\\.docker\\machine\\certs\\key.pem",
      "ServerCertRemotePath": "",
      "ServerKeyRemotePath": "",
      "ClientCertPath": "d:\\Profiles\\username\\.docker\\machine\\certs\\cert.pem",
      "ServerCertSANs": [],
      "StorePath": "d:\\Profiles\\username\\.docker\\machine\\machines\\openshift"
    }
  },
  "Name": "openshift"
}

Please could you tell me why this is not working and how I can fix it in
order to start OpenShift Origin with Docker Toolbox?

Regards
Tien


[no subject]

2017-12-01 Thread Brian Keyes
I have just installed OpenShift for the first time outside of the OPEN
lab, but when I attempt to log in to the GUI, no user works. I also cannot
log in with "oc login" as any user, yet I can SSH into the master and run
oc commands such as oc get pods.

I am perplexed about what the issue could be. Please help!
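
For anyone hitting the same thing, a first place to look is the identity
provider section of the master config and the master log while a login
attempt happens (the path and unit name below assume a default RPM-based
Origin install and may differ on your cluster):

   $ grep -A 10 identityProviders /etc/origin/master/master-config.yaml
   $ journalctl -u origin-master -f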

-- 
Brian Keyes
Systems Engineer, Vizuri
703-855-9074(Mobile)
703-464-7030 x8239 (Office)



[no subject]

2017-02-22 Thread Joseph Lorenzini
Hi all,

I have a fully functioning Gluster cluster that I would like to use as a
volume in my OpenShift Origin cluster. I am on OpenShift 1.3 and am
following the instructions here:

https://docs.openshift.org/latest/install_config/storage_examples/gluster_example.html

When I create the pod (per the instructions), I see the following error:


Error syncing pod, skipping: timeout expired waiting for volumes to
attach/mount for pod "gluster-pod1"/"default". list of unattached/unmounted
volumes=[gluster-vol1]


As a starting point, I'd like to find all the logs related to the volume
mounting process, in the hope of getting some idea of why it can't mount
the Gluster volume. However, I can't seem to find logs about this problem
anywhere. Also, if anyone has suggestions on where to begin resolving
this, I'm all ears.
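
For what it's worth, the mount attempt happens on the node where the pod
was scheduled, so that is where the logs live. Something like the following
usually surfaces the underlying mount error (the systemd unit name assumes
a default Origin node install; adjust to your setup):

   $ oc describe pod gluster-pod1    # the Events section often shows the mount failure
   $ journalctl -u origin-node | grep -i gluster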

Thanks,
Joe


Subject: NAT'ed connection between containers

2016-03-14 Thread HIGUCHI Daisuke
Hello,

I installed OpenShift Origin (single master) following the advanced installation guide:
https://docs.openshift.org/latest/install_config/install/advanced_install.html#single-master

oso-master01.example.com   192.168.200.20   10.1.2.1
oso-node01.example.com     192.168.200.21   10.1.1.1
(oso-node02.example.com    192.168.200.22   10.1.0.1)

Now, there are 2 Pods on oso-node01. Pod-A has 10.1.1.6 and Pod-B has 10.1.1.7.

When Container-AA in Pod-A accesses Container-BB in Pod-B,
Container-AA's remote IP address is seen as 10.1.1.1 by Container-BB.
10.1.1.1 is the address of oso-node01's tun0 network interface.
My understanding was that Pods reach each other directly, but this
traffic looks NAT'ed.

I want all containers to reach each other directly.
Can I configure OpenShift to do that, and how?
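
In case it helps with debugging, one way to confirm whether the node
rewrites the source address is to list the NAT rules on oso-node01
(run as root on the node):

   $ iptables -t nat -S POSTROUTING | grep -i masq

If a MASQUERADE rule matches this pod-to-pod traffic, that would explain
the tun0 address showing up at Container-BB.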

Thanks,
dai
-- 
HIGUCHI Daisuke 



[no subject]

2016-02-24 Thread Stéphane Klein
Hi,

I've created a PersistentVolumeClaim before my PersistentVolume resource.

Now I have this:

```
# oc get pv
NAME                   LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM     REASON    AGE
my-persistent-volume             1Gi        RWO           Available                       18h

# oc get pvc
NAME      LABELS    STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
foobar              Pending                                      4d
```

My PersistentVolume config:
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-persistent-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /my-persistent-volume/
```

My PersistentVolumeClaim config:

```
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: foobar
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 512Mi
```

How can I get the claim to retry binding to the volume?
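
In case it's useful, the reason a claim stays Pending is recorded in its
events, so this is usually the first thing to check:

```
# oc describe pvc foobar
```

Note also that the claim above requests ReadWriteMany while the volume only
offers ReadWriteOnce; a claim binds only to a volume whose access modes and
capacity satisfy the request.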

Best regards,
Stéphane
-- 
Stéphane Klein 
blog: http://stephane-klein.info
cv : http://cv.stephane-klein.info
Twitter: http://twitter.com/klein_stephane