Hello,
I am trying to set up auth for a k8s cluster. After setting up a cluster
with kops using CoreOS, I am now looking
at https://cloud.google.com/community/tutorials/kubernetes-auth-openid-rbac
The user is created; in the `Setting up a Kubernetes cluster` section it
states to update the
Hello,
What is the correct way to set the FSType to XFS when creating a Persistent
Volume Claim from a Helm template -
https://github.com/kubernetes/charts/blob/master/stable/mongodb-replicaset/templates/mongodb-statefulset.yaml#L166
?
Any advice is much appreciated.
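A minimal sketch of one way to do this (not necessarily the chart's own mechanism): define a StorageClass whose AWS EBS provisioner formats volumes as XFS, then point the chart's storage-class value at it. The class name below is a placeholder; check the chart's values.yaml for the exact key that selects a storage class.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-xfs          # placeholder name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: xfs            # volumes created from this class are formatted as XFS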
Hello,
I have set up a cluster using kops and have also created additional nodes, as per
https://github.com/kubernetes/kops/blob/master/docs/instance_groups.md#creating-a-new-instance-group
➜ terraform git:(master) ✗ kops get instancegroups
Hello,
I have set up a small test cluster on AWS, which has one master and 3
workers (t4.large), using kops.
What is the correct way to add for example 3 x r4.xlarge nodes to this?
Any advice is much appreciated.
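A sketch of the usual kops flow, with the instance-group name and cluster name as placeholders:

# create a new instance group alongside the existing "nodes" group
kops create instancegroup nodes-r4 --name $CLUSTER_NAME
# in the editor that opens, set machineType: r4.xlarge and minSize/maxSize: 3
kops update cluster $CLUSTER_NAME --yes
# if the cluster is managed through Terraform, use --target=terraform and apply the output instead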
2017 at 17:16, Norman Khine wrote:
>
> I have just set up a k8s 1.7.0 cluster, but get this error: Critical pod
kube-system_elasticsearch-logging-0 doesn't fit on any node, and therefore
>
> ➜ tack git:(develop) ✗ kubectl get pods --all-namespaces
(git)-[develop]
> NAMESPACE
I have just set up a k8s 1.7.0 cluster, but get this error: Critical pod
kube-system_elasticsearch-logging-0 doesn't fit on any node, and therefore
➜ tack git:(develop) ✗ kubectl get pods --all-namespaces
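For the scheduling error above, the scheduler's reasoning is usually visible in the pod's events; a generic way to check (names taken from the error message):

kubectl describe pod elasticsearch-logging-0 --namespace=kube-system
# the Events section shows the FailedScheduling reason, e.g. insufficient CPU or memory
kubectl describe nodes | grep -A 4 "Allocated resources"
# compares what each node has already committed against its capacity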
replicaset" in pod "test-
mongo-mongodb-replicaset-0" is waiting to start: PodInitializing
On Friday, June 30, 2017 at 10:47:39 PM UTC+1, Rodrigo Campos wrote:
>
> That should be independent of the service type change.
>
> The service is a different object and
Nope, the pods never initialized ;'( I will see if I can troubleshoot it.
On 30 June 2017 at 17:42, Rodrigo Campos wrote:
> So it's working? :)
>
> On Friday, June 30, 2017, Norman Khine wrote:
>
>> Ignore, I had to pass the `templates/database/mongo/values.ya
accessModes:
  - ReadWriteOnce
size: 30Gi
annotations: {}
On Friday, June 30, 2017 at 1:00:04 PM UTC+1, Norman Khine wrote:
>
> Hello, I have installed mongodb using the helm chart,
> https://github.com/kubernetes/charts/blob/master/stable/mongodb-replicaset/templates/mongodb-service.yaml#L17,
>
Warning FailedMount Failed to attach volume
"pvc-88927d4f-5da9-11e7-93c0-06b40d25fce3" on node
"ip-10-0-10-229.eu-west-2.compute.internal" with: Error attaching EBS
volume "vol-03817831fe7d0c29e" to instance "i-04695822170b439ea":
IncorrectState: vol-03817831fe7
Hello, I have installed mongodb using the helm chart,
https://github.com/kubernetes/charts/blob/master/stable/mongodb-replicaset/templates/mongodb-service.yaml#L17,
what is the correct way to update the Type and IP for this service?
Currently it is:
➜ k8s git:(master) kubectl describe svc tri
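If the chart's service is headless (it provides the replica set members' stable DNS), a common approach is to leave it alone and expose the same pods through a second Service rather than changing the type in place. A sketch; the name and selector labels are assumptions and should be copied from the pods the chart created:

apiVersion: v1
kind: Service
metadata:
  name: mongodb-external        # placeholder name
spec:
  type: LoadBalancer            # or NodePort, depending on how it should be reached
  selector:
    app: mongodb-replicaset     # copy the real labels from the chart's pods
    release: my-release
  ports:
    - port: 27017
      targetPort: 27017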
Hello, I am using
https://github.com/kubernetes/charts/blob/master/stable/mongodb-replicaset
to deploy a 3-node Mongo instance. All works fine within the k8s cluster,
but I would like to allow my Lambda function access to Mongo as well.
What is the correct way to securely achieve this, and how wo
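One possible pattern, assuming the Lambda function is attached to the same VPC (or a peered one): expose the replica set through an internal ELB so it never gets a public address. This reuses the second-Service idea sketched above, with the standard AWS internal-load-balancer annotation added:

metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  type: LoadBalancer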
OK, I needed the extra permissions:
"ec2:AuthorizeSecurityGroupIngress",
"ec2:AuthorizeSecurityGroupEgress",
"ec2:DeleteSecurityGroup",
"ec2:RevokeSecurityGroupEgress",
"ec2:RevokeSecurityGroupIngress",
On
I have set up a new k8s cluster and all works well, in that I can create
pods; I have set up Helm and installed Mongo, and the cluster is working fine.
The issue I am having is that, when I try to create a service, I get
UnauthorizedOperation: You are not authorized to perform this operation
Here is the
>> On Mon, Feb 20, 2017 at 2:11 PM, Matthias Rampke
>> wrote:
>> > I see three containers in this.
>> >
>> > Yes, 0.0.0.0: should work if the graphql container binds to all
>> > interfaces. Try it out?
>> >
>> >
>> > On
Hello, I have the following template file which has 2 containers:
containers:
  - name: api
    ports:
      - containerPort: 3000
    env:
      - name: SERVERLESS_ENDPOINT
        value: http://0.0.0.0:
  - name: media
    ports:
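On the question of what to put in SERVERLESS_ENDPOINT: containers in the same pod share a network namespace, so one container can normally reach the other on localhost. A sketch, with the variable name and port as assumptions:

- name: media
  env:
    - name: API_ENDPOINT            # hypothetical variable
      value: http://localhost:3000  # the api container's containerPort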
Hello, I have a pod with 3 replicas; one of the replicas had an issue and
went into `CrashLoopBackOff`, but the rolling update continued, so the
application failed and became unavailable.
What is the correct way to prevent this from happening?
Here is my yaml file:
apiVersion: extensions/v1beta1
kind
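A sketch of the two knobs that usually prevent a broken image from taking the whole app down: a readiness probe so crashing pods never become Ready, and a rolling-update strategy that refuses to remove healthy replicas until the replacements are Ready. The probe path and port are placeholders:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0     # keep every healthy replica until a new one is Ready
      maxSurge: 1
  template:
    spec:
      containers:
        - name: app
          readinessProbe:
            httpGet:
              path: /healthz    # placeholder
              port: 3000        # placeholder
            initialDelaySeconds: 5
            periodSeconds: 10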
ch-f3fvi_ku
<https://papertrailapp.com/groups/1212284/events?centered_on_id=728454754665263131&q=program%3Ak8s_fluentd-elasticsearch.845ea3f_fluentd-elasticsearch-f3fvi_ku>
: 2016-10-28 00:26:10 + [warn]: temporarily failed to flush the
buffer. next_retry=2016-10-28 00:15:
mply
> because cluster network is not working yet.
>
>
> On Monday, October 24, 2016 at 8:44:44 PM UTC+8, Norman Khine wrote:
>>
>> i am getting these warnings in my k8s cluster logs
>>
>> Oct 24 12:09:47 fluentd-elasticsearch-zy0oq k8s_fluentd-elasticsearch.
>
I am getting these warnings in my k8s cluster logs
Oct 24 12:09:47 fluentd-elasticsearch-zy0oq k8s_fluentd-elasticsearch.
6305e9d5_fluentd-elasticsearch-zy0oq_k: 2016-10-24 12:09:47 + [warn]:
temporarily failed to flush the buffer. next_retry=2016-10-24 12:08:11 +
error_class="Fluent::El
Oh, I see, thank you.
On 20 October 2016 at 16:34, 'Tim Hockin' via Kubernetes user discussion
and Q&A wrote:
> That's what I mean - replace really requires a read-modify-write, so
> you can read back the immutable field.
>
> On Thu, Oct 20, 2016 at 7:48 AM, Norma
Oct 20, 2016 6:54 AM, "Rodrigo Campos" wrote:
>
>> Yes, modify the file and do kubectl apply -f
>>
>> On Thursday, October 20, 2016, Norman Khine wrote:
>>
>>> Hello, I have the following pod, which has 3 containers:
>>>
>>>
Hello, I have the following pod, which has 3 containers:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-feature
  labels:
    pod: app
    track: feature
spec:
  replicas: 3
  template:
    metadata:
      labels:
        pod: app
        track: feature
    spec:
      conta
Hello,
I am running a k8s cluster on AWS and am trying to set up the security groups
on AWS to only allow traffic from the VPC created for my application.
When I add a new rule based on my k8s-worker security group I get this
error: `You have specified two resources that belong to different network
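That error normally means the rule references a security group that lives in another VPC; a common workaround (a sketch with placeholder values) is to allow the other VPC's CIDR block, here the k8s workers' VPC, instead of referencing the foreign security group:

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 27017 \
  --cidr 10.0.0.0/16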
Yes, it makes sense, thank you.
On 28 September 2016 at 16:36, Rodrigo Campos wrote:
> On Wed, Sep 28, 2016 at 03:10:09PM +0100, Norman Khine wrote:
> > Well the issue is that the new updated version is not deployed, so when I
> > push to the `develop` branch I have my image built
Wednesday, September 28, 2016, Norman Khine wrote:
>
>> Hello,
>> I have setup a lambda function to trigger a update to my k8s cluster, the
>> problem is that the image is not being pulled.
>> Here is my yaml file
>>
>> apiVersion: extensions/v1beta1
>
Hello,
I have set up a Lambda function to trigger an update to my k8s cluster; the
problem is that the image is not being pulled.
Here is my yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app-prod
  labels:
    pod: app
    track: production
  annotations:
    scheduler
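Two things have to be true for the new code to run: the Deployment's pod template must actually change (otherwise no rollout happens), and the kubelet must pull the image rather than reuse a cached copy. A sketch of the pull side, with the container name and image as placeholders:

containers:
  - name: app
    image: myrepo/app:latest    # placeholder; :latest is pulled on every pod start anyway
    imagePullPolicy: Always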
Yeah, it is working as expected, thanks.
On 14 September 2016 at 16:09, Rodrigo Campos wrote:
> Awesome. It's working as you want now? Can we help? :)
>
>
> On Wednesday, September 14, 2016, Norman Khine wrote:
>
>> Many thanks for your replies, it is clear now.
>&g
decide whether it should pull a
>>>> container image. After the container image is pulled the container/pod will
>>>> be up and running. If the image is updated, kubelet won't know and won't
>>>> re-pull the image for the pod even if the imagePull
fig-best-practices/#container-images
>
>
> On Mon, Sep 12, 2016 at 5:39 AM Rodrigo Campos > wrote:
>
>> Modify the file and run kubectl apply -f . This will do a rolling
>> update
>>
>> The :latest images are always pulled, and you have specifie
Hello, I have the following pod file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app
  labels:
    pod: app
  annotations:
    scheduler.alpha.kubernetes.io/affinity: >
      {
        "nodeAffinity": {
          "requiredDuringSchedulingIgnoredDuringExecution": {
Hello, in my logs I am seeing the following:
Aug 24 15:18:28 kube-dns-v10-4epnq k8s_skydns.29528e52_kube-dns-v10-4epnq
_kube-system_af513c8b-69f7: 2016/08/24 14:18:28 skydns: received DNS
Request for "kubernetes.default.svc.cluster.local." from "127.0.0.1:36423"
with type 1
Aug 24 15:18:28 ku
Tim Hockin' via Kubernetes user discussion and
Q&A wrote:
> I feel obligated to ask why? If it is about resources like CPU and RAM
> you should use the resources section of your pod, and we'll find an
> appropriate home for it.
>
> On Aug 25, 2016 12:13 PM, "Norman
into
> a giant monster (besides, if the CI is hacked, they have pretty much access
> to your hosted stuff)
>
> On Friday, August 26, 2016, Norman Khine wrote:
>
>> Hello,
>> I am trying to setup a CI from codeship the deploy new version and deploy
>> these when a releas
Hello,
I am trying to set up CI from Codeship to deploy new versions when a
release has been made and pushed to our GitHub repo.
In Codeship I am able to execute a custom script, and the following works:
curl -O
https://storage.googleapis.com/kubernetes-release/release/v1.3.4/bin
0 1m
10.2.39.5 ip-10-0-11-129.ec2.internal
Also, if I want to ensure that the pods don't land on an etcd node (so,
for example, not instance-type c3.large and not name etcd-*), how is this
done?
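A sketch in the same scheduler.alpha.kubernetes.io/affinity annotation form that appears elsewhere in this archive, keeping pods off nodes of a given instance type via the NotIn operator (excluding nodes by a name pattern such as etcd-* would instead need an explicit label on those nodes):

annotations:
  scheduler.alpha.kubernetes.io/affinity: >
    {
      "nodeAffinity": {
        "requiredDuringSchedulingIgnoredDuringExecution": {
          "nodeSelectorTerms": [
            {
              "matchExpressions": [
                {
                  "key": "beta.kubernetes.io/instance-type",
                  "operator": "NotIn",
                  "values": ["c3.large"]
                }
              ]
            }
          ]
        }
      }
    }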
On 25 August 2016 at 23:03, Norman Khine wrote:
> is there a way
Is there a way to find out from kubectl on which nodes the pods got
deployed?
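The short answer that usually applies: the wide output format adds a NODE column.

kubectl get pods -o wide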
On 25 August 2016 at 22:28, Brandon Philips
wrote:
> Yes, this looks roughly right. Did it work? We should probably write a doc
> on this.
>
> On Thu, Aug 25, 2016 at 4:53 PM Norman Khine wrote:
>
e-type" and value being
> the instance type. So you can use node selectors to do what you want; see
> http://kubernetes.io/docs/user-guide/node-selection/
>
> On Thu, Aug 25, 2016 at 12:13 PM, Norman Khine wrote:
>
>> hello,
>> i have couple of worker instances some
Hello,
I have a couple of worker instances; some are c4.large and others i2.xlarge.
What is the correct way to run specific pods on one instance type?
Any advice much appreciated.
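A sketch using the node label mentioned in the reply above (the instance type is populated on each node automatically); add it under the pod template's spec:

nodeSelector:
  beta.kubernetes.io/instance-type: c4.large   # pods with this selector only land on c4.large nodes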
Yeah, the only way to get it to work is to allow the public IPs of the
workers to access the Mongo cluster. I will put the cluster
inside k8s and install the monitoring agent there.
On 19 August 2016 at 14:16, Rodrigo Campos wrote:
>
>
> On Friday, August 19, 2016
Hello,
I have a Mongo cluster hosted on EC2, and I also have a k8s cluster with a
simple Node.js app that connects to Mongo.
As outgoing connections will go from any node, how do I secure my Mongo
cluster to accept connections from these pods?
My k8s cluster runs in a different AWS account to the
you to claim an IP or name
> so it can be stable. GCE does allow this.
>
> On Thu, Aug 18, 2016 at 1:24 AM, Norman Khine wrote:
> > Hello,
> > I have setup a test k8s cluster on AWS, where kubectl has:
> >
> > ➜ otp git:(maint) ✗ kubectl get svc my-app
>
Hello,
I have set up a test k8s cluster on AWS, where kubectl has:
➜ otp git:(maint) ✗ kubectl get svc my-app
NAME     CLUSTER-IP   EXTERNAL-IP        PORT(S)          AGE
my-app   10.0.5.21    a1cf7b1cf5fb5...   443/TCP,80/TCP   6d
What is the correct way to set the external IP
apiVersion:
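On AWS the EXTERNAL-IP column shows the ELB's DNS name, and a classic ELB cannot be pinned to a fixed address, so the usual approach is a DNS CNAME pointing at that hostname. On clouds that support it (GCE, per the reply above), a Service can request a specific address; a sketch with placeholder values:

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # honoured on GCE; ignored for AWS classic ELBs
  selector:
    app: my-app                  # placeholder selector
  ports:
    - name: https
      port: 443
    - name: http
      port: 80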