On Thursday, September 6, 2018 at 11:34:00 PM UTC-4, Tim Hockin wrote:
>
>
>
> On Thu, Sep 6, 2018, 1:52 PM Naseem Ullah wrote:
>
>> I do not think you have to understand what you are asking for, I've
>> learned a lot by asking questions I only half understood :) With that said
>> autoscaling sts was a question and not a feature request :)
Interesting approach indeed! Also curious about "As pods come and go -
don't you eventually waste the disk space?"
On Thursday, September 6, 2018 at 6:33:39 PM UTC-4, David Rosenstrauch
wrote:
>
> FWIW, I recently ran into a similar issue, and the way I handled it was
> to have each of the pods mount an NFS shared file system as a PV (AWS
> EFS, in my case) and have each pod write its output into a directory on
> the NFS share.
On Thursday, September 6, 2018 at 11:37:43 PM UTC-4, Tim Hockin wrote:
>
>
>
> On Thu, Sep 6, 2018, 3:20 PM Naseem Ullah wrote:
>
>> Thank you Jing and thank you Tim.
>> If this feature will allow HPA-enabled, Deployment-managed pods to spawn
>> with a prepopulated volume each, that would be nice.
On Thu, Sep 6, 2018, 3:33 PM David Rosenstrauch wrote:
> FWIW, I recently ran into a similar issue, and the way I handled it was
> to have each of the pods mount an NFS shared file system as a PV (AWS
> EFS, in my case) and have each pod write its output into a directory on
> the NFS share. The
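(For anyone wanting to replicate David's setup: a minimal sketch of a
ReadWriteMany NFS-backed PV/PVC pair that every pod in a Deployment can
mount. The names and the EFS server address are placeholders, not from the
original thread.)

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: shared-output            # placeholder name
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany              # many pods can mount read-write
      nfs:
        server: fs-12345678.efs.us-east-1.amazonaws.com  # placeholder EFS endpoint
        path: /
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-output
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: ""           # bind to the pre-created PV above
      resources:
        requests:
          storage: 10Gi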
On Thu, Sep 6, 2018, 3:20 PM Naseem Ullah wrote:
> Thank you Jing and thank you Tim.
> If this feature will allow HPA-enabled, Deployment-managed pods to spawn
> with a prepopulated volume each, that would be nice.
>
This feature enables start-from-snapshot volumes but does not fundamentally
alter the Deployment/PersistentVolume relationship.
On Thu, Sep 6, 2018, 1:52 PM Naseem Ullah wrote:
> I do not think you have to understand what you are asking for, I've
> learned a lot by asking questions I only half understood :) With that said
> autoscaling sts was a question and not a feature request :)
>
LOL, fair enough
> I do not see
Thank you Jing and thank you Tim.
If this feature will allow HPA-enabled, Deployment-managed pods to spawn with
a prepopulated volume each, that would be nice. If not, using emptyDir with a
2-minute startup delay while the data is synced for each new pod is what it is.
PS would be nice if GKE had a
Naseem, for your volume data prepopulation request, like Tim mentioned, we
now have volume snapshots, which will be available in v1.12 as an alpha
feature. This allows you to create volume snapshots from a volume (PVC).
With a snapshot available, you can create a new volume (PVC) from the
snapshot as the data source.
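(To make that concrete: roughly what the alpha API looked like around
v1.12. Field names changed between releases, so treat this as an
illustrative sketch with placeholder names, not a definitive manifest.)

    apiVersion: snapshot.storage.k8s.io/v1alpha1
    kind: VolumeSnapshot
    metadata:
      name: data-snap                       # placeholder name
    spec:
      snapshotClassName: default-snapclass  # placeholder snapshot class
      source:
        kind: PersistentVolumeClaim
        name: source-pvc                    # the PVC to snapshot
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: restored-pvc
    spec:
      dataSource:                    # new PVC starts from the snapshot
        apiGroup: snapshot.storage.k8s.io
        kind: VolumeSnapshot
        name: data-snap
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi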
I do not think you have to understand what you are asking for, I've learned
a lot by asking questions I only half understood :) With that said
autoscaling sts was a question and not a feature request :)
I do not see how "data is important, and needs to be preserved" and "pods
(compute) have no identity" are mutually incompatible.
You have to understand what you are asking for. You're saying "this
data is important and needs to be preserved beyond any one pod (a
persistent volume)" but you're also saying "the pods have no identity
because they can scale horizontally". These are mutually incompatible
statements.
You
I see, I see... what about autoscaling StatefulSets with an HPA?
> On Sep 6, 2018, at 4:06 PM, 'Tim Hockin' via Kubernetes user discussion and
> Q&A wrote:
>
> Deployments and PersistentVolumes are generally not a good
> combination. This is what StatefulSets are for.
>
> There's work happening
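(Re autoscaling StatefulSets with an HPA: an HPA can in fact target a
StatefulSet, since StatefulSets expose the scale subresource. A minimal
sketch with placeholder names; note that scaling down does not delete the
per-pod PVCs, which is the disk-space point raised above.)

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: StatefulSet            # HPA adjusts the StatefulSet's replicas
        name: web                    # placeholder StatefulSet name
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 80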
Deployments and PersistentVolumes are generally not a good
combination. This is what StatefulSets are for.
There's work happening to allow creation of a volume from a snapshot,
but it's only Alpha in the next release.
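(To make the distinction concrete: volumeClaimTemplates gives each replica
its own stable PVC, which a Deployment cannot do. A minimal sketch; names
and image are placeholders:)

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web
    spec:
      serviceName: web
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx             # placeholder image
            volumeMounts:
            - name: data
              mountPath: /data
      volumeClaimTemplates:          # one PVC per replica: data-web-0, data-web-1, ...
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi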
On Thu, Sep 6, 2018 at 1:03 PM Naseem Ullah wrote:
>
> Hello,
>
> I have a similar use case to Montassar.
Hello,
I have a similar use case to Montassar.
Although I could use emptyDirs, each newly spun-up pod takes 2-3 minutes to
download the required data (the pod does something similar to git-sync). If
volumes could be prepopulated when a new pod is spun up, it would simply sync
the diff, which would drastically reduce startup time.
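(A common workaround until prepopulated volumes exist: do the initial sync
in an init container so the main container only starts once the data is in
place. It does not remove the download cost, but it keeps the sync out of
the app container. A minimal sketch; image and repo URL are placeholders:)

    apiVersion: v1
    kind: Pod
    metadata:
      name: synced-pod
    spec:
      initContainers:
      - name: seed-data
        image: alpine/git            # placeholder image (entrypoint is git)
        args: ["clone", "--depth=1", "https://example.com/data.git", "/data"]
        volumeMounts:
        - name: data
          mountPath: /data
      containers:
      - name: app
        image: my-app:latest         # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}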
We pipe the k8s events into Sumo Logic using an HTTP collector and then use
Sumo Logic alerting.
punit agrawal
dev-ops lead
new product development
ebay
From: Chaitanya Potu
Reply-To: "kubernetes-users@googlegroups.com"
Date: Thursday, September 6, 2018 at 12:43 PM
Use Prometheus and Alertmanager for setting up this kind of monitoring
and alerting.
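(A minimal sketch of a Prometheus alerting rule for that, assuming
kube-state-metrics is being scraped; group name and thresholds are
placeholders:)

    groups:
    - name: k8s-alerts
      rules:
      - alert: PodCrashLooping
        expr: rate(kube_pod_container_status_restarts_total[15m]) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is restarting frequently"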
On Wednesday, August 8, 2018 at 2:59:01 PM UTC-5, David Rosenstrauch wrote:
>
> As we're getting ready to go to production with our k8s-based system,
> we're trying to pin down exactly how we're going to do all of our
> monitoring and alerting.
I'm using Heptio Ark on GKE for cluster backup and restore. You can set the
schedule for backups as well as the retention period. The tool is very
effective.
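(For reference, scheduling with the Ark CLI looks roughly like this; the
schedule is a cron expression and --ttl sets retention. Flags vary by Ark
version, so treat this as a sketch, not exact syntax:)

    # daily backup at 01:00, retained for 30 days
    ark schedule create daily-backup --schedule="0 1 * * *" --ttl 720h0m0s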
On Wednesday, August 8, 2018 at 5:41:21 PM UTC-5, parthi.geo wrote:
>
> Pointers on tools & best practices to back up and restore k8s clusters?
Here are more details on the problem.
The idea is to manually assign pods to a non-default nodepool.
The nodepool has autoscaling enabled. It is not the default nodepool but an
additional one created manually through the gcloud API.
When a pod is created in the nodepool and that
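(The usual way to pin pods to a specific GKE nodepool is a nodeSelector on
the label GKE applies to every node in a pool. A minimal sketch; the pool
name and image are placeholders:)

    apiVersion: v1
    kind: Pod
    metadata:
      name: pinned-pod
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: my-extra-pool   # placeholder pool name
      containers:
      - name: app
        image: my-app:latest                           # placeholder image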