> Think of it this way - if the deployment scales up, clearly we should add
> more volumes. What do I do if it scales down? Delete the volumes? Hold on
> to them for some later scale-up? For how long? How many volumes?
Seeing that scaling happens 1 pod at a time, having n+1 volumes …
Interesting approach indeed! Also curious: as pods come and go, don't you
eventually waste the disk space?
On Thursday, September 6, 2018 at 6:33:39 PM UTC-4, David Rosenstrauch
wrote:
>
> FWIW, I recently ran into a similar issue, and the way I handled it was
> to have each of the pods …
On Thursday, September 6, 2018 at 11:37:43 PM UTC-4, Tim Hockin wrote:
On Thu, Sep 6, 2018, 3:33 PM David Rosenstrauch wrote:
> FWIW, I recently ran into a similar issue, and the way I handled it was
> to have each of the pods mount an NFS shared file system as a PV (AWS
> EFS, in my case) and have each pod write its output into a directory on
> the NFS share. The …
On Thu, Sep 6, 2018, 3:20 PM Naseem Ullah wrote:
> Thank you Jing and thank you Tim.
> If this feature will allow HPA enabled deployment managed pods to spawn
> with a prepopulated volume each, that would be …
This feature enables start-from-snapshot volumes but does not fundamentally
alter the …
On Thu, Sep 6, 2018, 1:52 PM Naseem Ullah wrote:
> I do not think you have to understand what you are asking for, I've
> learned a lot by asking questions I only half understood :) With that said
> autoscaling sts was a question and not a feature request :)
>
LOL, fair enough
> I do not see how …
FWIW, I recently ran into a similar issue, and the way I handled it was
to have each of the pods mount an NFS shared file system as a PV (AWS
EFS, in my case) and have each pod write its output into a directory on
the NFS share. The only issue then is just to make sure that each pod
writes its …
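A minimal sketch of the shared-filesystem pattern David describes: one ReadWriteMany volume mounted by every replica. All names, sizes, and the NFS server address are placeholders; on AWS, the EFS CSI driver would normally provision this instead of a hand-written PV.

```yaml
# Hypothetical example: a shared NFS/EFS export exposed as a RWX volume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-output
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany          # many pods can mount the same share
  storageClassName: ""       # bind statically, ignore the default StorageClass
  nfs:
    server: fs-12345678.efs.us-east-1.amazonaws.com   # placeholder export
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-output
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 100Gi
```

Each Deployment replica then mounts the same claim and, as David notes, writes under its own subdirectory (for example keyed on the pod name via the downward API) to avoid collisions.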
Thank you Jing and thank you Tim.
If this feature will allow HPA-enabled, Deployment-managed pods to spawn with a
prepopulated volume each, that would be nice. If not, using emptyDir with a
2-minute startup delay as the data is synced for each new pod is what it is.
PS would be nice if GKE had a …
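The emptyDir workaround Naseem describes is typically wired up with an init container that syncs the data before the app container starts. A sketch; the image, repository URL, and app name are stand-ins for the actual sync job:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: synced-app
spec:
  replicas: 2
  selector:
    matchLabels: {app: synced-app}
  template:
    metadata:
      labels: {app: synced-app}
    spec:
      initContainers:
        - name: prefetch            # runs to completion before the app starts
          image: alpine/git         # stand-in for the real sync image
          command: ["git", "clone", "--depth=1",
                    "https://example.com/data.git", "/data"]  # placeholder repo
          volumeMounts:
            - {name: data, mountPath: /data}
      containers:
        - name: app
          image: my-app:latest      # placeholder
          volumeMounts:
            - {name: data, mountPath: /data, readOnly: true}
      volumes:
        - name: data
          emptyDir: {}   # discarded with the pod; repopulated on every scale-up
```

This is exactly the trade-off in the thread: every scale-up pays the full download again, because emptyDir starts empty for each new pod.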
Naseem, for your volume-data prepopulation request, like Tim mentioned, we
now have volume snapshots, which will be available in v1.12 as an alpha
feature. This allows you to create volume snapshots from a volume (PVC).
With snapshots available, you can create a new volume (PVC) from a snapshot as
the data …
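For reference, in the 1.12 alpha Jing mentions the snapshot objects lived under the `snapshot.storage.k8s.io/v1alpha1` API group (the fields changed when the API later went beta/GA), and restoring through a PVC `dataSource` sat behind the `VolumeSnapshotDataSource` feature gate. Names and sizes below are illustrative:

```yaml
# Take a snapshot of an existing claim.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: data-snap
spec:
  snapshotClassName: default-snap-class   # assumes a VolumeSnapshotClass exists
  source:
    name: data-pvc                        # the PVC to snapshot
    kind: PersistentVolumeClaim
---
# Create a new, prepopulated claim from that snapshot.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-restored
spec:
  dataSource:
    name: data-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 10Gi
```

As Tim notes in his reply, this gives start-from-snapshot volumes but does not by itself decide how many volumes a scaling Deployment should hold, or what happens to them on scale-down.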
I do not think you have to understand what you are asking for, I've learned
a lot by asking questions I only half understood :) With that said
autoscaling sts was a question and not a feature request :)
I do not see how "data is important, and needs to be preserved" and "pods
(compute) have no …
You have to understand what you are asking for. You're saying "this
data is important and needs to be preserved beyond any one pod (a
persistent volume)" but you're also saying "the pods have no identity
because they can scale horizontally". These are mutually incompatible
statements.
You really …
I see I see.. what about autoscaling statefulsets with an HPA?
> On Sep 6, 2018, at 4:06 PM, 'Tim Hockin' via Kubernetes user discussion and
> Q&A wrote:
>
> Deployments and PersistentVolumes are generally not a good
> combination. This is what StatefulSets are for.
>
> There's work happening …
Deployments and PersistentVolumes are generally not a good
combination. This is what StatefulSets are for.
There's work happening to allow creation of a volume from a snapshot,
but it's only Alpha in the next release.
On Thu, Sep 6, 2018 at 1:03 PM Naseem Ullah wrote:
>
> Hello,
>
> I have a similar …
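The StatefulSet pattern Tim points to gives each replica its own claim through `volumeClaimTemplates`; a minimal sketch, with names, image, and sizes purely illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  serviceName: app             # assumes a matching headless Service exists
  replicas: 3
  selector:
    matchLabels: {app: app}
  template:
    metadata:
      labels: {app: app}
    spec:
      containers:
        - name: app
          image: my-app:latest   # placeholder
          volumeMounts:
            - {name: data, mountPath: /data}
  volumeClaimTemplates:        # one PVC per replica: data-app-0, data-app-1, ...
    - metadata:
        name: data
      spec:
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 10Gi
```

Because each pod has a stable identity (app-0, app-1, ...), its claim survives rescheduling, and on scale-down the per-replica PVCs are retained rather than deleted — which is precisely the keep-or-delete question Tim raises for Deployments, answered here by leaving cleanup to the operator.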
Hello,
I have a similar use case to Montassar.
Although I could use emptyDirs, each newly spun-up pod takes 2-3 minutes to
download required data (the pod does something similar to git-sync). If volumes
could be prepopulated when a new pod is spun up, it would simply sync the diff,
which would drastically reduce …