Naseem, regarding your request for prepopulating volume data: as Tim mentioned, we now have volume snapshots, which will be available in v1.12 as an alpha feature. This allows you to create volume snapshots from a volume (PVC). With a snapshot available, you can create a new volume (PVC) using the snapshot as the data source, so the new volume will have its data prepopulated. We also plan to work on data clone and population features, which would allow you to clone data from one PVC to another or prepopulate data from some other data source. Please let us know if you have any questions about it or any further requirements for the feature. Thanks!
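A minimal sketch of that snapshot-and-restore flow, based on the alpha API in v1.12 (the exact schema may change before GA; the snapshot class, PVC, and snapshot names below are hypothetical):

```yaml
# Take a snapshot of an existing PVC (alpha API group as of v1.12).
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
spec:
  snapshotClassName: csi-snapclass   # hypothetical VolumeSnapshotClass
  source:
    kind: PersistentVolumeClaim
    name: source-data-pvc            # hypothetical existing PVC
---
# Create a new PVC prepopulated from that snapshot via dataSource.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-data-pvc
spec:
  dataSource:
    name: data-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

Note this requires a CSI driver with snapshot support and the relevant alpha feature gates enabled on the cluster.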
On Thursday, September 6, 2018 at 1:52:11 PM UTC-7, Naseem Ullah wrote:
>
> I do not think you have to understand what you are asking for; I've
> learned a lot by asking questions I only half understood :) With that
> said, autoscaling STS was a question and not a feature request :)
>
> I do not see how "data is important, and needs to be preserved" and
> "pods (compute) have no identity" are mutually incompatible statements,
> but if you say so. :)
>
> In any case, the data is more or less important since it is fetchable;
> it's just that if it's already there when a new pod is spun up, or when
> a deployment is updated and a new pod is created, it speeds up the
> startup time drastically (virtually immediate vs. 2-3 minutes to sync).
>
> Shared storage would be ideal (a deployment with HPA and a mounted NFS
> volume), but I get OOM errors when using an RWX persistent volume and
> more than one pod is syncing the same data to that volume at the same
> time. I do not know why these OOM errors occur; maybe it has something
> to do with the code that syncs the data. RWX seems to be a recurring
> challenge.
>
> On Thursday, September 6, 2018 at 4:33:18 PM UTC-4, Tim Hockin wrote:
>>
>> You have to understand what you are asking for. You're saying "this
>> data is important and needs to be preserved beyond any one pod (a
>> persistent volume)", but you're also saying "the pods have no identity
>> because they can scale horizontally". These are mutually incompatible
>> statements.
>>
>> You really want a shared storage API, not volumes...
>>
>> On Thu, Sep 6, 2018 at 1:08 PM Naseem Ullah <nas...@transit.app> wrote:
>>>
>>> I see, I see... what about autoscaling StatefulSets with an HPA?
>>>
>>> On Sep 6, 2018, at 4:06 PM, 'Tim Hockin' via Kubernetes user
>>> discussion and Q&A <kubernet...@googlegroups.com> wrote:
>>>>
>>>> Deployments and PersistentVolumes are generally not a good
>>>> combination. This is what StatefulSets are for.
>>>>
>>>> There's work happening to allow creation of a volume from a
>>>> snapshot, but it's only alpha in the next release.
>>>>
>>>> On Thu, Sep 6, 2018 at 1:03 PM Naseem Ullah <nas...@transit.app> wrote:
>>>>>
>>>>> Hello,
>>>>>
>>>>> I have a similar use case to Montassar.
>>>>>
>>>>> Although I could use emptyDirs, each newly spun-up pod takes 2-3
>>>>> minutes to download the required data (the pod does something
>>>>> similar to git-sync). If volumes could be prepopulated when a new
>>>>> pod is spun up, it would simply sync the diff, which would
>>>>> drastically reduce startup readiness time.
>>>>>
>>>>> Any suggestions? Right now I have a trade-off between creating a
>>>>> static number of replicas with the same number of PVCs, or using
>>>>> HPA with an emptyDir volume, which increases startup time for the
>>>>> pod.
>>>>>
>>>>> Thanks,
>>>>> Naseem
>>>>>
>>>>> On Thursday, January 5, 2017 at 6:07:42 PM UTC-5, Montassar Dridi wrote:
>>>>>>
>>>>>> Hello!!
>>>>>>
>>>>>> I'm using a Kubernetes Deployment with a persistent volume to run
>>>>>> my application, but when I try to add more replicas or autoscale,
>>>>>> all the new pods try to connect to the same volume.
>>>>>>
>>>>>> How can I automatically create new volumes for each new pod, the
>>>>>> way StatefulSets (PetSets) are able to do?

--
You received this message because you are subscribed to the Google Groups "Kubernetes user discussion and Q&A" group.
To unsubscribe from this group and stop receiving emails from it, send an email to kubernetes-users+unsubscr...@googlegroups.com.
To post to this group, send email to kubernetes-users@googlegroups.com.
Visit this group at https://groups.google.com/group/kubernetes-users.
For more options, visit https://groups.google.com/d/optout.
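For anyone finding this thread later: the per-pod volume behavior Tim and Montassar refer to comes from a StatefulSet's volumeClaimTemplates, where each replica (app-0, app-1, ...) gets its own PVC (data-app-0, data-app-1, ...) that is retained across pod restarts. A rough sketch, with illustrative names, image, and sizes:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app
spec:
  serviceName: app        # headless Service governing the StatefulSet
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example/app:latest   # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /var/lib/app
  volumeClaimTemplates:
    # One PVC is created per replica from this template.
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 5Gi
```

Note the PVCs created from the template are not deleted when the StatefulSet scales down or is deleted, which is what makes the prepopulated data survive for the next pod with the same ordinal.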