Hi,

In my case, I'm using django-storages to upload my static files to Amazon S3, and I'm serving my application from Heroku.
In local development, when I run collectstatic a second time right after the first, no files are uploaded to S3, because collectstatic compares modified times to determine whether the local files are newer than the ones in S3. That's fine so far. The problem is when I deploy to Heroku: collectstatic is executed on the Heroku server, and absolutely all the files are uploaded to S3 every time, even the ones that have not changed. This happens because during deployment Heroku creates a fresh copy of the source code, so every file gets a new modified time. In my case it takes almost 10 minutes to upload ~1000 files on each deployment.

Also, imagine the opposite situation, where the modified times are unchanged and I want to upload older versions of the static files. I wouldn't be able to, because the storage would refuse to upload files with an older modified time.

I think a more accurate way to decide whether a file needs to be replaced would be to compare checksums/hashes, and to offer this feature for all Storage subclasses. To preserve backwards compatibility, the collectstatic command would first check whether the storage subclass implements checksum generation, and otherwise fall back to the modified-time comparison.

What do you think, does this make sense?

To view this discussion on the web visit https://groups.google.com/d/msgid/django-users/04748bab-050a-4f2c-8982-21406de30b35%40googlegroups.com.