After more thought on this and further discussion, it seems that MD5 is the
right way to do this. The use case that I am designing for, however, requires
the repo manager Artifactory, because it looks for the MD5 sums that it
generates for each artifact (other similar tools likely have the same
Those are good points. I guess the challenge is how much time you are
willing to spend when running playbooks to compute SHA/MD5 checksums of
files on disk. If we are OK spending that time, then we could just as easily
have the conditionals do that. I was looking at the file module to see what
I hit the wrong reply option previously. Sorry for the duplicate in
private, Brian.
I see this pretty rarely. It's normally the length of the file in bytes.
I think we're overthinking this. I had originally suggested to William that
comparing the timestamps and length would be a super
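A minimal sketch of that cheap timestamp-plus-length pre-check, assuming plain os.stat() metadata (the function name is hypothetical, and the check is only a heuristic since a content change can preserve both fields):

```python
import os

def looks_unchanged(path_a, path_b):
    """Cheap pre-check: same size and same mtime suggest the files
    match, without hashing either one. Not authoritative."""
    sa, sb = os.stat(path_a), os.stat(path_b)
    return sa.st_size == sb.st_size and int(sa.st_mtime) == int(sb.st_mtime)
```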
Brian,
Are you sure that's true of os.stat() and not just the shell commands?
Seems like it would not be.
I think I'm ok with adding a bytes= parameter if we can get around this
question.
On Wed, Jan 8, 2014 at 2:59 PM, Chad Scott csc...@appdynamics.com wrote:
I hit the wrong reply option
I've been burned by this before; stat is supposed to return the size of the
file in bytes.
I haven't really checked in a long time, as I've grown accustomed to not
relying on size for file comparisons, but my issues stemmed from some
tools/implementations using # of blocks * block_size to measure the
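The distinction being described can be seen directly with os.stat(): st_size is the logical byte length of the file, while st_blocks (on platforms that expose it) counts allocated 512-byte blocks, which can differ for sparse or small files:

```python
import os
import tempfile

# Write a file whose logical size won't match its block allocation.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 1234)
    path = f.name

st = os.stat(path)
# st_size is the length of the file in bytes.
print(st.st_size)

# st_blocks, where available, is the number of 512-byte blocks
# allocated; blocks * 512 is usually rounded up past st_size.
if hasattr(st, "st_blocks"):
    print(st.st_blocks * 512)

os.unlink(path)
```

So tools that report blocks * block_size will disagree with os.stat().st_size, which matches the experience described above.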
Hi guys,
I submitted a pull request today but wanted to provide some background on
the use case. I ran into some situations where it was desirable to be able
to update an artifact in a content repository (say, Artifactory) and then
re-run Ansible as-is and have that updated artifact be pulled
The latter; if we have to SHA/MD5 a 500M+ file every time we run Ansible,
the thought was that it would be too slow.
On Tue, Jan 7, 2014 at 2:48 PM, Michael DeHaan mich...@ansibleworks.com wrote:
Was the time of doing something SHA related actually being a problem, or
more a problem of needing to