This doesn't seem to be an issue. I have multiple files with plus signs in
their names that made it back down to my local cache without requiring a
rebuild (including the whole Linux kernel).
On Tue, Feb 26, 2019 at 11:35 AM Brian Walsh wrote:
> On Mon, Feb 25, 2019 at 8:46 PM Timothy Froehlich wrote:
Hi Timothy,
The S3 protocol is HTTP(S)-based, so the overhead per object is quite
significant. This is not much of a problem for large files, but the
sstate_cache consists mostly of lots of really small files. I think in this
case you're better off storing the cache on a secondary EBS volume that you
can attach to your build instances.
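A minimal sketch of that setup, assuming the secondary volume shows up as
/dev/xvdf on the build instance (device naming varies by instance type) and
using /mnt/sstate as an example mount point:

    # format and mount the EBS volume that will hold the cache
    sudo mkfs.ext4 /dev/xvdf
    sudo mkdir -p /mnt/sstate
    sudo mount /dev/xvdf /mnt/sstate

    # then, in conf/local.conf, point BitBake's shared state at it
    SSTATE_DIR = "/mnt/sstate"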
On Mon, Feb 25, 2019 at 8:46 PM Timothy Froehlich wrote:
>
> I've been spending a bit too long this past week trying to build up a
> reproducible build infrastructure in AWS and I've got very little experience
> with cloud infrastructure, and I'm wondering if I'm going in the wrong
> direction. [...]
Well, based on the responses above I did some more research, and it didn't
seem like the file sizes should be causing problems on the scale that I was
seeing, so I investigated further. I realized that despite my build/sstate
directory slowly getting larger, it wasn't actually getting the files
and [...]
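One quick way to check whether artifacts are actually being served from a
mirror is the sstate summary BitBake prints at the start of each build; the
image target below is just an example:

    # the "Sstate summary" line reports how many sstate objects were
    # wanted, found, and missed for this build
    bitbake core-image-minimal | grep "Sstate summary"

A high missed count against a populated mirror points at fetch problems
rather than missing artifacts.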
Have you done any Wireshark analysis on the traffic? My guess is that the
round trip with network latency is bumping your build time by a factor of
at least 100. The sstate cache is hammered on continuously, so you have
probably introduced a significant bottleneck.
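Short of a full capture, curl's timing variables give a quick read on the
per-object cost; a sketch against a single cached object (bucket name and
object key are placeholders):

    # connect time and time-to-first-byte dominate for small objects
    curl -o /dev/null -s \
         -w 'connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
         https://<bucket>.s3.amazonaws.com/sstate-cache/<one-object>

Multiplying the per-object total by the number of sstate objects a build
touches gives a rough lower bound on the added wall-clock time.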
..Ch:W..
On Mon, Feb 25, 2019 at 1[...]
I've been spending a bit too long this past week trying to build up a
reproducible build infrastructure in AWS and I've got very little
experience with cloud infrastructure, and I'm wondering if I'm going in the
wrong direction. I'm attempting to host my sstate_cache as a mirror in a
private S3 bucket [...]
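A mirror of that shape is normally wired up in conf/local.conf along these
lines, assuming the stock SSTATE_MIRRORS mechanism (the bucket endpoint is
a placeholder; PATH is expanded by BitBake):

    SSTATE_MIRRORS ?= "file://.* https://<bucket>.s3.amazonaws.com/sstate-cache/PATH;downloadfilename=PATH"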