On Sun, 19 Jun 2016 03:28 am, Random832 wrote:

> On Sat, Jun 18, 2016, at 12:02, Steven D'Aprano wrote:
>> Er, you may have missed that I'm talking about a single user setup.
>> Are you suggesting that I can't trust myself not to forge a request
>> that goes to a hostile site?
>>
>> It's all well and good to say that the application is vulnerable to
>> X-site attacks, but how does that relate to a system where I'm the
>> only user?
>
> I don't think you understand what cross-site request forgery is,
Possibly not.

> unless your definition of "single user setup" includes not connecting
> to the internet at all. The point is that one site causes the client
> to send a request (not desired by the user) to another site. That the
> client is a single-user system makes no difference.

Here's the link again, in case anyone missed it.

http://blog.blindspotsecurity.com/2016/06/advisory-http-header-injection-in.html

I've read it again, and it seems to me that the attack surface is
pretty small. The attacker needs to know that you're running (let's
say) memcache, AND the port you're running it on, AND that you are
fetching URLs with something that allows X-site attacks, AND they have
to fool you into fetching an appropriately crafted URL.

From the article:

"In our case, if we could fool an internal Python application into
fetching a URL for us, then we could easily access memcached
instances. Consider the URL: ..."

and then they demonstrate an attack against memcache (a rough sketch
of what such an injected request looks like on the wire appears
below).

Except, the author of the article knows the port that memcache is on,
and he doesn't have to fool anyone into fetching a hostile URL. He
just fetched it himself.

"In our case, if we could fool a person into pointing a gun at their
foot and pulling the trigger, we can blow their foot off. Here is a
proof-of-concept..."

(points gun at own foot and pulls trigger)

Absent an actual attack that demonstrates the "fool an internal
application" part, I don't think I'm going to lose too much sleep over
this.

My house has many dangerous items, like kitchen knives, power tools
and the like. If somebody could fool me into, say, hitting myself on
the head with a hammer, that would be bad. But until I see a
demonstration of how somebody might do that, I'm not going to keep my
hammer under lock and key.

Maybe I'm missing something, but while I acknowledge the general
position "here's a security flaw", and I accept that it needs to be
fixed, I'm not seeing that this is a sufficiently realistic attack to
justify requiring authentication for all internal services.


-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list
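
For anyone who wants to see concretely why "access memcached
instances" follows from header injection, here's a minimal sketch. It
is not the article's payload (the actual URL is elided above), and it
doesn't go through urllib at all; it simply writes the kind of byte
stream a CRLF-injecting client would end up sending, straight to a
hypothetical memcached instance on localhost's default port 11211.
Memcached's text protocol typically answers lines it doesn't
understand with ERROR and keeps reading, so any commands smuggled in
among the HTTP-looking lines still get executed.

    # Sketch only: assumes memcached is listening on 127.0.0.1:11211
    # (the default port) and talks to it directly over a socket,
    # standing in for what a header-injecting HTTP client would emit.
    import socket

    payload = (
        b"GET /foo HTTP/1.1\r\n"    # looks like an ordinary request line
        b"Host: 127.0.0.1\r\n"      # ordinary header; ERROR to memcached
        b"set injected 0 60 5\r\n"  # smuggled memcached command
        b"hello\r\n"                # the 5-byte value for that set
        b"quit\r\n"                 # close the connection politely
    )

    with socket.create_connection(("127.0.0.1", 11211), timeout=5) as s:
        s.sendall(payload)
        print(s.recv(4096).decode(errors="replace"))
        # Typically a few ERROR/END lines for the HTTP-looking parts,
        # plus STORED for the injected "set": the smuggled command ran.

The article's claim, as I read it, is that a vulnerable URL-fetching
client can be induced to produce this sort of byte stream from a
crafted URL, without the attacker ever touching the socket themselves.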