[liberationtech] BabelCrypt: The Universal Encryption Layer for Mobile Messaging Applications

2014-12-19 Thread Wasa Bee
This [0] may be of interest to people implementing secure IM. Instead of
creating an IM app from scratch and hoping for wide adoption, BabelCrypt
is a keyboard app. Once installed on an Android smartphone, the keyboard
passes encrypted data to an existing IM app such as WhatsApp or Facebook
Messenger. Using certain Android APIs, it can also access content on the
screen to display received messages.

[0]
http://www.iseclab.org/people/mweissbacher/publications/babelcrypt_fc.pdf
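The core constraint in the paper's approach is that ciphertext has to travel through an app that only accepts typeable text. A rough sketch of that pipeline in Python, with a deliberately toy cipher standing in for whatever BabelCrypt actually uses (all names here are hypothetical, and the keystream is for illustration only, NOT secure crypto):

```python
import base64
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only --
    # a real tool would use a vetted AEAD cipher such as AES-GCM.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def armor_encrypt(key: bytes, plaintext: str) -> str:
    nonce = secrets.token_bytes(12)
    data = plaintext.encode()
    ct = bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))
    # Base64 keeps the result typeable, so it survives any text-only IM app.
    return base64.b64encode(nonce + ct).decode()

def armor_decrypt(key: bytes, blob: str) -> str:
    raw = base64.b64decode(blob)
    nonce, ct = raw[:12], raw[12:]
    pt = bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
    return pt.decode()
```

The keyboard would call something like `armor_encrypt()` on each typed message and "type" the armored blob into the host app; on the receiving side, screen-reading APIs recover the blob and display `armor_decrypt()`'s output.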
-- 
Liberationtech is public & archives are searchable on Google. Violations of 
list guidelines will get you moderated: 
https://mailman.stanford.edu/mailman/listinfo/liberationtech. Unsubscribe, 
change to digest, or change password by emailing moderator at 
compa...@stanford.edu.

Re: [liberationtech] Proposal for more-trustable code from app stores; comments welcome.

2014-09-25 Thread Wasa Bee
On Thu, Sep 25, 2014 at 10:00 AM, Nick  wrote:

>
>
> But to be honest I'm not sure why people who are happy to use a
> completely proprietary mobile computing system would care that much
> about this.
>

There is a difference between trusting Google and the manufacturer vs.
trusting the hundreds of proprietary apps and how they'll use your data...
The attack surface is much smaller in the former case...




>
> Nick

Re: [liberationtech] Foxacid payload

2014-07-18 Thread Wasa Bee
If Google starts actively looking for bugs, aren't they going to publish a
per-vendor ranking every year to incentivize "bad vendors" to improve?
What other means do they have to incentivize vendors, without making so
much of a fuss that users lose confidence in web security overall?


On Thu, Jul 17, 2014 at 11:07 PM, Richard Brooks  wrote:

> On 07/17/2014 05:57 PM, Griffin Boyce wrote:
> > Andy Isaacson wrote:
> >>> this is exactly why some who have received these payloads are
> >>> sitting on them, rather than disclosing.
> >
> >> Hmmm, that seems pretty antisocial and shortsighted.  While the
> >> pool of bugs is large, it is finite.  Get bugs fixed and get
> >> developers to write fewer bugs going forward, and we'll rapidly
> >> deplete the pool of 0day and drive up the cost of FOXACID style
> >> deployments.
> >
> >> Forcing deployments to move to more interesting bugs will also
> >> give insight into IAs' exploit sourcing methodologies.
> >
> >   Solidarity is really important here.  "Increased security for those
> > who actively set honeytraps" doesn't really scale at all, and most
> > people will never reap the rewards of this work. =/  Forcing the
> > government and defense contractors to burn through 0day at a high rate
> > is far, FAR better than coming across one or two on your own and
> > hiding it.  These backdoors need to be revealed if we're to protect
> > ourselves.
> >
> >   Let's sunburn these motherfuckers.
> >
>
> You are forgetting moral hazard.
>
> Why are there so many bugs? The laws relieve software manufacturers
> of liability for the flaws of their programs. It is cheaper to
> let clients do the testing for you.
>
> If a 3rd party like Google takes over the software testing for
> free, there is even less incentive to make the slightest effort
> to test pre-release software and make non-faulty products.
>
> You will not exterminate all the bugs, you will give the bug
> makers (software manufacturers) more incentive to flood the
> world with faulty products.
>
> Which I think is why the open source/free products are more reliable
> than the commercial ones. The economic incentives are to build
> crap quickly. If you are not doing the work for profit motives,
> you can afford to make a decent product.
>
>

Re: [liberationtech] LUKS "Self-Destruct" feature introduced in Kali Linux

2014-01-30 Thread wasa bee
Well, encryption software is already hard to use: "Greenwald struggled with
the software for a while, but then gave up and blew off Snowden.  Snowden
then got in touch with Laura Poitras, who was already an expert on
encryption"
http://www.dailykos.com/story/2013/08/28/1233355/-Can-anyone-help-me-set-up-PGP-encrypted-E-mail-It-s-the-mark-of-an-investigative-reporter
How much would your not-so-technically-complicated solution cripple
usability?
You might argue that if you've encrypted your data and are afraid of being
coerced to reveal the key, then you're a sufficiently high-value target to
take on the extra hassle...


On Thu, Jan 30, 2014 at 3:25 AM, Charles Haynes  wrote:

> Yes it's useful but it's maybe more complicated than necessary. You
> encrypt the information and make sure the decryption key is sent to a safe
> destination via a different route. While in transit you cannot be compelled
> to give up encryption keys because you do not have them (unlike a TrueCrypt
> hidden volume.) When you arrive safely at your destination you retrieve the
> decryption key and restore your data (unlike a self-destruct.)
>
> -- Charles
>

Re: [liberationtech] LUKS "Self-Destruct" feature introduced in Kali Linux

2014-01-30 Thread wasa bee
Assuming the credential info (IV, password-encrypted key, etc.) is stored
with no recognizable format (no ASN.1, no header), it should look
indistinguishable from other encrypted data on disk. So how feasible is it
to brute-force the location of the key + password? That must take time.
What if the credential data is scattered over the disk rather than written
as a continuous blob? How much mitigation would that introduce?
I'm just wondering what kind of "hardening" could be used against
unreliable erase features.
Note that if you use an SSD with block management and wear levelling done
in the OS, you should be able to delete securely. The problem is mainly for
MMC.
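A back-of-envelope estimate of the brute-force cost (all numbers below are illustrative assumptions, not measurements):

```python
# Cost of brute-forcing the LOCATION of hidden key material on disk.
DISK_BYTES = 256 * 2**30          # assume a 256 GiB disk
SECTOR = 512                      # attacker tries every sector-aligned offset

candidate_offsets = DISK_BYTES // SECTOR
print(f"{candidate_offsets:,} candidate offsets")   # 536,870,912

# If each candidate offset must also survive a password-guessing run
# (say 10,000 guesses), the work multiplies:
guesses_per_offset = 10_000
total_trials = candidate_offsets * guesses_per_offset
print(f"{total_trials:.2e} total trials")

# Scattering the credential blob over k non-contiguous fragments forces the
# attacker to search COMBINATIONS of offsets, roughly candidate_offsets**k,
# which explodes even for k = 2:
k = 2
print(f"{float(candidate_offsets)**k:.2e} offset combinations for k={k}")
```

So scattering does add real work, but only if the attacker has no side information (filesystem metadata, free-space maps) that narrows the candidate set.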


On Thu, Jan 30, 2014 at 9:00 AM, Maxim Kammerer  wrote:

> On Sat, Jan 18, 2014 at 5:02 AM, Pranesh Prakash 
> wrote:
> > This above description seems to me to be an extreme case of 2FA.  Is it
> actually useful?
>
> As noted in Liberté Linux FAQ [1]:
> NOTE: Modern flash memory devices with wear leveling (as well as
> modern HDDs with automatic bad sectors remapping) cannot guarantee
> that the original OTFE header and its backup have been erased.
>
> Also, the developers implemented the functionality by finding some old
> cryptsetup patch and applying it.
>
> I can't think of a scenario where this functionality would be useful.
> Reminds me of Greenwald using his boyfriend as a data mule  --
> simultaneously trusting and mistrusting cryptography due to lack of
> understanding of the concepts involved. If you want to move data
> safely, encrypt it with an automatically-generated password of
> sufficient entropy, and transmit the password separately -- there is no
> need to transmit the whole LUKS keyslot, which is large, and is just a
> technical detail.
>
> [1] http://dee.su/liberte-faq
>
> --
> Maxim Kammerer
> Liberté Linux: http://dee.su/liberte
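Maxim's suggested workflow above (encrypt with an automatically generated password of sufficient entropy, transmit the password separately) is a few lines of stdlib Python. The 128-bit target and the alphabet are my assumptions:

```python
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits   # 62 symbols
TARGET_BITS = 128                                 # assumed entropy target

bits_per_char = math.log2(len(ALPHABET))          # ~5.95 bits per character
length = math.ceil(TARGET_BITS / bits_per_char)   # 22 characters
password = "".join(secrets.choice(ALPHABET) for _ in range(length))

print(length, password)
# The data itself would then be encrypted with any standard tool, e.g.
#   gpg --symmetric --batch --passphrase <password> file
# and the password sent (or memorized) via a separate channel.
```

The point being: the password is the only secret in transit, so there is no need to carry a whole LUKS keyslot around.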

Re: [liberationtech] Coursera to join censor club by blocking Iran IP space

2014-01-30 Thread wasa bee
proxy != Tor ;)
Maybe they can also use Lantern and Google's uProxy...


On Thu, Jan 30, 2014 at 10:06 AM,  wrote:

> The problem is the bandwith. Coursera works with video streams, that means
> that you can't practically use e.g. TOR.
> --
> *From: * wasa bee 
> *Date: *Thu, 30 Jan 2014 10:04:40 +
> *To: *; liberationtech<
> liberationtech@lists.stanford.edu>
> *Subject: *Re: [liberationtech] Coursera to join censor club by blocking
> Iran IP space
>
> Iranian users are very aware of proxies to access internet due to internal
> censorship.
> They will just use them to access coursera :); I doubt it will have much
> impact on users.
>
>
>
> On Thu, Jan 30, 2014 at 10:03 AM,  wrote:
>
>> Coursera says its not them, its an US export regulation. And this is
>> related to all sanctioned countries, including Syria, Sudan and Cuba, not
>> only Iran. I don't think that Coursera decided to do this by itself.
>> Stanford University also offers Coursera courses btw.
>>
>> Andreas
>>
>> Source:
>>
>> http://blog.coursera.org/post/74891215298/update-on-course-accessibility-for-students-in-cuba
>> -Original Message-
>> From: Nima Fatemi 
>> Sender: liberationtech-boun...@lists.stanford.edu
>> Date: Thu, 30 Jan 2014 09:22:33
>> To: 
>> Reply-To: liberationtech 
>> Subject: [liberationtech] Coursera to join censor club by blocking Iran IP
>> space
>>

Re: [liberationtech] Coursera to join censor club by blocking Iran IP space

2014-01-30 Thread wasa bee
Iranian users are very aware of proxies for accessing the internet, due to
internal censorship.
They will just use them to access Coursera :); I doubt it will have much
impact on users.



On Thu, Jan 30, 2014 at 10:03 AM,  wrote:

> Coursera says its not them, its an US export regulation. And this is
> related to all sanctioned countries, including Syria, Sudan and Cuba, not
> only Iran. I don't think that Coursera decided to do this by itself.
> Stanford University also offers Coursera courses btw.
>
> Andreas
>
> Source:
>
> http://blog.coursera.org/post/74891215298/update-on-course-accessibility-for-students-in-cuba
> -Original Message-
> From: Nima Fatemi 
> Sender: liberationtech-boun...@lists.stanford.edu
> Date: Thu, 30 Jan 2014 09:22:33
> To: 
> Reply-To: liberationtech 
> Subject: [liberationtech] Coursera to join censor club by blocking Iran IP
> space
>

Re: [liberationtech] 15 years later, why can't Johnny still not encrypt?

2014-01-15 Thread wasa bee
a 2011 study on encrypted radio used by government, police, etc:
http://www.crypto.com/blog/p25


On Wed, Jan 15, 2014 at 10:23 AM, Anders Thoresson wrote:

> Hi all!
>
> When doing research on email encryption and why it's still not widely
> used, I've read Alma Whittens "Why Johnny Can’t Encrypt: A Usability
> Evaluation of PGP 5.0" [1] from '99. I wonder if anyone knows of similar
> but more recent usability studies on encryption software?
>
> Comparing the findings made by Whittens and compare them to the software
> available today, not much seems to have happened. But does the conclusion
> still holds, that a lack of mass-adoption of email encryption is due to
> problematic UX – or are there other reasons that today are seen as more
> important?
>
> [1] –
> https://www.usenix.org/legacy/events/sec99/full_papers/whitten/whitten.ps
>
> Best regards,
> Anders Thoresson
> Freelance reporter
> and...@thoresson.net
> http://anders.thoresson.se
> http://www.dn.se/blogg/teknikbloggen
> http://twitter.com/thoresson
>

Re: [liberationtech] The second operating system hiding in every mobile phone

2013-11-13 Thread wasa bee
there's a 3rd OS running in your smartphones (besides the Java card in your
SIM):
http://armdevices.net/2012/05/04/samsung-galaxy-s3-may-be-the-first-smartphone-with-full-arm-trustzone-support-for-enabling-100-security-in-everything-online/
All new Samsung phones have it... and it is proprietary too. This one's
built with security in mind and certainly is perfect for spying/backdoor
purposes :) ...


On Wed, Nov 13, 2013 at 8:54 AM, Eugen Leitl  wrote:

>
>
> http://www.osnews.com/story/27416/The_second_operating_system_hiding_in_every_mobile_phone
>
> The second operating system hiding in every mobile phone
>
> posted by Thom Holwerda  on Tue 12th Nov 2013 23:06 UTC
>
> I've always known this, and I'm sure most of you do too, but we never
> really
> talk about it. Every smartphone or other device with mobile communications
> capability (e.g. 3G or LTE) actually runs not one, but two operating
> systems.
> Aside from the operating system that we as end-users see (Android, iOS,
> PalmOS), it also runs a small operating system that manages everything
> related to radio. Since this functionality is highly timing-dependent, a
> real-time operating system is required.
>
> This operating system is stored in firmware, and runs on the baseband
> processor. As far as I know, this baseband RTOS is always entirely
> proprietary. For instance, the RTOS inside Qualcomm baseband processors (in
> this specific case, the MSM6280) is called AMSS, built upon their own
> proprietary REX kernel, and is made up of 69 concurrent tasks, handling
> everything from USB to GPS. It runs on an ARMv5 processor.
>
> The problem here is clear: these baseband processors and the proprietary,
> closed software they run are poorly understood, as there's no proper peer
> review. This is actually kind of weird, considering just how important
> these
> little bits of software are to the functioning of a modern communication
> device. You may think these baseband RTOS' are safe and secure, but that's
> not exactly the case. You may have the most secure mobile operating system
> in
> the world, but you're still running a second operating system that is
> poorly
> understood, poorly documented, proprietary, and all you have to go on are
> Qualcomm's, Infineon's, and others' blue eyes.
>
> The insecurity of baseband software is not by error; it's by design. The
> standards that govern how these baseband processors and radios work were
> designed in the '80s, ending up with a complicated codebase written in the
> '90s - complete with a '90s attitude towards security. For instance, there
> is
> barely any exploit mitigation, so exploits are free to run amok. What makes
> it even worse, is that every baseband processor inherently trusts whatever
> data it receives from a base station (e.g. in a cell tower). Nothing is
> checked, everything is automatically trusted. Lastly, the baseband
> processor
> is usually the master processor, whereas the application processor (which
> runs the mobile operating system) is the slave.
>
> So, we have a complete operating system, running on an ARM processor,
> without
> any exploit mitigation (or only very little of it), which automatically
> trusts every instruction, piece of code, or data it receives from the base
> station you're connected to. What could possibly go wrong?
>
> With this in mind, security researcher Ralf-Philipp Weinmann of the
> University of Luxembourg set out to reverse engineer the baseband processor
> software of both Qualcomm and Infineon, and he easily spotted loads and
> loads
> of bugs, scattered all over the place, each and every one of which could
> lead
> to exploits - crashing the device, and even allowing the attacker to
> remotely
> execute code. Remember: all over the air. One of the exploits he found
> required nothing more but a 73 byte message to get remote code execution.
> Over the air.
>
> You can do some crazy things with these exploits. For instance, you can
> turn
> on auto-answer, using the Hayes command set. This is a command language for
> modems designed in 1981, and it still works on modern baseband processors
> found in smartphones today (!). The auto-answer can be made silent and
> invisible, too.
>
> While we can sort-of assume that the base stations in cell towers operated
> by
> large carriers are "safe", the fact of the matter is that base stations are
> becoming a lot cheaper, and are being sold on eBay - and there are even
> open
> source base station software packages. Such base stations can be used to
> target phones. Put a compromised base station in a crowded area - or even a
> financial district or some other sensitive area - and you can remotely turn
> on microphones, cameras, place rootkits, place calls/send SMS messages to
> expensive numbers, and so on. Yes, you can even brick phones permanently.
>
> This is a pretty serious issue, but one that you rarely hear about. This is
> such low-level, complex software that I would guess very few people in the

Re: [liberationtech] Moscow Metro says new tracking system is to find stolen phones; no one believes them

2013-07-30 Thread wasa bee
Spoofing a GSM station has been known for a while. What about newer-generation
networks (3G, LTE, UMTS, etc.)?
Unlike GSM, these networks authenticate the station, and the IMSI is usually
encrypted. So what is the state of the art for spoofing the station for
tracking here? What are the known vulnerabilities?
- Does this rely on backward compatibility (i.e., phones connect to the
network with the stronger signal, so phones can be tricked into connecting
to a fake GSM station)?
- Does it exploit a vulnerability in authentication/hand-over of these new
networks?
- Does it use statistical methods to infer where users are (e.g.
http://www.isti.tu-berlin.de/fileadmin/fg214/Papers/UMTSprivacy.pdf)?

Thanks


On Tue, Jul 30, 2013 at 9:56 AM, Eugen Leitl  wrote:

>
>
> http://arstechnica.com/tech-policy/2013/07/moscow-metro-says-new-tracking-system-is-to-find-stolen-phones-no-one-believes-them/
>
>
> Moscow Metro says new tracking system is to find stolen phones; no one
> believes them
>
> Experts: Russians are probably using fake cell tower devices for
> surveillance.
>
> by Cyrus Farivar - July 29 2013, 11:10pm +0200
>
> On Monday, a major Russian newspaper reported that Moscow’s metro system is
> planning what appears to be a mobile phone tracking device in its metro
> stations—ostensibly to search for stolen phones.
>
> According to Izvestia (Google Translate), Andrey Mokhov, the operations
> chief
> of the Moscow Metro system’s police department, said that the system will
> have a range of five meters (16 feet). “If the [SIM] card is wanted, the
> system automatically creates a route of its movement and passes that
> information to the station attendant,” Mokhov said.
>
> Many outside experts, both in and outside Russia, though, believe that what
> local authorities are actually deploying is a “stingray,” or “IMSI
> catcher”—a
> device that can fool a phone and SIM into reading from a fake mobile phone
> tower. (IMSI, or an International Mobile Subscriber Identity number, is a
> 15-digit unique number that sits on every SIM card.) Such devices can be
> used
> as a simple way to see what phone numbers are being used in a given area or
> even to intercept the audio of voice calls.
>
> The Moscow Metro did not immediately respond to our request for comment.
>
> “Many surveillance technologies are created and deployed with legitimate
> aims
> in mind, however the deploying of IMSI catchers sniffing mobile phones en
> masse is neither proportionate nor necessary for the stated aims of
> identifying stolen phones,” Eric King of Privacy International told Ars.
>
> “Likewise the legal loophole they claim to be using to legitimize the
> practice—distinguishing between tracking a person from a SIM card—is
> nonsensical and unjustifiable. It's surprising it's being discussed so
> openly, given in many countries like the United Kingdom, they refuse to
> even
> acknowledge the existence of IMSI catchers, and any government use of the
> technology is strictly national security exempted.”
>
> These devices are in use, typically by law enforcement agencies worldwide,
> including some in the United States. Portable, commercial IMSI catchers are
> made by Swiss and British companies, among others, but in 2010, security
> researcher Chris Paget announced that he built his own IMSI catcher for
> only
> $1,500. Still, mobile security remains spy-versus-spy to some degree, each
> measure matched by a countermeasure. In December 2011, Karsten Nohl,
> another
> noted mobile security researcher, released "Catcher Catcher"—a piece of
> software that monitors network traffic and looks at the likelihood an IMSI
> catcher is in use.
>
> Keir Giles, of the Conflict Studies Research Centre, an Oxford-based
> Russian
> think tank, told Ars that Russian authorities are claiming a legal
> technicality.
>
> "They are claiming that although they are legally prohibited from
> indiscriminate surveillance of people, the fact that they are following SIM
> cards which are the property of the mobile phone operators rather than the
> individuals carrying those SIM cards makes the tracking plans perfectly
> legal," he said, adding that this reasoning is "weaselly and ridiculous."
>
> The Russian newspaper also quoted Alexander Ivanchenko, executive director
> of
> the Russian Security Industry Association, who pointed out that even to be
> effective, such a system would need these devices every 10 meters (32
> feet).
>
> “It is obvious that the cost of the system is not commensurate with the
> value
> of all the stolen phones,” he said. “Also, effective anti-theft technology
> is
> already known: in the US, for example, the owner of the stolen phone knows
> enough to call the operator—and the stolen device stops working, even if
> another SIM-card is inserted.”
>
> Two major Russian mobile providers, Beeline and Megafon, have told Russian
> media (Google Translate) that they are unaware of this supposed anti-theft
> measure. On the other hand, BBC Russian reports (Google

Re: [liberationtech] Heml.is - "The Beautiful & Secure Messenger"

2013-07-10 Thread Wasa
https://whispersystems.org/ already offers open-source secure messaging,
voice calling, and more.

Has anyone reviewed their code?
Does anyone use it?
Why not build on top of it?


On 10/07/13 14:07, Nick wrote:

> noone said it would be closed source. That's peoples guess. Like, your
> guess, I guess.

According to their twitter account, the answer is "maybe":
https://twitter.com/HemlisMessenger/statuses/354927721337470976

Peter Sunde (one of the people behind it) said "eventually", but
in my experience promises like that tend to be broken:
https://twitter.com/brokep/status/354608029242626048


> and the feature 'unlocking' aspect of the project - to be indication of a
> proprietary code base.

Frankly I can't see how they could get the "feature unlock" funding
stuff to work well if it's proper open source. As I'd expect people
to fork it to remove such antifeatures. It's a pity, as several new
funding models have been successful recently which are compatible with
free software, but this doesn't look to be one of them.
--
Too many emails? Unsubscribe, change to digest, or change password by emailing 
moderator at compa...@stanford.edu or changing your settings at 
https://mailman.stanford.edu/mailman/listinfo/liberationtech



Re: [liberationtech] How CyanogenMod’s founder is giving Android users their privacy back | Ars Technica

2013-06-18 Thread Wasa

On 18/06/13 05:46, Yosem Companys wrote:

> Since not all applications are malicious, users will be able to enable
> Incognito Mode on a per-app basis. The option will be available within
> each application’s individual settings.
The first thing that (at least some) bad apps do is siphon out data
right when you open them.
If you need to go into settings to turn the "incognito" option on, there is
a risk the damage is already done by the time you get to the settings.
I may exaggerate a little, of course... but that suggests an installation
screen with a "set incognito by default: yes/no" prompt could be of use.
It might degrade usability (an extra screen to interact with), and users may
just default to the OK button (so incognito maybe should be the default).
On starting the app from the launcher, a toast showing the "incognito"
status may also be useful...

Well, just thoughts...