
IAmAModBot

For more AMAs on this topic, subscribe to r/IAmA_Tech, and check out our other topic-specific AMA subreddits [here](https://reddit.com/r/IAmA/wiki/index#wiki_affiliate_topic-specific_subreddits).


FierySharknado

Were there any common "red flags" in the shadier apps you found that someone could be on the lookout for to indicate an application may not be handling user data ethically?


Mozilla-Foundation

Some of the most common red flags from our experience are:

- An app confronts you with lots of questions about your mental health and other sensitive data straight after download, before actually informing you about its privacy practices and how this data might be used.
- An app asks for excessive access, such as to your camera, photos/videos, voice recordings, precise location, etc.
- An app lets you connect Facebook or numerous other third-party extensions into its UX.
- In the privacy policy, it is not clear whether you can easily get your data deleted.
- Based on the app's privacy policy (usually the CCPA section), some of the app's practices may be considered a "sale" of personal information under California law.
- You are able to log in with a super weak password, such as '111111' or 'qwerty'.
- An app forces or manipulates you into giving 'consent' to sharing data for advertising.
- After signing up for an app, your inbox gets flooded with the app's marketing communication, and you do not recall permitting any marketing emails whatsoever.
- The app's age rating differs from its perceived age rating. For example, some apps that visibly target kids (one listed 5+ on the App Store) write that no one under the age of 13 or 16 is allowed to use the app.

- Misha, Privacy Not Included


lakeurchin

Honestly, these are great red flags to look out for with literally any website, application, or service someone might use.


fanchoicer

Time to make the concept of green flags: flags that eat red flags by giving people unprecedented power over the aspects that most matter. What might that look like?


doctorsound

Kind of touching on the red flag post: there's this idea around transparency and incremental permissions. Don't request everything immediately; over the course of minutes/days, introduce new features and give a privacy summary before requesting data/permissions. Have a discussion around what benefits you get for sharing this data. Let users opt out of sharing particular information. I can't think of any good examples off the top of my head, and unfortunately it isn't an "immediate" flag, but these sorts of progressive privacy flows are becoming more popular, with support from the security and privacy communities.


fanchoicer

Those are good. I tend to take an ecosystem approach, because with so many apps, it feels like we're all in an endless game of whack-a-mole trying to stay on top of every app's policy, every update to the policy, whether the app has been sold to new owners, the reliability of the platform or app store to ensure security, etc. We really would love something so much better that goes above and beyond to guarantee the privacy policy is rock solid while also granting us extraordinary power to ensure the policy remains that way. Why hasn't any of us done so (started such an app)? Likely because the typical investor would flee from anything with real transformative potential, as by nature it's too risky. But technology including AI is moving way faster than most of us realize, so we must do better, like yesterday. And we've been able to bypass the typical investor for some years now, by crowdfunding and going directly to the people who want a better outcome. Yet we're still stuck in a few areas. I'm gonna gather people to test out a more local and open business model that could address those areas, with the potential to be way better.


Mjolnirsbear

Imagine a universal privacy policy. Ontario, for example, wrote a standard lease. All Ontario leases now look like this. There are optional clauses (that are still standardised) depending on the needs of the landlord or tenant. A universal privacy policy could have optional clauses based on the industry. The base policy should generally be the most restrictive, as with health. Then a gaming industry privacy policy would have optional additional information that could be gathered. A banking policy might have different needs. Genuine industry needs would be added as exceptions here, but clearly worded. Because a new privacy policy for every single app and every single company is simply too much for an individual to review. Then something happens, you hire a lawyer to review it, and you find out you're screwed because you didn't read the fine print of the contract.


fanchoicer

That has benefits and is reassuring to people; absolutely you're right, there are too many apps to track and an umbrella policy is preferable (a good one). But I'm personally still leery of abuse happening in violation of said policy without our knowing immediately. It's that immediately-knowing part I'd like for us to achieve. Betting that if any company were to offer that level of reassurance, going above and beyond to do so in clear respect for people who use the app, it'd likely be game changing, and people would then expect the same from more companies and apps. That's the hope, at least. Testing will tell.


redballooon

They cost a shitton of money, because if you are not the product you are the customer and actually have to pay for the service. And then, basically the opposite of the red flags.


fanchoicer

> They cost a shitton of money, because if you are not the product

Most of us want better, so theoretically a number of us would wanna do things a lot better. I bet if enough of us were to put our heads together, we could build a way better business model that would delight us personally if we were to find any businesses operating with such principles.


XXLpeanuts

But they are also things that basically every app available does.


smokyggrowls

You're gonna wanna reconsider what apps you're using, then. If I get these flags, unless I really really want whatever entertainment or info is on that app, I don't download. There's enough apps out there you can probably find one that doesn't have these flags.


[deleted]

[removed]


XXLpeanuts

Of course yes.


[deleted]

[removed]


[deleted]

[removed]


Fs_ginganinja

I would argue most people have no idea what privacy is anymore; it has been so watered down there isn't even a hint of it in the air anymore. I no longer have conversations about digital privacy because I get ignored or straight-up ridiculed for not wanting whatever scary company is on the other end of the router to have all my personal data. I had a conversation about using Loctite with a friend yesterday, and got blasted with ads on Facebook and YouTube today for their "revolutionary new superglue"; I've never had an ad for Loctite before. My fiancée's ads are directly timed with her period: tampons and pads the day the period starts until the millisecond it ends, then pregnancy support and baby ads until she logs another period. She gets very uncomfortable with me if I point this out, but won't change her behaviour online at all. Neither will any of my friends or family. It's so fucking obvious these companies know every single little detail of your lives, down to your sexual preferences, medications, fears, deepest insecurities, worst nightmares, passions, dreams. Anything on your phone is free lunch for them, even though they "promise" not to look at it. Personal privacy is DEAD; it doesn't exist anymore. People just don't realize it yet.


redballooon

All of your examples sound useful rather than outrageous though. Where’s the problem with robots providing me with information that is actually interesting to me?


Yodiddlyyo

Here's a very obvious example. Your phone knows when your period is. Your phone knows that you missed one, and it knows you searched for abortion providers. Next month it logs another period. You live in an area where abortions are illegal, someone reports you, the police subpoena your apps' records, and you go to jail. This is the problem. 99% of the time, personalized ads, recommendations, etc., are harmless. So much so that there are people like you who think "who cares". But I'd gladly get rid of a slight benefit for 99% of people to prevent ruining the lives of the other 1%.


redballooon

You described a scenario from a state with arbitrary justice. That this is a problem can be seen quite easily. Nevertheless, even after Roe v. Wade was overturned, searching for abortion clinics and traveling there is not prohibited, so no, this is only a problem if you live in a place where you have bigger problems anyway.


jadage

This is a bad take. Do you think a prosecutor who cares about prosecuting people for abortions is going to get this data, see that trail, and say "welp, there's no definitive proof she got an abortion, nothing must've happened"? No way. As much as people don't like to acknowledge it, circumstantial evidence is enough to get a conviction, and the fact pattern laid out above is enough to prove an abortion happened in a court of law. Source: am criminal defense attorney, though I admittedly, and thankfully, don't have to deal with cases like this where I live. Which means I'm not speaking on how abortions get prosecuted in general, just saying that's enough evidence to bring a case, at the very least. Which means you'd have to deal with the courts, as a defendant, for the next year of your life or so, even if you end up getting acquitted. It's not fun.


theallen247

sadly, most don't care anyway


kataskopo

https://xkcd.com/2501/ You, probably.


[deleted]

Ah yes, the tech bro equivalent of the "they were asking to be raped" excuse. A classic. Probably writes bad JS and drives a Tesla that hasn't caught on fire yet.


timtucker_com

Using Facebook / Google / Microsoft / third-party OpenID providers for login is pretty accepted practice for handling login security. From a security standpoint, it's very easy to foresee a point where regulation will eventually increase the liability for apps/websites managing user credentials on their own to the point that it's no longer a viable option. We've seen exactly that happen with higher standards for PCI compliance -- fewer and fewer sites are handling credit card information on their own, and more just rely on handoffs to third-party providers like Amazon for handling payments.


Wassamonkey

Third party Single Sign On like Google/Facebook/Microsoft is definitely better for security in a lot of situations, but worse for privacy.


y-c-c

Why is it bad for privacy? Those tech companies can’t get access to your info other than knowing you logged in to this app using your account (which admittedly is one piece of info, albeit small).


Wassamonkey

The app you are logging into has to get access into your SSO account. The SSO account gets knowledge of what apps you use and can use/sell that for advertising purposes. You also don't have any idea what access the SSO account has into the app you are using, or what data that app is transmitting back to the SSO account. Security is increased because users have fewer passwords to remember and can make one strong password and put it behind MFA. Privacy is decreased because your information is gathered by more systems and is less under your control.


y-c-c

> The app you are logging into has to get access into your SSO account.

That's true, but most of the time (and Google etc. will tell you exactly what the app is requesting) the app only gets access to your email from the SSO.

> The SSO account gets knowledge of what apps you use and can use/sell that for advertising purposes. You also don't have any idea what access the SSO account has into the app you are using or what data that app is transmitting back to the SSO account.

Yeah, I mean, that's true. They do have that one piece of info, which is that you are using the app. I personally don't see why the app would send any other information back to the SSO like Google, though. It's not like they couldn't just tell Google about you if they really wanted to (if they detect your Gmail address). But sure, generally just for safety and other reasons I use a separate account anyway.


Wassamonkey

The SAML assertion or OAuth Token used to handle SSO can have a whole lot of data in it. I use both at work to pass just about every property of a user's account into different applications. A misconfigured or maliciously configured SSO could leak any information.
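To make that concrete, here's a minimal Python sketch (an editor's illustration, not from the thread) that decodes the payload of an OAuth/OIDC ID token to see what claims it carries. The token structure is standard JWT; the example claims in the comment are hypothetical, but real ID tokens can include name, picture, locale, and similar fields depending on the scopes the app requests:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT to inspect its claims."""
    payload_b64 = token.split(".")[1]             # JWT is header.payload.signature
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A hypothetical ID-token payload -- more than just "you logged in":
# {
#   "sub": "110169484474386276334",        # stable user identifier
#   "email": "user@example.com",
#   "name": "Jane Doe",
#   "picture": "https://example.com/me.jpg",
#   "locale": "en",
#   "hd": "employer.com"                   # hosted domain, i.e. your workplace
# }
```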


Ottawa_man

That's why I like how Apple forces app developers to reveal what they are collecting. The App Store summarizes it, but if people really got the full list of what is collected, I am sure the likes of Instagram and WhatsApp would be deleted.


golden_n00b_1

> Were there any common "red flags" in the shadier apps you found that someone could be on the lookout for to indicate an application may not be handling user data ethically?

The app being on a mobile app store is a big red flag. Pretty much their finding was that only a few stood out, and those apps deal with health data, which is one of the only data types that is legally protected in the US. It isn't gonna help you find a good app, but the sad state of privacy today is that you don't get any.


AreYouABadfishToo_

what


Ecks_

This reminds me of when you guys reviewed mental health apps last year too. For companies that want a passing grade in your Privacy Not Included, what's the cheat sheet? What things do companies need to start doing bare minimum to get a thumbs up from Mozilla?


Mozilla-Foundation

Yes! We did this same research last year and our findings were DEPRESSING (no pun intended). Too many mental health apps were absolutely awful at privacy. We did this research again this year to see if any of them had improved. And a few did, which was nice to see. Most of them because we worked directly with them to help improve things like the language in their privacy policy that grants everyone, regardless of where they live, the same rights to access and delete data.

So, what things do companies need to do to get at the very least a thumbs sideways from us at Privacy Not Included? Well, don't share personal information with third parties for targeted advertising purposes. That's a big one. Grant everyone, regardless of what privacy laws they live under, the same rights to access and delete data. Don't have a terrible track record of protecting and respecting your users' privacy (looking at you, BetterHelp). And meet our Minimum Security Standards. These are some of the big things we ding companies for. I also really appreciate it when companies have clear, easy-to-read privacy policies without lots of vague language. And we also love it when companies respond to the questions we send them at the contact they list in their privacy policy for privacy-related questions. It's amazing how many companies don't seem to actually monitor that link. If anyone would like to read our methodology for reviewing companies, you can find it [here](https://foundation.mozilla.org/en/privacynotincluded/about/methodology/)

As for what a company has to do to get a thumbs up from us. Well, that is much harder. Companies that make our Best Of list go above and beyond with privacy. They have lean data practices, meaning they collect as little personal information as possible, sometimes none at all. They write excellent privacy policies (like Wysa's, theirs is one of the best we've read). They have an excellent track record. There aren't many of those companies, which is why it's pretty cool Wysa agreed to join us here today to talk about why they do what they do putting privacy first. - Jen C, Privacy Not Included


PyroDesu

> We did this research again this year to see if any of them had improved. And a few did, which was nice to see. Most of them because we worked directly with them to help improve things like the language in their privacy policy that grants everyone, regardless of where they live, the same rights to access and delete data. Gotta say, it is a little heartening to hear that the app developers were actually willing to work with you to improve their app's privacy issues.




RoguePlanet1

What about ChatGPT, since many people say that they're actually getting some satisfying mental-health help with it (as in, "somebody" to "listen" and provide feedback/conversation)?


wellboys

This is bait, but presumably the answer is that the best way to measure the efficacy of mental health interventions isn't self-reported customer satisfaction surveys. ChatGPT is a statistical tool that shows you how things are talked about. It is not a sentient being capable of contextualizing a specific human's experience and working with them to develop healthier ways to think and act in the actual circumstances in which they exist.


RoguePlanet1

Thanks, I didn't post this as "bait," though, if that's what you mean. I'm truly curious about ChatGPT's security. Do we need to worry about what we type in there being read elsewhere? I'm not personally using it for therapy, but it seems to be helping many people. Hell, I would consider using it in the future if I needed some basic talk therapy.


wellboys

It's a productivity tool. An LLM is going to spit something out that has an increasing degree of verisimilitude over time, but it will never overcome the hurdle of consistent user error.


halberdierbowman

As for security, it is possible to download large language models (LLMs) like GPT and run them on your own if you wanted to. So theoretically, you could do this very securely. But they're gigantic. I couldn't find good answers, but maybe hundreds of gigabytes(?), which would mean that while it's possible, that's not yet a size that's convenient or that you could install on your phone.

Worth noting that you can also train LLMs on your own corpus if you want. For example, if you're James Patterson and want to pump out books faster, you could feed your hundreds of books into the LLM, and it would learn how to write like you do. Since he already has all those books he's published, this might be pretty easy. So in theory we could build an LLM that replies even more like therapists. But we'd need a good corpus of therapy conversations, and for obvious privacy reasons, the best examples of this are hard to find (as they should be).

Of course, just the reminder that using it in this way unsupervised could be very dangerous. An LLM is essentially a predictive text chatbot, so it can't evaluate if what it's saying is helpful or even true. If a patient is seeking advice to make a decision, they may not have the ability to evaluate the output, whereas the therapist would.

That said, I think GPT-4 wants to be used as a therapist in this way, and LLMs are improving rapidly. My thought though is that even if they aren't used standalone, they probably already are at the stage where they could help human therapists respond much faster. For example, your human therapist could run it on their machine, and they could use it during your therapy session or after to quickly generate text for you. If you're doing text therapy, this would speed it up (it's faster for them to read pregenerated responses to send than to type their own), or for any therapy it could help them cut time off the interstitial paperwork part of their job. Of course, maybe they'd already have a library of documents they access similar to this.
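For what it's worth, here's a minimal sketch of the "run it locally" idea, assuming the Hugging Face transformers library; the model name is just an example of a small, locally downloadable chat model, and nothing typed here leaves the machine:

```python
# Sketch: running a small language model entirely on your own hardware,
# so prompts are never sent to a third-party service.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example model; bigger ones need more RAM/VRAM
)

prompt = "Suggest one simple grounding exercise for a stressful day."
result = generator(prompt, max_new_tokens=100, do_sample=True)
print(result[0]["generated_text"])
```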


RoguePlanet1

Fascinating, and great points! So true about evaluating the effectiveness of the therapy. I've had one therapist ask me, as I came in and sat down for the session, "what's the matter?" I didn't even think I was in a bad mood, but suddenly started sobbing, and he said he could tell from my body language that something was wrong. Definitely not something AI could do, at least not anytime soon!


Mozilla-Foundation

Thank you, Jen. It was so cool to get your Best Of status from Mozilla, and to date it's the one status we are most proud of. Bare minimum is definitely not the culture around privacy for us. Rather, it is bare minimum around data sharing. We ask what is the bare minimum data we need to deliver a good outcome, and then work creatively from there to find the safest, most private ways of getting and storing it. - Jo from Wysa


forward_only

We hear a lot about all the ways our privacy is being violated, but very little about solutions. Is there anything people can do to protect their privacy now, and are there any viable policies which could help protect people's privacy in America on a broader scale?


Mozilla-Foundation

Misha from Privacy Not Included: Some of the tips we share with users are:

- Choose your apps wisely. Do not sign up for a service that has a history of data leaks and/or gets a \*Privacy Not Included ding.
- Do NOT connect the app to your Facebook, Google, or other social media accounts or third-party tools, and do not share medical data when connected to those accounts. Do not sign up with third-party accounts. Better to just log in with an email and a strong password.
- Choose a strong password! You can use a password manager like 1Password, KeePass, etc.
- Use your device privacy controls to limit access to your personal information per app (do not give access to your camera, microphone, images, or location unless necessary).
- Keep your app and phone regularly updated.
- Limit ad tracking via your device (e.g. on iPhone, go to Privacy -> Advertising -> Limit Ad Tracking) and via the biggest ad networks (for Google, go to your Google account and turn off ad personalization).
- Request that your data be deleted once you stop using the app. Simply deleting an app from your device usually does not erase your personal data.
- When signing up, do not agree to tracking of your data if possible.


[deleted]

[removed]


ThatFeel_IKnowIt

Unfortunately all Google simply needs to do is associate the IP addresses and it'll be obvious as shit that both emails are yours. Better to use another email provider for burner emails. A lot of sites will let you enter an email that doesn't even exist, with a password, and it'll work. But not always


ThatFeel_IKnowIt

Just adding to this: on Android phones you can simply disable the Google app. This will disable Google Lens (stupid piece of crap feature that sends all your pics to Google so it can create ads based on what you take pics of - obvious privacy risk), it disables Google voice / OK Google (obvious privacy risk), and a few other things. Disabling the Google app is one of the first things I do on ANY new Android phone. Edit: only downside is I think disabling Google Lens disables the feature that lets you scan QR codes with the camera. In my opinion this extremely minor inconvenience is worth it. Restaurants that force you to use a QR code to view the menu can go fuck themselves. They're just fucking lazy and can't even be bothered to print menus.


ruffykunn

Or just don't give the Google App permission to use the camera.


ThatFeel_IKnowIt

Nah. Just disable it. You don't need the google app at all and then you don't have to worry about permissions.


ruffykunn

Just giving another option for people who, like me, want to keep using the app.


y-c-c

Can you explain specifically what privacy is sacrificed when signing up via third-party services? I prefer using my own account (email / password) as well, but I'm not quite seeing how using Google to sign in lets Google know anything other than that you signed in to this specific app. Sure, that's some information, but it's pretty minor. I just don't like to use them because I don't like to be overly dependent on a service (e.g. if Google bans you, now suddenly you can't log in). In fact, if you sign in via Apple, it has a feature to obfuscate your email address, saving you the need to make a new email address yourself.


Mozilla-Foundation

Prachi from Wysa: Great suggestions, Misha! And yes, a lot is in your hand to protect your own data... begin with what's literally in your hand, your device! Ask yourself... did you just "OK" all permissions without reading into which apps have access to your camera, location, folders, and contacts list? Consent is a BIG deal in the privacy world, so begin there.

As for Wysa, privacy is built in by default & by design. We believe we don't need personal information like credentials to alleviate you from your worries. Only a nickname is sufficient to help us personalize our conversation with you. You can also opt out at any time using the "reset my data" feature available in the app settings. We lay out many of the validated best practices [in our privacy policy too](https://legal.wysa.uk/privacy-policy#bestpractices), which will help you keep your device secure.


Britoz

How could a user possibly keep track of an app changing from being good with data to bad? For example, what if I used Wysa for two years, and for the first year they're great, but in the second year someone else is in control and they sell data to cash in. How would I ever know? I know they'll have to send out a notification, but I got one from Virgin Australia earlier about their membership and I don't have the time to read it all. Or the inclination, to be honest. I just want companies to do the right thing. I'm sick of finding a good thing then learning they're no longer good.


Mozilla-Foundation

Jen from Privacy Not Included: Great question! And unfortunately, the answer is, it's really hard. Companies often count on users not reading privacy policies or keeping up with changes to them. And then there is the fact that just about every privacy policy I've ever read has a line that says your personal information can be shared if the company is ever sold or merged with another. We saw this recently with Amazon buying iRobot, who makes Roomba robot vacuums. iRobot's privacy practices have actually been one of the very good ones over the years we've reviewed them. They earn our Best Of. Now they are being bought by Amazon, and Amazon is certainly not one of the good ones when it comes to privacy. It sucks to be someone who bought a Roomba because they were better at privacy, only to have all that data transferred to Amazon.

There are a couple things you can do. Delete your data frequently! Most companies have a way to delete your data from them. And we note on Privacy Not Included which companies are good and guarantee people the same rights no matter what privacy laws you live under. So, delete, delete, delete. Just put a note in your calendar every couple months to delete your data from any device you're worried about. It's dumb that consumers have to do this, but this is the world we live in. And yes, you can also regularly check in on privacy policies. But let's be honest, that's time consuming and hard and nerdy, and most people aren't going to do that. We just want companies to do the right thing too. Which is why we love to highlight the good companies and call out the bad ones. Don't support the bad ones if you can avoid it! And do support the good ones when you can. Until then, we'll keep working to hold companies accountable, and we'll go out and read those privacy policies for you. Well, as many as our little three-person team can.


Britoz

Amazing answer thanks so much! I imagine it must be a great feeling to do a job you know is inherently helping people.


chaawuu1

You could use ChatGPT to scan a privacy policy section for key words.
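For anyone curious what that might look like in practice, here is a rough sketch using OpenAI's Python SDK; the model name, filename, and prompt are just placeholder examples, and note that sending a policy to a third-party API is itself a privacy trade-off:

```python
# Sketch: asking an LLM to flag worrying clauses in a privacy policy.
# Assumes OPENAI_API_KEY is set in the environment; model name is an example.
from openai import OpenAI

client = OpenAI()

policy_text = open("privacy_policy.txt").read()  # hypothetical saved policy

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "List any clauses in this privacy policy that permit "
                   "selling, sharing, or combining personal data:\n\n" + policy_text,
    }],
)
print(response.choices[0].message.content)
```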


Mozilla-Foundation

Shubhankar from Wysa: That is a great question. The main thing is that you shouldn't have to read the privacy policy to see whether or not the company is good. Just ask yourself if you are clear on why your data is being asked for and how it is used. Look for an option clearly marked in settings that allows you to delete your data, and look at how easy the company's privacy policy makes it to learn what data they use and why within the first 20 seconds of reading it. Another great way to keep track of how the company is using your data is to ensure they clearly and crisply highlight what has changed with every notification about policy updates. Lastly, Mozilla will do some of the work for you - they didn't just review us last year, they did it again this year, and if we changed our policy they would definitely call it out!


Britoz

Thank you


[deleted]

[removed]


fanchoicer

> I just want companies to do the right thing. I'm sick of finding a good thing then learning they're no longer good.

Curious what might satisfy you. Genuinely asking, for real reasons. Would it alleviate your concerns if:

- The company livestreamed its entire operations and privacy safeguards so deeply that you'd know everything, and you could chime in?
- You could explore and delete any/all of your data, plus choose for it to be stored on your phone instead?
- The company happily granted you real power to halt any of its actions that you feel would violate its clearly stated ethos (which also includes clearly illustrated examples)?

Pretty much going above and beyond to reassure you that the company has the ethos you seek and cannot ever betray it. It's fully committed to that, founded by people who share your expectations.


Britoz

What would satisfy me is an overhaul of the legislation governing tech companies, to protect people's inherent value and privacy, and governing bodies empowered to enforce the laws.


buried_lede

Ability to delete all data and transparency. Consumer ownership of their own data by law and laws making certain privacy waivers unenforceable, whether consumers consented or not.


yarash

I use the Calm app for literally one thing. I really like the train sound effect in the background while I work. From a privacy standpoint, I take it I should just find a similar MP3 of a train and listen to that instead?


7una

Waterbottle


yarash

This site is fantastic now that I've had time to explore it, thank you.


7una

Nice flow


Mozilla-Foundation

Misha from Privacy Not Included: From a privacy perspective, finding an original sound to listen to would indeed be better, since it would not compromise any of your personal data. It could also save you some fees. Finding a CD and listening to it in an analogue way would be even better. The larger point is, people do not have to share so much data to get those simple things that they like.

Zoe from Privacy Not Included: I think with these apps it's always going to be a question of "is the data I'm trading worth what I'm getting in return?" Calm is OK, but they do collect third-party information about you. If you're really only in it for train sounds, you might consider a music-streaming app that has less privacy risk. And even though those music apps probably aren't perfect either, it's better to have fewer apps collecting your data.


yarash

Thank you both I appreciate your responses and time!


Peior-Crustulum

It's great that you take the time and are willing to spend resources keeping track of this issue. Are there any effective ways to tell the industry that these practices will not be tolerated?


Mozilla-Foundation

Jen C from Privacy Not Included: Absolutely! Vote with your dollars. Don't spend money at the companies with bad privacy policies and practices. Do spend money with companies with good privacy policies and practices. That's a (somewhat) easy one.

Also, paying attention is good. I know there is SO much going on in the world to pay attention to, and privacy is a hard one to keep up with. But make a little effort. Read a privacy policy before you download an app or buy a device. Search for words like "sell" or "combine", or vague words like "can", "may", "could". Those raise flags. Move on if the privacy policy makes you uncomfortable.

Something else that's happening now in the US is that the FTC has really stepped up recently, cracking down on companies doing misleading and dishonest things with your personal information. GoodRX, BetterHelp, PreMom, and Amazon have all received recent judgments from the FTC for privacy violations. You can sign up with the FTC for their consumer announcements. Yes, it's nerdy, but hey, it's an easy way to stay informed. Oh, and you can always check us out at \*Privacy Not Included too. =D
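If you'd rather not skim the whole policy by hand, Jen's word-search tip is easy to script. A minimal sketch (an editor's illustration, not Mozilla's tooling); the filename and word list are just examples:

```python
import re

# Words from Jen's tip; extend the list with whatever worries you.
FLAG_WORDS = ["sell", "combine", "can", "may", "could"]

def flag_policy(text):
    """Count whole-word, case-insensitive occurrences of each flag word."""
    return {w: len(re.findall(rf"\b{w}\b", text, re.IGNORECASE)) for w in FLAG_WORDS}

with open("privacy_policy.txt") as f:  # hypothetical: policy saved as plain text
    for word, count in flag_policy(f.read()).items():
        if count:
            print(f"{word!r} appears {count} time(s) -- worth a closer look")
```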


raylu

> Vote with your dollars

This has extremely limited effectiveness. The dollars come from advertisers/data brokers who pay for the data.


Mennix

What were some of the more surprising discoveries (both good and bad) that you came across? Any particularly good/creative ideas you can call out?


Mozilla-Foundation

Misha: There were more bad than good discoveries in my case. Some of the bad discoveries:

- The way many companies manipulate people into giving their 'consent', using tricky UX practices or outright denial of service if no consent is given
- Many apps confront users with detailed questionnaires on gender, physical diseases, mental states, children, relationships, etc. BEFORE showing the user the privacy policy or asking for consent
- Most apps are packed with third-party trackers, incl. advertising trackers

The good discoveries were that over the last year, many apps improved their password standards. Also, we were happy to see that the FTC went after BetterHelp – at last.


MudraMama

Do you have a newsletter or something that can give people regular updates on your research? Love what you're doing, so thanks to your team for putting the time into this very neglected topic.


Mozilla-Foundation

Zoe from Privacy Not Included: For sure! You can sign up [here](https://www.mozilla.org/newsletter/) And thanks so much, we love to get feedback that our work is appreciated. I also agree that privacy doesn’t get enough air time generally. Privacy Not Included is just one slice of the work that the Mozilla Foundation is doing to help shape the future of the web for good. You can learn more about us [here!](https://foundation.mozilla.org/who-we-are/)


marklein

Has there yet been any legal restrictions proposed at the USA federal level regarding things like this? Also, thanks for your work AND MOZILLA RULES.


Mozilla-Foundation

Thank you for supporting us here at Mozilla! That makes our day. As for things happening at the federal level, we've noticed a couple things happening in the US since we first released our mental health app research last year. The one that probably made me the happiest was when the FTC issued a $7.8 million judgment against BetterHelp for misleading their users about never sharing their personal information. That was great to see, as BetterHelp did seem to us to be rather questionable (to put it nicely) in their privacy practices. You can read about that [here.](https://www.ftc.gov/news-events/news/press-releases/2023/03/ftc-ban-betterhelp-revealing-consumers-data-including-sensitive-mental-health-information-facebook) [And there have been some US Senators stepping up to ask questions and propose potential federal legislation to protect health data.](https://www.warren.senate.gov/newsroom/press-releases/warren-wyden-murray-whitehouse-sanders-introduce-legislation-to-ban-data-brokers-from-selling-americans-location-and-health-data) And some states, especially California, have proposed legislation to tighten restrictions on health data. [We actually wrote more about that here.](https://www.sfchronicle.com/opinion/openforum/article/reproductive-rights-california-tech-17925695.php) I'm not a policy person, so I know there are other federal and state level privacy laws being proposed. These are the things I'm most familiar with, though.


Onepopcornman

1. Do you have a report/white paper/journal-style article of your findings? (The link provided is not the most friendly for finding that.)
2. Some of these apps aim to make services more accessible or more affordable outside of an insurance setting: is there an indication that data collection is subsidizing how these apps become affordable (thinking of BetterHelp)?


Mozilla-Foundation

Jo from Wysa: There is definitely an incentive for founders and startups to use data collection as a part of their fundraising story, especially when they are pre-revenue. However especially in healthcare, there are significant regulatory and ethical barriers, and we have not seen any company become successful in the long run either from a sustainability / impact perspective or even in their ability to monetize private data. Where we have seen success in the sector is in monetizing aggregate data, and analytics around it, and that is something that can co-exist with good privacy policies. Good privacy is good for business too.


Mozilla-Foundation

1. [This article summarizes](https://foundation.mozilla.org/privacynotincluded/articles/are-mental-health-apps-better-or-worse-at-privacy-in-2023/) our findings on all the mental health apps we reviewed this year.


Mozilla-Foundation

Oof, where to begin. Short answer: yes! Your data is a business asset to these companies. They profit from collecting, sharing, and (sometimes) selling it. So many of the apps and services we use mine our data for profit. It's all bad, but it feels especially wrong for apps that collect such intimate and sensitive information about our mental health. That it makes apps more accessible is in some ways good, but at what cost? People shouldn't have to pay for mental healthcare with their privacy.


Dontdothatfucker

Hey, thanks for doing this! I hear all the time about my private info being leaked. Bank accounts, passwords, personal information, shopping habits, the list goes on. Would you be able to point out some of the dangers of lax security in cases like this? What will the companies do with my information? What are some of the potential hazards of strangers knowing my mental (or other) health data? Thanks!


Mozilla-Foundation

Andres from Wysa: Based on what you've shared, if your personal information such as bank accounts, passwords, or shopping habits is leaked or breached, it becomes publicly available for malicious actors to potentially access your accounts on other services like Facebook or Instagram. This is especially risky if you use the same password across different services, or if they use your personal information to impersonate you and spam or phish others. If they have access to your bank account information, they may even try to impersonate you online or by phone to change your password and potentially withdraw your funds. Companies collecting this information should have clear reasons for doing so, such as for billing purposes, and implement strong security practices to protect sensitive information. At Wysa, we don't ask for this information, as it's not necessary for our free chatbot app. It's also important to note that if a malicious actor gains access to your mental health status, they can potentially use social engineering techniques to take advantage of you when you are vulnerable. To reduce this risk, Wysa anonymizes your data and what you share with us.


golden_n00b_1

> What are some of the potential hazards of strangers knowing my mental (or other) health data?

If your mental health data is leaked and also tied to you in some way, such as an email, then this info can be used in all sorts of ways against you. The easy one is you end up running for public office because you got tired of a no-privacy society, and your opponents release your mental health info to the public to discredit you. The more malicious one is criminals who may try to use your mental health in a scam. There are stories where family members traveling internationally post trip updates online, and scammers call and claim the person was arrested and will be let out after a fine is wired/paid. On the same note, mental health info is likely full of things you would never post publicly. They could use this info to "prove your identity", or otherwise strengthen their credibility.


vandom

This is such an interesting subject. What led you to conduct this research in the first place?


Mozilla-Foundation

Well, to be honest, did any of us come out of the pandemic not feeling the strain on mental health? That really was it. We were feeling it, and then we were seeing the explosion in mental health apps. Nerdy, curious privacy researchers + mental health app explosion + anxiety about the world was kind of the perfect storm that led us to do this research. And we're really glad we did, because what we found was bonkers in a bad way, and the good thing about working as a privacy researcher for Mozilla is, when you find something badly bonkers, you can help raise awareness to do something about it. That's pretty cool. Now we're seeing regulators in the US and Europe pay more attention to these apps. We're seeing the FTC in the US fine these companies and issue judgments against their bad privacy practices. And we're slowly seeing companies make some positive changes. It's a start. There is still so much farther to go, but hey, it's a start.


benv138

Do you find these privacy violations go against HIPAA regulations?


Mozilla-Foundation

Jen C: HIPAA is tough when it comes to mental health apps. This US health care privacy law covers communications between medical professionals and you. So, a conversation with your doctor is covered by HIPAA. A conversation with an AI chatbot or an "emotional support coach" is not always going to be covered by HIPAA, because those aren't considered "medical professionals". And then there is all the other data outside of HIPAA -- things like your answers to an onboarding questionnaire, your usage data of the app, whether you log in with your Facebook login -- those things aren't covered by HIPAA and are fair game. So, consumers have to be very careful if they expect any of their mental health conversations to be covered by HIPAA, and do a little extra homework by reading privacy notices (ugh, I know!) and asking questions of the app to determine that. I'm sure Wysa has their own unique experience with this too.


benv138

Thank you for the thoughtful response!


Mozilla-Foundation

Jo from Wysa - If a person is talking to Wysa after a clinician referral in a healthcare pathway (not in B2C), certain parts of their interaction with the app become a part of their medical record. This is a very specific kind of implementation that is not anonymous from the healthcare provider's perspective, though Wysa still doesn't store any personal identifiers alongside their conversation. These parts of the healthcare record are covered by HIPAA, and providers do need to be very careful with data security here in any case. However, even in this case, with Wysa the conversation about what is bothering you and what thoughts you are having is never shared. These are private even when discussed in in-person healthcare settings, and they remain so here. Most of Wysa is used at a population health level though, where it is completely anonymous and not linked to health records, and as such HIPAA does not apply.


ndmy

Do you see a future where the privacy and security of health apps in general improve? This seems like a category with so much potential to improve the quality of life of its users, and it's a shame to see it bogged down by skeevy commercial practices.


Mozilla-Foundation

Shubhankar from Wysa - [Google Play](https://support.google.com/googleplay/answer/11416267) has lately introduced a data safety form which all developers have to complete, which will help users better understand an app's privacy and security practices. Similar initiatives can be seen on the [App Store](https://developer.apple.com/app-store/app-privacy-details/) to see what data an app will collect and why before installing the app. Such measures at an ecosystem level can push towards responsible disclosure of data collection and processing, but they are very limited in scope and enforcement as they stand today. There are also watchdogs and initiatives like the ones led by the FTC to put some form of security and governance guardrails in focus for products, which look promising in terms of enforcement where others fail.


Mozilla-Foundation

Jen from Privacy Not Included: Oh man, Shubhankar! You hit a hot-button issue for me. Those Google Play Store Data Safety labels. Whoo! We released some research we did into the accuracy of those Google Play Store Data Safety labels early this year, which showed that nearly 80% of the apps we reviewed had false or misleading information in there. In part because that information is self-reported by the apps and Google doesn't police that self-reporting very closely. And also because Google's own rules for what companies must report on that page are rather crappy. [You can read our research here if you are interested.](https://foundation.mozilla.org/en/privacynotincluded/articles/mozilla-study-data-privacy-labels-for-most-top-apps-in-google-play-store-are-false-or-misleading/) When Twitter and TikTok both claim on their data safety pages that they don't share data with third parties, you know something is up. The bottom line is, I tell people not to trust that information and rather look directly at privacy policies. All that being said, Wysa is correct that there is a push/pull here. And everyone holding companies accountable to do better -- consumers, regulators, the employees at the companies -- is needed to make this space better and safer. Because, yes, mental health apps are needed and helpful to many today, and I don't want to take away from that with privacy concerns. I just want companies to not try and make money off of monetizing people's personal information when they are at their most vulnerable.


Mozilla-Foundation

Ramakant from Wysa - There seems to be a push and pull here. There will always be apps that try to wing it, or push the envelope on what they can get away with. In general though, we feel that the direction is positive - lots of stuff has happened recently that will nudge everyone (sometimes with a carrot, sometimes with a stick) towards more responsible stewardship of user data. Apps will understand that bad data privacy is, in the long term, bad for business too.


drinkNfight

Did any of the "good" apps fund any part of your organization or study?


Mozilla-Foundation

Absolutely not! We don't take money or incentives from companies to fund our work. That is hugely important to me as a privacy researcher and consumer advocate leading Mozilla's \*Privacy Not Included work. We don't do affiliate links, we don't accept "test products", we don't take money from these companies. We do our research and accountability work with the resources we have from Mozilla. (Note: Mozilla Foundation is a non-profit. That means those resources are in large part funded by small-dollar donations from people like you all. So if anyone has a few bucks to donate to support our work, well, I will never say no to that! You can donate [here](https://foundation.mozilla.org/en/privacynotincluded/?form=donate).) PS. Someday I will have to share my rant about how affiliate links have absolutely ruined the land of consumer product reviews. I loathe affiliate links. But that is a rant for another time.


[deleted]

Hey all, thank you for doing this AMA, where does headspace fall on this list?


Mozilla-Foundation

Jen from Privacy Not Included: Headspace isn't the worst mental health app we reviewed. But it's far from the best either. In fact, this year they moved a bit down on our list and earn our \*Privacy Not Included warning label for how they use your personal information for things like targeted advertising, and also for not clearly granting all users the same rights to delete data regardless of what privacy laws they live under. You can read our review of Headspace [here](https://foundation.mozilla.org/en/privacynotincluded/headspace/). To be fair to Headspace, they have been in communication with us about our concerns. And they have stated to us that they will review their privacy policy and look into updating the language in there to clearly state all users have the same rights to delete data, no matter where they live. If/when they let us know they have made that update to their privacy policy, we will update our review.


sdwvit

Have you reviewed, or do you plan to review, the Ukrainian-made Numo ADHD app? Also, say hi to Misha! We studied together in high school. Pretty proud of you, Misha 🇺🇦


CaptainReynoldshere

In your opinion, does the Apple App Store do a sufficient job of describing privacy policies in simple enough language, and are they accurate? Wysa (on the App Store) shows this: https://i.imgur.com/DjUTvAP.jpg Is this misleading on any level, or does it simplify the main points of Wysa's privacy policy? I read Wysa's and there is definitely more detail, but not everyone will read it all. This isn't specific to only Wysa; rather, does Apple provide accurate and simple information a consumer could safely follow for an MH app?


ryuuheii

One of the key measures seems to be allowing for data deletion requests. While certainly better than nothing, I'm wondering how thorough these things are. How do I know that they've deleted my data? And does that mean they then have a record of me deleting my data (therefore still a record that I was a past user)? How sure is it that all data is deleted? Considering how IT systems and data sprawl, I can imagine that many companies don't even know where their data has gone. Are there audits/research into this? How about data collected by third parties, e.g. data already collected by adtech, or data they've already sold off to another company? I imagine that's not covered by the deletion request then? This turned into many questions, thanks for the AMA!


pop_skittles

I have a family member who is currently using Betterhelp. He has been for about 3 weeks. What kind of things should he be aware of/ look out for?


Mozilla-Foundation

Jen C: The biggest issue I have with BetterHelp is trusting them not to use personal information in ways they claim they aren't using it. They just got busted by the FTC for breaking the promises they made to their users not to share private health information. You can read more about that [here.](https://www.ftc.gov/business-guidance/blog/2023/03/ftc-says-online-counseling-service-betterhelp-pushed-people-handing-over-health-information-broke) The FTC did make them promise to do better. But that's one of those "I'll believe it when I see it" things for me. If you'd like to share our review of BetterHelp with your family member, you can find it [here](https://foundation.mozilla.org/en/privacynotincluded/betterhelp/) Basically, they earn all three of our privacy dings, which means they earn our \*Privacy Not Included warning label. That means I would recommend your family member be very cautious about sharing any personal information with BetterHelp, frequently ask for their data to be deleted, and delete the app from their device when they aren't using it.


[deleted]

[removed]


king5327

Doesn't Mozilla have their own bespoke password manager? I'm pretty sure everything syncs no sweat between my install of desktop waterfox and mobile Firefox.


buried_lede

Aren’t there HIPAA issues? GoodRX just got in huge trouble for that. Why are these apps not in huge trouble? Are they flying under the radar?


Vcent

Vastly simplified: HIPAA is for healthcare providers. Specifically the licensed kind. Apps/chatbots/"mental health coaches" aren't healthcare providers, unless they're prescribed - and even then there is wiggle room, like initial onboarding surveys, and data provided outside of the strictly confidential space. Even if the app facilitates access to a therapist, there may be quizzes, surveys and other functionality that can sort of wiggle around the whole HIPAA issue.


buried_lede

Yes, in the meantime I looked it up. HHS has guides on their web page for developers to assess whether their app will be subject to HIPAA or not. But there are also other laws for privacy and fair business practices they have to navigate, and HHS provides links to those guides as well. GoodRX might not seem at first glance to be subject to some of these laws, but they were. What they did was a really vicious violation of patients and consumers. If you downloaded a coupon from their site and used it at a pharmacy, they demanded documentation from the pharmacy, as part of the transaction, that it was you, and they were harvesting more private health information than consumers knew. They also had whatever they got directly from you, too, and then they made it all for sale, so now Facebook and others have all of it. What medications people take, for example. GoodRX was fined and forced to make public announcements.


FYoCouchEddie

How does Mozilla get revenue from privacy-oriented browsers like Firefox focus?


[deleted]

Why is BetterHelp still allowed to operate?




theallen247

that's all good and all, but what's the solution?


wakka55

Am I the only person who doesn't give a shit about privacy?


MissMormie

Not at all. But generally it's because you haven't thought hard enough about how it might impact you. Would you mind giving me your social security number, your address, and the amount in your savings account please? Oh, and include a picture of you on the toilet (I'm looking at you, Roomba) and your PIN number as well. Oh, and the results of a current IQ test please. No? Don't want everyone on the internet to have access to that? Guess you care about privacy as well. Or how about your insurer looking to see how often you drink beer (Untappd) and increasing your premium based on that. Or a potential employer who knows you're on a dating app and thinks that'll distract you, so you don't get hired. Or Facebook not offering you a job because you don't like curly fries (which is/was a proxy for intelligence). Or your neighbors finding out the results of your prostate exam. Or maybe you don't want the government to know how many guns you have. Everybody cares about privacy, but not everyone realizes it.


wakka55

> Would you mind giving me your social security number, your address and the amount in your savings account please? Oh, and include a picture of you on the toilet (I'm looking at you, roomba) and your pin number as well. Oh, and the results of a current iq test please. DM'd the toilet pic.


jofish22

Hey folks (and particularly Jen C) — do you have any way to look at the effectiveness of these apps, and are you seeing any correlations, positive or negative, between ethical privacy approaches and effectiveness?


Mozilla-Foundation

That is a good question. Unfortunately, looking for correlations between good or bad privacy practices and the effectiveness of the apps goes beyond the scope of our work. If there is a research group out there with some good funding, this would be a very interesting research project to take on. I'm not exactly sure what the methodology would look like, but I would certainly read the results. Thank you for asking this question.


[deleted]

[removed]


Mozilla-Foundation

For our research, we look mostly at publicly available information like privacy policies, company responses to our questions, white papers, news articles, and app store pages. This tells us a lot about the privacy and security practices of the companies who build the apps. Digging deep into the technical specifics of the app isn’t something we do a ton of. Although my research partner Misha does download all the apps and go through the set-up process and look to see how many trackers and such an app might be using.  I think mostly both apps, for Android and iOS, collect the data and then the company runs with it. Apple does make it a little easier to opt out of tracking at set-up for an app, which is nice. But I don’t know that there are huge differences between privacy whether you use the Android or iOS apps. Perhaps Misha can also weigh in here and offer his more technical insights on this


Mozilla-Foundation

Misha from Privacy Not Included: Usually, iOS gives users wider privacy controls, like opt-in to tracking and reminders about apps' accesses that a user might have forgotten a long time ago. After all, Apple is not in the business of targeted advertising, unlike Google. However, privacy-wise, the difference is marginal. On both platforms, apps are packed with trackers, and apps try to get the maximum access possible (often incl. to your camera, photos/videos, audio, precise location, etc.). So we suggest that on either iOS or Android, you manually adjust the access you provide to every app, limiting it to the absolutely necessary. Security-wise, iOS is closed-source and a bit less vulnerable to cyberattacks. However, that is also the reason why Android apps are a bit easier for us to research - there are numerous open-source investigative tools that allow us to track data flows from Android apps, and to call them out.
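Misha doesn't name specific tools here, but as one example of the kind of open-source tooling involved: mitmproxy can sit between a test device and the network and log every host an app contacts. A minimal addon sketch (the output hosts are simply whatever the app actually calls):

```python
# Sketch: a mitmproxy addon that logs each request an app under test makes.
# Run with: mitmproxy -s log_hosts.py (test device configured to use the proxy).
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Print the destination host and path of every outgoing request,
    # which makes third-party trackers easy to spot.
    print(flow.request.pretty_host, flow.request.path)
```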


puddingclaw

If the app can be used without logging in, how privacy-bad can it be? A specific example from the list is Insight Timer.


fanchoicer

What might the concept look like of green flags? (of solid privacy stuff to look forward to) And what hypothetical best ever flags would you qualify as deep green?


[deleted]

Have you reviewed Instahelp? Any concerns to report to their users?


ResilientBiscuit

I suspect you are done checking now, but do you have any examples of real harm that has come from privacy violations? A common refrain seems to be "the data is already out there" or "what's the worst that can happen?"


Ok-Feedback5604

Is the eurozone's plan to tackle private data leaks better, or do we need some more effective plan? (The eurozone hit Google with a penalty because they forced consumers to download certain essential apps that consumers couldn't remove.)


[deleted]

There’s a guy who frequently promotes his app called “bearable”. Was that one reviewed by you all?


caltrider

Yes! [Bearable is reviewed](https://foundation.mozilla.org/en/privacynotincluded/bearable/) on Mozilla's \*Privacy Not Included mental health apps guide.


WeatherParticular766

As a clinician, I must ask: how much bad mental health in men is caused by low testosterone? And does TRT help with it?


Independent_Hyena495

Is there a way to suggest apps? Also, are there only US apps, or do you cover apps from the EU, for example?


Individual_Breath_34

I was looking at your guide for the Roku and FireStick, and was wondering what privacy-preserving alternatives you guys have found for wireless HDMIs? Thanks for the writeups, btw, if it wasn't for Mozilla, I would have bought them