I started putting pieces of my medical history into my phone the same way many people do: out of convenience. A lab result I wanted to keep, a PDF from the clinic, a vaccine card photo, a prescription reminder. Then Apple and Google rolled out features to centralize medical records — Apple Health Records and Google’s efforts around Google Fit/Health Connect and partnerships with health systems — promising easier access and better control. That got me thinking: before you hand over your health data to a tech giant, what should you really be asking?

Where is my health data stored?

The first and most basic question is location. Is the data stored on the device, on the company's servers, or both? Apple has emphasized that a lot of Health data is stored locally on the device and encrypted at rest, whereas Google historically has relied more on cloud storage. But details matter: some features require cloud syncing (to access data across devices), and some integrations with hospitals may pull records into cloud systems. I always check the app’s documentation and the settings screen to see whether data is stored locally, synced to iCloud or Google servers, or pushed to third-party providers.

Who can access my data — now and in the future?

“Who” is the question that keeps me awake more than “where.” Nurse, doctor, researcher, family member, app developer, or an advertiser? Both Apple and Google say they don’t use Health data to target ads, but the devil is in the permissions. I ask:

  • Which parties can read my records? Is it only apps I explicitly authorize, or can partner services (like clinics or labs) access certain fields automatically?
  • What happens when I grant an app access? Some apps request broad access and can pull everything in your Health database; others ask only for specific types of data (steps, glucose readings, immunizations).
  • Will access persist if I revoke permission? Revoking an app’s permissions should stop future data flows, but I check whether the app retains copies of previously downloaded data.

How is my data protected?

Encryption is the headline, but implementation is the story. I look for:

  • Encryption in transit and at rest: Data should be encrypted when moving between my phone and servers and when stored. Apple advertises strong encryption by default; Google also encrypts data in transit and at rest, though the exact guarantees vary by feature and may depend on additional account protections.
  • End-to-end encryption: Does the company offer end-to-end encryption for medical records? That means only I and authorized providers can decrypt the content. Not all services support E2E for all data types.
  • Two-factor authentication (2FA): Can I add 2FA to the account that holds my health records (Apple ID, Google Account)? I enable it immediately.

Can I control sharing and revoke it easily?

I want to be able to share a vaccination record with my employer for a week and then make sure it’s gone. That requires granular, time-limited sharing controls and clear revocation mechanics. I ask whether sharing creates copies that third parties keep indefinitely, and whether the platform offers audit logs showing who accessed my data and when. Apple’s Health app lets you control which apps access specific categories of health data; with Google, the mechanics vary depending on the connected app or health system.
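Neither platform hands you a machine-readable access log today, but it's worth being concrete about what I'd do with one. The sketch below assumes a hypothetical log format (the entries and field names are invented) and checks for the thing I care about most: any access by a party after I revoked its permission.

```python
from datetime import datetime

# Hypothetical access-log entries -- real platforms expose (at best) a
# settings screen, not a machine-readable log; this format is invented.
access_log = [
    {"party": "GlucoseApp", "accessed_at": "2024-03-01T09:15:00+00:00"},
    {"party": "ClinicPortal", "accessed_at": "2024-03-10T14:02:00+00:00"},
    {"party": "GlucoseApp", "accessed_at": "2024-03-20T08:41:00+00:00"},
]

def accesses_after_revocation(log, party, revoked_at):
    """Return entries where `party` read data after permission was revoked."""
    cutoff = datetime.fromisoformat(revoked_at)
    return [
        e for e in log
        if e["party"] == party
        and datetime.fromisoformat(e["accessed_at"]) > cutoff
    ]

# Say I revoked GlucoseApp on March 15th -- any later access is a red flag.
suspicious = accesses_after_revocation(
    access_log, "GlucoseApp", "2024-03-15T00:00:00+00:00"
)
print([e["accessed_at"] for e in suspicious])  # ['2024-03-20T08:41:00+00:00']
```

Until platforms expose something like this, the manual equivalent is periodically reviewing the sharing screens and asking recipients directly what they retain.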

How interoperable are these features with my provider?

If my hospital or clinic offers electronic health records (EHR) access through Apple Health Records or Google integrations, that’s useful. But compatibility isn’t universal. I check whether my health system supports the same standards (like HL7 FHIR) and whether data imports are complete or only partial (some systems push summaries, not full notes). I also verify whether the records carry clinical signatures and timestamps — important if you need them for medical decisions or legal purposes.
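To make the completeness check concrete: records exchanged this way are typically FHIR resources serialized as JSON. Here's a minimal sketch, using a pared-down (and invented) FHIR R4 Immunization resource, of the fields I'd inspect before relying on an imported record; real exports carry many more fields.

```python
import json

# A pared-down FHIR R4 Immunization resource, roughly as a health system
# might export it. The values are illustrative, not a real record.
record = json.loads("""
{
  "resourceType": "Immunization",
  "status": "completed",
  "vaccineCode": {"text": "COVID-19 mRNA vaccine"},
  "occurrenceDateTime": "2023-10-12"
}
""")

def check_completeness(resource):
    """Flag gaps I care about before trusting an imported record."""
    issues = []
    if resource.get("status") != "completed":
        issues.append("status is not 'completed'")
    if "occurrenceDateTime" not in resource:
        issues.append("no timestamp -- weak for medical or legal use")
    return issues

print(check_completeness(record))  # []
```

A record that fails checks like these isn't useless, but it's a summary, not a substitute for the chart your provider holds.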

Will my data be used for research or commercial purposes?

I’m generally pro-research, but consent matters. Both companies may offer aggregated, de-identified data to researchers, or partner with academic projects. I want clarity about:

  • Whether my individual data will ever be combined with others for research without explicit consent.
  • Whether de-identification is robust — re-identification risks exist, especially with detailed health records.
  • Whether third-party developers can monetize access to data (e.g., selling insights or building targeted services).
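The re-identification worry is easy to demonstrate. Privacy research has long shown that a few quasi-identifiers (ZIP code, birth date, sex) single out most individuals even after names are stripped. This sketch, on invented sample rows, counts how many "de-identified" records are uniquely pinned down by that combination:

```python
from collections import Counter

# "De-identified" rows: names stripped, but ZIP + birth year + sex remain.
# The data is invented sample data for illustration.
rows = [
    {"zip": "02139", "birth_year": 1985, "sex": "F"},
    {"zip": "02139", "birth_year": 1985, "sex": "F"},
    {"zip": "02139", "birth_year": 1962, "sex": "M"},
    {"zip": "10001", "birth_year": 1990, "sex": "F"},
]

def unique_combos(data, keys):
    """Rows whose quasi-identifier combination appears exactly once --
    each one is a re-identification candidate."""
    counts = Counter(tuple(r[k] for k in keys) for r in data)
    return [r for r in data if counts[tuple(r[k] for k in keys)] == 1]

risky = unique_combos(rows, ["zip", "birth_year", "sex"])
print(len(risky))  # 2 of the 4 rows are uniquely identifiable
```

Robust de-identification (k-anonymity, generalizing ZIPs and dates) exists precisely to drive that unique-row count to zero; I want to know whether the company actually applies it.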

What about third-party apps and integrations?

One of the appeals is the app ecosystem: sleep trackers, glucose apps, medication reminders. But each app is a potential new entry point. I go beyond the platform’s promises and examine:

  • The app developer’s privacy policy: Do they store or share data? For what purposes?
  • Permissions requested: Are they scoped narrowly (only glucose readings) or broad (full health database)?
  • History of security incidents: Has the developer had breaches or questionable practices?

Can I delete my data permanently?

Deleting should be straightforward and effective. I test deletion by removing records and then checking for copies in backups, synced devices, and any third-party apps that previously accessed the data. Ask the company: how long do backups retain deleted data? Does deletion from the app trigger deletion from all servers and partner systems? Transparency here varies.

What legal protections apply to my health data?

Privacy law matters, and it depends on where you live. In the U.S., HIPAA protects data held by covered entities (healthcare providers and insurers), not necessarily by tech companies unless they’re acting as business associates. Apple and Google aren’t always covered by HIPAA when offering general consumer features, so I verify the legal context: am I dealing with an app connected to a hospital (often HIPAA-covered) or a consumer fitness app (often not)?

Who is liable if something goes wrong?

If an error in synced records leads to a clinical mistake, who bears responsibility? Medical decisions still lie with clinicians, but if device-supplied data is inaccurate, there’s a messy intersection of product liability, EHR vendors, and healthcare providers. I read terms of service and developer agreements to understand indemnifications and liability limits before relying on phone-stored records for clinical care.

What are the practical steps I take right now?

  • Review the platform settings: On iPhone, check Health > Privacy and Health Records settings; on Android, check Google Account permissions and Health Connect if available.
  • Enable 2FA for my Apple ID / Google Account and review devices with account access.
  • Audit third-party apps: revoke permissions for apps I no longer use.
  • Ask my provider whether they support direct record sharing via Apple/Google and what data fields they export.
  • Keep copies offline: export PDFs of critical records and store them in an encrypted backup I control.
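For the offline copies, I'd handle the encryption itself with a tool I trust (GPG, or an encrypted disk image), but it's also worth detecting silent corruption or tampering in those copies. A minimal sketch, assuming the exports are PDFs in one folder: build a SHA-256 manifest once, then re-verify it later.

```python
import hashlib
from pathlib import Path

def manifest(folder):
    """SHA-256 digest of each exported PDF, keyed by filename, so later
    tampering or corruption in the offline copies can be detected."""
    return {
        p.name: hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(folder).glob("*.pdf"))
    }

def verify(folder, saved):
    """Names of files that are missing or whose contents changed."""
    current = manifest(folder)
    return [name for name, digest in saved.items()
            if current.get(name) != digest]
```

Run manifest() right after exporting, store the result alongside the backup, and run verify() before relying on any old copy.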

A quick recap of the questions and what to look for:

  • Storage location: device-only, cloud, or hybrid? iCloud vs Google Cloud specifics.
  • Access controls: granularity of permissions, ability to revoke, audit logs.
  • Encryption: in transit and at rest; end-to-end where possible.
  • Legal protections: HIPAA applicability, regional privacy laws.
  • Third-party risk: developer policies, data retention, previous incidents.

I’m not suggesting you avoid using Apple or Google’s medical-record features — they can be enormously useful, especially in emergencies or when switching providers. But I do think we should treat health data as sensitive, ask pointed questions, and take control where possible. Before you trust a tech platform with your medical records, make those questions part of your checklist. It’s a small step that preserves privacy and keeps you in the driver’s seat of your own health story.