Trusted Execution Environments (TEEs) – The Key to Trustless Health Verification in 2025
In today’s digital world, protecting sensitive data is paramount – especially when it comes to personal health information. Trusted Execution Environments (TEEs) have emerged as a powerful technology to keep data secure even while it’s being processed. In this article, we’ll explain what TEEs are and why they’re novel and important in 2025, using examples from everyday tech (like your phone’s FaceID sensor) and discussing who’s building these technologies. We’ll then dive into why our platform (status.health) chose TEEs to enable a “trustless” health verification layer – meaning users can prove health facts without ever exposing their private data. We’ll also compare TEEs with other privacy-preserving approaches (like zero-knowledge proofs, homomorphic encryption, and multi-party computation) to see how they stack up. Finally, a technical appendix is provided for readers who want a deep dive into how TEEs work under the hood (covering isolation models, system call restrictions, and how we combine TEEs with a zero-knowledge virtual machine for health proof generation).
What is a Trusted Execution Environment (TEE)?
At its core, a Trusted Execution Environment (TEE) is a secure area of a device’s main processor that runs code and processes data in isolation from the rest of the system. You can think of it as a “secure vault” inside your device’s CPU – whatever happens inside that vault is hidden and protected from the normal operating system and applications. The TEE keeps data confidential and code integrity intact, so even if malware or an administrator gains control of the main operating system, they cannot access what’s inside the enclave (the secure vault). In practical terms, the memory used by the TEE is encrypted and isolated by hardware. Any code running outside the TEE (in the “normal” part of the OS) can only see gibberish if it tries to read TEE memory. Likewise, the TEE’s code cannot be tampered with by outside programs – the hardware will reject any attempt to alter the TEE’s runtime or inject foreign code.
How does this work? Modern processors have special hardware features that carve out a protected region of memory and processor state for the TEE. Code has to be explicitly loaded into the TEE (usually at program start or device boot), and once inside, it executes with hardware-enforced isolation. Any data that does leave the TEE can be encrypted or integrity-checked on the way out. For example, when a normal app wants the TEE to do something, it will call into the TEE through special secure calls. The TEE will perform the computation internally and return only the results. If the outside world tries to peek during execution, it only sees encrypted data. In essence, the TEE acts like an inner sanctum: only authorized code can operate on the sensitive data inside, and nothing on the outside can eavesdrop or interfere.
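To make the call-and-return pattern concrete, here is a deliberately simplified sketch in plain Rust (no real TEE SDK; every name here is illustrative). The host can only reach the protected logic through one narrow function, and only a verdict comes back out; in a real TEE, the body of that function would execute in hardware-isolated, encrypted memory.

```rust
// Purely illustrative: models the *shape* of a TEE interaction, not a real enclave.
mod enclave {
    pub struct Verdict {
        pub ok: bool, // the only thing that ever crosses back to the host
    }

    /// The single, narrow entry point the host is allowed to call.
    /// In a real TEE this body executes inside hardware-isolated memory.
    pub fn check_health_record(sealed_record: &[u8]) -> Verdict {
        let record = unseal(sealed_record);   // plaintext exists only in here
        Verdict { ok: meets_policy(&record) } // raw data never leaves
    }

    fn unseal(data: &[u8]) -> Vec<u8> { data.to_vec() }          // placeholder for decryption
    fn meets_policy(record: &[u8]) -> bool { !record.is_empty() } // placeholder for real checks
}

fn main() {
    // The "normal world" only ever sees the final answer.
    let verdict = enclave::check_health_record(b"...sealed bytes...");
    println!("record verified: {}", verdict.ok);
}
```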
To give a simple analogy, imagine a glass box that’s heavily tinted and locked. You (the authorized program) can see inside and work with what’s in the box, but anyone outside just sees an opaque container. Even if someone manages to pick up the box (akin to gaining admin rights on the OS), the contents remain unreadable. This is what TEEs provide in computing – a hardware-backed safe zone.
A Brief History of TEEs and Why They Matter in 2025
The concept of isolating sensitive code isn’t entirely new – elements of it go back to the 1990s when industries were looking for ways to protect things like digital media and payment info. Early precursors included smart cards and secure coprocessors, and later the Trusted Platform Module (TPM) for PCs. However, the modern notion of a general-purpose TEE really took off in the 2000s and 2010s as chipmakers built these features directly into CPUs.
- ARM TrustZone (mid-2000s): One of the first widely deployed TEE concepts. TrustZone, introduced around 2004, created a hardware division of the system into two “worlds”: a Normal World (rich OS and apps) and a Secure World (the TEE). This allowed mobile phones to run a small secure OS in the Secure World alongside Android/Linux in the Normal World. Early smartphones (e.g. Nokia devices in the late 2000s) started using this for things like protecting SIM card operations and DRM.
- Intel SGX (mid-2010s): In 2015, Intel launched Software Guard Extensions (SGX) on PC/server CPUs, marking a novel approach: instead of a whole secure world, SGX allows individual user applications to carve out enclaves (secure regions) within their own process. This fine-grained enclave model was new because even the OS kernel could not read those enclaves. SGX brought TEEs into the realm of cloud and PC software, with use cases like secure data processing on untrusted cloud servers. It was cutting-edge for enabling things like confidential cloud computing and was one of the first TEEs to support remote attestation (proving to a remote party that an enclave is genuine and what code is running) in a mainstream CPU.
- AMD SEV (late 2010s): AMD took a slightly different path. Instead of protecting individual processes, AMD’s Secure Encrypted Virtualization (SEV) technology (around 2017) focuses on whole virtual machines. It encrypts the memory of entire VMs so that a cloud provider’s hypervisor (and by extension, any higher-privileged code) cannot read the VM’s memory. This approach is well suited to lift-and-shift of existing software (no need to rewrite apps for enclaves) – you get a “confidential VM” where everything inside is protected transparently. By 2025, AMD’s SEV and its upgrades (SEV-ES, SEV-SNP) are heavily used in cloud platforms to offer Confidential Computing VMs.
- Apple Secure Enclave (2013+): Apple introduced a Secure Enclave Processor (SEP) in the iPhone 5S (2013) to protect Touch ID fingerprint data. This isn’t marketed as a “TEE” by name, but it is one – it’s a separate processor core on the chip that runs its own micro-OS, handling fingerprint and face recognition data, encryption keys for Apple Pay, etc. Even iOS itself cannot read the Secure Enclave’s secrets. This model has been crucial in making features like FaceID/TouchID and Apple Pay secure. (If you’ve ever wondered how your face or fingerprint is stored safely on your phone – it’s the Secure Enclave.) Apple’s success here has made consumers aware that some data can be locked away in hardware for their safety.
Fast forward to 2025, and TEEs have become increasingly important and widespread. Here’s why they’re novel and crucial today:
- Explosion of Sensitive Data & Privacy Regulations: We generate more sensitive data than ever (think health apps, financial apps, personal AI assistants, etc.), and regulations like GDPR, HIPAA, and various privacy laws demand strong protection. TEEs offer a way to process this data without exposing it even if systems are breached. This “data in use” protection is something traditional encryption can’t do (normally, data must be decrypted to be processed). By keeping data encrypted to the outside world even during computation and only decrypting inside the secure enclave, TEEs address a major gap in data security. In sectors like healthcare and finance, this is a game-changer for compliance and security.
- Rise of Confidential Computing in Cloud: All major cloud providers now offer some form of confidential computing. In Azure, for example, you can launch confidential VMs where the memory is encrypted with hardware keys (using AMD SEV-SNP or Intel TDX). Google and AWS have similar offerings. This means companies can put sensitive workloads (like analyzing encrypted health data or processing financial transactions) on the cloud and even the cloud provider can’t peek at the data while it runs. In 2025, this trend is moving from experimental to mainstream for enterprises. Gartner and others predict a large percentage of cloud workloads will incorporate confidential computing in the coming years.
- Blockchain and Web3 Privacy: Interestingly, TEEs have found a niche in the blockchain world as well. Some decentralized platforms use TEEs to handle private or off-chain computations. For example, secret smart contract platforms and certain cryptocurrency projects leverage TEEs to process data that they don’t want to put publicly on the blockchain. This is because TEEs can quickly do complex processing on private data (like matching orders in a dark pool, or training on private data) and then output a result or proof to the blockchain. In contrast, pure cryptographic methods (like zero-knowledge circuits) might be too slow or unwieldy for the full task. We’ll discuss comparisons later, but the key point is that TEEs are pragmatic – they bring a high degree of security with performance that’s close to native, which is very attractive in many 2025 use cases.
- Expanded Hardware Support: By 2025, essentially all major processor vendors support some form of TEE. This includes not just Intel SGX, AMD SEV, and ARM TrustZone as mentioned, but also new variants:
- Intel has introduced Trust Domain Extensions (TDX) (building on SGX’s ideas but for VM isolation, aimed at cloud usage).
- ARM introduced Realm Management Extension (RME) as part of its Confidential Compute Architecture (CCA) – this allows dynamic creation of secure “Realms” on ARMv9 chips, conceptually similar to enclaves but at the hypervisor level for ARM servers.
- IBM has TEEs on their mainframes (IBM Secure Execution for LinuxONE, etc.).
- NVIDIA GPUs now even have a form of TEE for secure AI computation (e.g., the H100 GPU supports confidential computing so that model weights/data can be processed without exposure).
With this broad industry support, TEEs are not a niche – they’re becoming a standard feature in modern computing infrastructure.
In summary, TEEs have evolved from a DRM tool into a cornerstone of privacy and security in the cloud and devices. They are novel in 2025 not because the idea is brand new, but because the context has shifted – we now rely on distributed computing and personal devices for incredibly sensitive operations, and TEEs are providing a solution at a time when software-only security often proves insufficient. The combination of strong protection during data use, with relatively low performance overhead, makes TEEs uniquely valuable.
How Are TEEs Used Today? (Everyday Examples)
To make TEEs more concrete, let’s look at some common use cases and examples of companies/products using TEEs in 2025:
- Secure Biometrics and Authentication: If you unlock your phone with your fingerprint or face, you’re using a TEE. On iPhones, the Secure Enclave stores your fingerprint/FaceID data and performs the biometric matching inside itself. Android phones similarly use TrustZone-based TEEs (often with a secure OS like Trusty or Qualcomm’s QSEE) to compare your fingerprint. This means your biometric data never leaves the secure enclave – even the phone’s main OS only gets an “unlock success” or “fail” message, but cannot read the fingerprint data itself. This makes it extremely hard for an attacker or malware to steal your biometric identifiers. It’s also used in password-less authentication systems and secure unlocks on laptops (e.g., Windows Hello uses a TPM and CPU security features to protect face/fingerprint data).
- Mobile Payments and Wallets: TEEs are heavily used in payment scenarios. For instance, Apple Pay and Google Pay rely on secure enclave processors to store your credit card cryptographic keys. When you tap to pay, the transaction is authorized by codes that come from the enclave, so the rest of the system never sees your actual card number or secret keys. Similarly, cryptocurrency wallets on mobile devices often use TEEs to protect private keys. The Wikipedia entry on TEEs notes that with the rise of crypto, “TEEs are increasingly used to implement crypto-wallets” because they can store tokens/keys more securely than the regular OS. If you’ve heard of hardware wallets (like Ledger devices), a TEE is like having a mini hardware wallet built into your phone or computer – isolating the keys from the potentially vulnerable OS.
- Digital Rights Management (DRM) & Content Protection: Streaming services and media providers use TEEs to enforce DRM on high-definition content. For example, to play back a 4K movie on certain devices, the decryption of the video happens in a TEE so that you can’t grab the raw 4K stream and pirate it. The TEE has a protected path to the screen, meaning the video is decrypted and sent out to the display without the normal OS being able to intercept it. This is often done via TrustZone on set-top boxes, smart TVs, and mobile devices. While consumers might find DRM frustrating at times, it’s a real-world example of TEEs providing a secure zone that even device owners can’t easily tamper with (the controversy being that it “deprives the owner of access,” which indeed is by design for DRM use cases).
- Enterprise and Cloud Security: Many enterprises use TEEs to protect sensitive operations. For instance, a banking app might perform certain cryptographic operations (like generating a one-time password or decrypting a document) inside a TEE on the user’s device for extra security. On the server side, cloud providers offer “confidential computing” where, say, a hospital can run analytics on patient data in a cloud VM that is encrypted in memory using AMD SEV-SNP. The cloud provider (e.g., Azure or AWS) cannot see the data in the clear while it’s being processed. Governments are also interested in TEEs to secure citizen data and allow things like secure multiparty analytics (different agencies computing on joint data securely). The enterprise and government case for TEEs can be summarized like this: a TEE can ensure that even if an employee’s device is compromised or if a server admin is malicious, certain sensitive apps/data remain off-limits, which gives strong assurances for security policies.
- Secure Modular Applications: Some modern software design leverages TEEs to isolate critical modules. For example, say an app has a password management module – that module’s data could be kept in a TEE, separate from the rest of the app. This way if one part of the app is compromised, the passwords stay safe. The idea of secure modular programming is emerging, where each module can be put in a TEE enclave and only communicate through secure channels. This hasn’t hit mainstream consumer apps widely yet, but in industries like automotive or IoT, it’s a pattern (e.g., a car’s in-vehicle infotainment (IVI) system might keep the payment module in a TEE so that an attacker who hacks the music player can’t steal your credit card for toll payments).
In summary, TEEs are already all around us: in our phones, laptops, streaming devices, and increasingly in cloud servers. You might not notice them, because the user experience is just that things “are secure and just work” – you don’t get to see the enclave, but it’s working behind the scenes to keep secrets safe. The reason TEEs are trusted for these tasks is that they provide hardware-level security. Unlike a software sandbox (which can be broken if the OS is rootkitted), a properly designed TEE will hold strong even if the main OS is compromised. This makes them ideal for the most sensitive operations.
Who Builds TEEs? (Major Implementations and Players)
By 2025, virtually every major chip manufacturer and platform has some form of TEE:
- ARM TrustZone and Derivatives: ARM’s TrustZone technology is one of the most widespread (found in billions of mobile devices, microcontrollers, etc.). Companies like Qualcomm (with their Secure Execution Environment, QSEE) and Samsung (Knox TEE) build on TrustZone. ARM has continued to evolve this – their new Confidential Compute Architecture (CCA) adds “Realms” which are like dynamically created enclaves for ARM servers. Who uses it? Virtually all Android phones (Qualcomm and MediaTek chips include a TrustZone TEE OS), iPhones (Apple’s SEP is conceptually similar), and even IoT devices (ARM Cortex-M microcontrollers now have TrustZone-M for embedded security).
- Intel: Intel introduced SGX and more recently TDX. SGX (Software Guard Extensions) was notable for enabling app enclaves on Intel CPUs. While initially available on client and server CPUs, Intel has shifted focus to server use (and there were some road bumps – certain consumer chips dropped SGX support around the 11th-gen Core era, mainly because SGX was causing incompatibilities with some DRM, etc.). Now Intel’s push is Intel TDX (Trust Domain Extensions), which isolates entire virtual machines as “trust domains” (somewhat akin to AMD’s approach). Additionally, Intel has had other secure tech: e.g., the Intel Management Engine (ME) contains subsystems that act like a TEE (for anti-theft and remote management), and Intel TXT (Trusted Execution Technology) was an older tech for secure boot and measured launch. Intel’s SGX is still relevant in specialized uses (it’s used in some cloud offerings and research projects), and TDX is in preview on Azure cloud, for example. Intel also runs initiatives like Project Amber for independent attestation services. Key point: Intel was a pioneer with SGX for enclave tech on general-purpose processors.
- AMD: AMD’s Platform Security Processor (PSP) is a dedicated security core (actually an ARM core on the side) in AMD chips that, among other things, implements SEV for VMs. AMD SEV-SNP (the latest generation) not only encrypts VM memory but also adds hardware checks to prevent malicious hypervisor tricks (like replaying old memory pages). AMD’s tech is a cornerstone of many cloud confidential VM offerings (Azure and Google Cloud use SEV for Linux VMs, etc.). AMD also supports secure encrypted virtualization on GPUs to some extent (and likely will extend PSP concepts to accelerators). They have been quite aggressive in the confidential computing space; for instance, pushing for open-source firmware to support attestation of SEV VMs. In short, AMD provides full memory encryption at the VM level, a slightly different flavor than enclaves but achieving the goal of isolating workloads.
- Apple: As discussed, Apple’s Secure Enclave is a big one in the consumer space. Not only in iPhones and iPads, but also in Macs (the T2 security chip and now the M1/M2 have secure enclaves integrated) and Apple Watches, etc. Apple doesn’t license this; it’s their proprietary implementation. But it’s trusted enough to secure very sensitive user data (financial info, biometrics). Apple’s use of a TEE is often cited as a reason why features like FaceID are safe – because even if iOS had a bug, your face data is in a separate island. Fun fact: Even Apple’s new passkeys (FIDO2) and iCloud Keychain secrets are protected by the secure enclave, meaning if someone extracted your phone’s storage, they still couldn’t get those secrets without the enclave’s cooperation.
- Google: Google has the Titan family of security chips (Titan M in Pixel phones, used as a TEE for things like protecting the lock screen and sensitive transactions). On the cloud side, Google Cloud’s Confidential Computing uses AMD SEV for VMs, and they’ve also done interesting research with TEEs (e.g., Google’s Advanced Protection Program uses Titan security keys, which are not TEEs in the CPU but separate secure elements). Also, Google’s Android includes Trusty OS – an open-source TEE OS that runs in TrustZone on Pixel devices. So Google is both a consumer (using TEEs in products) and a contributor (they maintain some open TEE software).
- Microsoft: Microsoft leverages TEEs in Azure (Intel SGX in Azure Confidential Computing was one of the first cloud TEE services, and now they have SEV and will have Intel TDX). They also use a form of TEEs in Xbox (for DRM), and in Windows (features like Virtualization-Based Security, while not a true TEE, isolate parts of memory). Microsoft’s Pluton security processor in newer PCs is another layer (though Pluton is more akin to a TPM).
- Others and Open-Source: There are open-source TEE efforts like OP-TEE (a popular open TEE OS that runs on TrustZone, used in many hobby and some production environments). Academic projects like Keystone (an open-source TEE for RISC-V) have made progress. Companies like Trustonic, Huawei (iTrustee), Samsung (TEEgris), etc., all have their own TEE implementations, often based on ARM TrustZone. There are also specialized TEEs for automotive (often using TrustZone-M or other microcontroller TEEs for things like digital car keys, etc.).
In summary, the landscape is rich: Intel SGX/TDX, AMD SEV/PSP, ARM TrustZone/CCA, Apple Secure Enclave, and specialized secure elements all fall under the umbrella of TEEs or secure enclaves. Each has a slightly different threat model and use case (SGX/TrustZone for finer-grained apps, vs SEV/TDX for whole VMs, etc.), but they share the common goal of carving out a secure execution bubble in hardware. The fact that every major CPU vendor and even GPU vendors are on board highlights how important TEEs have become. It’s no longer a question of “does this platform have a TEE?” but rather “how do we use the TEE that’s there, and what are its limitations?”. And that brings us to how we at status.health leverage TEEs for our solution.
Why Status.Health Chose TEEs for Trustless Health Verification
Status.Health is building a trustless health verification layer – in plain terms, a system where users can prove certain health information (like completion of a health action, lab result, vaccination status, etc.) without ever revealing their actual private health data. Our mantra is to never collect or store users’ personal health information (PHI) on our servers – instead, all the sensitive data stays on the user’s device, and only a cryptographic proof of some health claim is sent out. This approach requires extreme security on the user’s side, because if the user’s device got compromised or if our software itself mis-handled the data, that private health info could leak. That’s where TEEs come in as an ideal solution for us.
Here are the key reasons we chose TEEs in our stack:
- Complete On-Device Privacy: With a TEE, we perform all health data processing locally in a secure enclave on the user’s device. For example, if our app needs to verify a COVID test result or check your heart rate trends, all the raw data (the test result image or sensor data) is processed inside the enclave on your phone or computer. The unencrypted data never leaves the user’s device or the enclave. Even our own application outside the enclave can’t read it – it only gets the final answer (like “test result verified: positive/negative”) in a protected form. This means that as a company, we never see or handle your personal health data in the clear. We don’t transmit it, don’t store it – it stays with you. This drastically reduces the risk of data leakage. Even if our cloud infrastructure were completely compromised, an attacker would find no troves of health records – because we simply never have them. (All we might have are some verification proofs, which are mathematically unlinkable to the raw data.)
- Defense Even If Device Is Hacked: You might wonder, “If the user’s own device is infected with malware, can’t that spyware steal the health info before it goes into the enclave?” Without a TEE, possibly yes. But with the TEE, even if the device’s OS is compromised, the malware cannot penetrate the enclave. For instance, say you have a keylogger on your phone (yikes) – if you opened a health document in a normal app, it could potentially snoop it. But if our verification code runs in the TEE, the keylogger can’t get into that memory. The enclave is like a secure black box. This is a huge deal for offering strong assurances in a trustless model. We’re essentially saying: not only do you not have to trust our company, you don’t even have to fully trust your own operating system. The hardware is the guardian. (Now, to be fair, TEEs are not 100% impregnable – side-channel attacks exist, etc., but they massively raise the bar for attackers.)
- Regulatory Peace of Mind (HIPAA and others): By designing the system such that we (the company or service provider) never handle PHI, we can cleanly sidestep a lot of regulatory burdens. Under regulations like HIPAA in the US, if an entity doesn’t qualify as a healthcare provider, health plan, or their business associate, and crucially doesn’t receive any identifiable health information, they often are not bound by HIPAA rules. We’re essentially in that boat – we deliberately avoid becoming a “covered entity” by not taking custody of PHI. The data stays under the user’s control. If done right, our service could be seen more like a tool the user is running, rather than a processor of health data. This is a bit like how certain health apps that store data only on-device aren’t considered HIPAA-covered – because they’re not transmitting that data to a medical provider or insurance. Of course, we still care deeply about privacy and comply with relevant laws, but not having to store PHI on servers means significantly lower risk of massive breaches and potentially a lighter regulatory footprint. Even in contexts like GDPR, if we truly don’t process personal data (just handle anonymized proofs), our compliance story is simpler. In short, TEEs help us architect for “data minimization” to the extreme – essentially zero sensitive data collection – which regulators love to see.
- Trustlessness and User Control: Our goal is a “trustless” verification – meaning the verifier (say, a school or employer or travel authority who needs to check your health status) shouldn’t have to just trust our word or a piece of paper; they should get a cryptographic proof. By using TEEs, we can ensure the proof was generated on genuine data without us seeing that data. Combine that with zero-knowledge proofs (which we’ll get to next), and you have a situation where the user is in full control of their data, yet the verifier can be confident in the result. TEEs enable a form of self-sovereign data handling – your device attests to the correctness of the processing. For additional trust, TEEs support remote attestation mechanisms, where our app can produce evidence (signed by the hardware vendor) that it is indeed running inside a legitimate TEE and that our code hasn’t been tampered with. In the future, this could allow a verifier to even check “was this proof generated by the official status.health enclave code on a real secure device?”. That ensures nobody tries to fake proofs by using modified software.
- Security Even Against Us: This is worth emphasizing – by using TEEs, we as the service operators cannot snoop on user data even if we wanted to. And if some malicious insider or hacker compromised our update servers, pushing a fake app update wouldn’t help them get data unless that rogue update were also able to exploit the TEE (which is very hard without a hardware exploit). It’s an additional safeguard: users don’t have to blindly trust our team’s goodwill or security practices; there’s a technical barrier that protects them. In a way, we’re making ourselves blind to the data by design. This not only protects users, but it also protects us as a business – we become far less attractive to attackers (nothing juicy to steal on the server), and we have a clear story if someone asks “how do you handle sensitive health info?” – answer: we don’t – you do, on your device.
All these reasons led us to integrate TEEs at the core of our architecture. Concretely, our solution will run as a secure application inside whatever TEE is available on the user’s device:
- On modern smartphones, that could be TrustZone-based TEEs (we might use Android’s StrongBox or iOS’s secure enclave via an API, or use a companion native library utilizing ARM TrustZone).
- On laptops/desktops, it could leverage Intel SGX/TDX, or fall back to a sandboxed browser-extension component (there are initiatives to integrate WebAssembly with TEEs, or we could ship a native component that utilizes something like Intel SGX if available).
- We aim to be device-agnostic, possibly shipping a signed binary that the user installs which engages the local TEE. (For instance, on Windows or Mac, this might involve using a TPM/HSM or virtualization-based security as the enclave; on Linux, maybe using an enclave if CPU supports, etc.)
In cases where a user’s device doesn’t have a known TEE, we will still isolate data as much as possible (using OS sandboxing or secure enclaves in software), but the gold standard will be to use hardware TEEs on Intel, AMD, or ARM. By targeting common TEEs across platforms, we want our technology to be broadly accessible.
In summary, we chose TEEs to ensure that “what happens on your device, stays on your device.” They allow us to create a health verification system that both users and verifiers can trust: users trust that their private info isn’t being leaked, and verifiers trust the results because they know they were produced in a secure, untampered environment.
TEEs vs Other Privacy-Preserving Technologies (Alternatives and Evolution)
Trusted Execution Environments are one approach to protecting data in use. There are a few other notable approaches to secure computation and privacy, each with its own pros and cons. Let’s compare TEEs with these alternatives and see how they’ve evolved:
1. Fully Homomorphic Encryption (FHE)
What it is: Homomorphic encryption allows computations to be performed directly on encrypted data, producing an encrypted result that can be later decrypted to reveal the answer. In theory, this means you could send encrypted health data to a cloud server, the server could run a program on it without ever decrypting it, and send you back encrypted results. The server learns nothing about your data – pretty magical! This concept has been around for decades, but a fully usable scheme was first demonstrated by Craig Gentry in 2009.
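As a concrete taste of the underlying idea, the simpler, partially homomorphic Paillier scheme has the property that multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can add numbers it cannot read:

$$E(m_1) \cdot E(m_2) \bmod n^2 \;=\; E(m_1 + m_2 \bmod n)$$

Fully homomorphic encryption generalizes this to both additions and multiplications, and therefore to arbitrary programs, which is exactly where the heavy computational cost comes in.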
FHE vs TEE: The big advantage of FHE is that you don’t have to trust any hardware or server – the security is purely mathematical. Even if the server is malicious, it learns nothing if the crypto holds. In contrast, with a TEE you have to trust the hardware manufacturer (and that there are no bugs). However, the downside is performance. Fully homomorphic encryption, even in 2025, is extremely computationally heavy for general-purpose tasks. Simple operations can be 1000× to 1,000,000× slower than normal computation (though it’s improving). For many complex operations, using FHE is currently not feasible – the computation might take days or require huge cloud resources for something a TEE could do in seconds. There are also limits on what algorithms can be efficiently transformed into homomorphic form.
State in 2025: FHE has evolved – there are libraries like Microsoft SEAL and Zama’s TFHE-rs, and some cloud demos of FHE for specific tasks (like aggregations, simple ML on encrypted data). It’s an exciting area of research, but not yet practical for real-time or complex workflows like what we need (e.g., running OCR and AI on images is far beyond current FHE capabilities in a reasonable time). So while FHE might be the ultimate in privacy (no trust needed at all in hardware), it’s not an alternative we can use today for our use case. TEEs, on the other hand, provide a very practical solution today – they run at near native speed since the data is decrypted inside the secure CPU area and computed normally. In essence, TEEs trade some trust (in hardware security) to achieve tractable performance, whereas FHE removes hardware trust but incurs a huge performance cost. As technology evolves, maybe someday FHE will be efficient enough to replace some TEE use cases, but we’re not there yet for general computing.
2. Secure Multi-Party Computation (MPC)
What it is: MPC involves splitting data or computations among multiple parties such that no single party can see the whole picture, but together they compute a result. For example, you could have 5 servers each hold a random piece of your health record, and they run a protocol where at the end they collectively compute “Is the blood pressure reading > X?” without any server seeing the actual reading. It’s like pieces of a puzzle – no one has the whole puzzle, but they can jointly compute a function of the complete image.
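To illustrate the core trick under minimal assumptions, here is a toy additive secret-sharing sketch in Rust (using the rand crate, 0.8-style API). Each server holds a uniformly random-looking share, yet the servers can jointly compute a sum. Real MPC protocols layer far more on top (comparisons, malicious-security checks, communication rounds); this only shows why an individual share reveals nothing.

```rust
use rand::Rng;

/// Toy additive secret sharing over u64 (wrapping arithmetic = mod 2^64).
/// Any single share is uniformly random; only the sum of all shares reveals the secret.
fn share(secret: u64, parties: usize) -> Vec<u64> {
    let mut rng = rand::thread_rng();
    let mut shares: Vec<u64> = (0..parties - 1).map(|_| rng.gen()).collect();
    let partial: u64 = shares.iter().copied().fold(0u64, u64::wrapping_add);
    shares.push(secret.wrapping_sub(partial)); // last share makes the sum come out right
    shares
}

fn main() {
    // Two users secret-share their blood pressure readings across 3 servers.
    let alice = share(128, 3);
    let bob = share(117, 3);

    // Each server adds the shares it holds -- it never sees 128 or 117.
    let per_server: Vec<u64> = alice.iter().zip(&bob).map(|(a, b)| a.wrapping_add(*b)).collect();

    // Recombining the per-server sums yields the joint result (here: 245).
    let total = per_server.iter().copied().fold(0u64, u64::wrapping_add);
    println!("sum of readings = {total}");
}
```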
MPC vs TEE: MPC, like FHE, is cryptography-based and doesn’t rely on special hardware. It often requires at least two or more parties/servers that do not collude. The benefit is you aren’t putting all your trust in one hardware vendor; you assume at least one of the parties is honest or that they won’t all collude. The drawback is complexity and interaction. MPC protocols can be very complex to implement, and they usually incur significant communication overhead (lots of messages passed between parties). For a use case like ours (user’s device verifying data), MPC would require, say, splitting the data among multiple cloud nodes or multiple user devices – that’s complicated and would introduce latency and many points of failure. In our scenario (a single user’s device and one service), MPC doesn’t naturally fit, because MPC shines when you have multiple independent participants each with secret inputs. Here the user’s device has all the secret data; involving other parties just to help compute on it is overkill – and ironically would expand the “attack surface” (more places bits of health data might transit through). We specifically decided not to go the MPC route because it would “overcomplicate” our use case and introduce new vectors for data handling that we want to avoid entirely (our philosophy: keep data handling confined to the user’s own sphere as much as possible).
Evolution: MPC protocols have improved (there are now faster techniques, and even “MPC-as-a-service” offerings for things like private set intersection, etc.). But similar to FHE, there’s a trade-off: MPC can have heavy computation and communication costs, especially for complex computations. It might require, say, 3 different cloud providers to each run a part of the computation – not exactly user-friendly for a consumer product. TEEs in contrast let you do it all in one place, fast, but you trust that one place (the enclave). An interesting note: some solutions actually combine MPC with TEEs (each party uses a TEE to further secure their part), showing these techniques can complement rather than strictly compete.
3. Zero-Knowledge Proofs (ZKPs / zkVMs)
What it is: Zero-knowledge proofs allow someone to prove a statement is true without revealing why it’s true or any additional info. For example, I could prove to you “I know a medical secret that hashes to value X” without telling you the secret. In the context of computations, there are ZK VMs or circuits that can prove that a certain program was executed on some inputs and produced a certain output, without showing the inputs. This is very relevant to us – it’s exactly how we plan to let users prove health facts without revealing the underlying data.
ZKPs vs TEE: On the surface, ZK proofs sound like another way to achieve what TEEs do: hide data but still get verifiable results. The difference is, ZKPs are purely mathematical and verifiable by anyone after the fact, whereas TEEs rely on hardware and often require trusting attestations from that hardware. The big issue with ZKPs has been performance and complexity of implementation. Proving arbitrary computations in zero-knowledge can be extremely slow and resource-intensive, especially on typical user devices. For instance, proving something like “this image has a QR code with text ‘COVID Negative’” might be insanely heavy to do as a pure ZK circuit (imagine representing a whole OCR algorithm as an arithmetic circuit). TEEs can do that image processing in milliseconds because they run it natively, whereas a ZK approach might take many minutes or hours of computation to generate a proof, or the proof might be gigabytes in size, etc.
However, ZKPs have evolved too – there are now zkVMs (like Risc0, which we’re targeting, and others like StarkWare’s Cairo, Polygon zkEVM, etc.) that make it easier to prove general computations. Still, ZKPs often require writing custom circuits or using special languages, and even then, for complex tasks they can be impractical. They really shine in scenarios like verifying simpler computations or as part of blockchain scaling (zkEVM verifies lots of transactions with one proof).
Our approach actually combines ZKPs with TEEs – we use the TEE to do the heavy lifting on sensitive data (fast, secure), and then generate a zero-knowledge proof of the result or of a summary of the computation. In other words, the TEE helps us deal with the raw data safely, and the ZKP helps us make the outcome verifiable to anyone without trust. This way, we mitigate the weaknesses of each: the TEE’s output can be fed into a zkVM to produce a succinct proof that can be checked independently of the hardware trust. (We’ll detail this in the Appendix workflow.) Many experts see TEEs and ZKPs as complementary rather than mutually exclusive. One can use a TEE to quickly do something and then a ZKP to prove it was done right. Alternatively, one might use a ZKP to verify what a TEE is claiming (for instance, some projects convert TEE remote attestation into a ZK proof so it can be verified on a blockchain cheaply).
So in summary: pure ZK is more trustless (no hardware assumptions) but expensive; TEE is efficient but requires hardware trust. Combining them often gives a sweet spot – and that’s what we aim for: TEEs for privacy + ZKPs for trustless verification.
4. Virtualization and “Software” Sandboxes vs TEEs
One might say: “Can’t you just run a secure process or container, isn’t that isolation?” There are software-based isolation techniques like sandboxes, VMs, containers, etc. These help, but they are ultimately enforced by software (the hypervisor or OS kernel). If those layers are compromised, the isolation can break. For example, a virtual machine hypervisor bug could let an attacker peek into a VM’s memory. TEEs add hardware enforcement – e.g., memory encryption tied to CPU keys that the hypervisor cannot bypass even if it wanted to. It’s a stronger guarantee.
Over the years, an evolution has been that virtualization and TEEs are coming together – e.g., AMD SEV essentially takes a standard VM and makes it a TEE by encrypting its memory and checking integrity. So the line is blurring. But a plain software container is not as strong as a TEE, and things like kernel attack surface exist. For our needs (handling raw health data), we treat anything not hardware-protected as potentially vulnerable. That’s why even though we could just run a normal background process on the device, we choose to leverage the hardware TEE features when available, because it’s a world of difference if the OS is compromised.
Evolution note: Newer CPU architectures are adding even more robust isolation (like Intel’s upcoming features to encrypt memory of general processes, etc.). We anticipate that over time, the “baseline” security of all computing will rise (perhaps all computing will be confidential computing by default down the road). Until then, TEEs provide an optional but powerful security layer.
In conclusion, TEEs are not the only tool in the privacy tech toolbox. We carefully considered alternatives like homomorphic encryption and MPC, but those introduced significant complexity or performance penalties for our scenario. Zero-knowledge proofs are a must for our trustlessness, but by pairing them with TEEs we avoid the pitfalls of ZK alone (which could be too slow on user devices if we tried to do everything in ZK). Each approach has its place: FHE/MPC for maximum trustlessness but currently limited/hard, ZK for provable correctness (with some performance cost), and TEEs for practical, high-performance privacy with minimal changes to code. Our strategy is to use each where it fits best: TEEs to securely execute the logic and ZK to prove the outcome in a trustless manner. This gives users and relying parties the best of both worlds – strong privacy and strong integrity.
Appendix: Deep Dive – How TEEs and zkVMs Work Together in Status.Health
For those interested in the nitty-gritty technical details, this appendix explores how TEEs provide isolation at a lower level, and how we integrate a zkVM (zero-knowledge virtual machine) into the pipeline for generating proofs of health actions. This section will cover TEE isolation mechanisms, enclave interaction (syscalls, etc.), and our proof generation workflow step-by-step.
TEE Isolation Mechanisms and Enclave Execution Model
A TEE’s magic comes from hardware features that isolate memory and restrict control. Here’s how it typically works under the hood:
- Memory Encryption & Access Control: When an enclave (TEE region) is created, the CPU allocates a chunk of memory for it which is automatically encrypted with a key that lives in the processor. For example, Intel SGX uses the Memory Encryption Engine (MEE) to encrypt enclave pages in DRAM. If anything outside the CPU tries to read those pages, they’ll only see encrypted data. The decryption key is hard-wired inside the CPU and not accessible to software. Additionally, the CPU memory controller checks addresses – if code from outside the enclave tries to access an enclave page, it’s blocked (or it causes a fault). Thus, an enclave’s memory is both encrypted and access-restricted. This is why even a kernel or hypervisor, which normally can read all memory, cannot read enclave memory in plaintext.
- CPU Mode Privilege Restrictions: Enclaves typically run at user-level but with special privileges enforced by hardware. In SGX’s case, enclave code runs in ring 3 (user mode) but any attempt to execute certain privileged instructions or to access hardware directly will not work. The enclave relies on the host OS for things like I/O (which it must do carefully via controlled calls). The hardware ensures the enclave code doesn’t “escape” its sandbox – e.g., no arbitrary syscalls without exiting the enclave.
- Secure World Switch (TrustZone): In ARM TrustZone, the CPU actually has two states (secure and non-secure). The Secure World can see Normal World memory if allowed, but not vice versa. When a context switch to the Secure World happens (via a Secure Monitor Call instruction), the hardware flips to an entirely separate set of CPU registers and can even bank certain resources. Think of it as two parallel universes on the same chip. The Secure World (TEE) typically runs a small secure OS, and normal apps make requests to it via special API calls. The TrustZone hardware ensures that as long as code is running in Secure World, any memory marked secure cannot be accessed from the normal world, and certain peripherals or interrupts can be configured to belong to the secure side.
- Launching and Measurement: When a TEE session starts, the code that is allowed to run inside is measured (cryptographically hashed) and often signed. For instance, SGX requires enclaves to be signed by a key and during creation, it produces a measurement of the enclave code. This measurement can later be used in remote attestation to prove which code is running. The CPU will only initialize the enclave if it’s loaded correctly. On TrustZone devices, secure boot is used – the secure OS itself is loaded via a chain of trust from boot ROM, so you know the secure OS hasn’t been tampered with (assuming the root of trust is secure).
- Interaction via ECALL/OCALL: Once running, enclaves often need to interact with the outside world (request data or output results). To do this safely, they use controlled gates. In Intel SGX terminology, an ECALL (Enclave Call) is how untrusted outside code calls into a pre-defined function inside the enclave, and an OCALL (Outside Call) is how the enclave calls back out to the untrusted world for help (like printing to a console or reading a file). These calls are defined in an interface definition (e.g., EDL – Enclave Definition Language in SGX). At build time, wrapper “proxy” functions are generated on both sides to marshal data in and out. When an ECALL happens, the CPU switches into enclave mode and jumps to the specific function, with the input parameters copied safely. When the enclave needs to do an OCALL, it exits enclave mode to a proxy that runs in normal mode, which then can do things like make a system call on behalf of the enclave. This way, there’s a clear boundary – no arbitrary jumping in or out; only defined entry/exit points. The data passed is usually copied through shared memory buffers that the enclave and outside have agreed on (the enclave will copy any secret data into/out of these buffers carefully to avoid leakage). The enclave cannot be re-entered arbitrarily; the hardware ensures the flow is controlled, preventing things like a malicious OS from tricking the enclave into jumping to an arbitrary instruction (it will only enter at an ECALL entry point). (A simplified sketch of this call boundary appears just after this list.)
- Trusted I/O and Peripherals: In some TEE setups (particularly TrustZone on mobile), there’s a concept of a Trusted UI or I/O. For example, TrustZone can mediate access to the display or touchscreen – when you’re entering a PIN in a TEE, it can blank out normal world access to the screen and only show a secure keypad. This requires hardware support (screen controllers that accept TrustZone signals to not let the normal OS draw over certain areas, etc.). Many phones implement a trusted keyboard for entering mobile payment PINs. It’s an isolation extending beyond just CPU and memory, to certain peripherals. (In our case, we likely won’t directly use trusted UI features, aside from relying on OS prompts if available – e.g., Android’s TEE can show a “Confirm” dialog that the OS can’t spoof.)
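To make the ECALL/OCALL boundary concrete, here is a hypothetical sketch written in plain Rust rather than the real SGX EDL/C interface; every name is illustrative. The point is the shape: one pre-declared entry point into trusted code, one narrow exit for services the enclave cannot perform itself, and nothing sensitive in the return value.

```rust
// Hypothetical illustration of the ECALL/OCALL pattern -- not a real SGX SDK interface.

/// Data the untrusted host marshals into the enclave on the ECALL.
pub struct EcallInput {
    pub report_image: Vec<u8>,           // raw lab-report photo
    pub trusted_lab_keys: Vec<[u8; 32]>, // public keys of approved labs
}

/// The only data the enclave hands back out.
pub struct EcallOutput {
    pub claim_is_valid: bool,
    pub proof_bytes: Vec<u8>, // zk receipt, safe to share
}

/// ECALL: the single pre-declared entry point into trusted code.
/// In a real enclave the SDK-generated bridge copies `input` into protected memory first.
pub fn ecall_verify_report(input: EcallInput) -> EcallOutput {
    let now = ocall_current_time(); // OCALL: ask the untrusted OS for a service

    // OCR, signature checks against `trusted_lab_keys`, freshness checks against `now`,
    // and proof generation would all happen here, entirely inside enclave memory.
    let plausible = now > 0 && !input.report_image.is_empty() && !input.trusted_lab_keys.is_empty();

    EcallOutput { claim_is_valid: plausible, proof_bytes: Vec::new() }
}

/// OCALL: the enclave temporarily exits to untrusted code for things it cannot do itself
/// (file I/O, networking, wall-clock time). Anything returned must be treated as untrusted.
fn ocall_current_time() -> u64 {
    std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .map(|d| d.as_secs())
        .unwrap_or(0)
}

fn main() {
    let output = ecall_verify_report(EcallInput {
        report_image: vec![0u8; 4],
        trusted_lab_keys: vec![[0u8; 32]],
    });
    println!("claim valid: {}", output.claim_is_valid);
}
```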
In short, TEEs create an isolated execution island. The hardware encrypts and walls off memory, strictly manages how code enters/exits the island, and provides attestation to verify what’s running inside. The parent OS is still needed for general resources, but it cannot violate the enclave’s integrity or confidentiality (absent bugs). This model has been formally studied and tested – though side-channel attacks (like observing access patterns or cache timing) have shown that enclaves are not perfect if an attacker can measure those. CPU vendors have been addressing these with mitigations (flush cache, data-oblivious techniques, etc.), and it’s an ongoing area of research. But for most practical purposes, breaking a modern TEE requires a very sophisticated hardware attack or an exploit in the enclave code itself.
Status.Health Proof Generation Workflow in the TEE + zkVM
Now let’s connect this with our specific use case: generating a proof of a health action or status. Here’s an example scenario and how data flows through our system:
Use Case Example: A user wants to prove to a third party (say, a university health office) that they have a valid negative COVID-19 test result from an approved lab, without revealing the actual lab report or any personal details on it.
Step 1: Data Ingestion into the TEE – The user either takes a photo of the lab result or downloads a digital copy. Our application, running on the user’s device, will pass this data into the TEE enclave. If it’s a photo, the image (maybe containing the test QR code or text) is transferred into the enclave via an ECALL when the enclave starts the verification function. The enclave now has the raw health data (image of the test result) inside its protected memory. From here on, all processing happens internally. The user’s device may also supply some reference data, like a list of authorized lab public keys (so the enclave knows what signature to verify). That reference info can be considered public and can come through the ECALL as well or be pre-loaded.
Step 2: Verification and Computation inside TEE – Inside the enclave, our code will:
- Run OCR or barcode scanning on the test result image to extract the relevant info (e.g., the test result, date, lab identifier, and maybe a digital signature).
- Check authenticity: Perhaps the lab report has a digital signature or a QR code that encodes a signed JSON. The enclave can carry a list of trusted lab public keys (this list might be baked into the code or provided securely from our service and checked via attestation). The enclave will verify the signature on the report to ensure it’s not been tampered with and is issued by an approved provider.
- Evaluate the health criteria: e.g., confirm that the test is indeed negative and was taken within the last 72 hours, etc., depending on what needs to be proven.
During this process, any external help needed (like the current time, or an API call to check something) is done carefully. For example, if the enclave needs the current date/time (to ensure the test is recent), it might call out (OCALL) to get the time. This could be a vulnerability if we’re not careful (a malicious OS could lie about the time). In a robust design, we might use multiple sources or secure time from a secure clock (some TEEs provide monotonic counters or secure time via a trusted source). For simplicity, assume we get the current timestamp from the device – we might double-check it isn’t out of expected bounds using attestation of time or user input.
The bottom line is, by the end of this step, the enclave has determined something like: “LabTest from XYZ Lab, ID 12345, for John Doe, on 2025-07-01, result = NEGATIVE, signature verified”. The enclave will then extract the relevant claim: in this case, perhaps “Test XYZ on 2025-07-01 is negative and from approved lab”. Importantly, it will drop any personal identifiers not needed (like the name “John Doe” if the proof doesn’t need to identify the person, or perhaps we hash it if needed for linking). The enclave essentially forms the statement to be proved, in a succinct form, and prepares to output only that.
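As one concrete piece of the “check authenticity” step, here is a minimal sketch of verifying a lab’s signature over the report payload. It assumes, purely for illustration, that labs sign with Ed25519 and that we use the ed25519-dalek crate (2.x-style API); a real deployment might instead face ECDSA P-256 signatures, JWS-wrapped payloads, or vendor-specific QR formats.

```rust
use ed25519_dalek::{Signature, VerifyingKey};

/// Returns true if `report_bytes` carries a valid signature from any approved lab key.
fn report_is_authentic(
    report_bytes: &[u8],
    signature_bytes: &[u8; 64],
    approved_lab_keys: &[[u8; 32]],
) -> bool {
    let signature = Signature::from_bytes(signature_bytes);
    approved_lab_keys.iter().any(|key_bytes| {
        VerifyingKey::from_bytes(key_bytes)
            .map(|key| key.verify_strict(report_bytes, &signature).is_ok())
            .unwrap_or(false)
    })
}

fn main() {
    // Dummy inputs: this prints "false", but exercises the code path end to end.
    let ok = report_is_authentic(br#"{"result":"negative"}"#, &[0u8; 64], &[[0u8; 32]]);
    println!("report authentic: {ok}");
}
```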
Step 3: Generating a Zero-Knowledge Proof (zkVM) – Now comes the zkVM part. Inside the enclave, once the data is verified and we have the relevant info, we invoke a zkVM (Zero-Knowledge Virtual Machine), such as Risc0, also within the enclave. Risc0 is a system where you can run code and it produces a cryptographic receipt (a zero-knowledge proof) that the code ran correctly on some input. In our case, we will have a zkVM guest program that perhaps does a trivial computation: it takes as input the results of the enclave’s verification (like a boolean “valid_test” and maybe some public data like the test date or an ID), essentially does assert(valid_test == true), and outputs the fact. The reason we use the zkVM is so that we can give the outside world a proof that this whole process was done correctly, without revealing the sensitive intermediate data. The zkVM’s proof will not include the private inputs (like the image or user’s name, etc.) – those remain secret; only the statements we allow will be revealed.
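A minimal sketch of what that guest program could look like with Risc0 (the exact crate paths and entrypoint macro vary across risc0-zkvm versions, and the input/output names here are our own illustration):

```rust
// Hypothetical Risc0 guest: proves "the enclave's checks passed" without revealing inputs.
#![no_main]

use risc0_zkvm::guest::env;

risc0_zkvm::guest::entry!(main);

fn main() {
    // Private inputs written by the host/enclave when building the executor environment.
    let valid_test: bool = env::read();
    let test_date: String = env::read();

    // If this assertion fails, no proof can be produced at all.
    assert!(valid_test, "lab result failed verification");

    // Only what is committed to the journal becomes public in the receipt.
    env::commit(&format!("negative COVID-19 test verified, dated {test_date}"));
}
```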
Actually, in many cases, we might integrate the verification logic directly into the zkVM program. There’s a design choice here:
- Option A: Use the TEE to do the heavy lifting (OCR, signature verify), then prove a minimal statement with zkVM (like “I attest that I saw a valid result”). This is simpler and faster, but it requires you to trust the enclave’s attestation that it did those steps correctly (or we incorporate that trust by limiting keys etc.).
- Option B: Actually run a lot of the logic inside the zkVM itself, but inside the enclave for privacy. For example, we could compile an OCR algorithm to Risc0 and generate proof of the OCR and signature check. This way, the proof itself shows the steps were done correctly. The enclave in this scenario is used to keep the data secret while running the zkVM prover (since proving might take significant compute and we don’t want an external observer to see intermediate data).
We are leaning towards a hybrid: use enclave to handle data and prepare inputs, and let zkVM prove the critical parts that a verifier would care about (like that the signature was valid and result was negative). The Risc0 zkVM essentially acts as a mini RISC-V computer inside which we run our verification code one more time, but this time with cryptographic tracing. Risc0 will output a proof (and a hash of the code that was run) that we (or anyone) can verify later. Importantly, Risc0’s proof is zero-knowledge – it does not reveal the test data itself, only the conclusions we programmed it to reveal.
This proof generation inside the enclave ensures that even during proof generation, the sensitive data doesn’t leak. Without the enclave, running a zkVM on plaintext data could risk that if the device is compromised, those inputs might leak via memory or disk during the proving process. The TEE keeps it all in a shielded environment.
- Performance note: zkVM proving is still relatively expensive (maybe it takes, say, tens of seconds or a couple minutes depending on complexity, and it can be optimized with GPUs etc.). This is an area we are actively working on – possibly forking Risc0 or optimizing it for our specific use cases to improve proof times given TEE constraints. Since TEEs sometimes have memory or CPU limits (e.g., Intel SGX enclaves have memory size limits, etc.), we’ll tailor the proving process to fit those. We might also explore doing some proving outside but on encrypted data – however, that gets into FHE territory which we wanted to avoid. So, we will likely accept a bit of a delay (maybe the user waits 30 seconds for a proof to generate) as a trade-off for trustless verification. This is still orders of magnitude better than doing the whole image processing in ZK from scratch thanks to the enclave’s help.
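For completeness, here is a sketch of what the proving call on the host/enclave side might look like, following the Risc0 1.x host API style; the function name and the idea of passing in the compiled guest ELF are our assumptions for illustration (a real build would generate the guest ELF and image ID via a Risc0 methods crate).

```rust
use anyhow::Result;
use risc0_zkvm::{default_prover, ExecutorEnv, Receipt};

/// Prove the health claim inside the zkVM. `guest_elf` is the compiled guest program.
fn prove_health_claim(guest_elf: &[u8], valid_test: bool, test_date: &str) -> Result<Receipt> {
    // Private inputs: readable by the guest, but never included in the receipt itself.
    let env = ExecutorEnv::builder()
        .write(&valid_test)?
        .write(&test_date.to_string())?
        .build()?;

    // Execute the guest under the prover and keep the receipt (proof + public journal).
    let prove_info = default_prover().prove(env, guest_elf)?;
    Ok(prove_info.receipt)
}
```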
Step 4: Outputting the Proof – Once the zkVM finishes, we have a proof (sometimes called a receipt in Risc0 terms). The enclave can then output this proof to the outside world via an ECALL return or OCALL. The proof is just some bytes (maybe a few kilobytes), which is fine to share. It contains no sensitive personal data, just cryptographic assertions. Along with the proof, we include any public data that needs to accompany it (for example, the statement “Test result negative as of July 1, 2025” and perhaps a unique ID or something). This proof and statement can now leave the enclave – we send it to our server or directly to the verifier.
Notably, the verifier (the university in this case) doesn’t need to run a TEE or trust any server. They can independently verify the zero-knowledge proof using the zkVM’s public verification parameters. This will confirm that some approved program (our verification logic) ran and produced an output “negative test” given some hidden inputs, without ever revealing those inputs. Additionally, if we incorporate the TEE’s attestation in the proof (or as part of the statement), the verifier might also know that the data really came from the user’s device and not, say, a simulated environment. There are schemes to convert an enclave’s attestation (which says “this code ran on a real enclave”) into a form that can be included in a ZK proof. That gets complex, but it’s possible.
Even without that, the user’s proof is tied to our program logic, so if someone tried to cheat by writing their own program that outputs “negative” without real data, the proof’s program hash would differ and the verifier would reject it (they will only accept proofs generated by the hash of our genuine program). Risc0, for example, includes the code’s cryptographic hash in the proof, so you know which code was executed. We will publish the hash of our zkVM guest code that corresponds to legitimate verification. Thus, trust is anchored in the code (which can be audited) and the math, rather than any one person or system.
Step 5: Proof Verification by Verifier – The final step happens on the verifier’s side (which could be our server or a third-party service using our library). They take the proof and run a verification algorithm (this is usually very fast – milliseconds) to check its validity. If it checks out, they now have high assurance that the user had a valid health credential meeting the criteria. And they learned nothing else: not the user’s name (if we didn’t include it), not the exact document image, nothing – just the claim proven. If the verifier is our server aggregating results (say, for generating a health pass), even we don’t see the raw data, we just see proof outcomes. This significantly reduces liability and privacy concerns.
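On the verifier’s side, the corresponding check is small and fast. A sketch in the same Risc0 1.x style, where the expected image ID stands for the published hash of our guest code (an assumption for illustration):

```rust
use anyhow::Result;
use risc0_zkvm::Receipt;

/// Verify a received receipt against the published image ID of our guest program,
/// then read out the public claim from its journal.
fn verify_health_claim(receipt: &Receipt, expected_image_id: [u32; 8]) -> Result<String> {
    // Fails if the proof is invalid or was produced by a different program.
    receipt.verify(expected_image_id)?;

    // The journal holds only what the guest chose to commit (no raw health data).
    let claim: String = receipt.journal.decode()?;
    Ok(claim)
}
```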
Syscall/Interaction considerations: In implementing this workflow, we have to handle some practical aspects:
- Large data (images) might not all fit in enclave memory at once if there are limits, so we might stream or chunk processing.
- Any machine learning models (for OCR and the like), if used inside the enclave, have to be loaded into it. That could be heavy, but perhaps we use lightweight models or classic computer vision to keep it small. Alternatively, if the enclave can’t handle full OCR, one could do OCR outside but feed only partial info in (though that breaks the privacy guarantee, so better to keep it inside).
- Randomness: zk proofs require some randomness; the enclave can use its secure RNG or we ensure to seed it properly (not using something the OS can predict).
- Attestation: if we want a relying party to be sure the enclave was genuine, we can perform a remote attestation step with the platform’s attestation service (Intel’s DCAP or ARM’s attestation). The enclave could get a quote (a signed attestation) and we could include that or its digest in the proof. For now, our main trust comes from ZK proof of code, which might suffice because if someone tries to fake the process, they’d have to either break the zk proof (considered infeasible) or get our secret proving key (in some zk systems) or run a malicious program with the same hash (which they can’t unless they find a hash collision or they legitimately have a negative test anyway).
To illustrate the combined power: Using a TEE alone, the verifier would have to trust the attestation from Intel/ARM that “yes, this user’s enclave says it saw a negative test”. Using ZK alone, the user could prove a statement but might have trouble proving authenticity of the data (without a signature check inside ZK, which is heavy). By doing both, the heavy lifting (like RSA/ECDSA signature verification of the lab’s result) can be done in hardware, and then a succinct proof conveys that fact. This approach was even noted in industry research: combining faster TEE proofs with zkVM can yield better performance than doing either alone for certain tasks. Our system effectively beats the naive baseline by using the right tool for each part.
Diagram: Putting It All Together
Figure: App enclaves and confidential VMs on CPUs (diagram from Microsoft Azure documentation). It highlights two models: on the left, a fine-grained app enclave (like Intel SGX) where an untrusted app calls into a trusted enclave part (steps 1-7 show partition, create enclave, attest, call, execute, return); on the right, a whole virtual machine protected by encrypted memory (AMD SEV-SNP or Intel TDX). In our design, we focus on the left model – we partition our application so that the “Trusted part” (verification logic) runs in an enclave on the user’s device, isolated from the untrusted part (the regular app/UI). The enclave executes the sensitive operations and attests to its code, similar to the left diagram. Meanwhile, the right side is analogous to cloud scenarios which we could use if we ever process in a back-end (we could host a service in a confidential VM). This figure underscores the hardware separation (lower boxes: OS/VMM vs hardware). Our use of zkVM adds an extra layer of assurance on top of this enclave execution.
Future Considerations and Evolution
Our current approach is to develop on existing TEEs (Intel, ARM, etc.) and use a zkVM like Risc0. We anticipate optimizing the zkVM for use within enclaves. If needed, we might fork Risc0 to better suit our performance needs (for example, reducing proving time by specializing circuits for common health data checks). We are also keeping an eye on emerging tech like GPU TEEs (imagine offloading heavy AI tasks to a secure GPU enclave – NVIDIA is working on this). That could one day allow running a full AI model on health data in a TEE for advanced analysis, then proving the output.
We deliberately avoid approaches like multi-party or cloud-based splitting of data – those would complicate compliance and increase attack surface, which is contrary to our goal. Simplicity and strong local security are our guiding principles.
In summary, the combination of TEEs and zkVMs in status.health’s architecture provides end-to-end trustworthy computation: the data is protected at the source by the enclave, and the result is made trustworthy to others by the zero-knowledge proof. By isolating data from the host OS and from our own service, TEEs ensure that even if something goes wrong in those layers, user data remains safe. And by producing verifiable proofs, we remove the need for any party to simply “trust” the output – they can mathematically verify it. This layered approach – hardware security + cryptographic verification – is, we believe, the future for sensitive personal data applications, not just in health but beyond. It puts users in control of their data while still enabling them to use that data in beneficial ways (like proving they meet a requirement) without compromising privacy.
Thank you for reading this deep dive! If you have questions or want more details about any part of this architecture or about TEEs in general, feel free to reach out on our blog or forums. We’re excited about the potential of TEEs and zk-proofs to change the game for health data privacy and trust.