
Trusted Execution Environments (TEEs) – The Key to Trustless Health Verification in 2025

In today’s digital world, protecting sensitive data is paramount – especially when it comes to personal health information. Trusted Execution Environments (TEEs) have emerged as a powerful technology to keep data secure even while it’s being processed. In this article, we’ll explain what TEEs are and why they’re novel and important in 2025, using examples from everyday tech (like your phone’s FaceID sensor) and discussing who’s building these technologies. We’ll then dive into why our platform (status.health) chose TEEs to enable a “trustless” health verification layer – meaning users can prove health facts without ever exposing their private data. We’ll also compare TEEs with other privacy-preserving approaches (like zero-knowledge proofs, homomorphic encryption, and multi-party computation) to see how they stack up. Finally, a technical appendix is provided for readers who want a deep dive into how TEEs work under the hood (covering isolation models, system call restrictions, and how we combine TEEs with a zero-knowledge virtual machine for health proof generation).

What is a Trusted Execution Environment (TEE)?

At its core, a Trusted Execution Environment (TEE) is a secure area of a device’s main processor that runs code and processes data in isolation from the rest of the system. You can think of it as a “secure vault” inside your device’s CPU – whatever happens inside that vault is hidden and protected from the normal operating system and applications. The TEE keeps data confidential and code integrity intact, so even if malware or an administrator gains control of the main operating system, they cannot access what’s inside the enclave (the secure vault). In practical terms, the memory used by the TEE is encrypted and isolated by hardware. Any code running outside the TEE (in the “normal” part of the OS) can only see gibberish if it tries to read TEE memory. Likewise, the TEE’s code cannot be tampered with by outside programs – the hardware will reject any attempt to alter the TEE’s runtime or inject foreign code.

How does this work? Modern processors have special hardware features that carve out a protected region of memory and processor state for the TEE. Code has to be explicitly loaded into the TEE (usually at program start or device boot), and once inside, it executes with hardware-enforced isolation. Any data leaving the TEE (if at all) can be encrypted or verified. For example, when a normal app wants the TEE to do something, it will call into the TEE through special secure calls. The TEE will perform the computation internally and return only the results. If the outside world tries to peek during execution, it only sees encrypted data. In essence, the TEE acts like an inner sanctum: only authorized code can operate on the sensitive data inside, and nothing on the outside can eavesdrop or interfere.
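To make the call-in/return-out pattern concrete, here is a minimal conceptual sketch in Rust. This is not a real TEE SDK: the function names are illustrative, and in a real SGX application the boundary below would be a hardware-mediated ECALL. The point is the shape: data goes in through a narrow entry point, and only the result comes back out.

```rust
// Conceptual sketch of the app/enclave split (illustrative names only).

/// Untrusted side: runs under the normal OS and never sees enclave internals.
fn untrusted_app(document: Vec<u8>) -> bool {
    // In a real TEE this call crosses the hardware boundary (an ECALL).
    ecall_verify(document)
}

/// Trusted side: in a real enclave this code and its memory are isolated and
/// encrypted; the OS reading this memory would see only ciphertext.
fn ecall_verify(document: Vec<u8>) -> bool {
    // Plaintext processing happens only here...
    let checksum: u32 = document.iter().map(|&b| u32::from(b)).sum();
    // ...and only the verdict crosses back over the boundary.
    checksum % 7 == 0 // stand-in for a real signature/policy check
}

fn main() {
    let doc = b"example lab report bytes".to_vec();
    println!("verified: {}", untrusted_app(doc));
}
```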

To give a simple analogy, imagine a glass box that’s heavily tinted and locked. You (the authorized program) can see inside and work with what’s in the box, but anyone outside just sees an opaque container. Even if someone manages to pick up the box (akin to gaining admin rights on the OS), the contents remain unreadable. This is what TEEs provide in computing – a hardware-backed safe zone.

A Brief History of TEEs and Why They Matter in 2025

The concept of isolating sensitive code isn’t entirely new – elements of it go back to the 1990s when industries were looking for ways to protect things like digital media and payment info. Early precursors included smart cards and secure coprocessors, and later the Trusted Platform Module (TPM) for PCs. However, the modern notion of a general-purpose TEE really took off in the 2000s and 2010s as chipmakers built these features directly into CPUs.

Fast forward to 2025, and TEEs have become increasingly important and widespread. Here's why they're novel and crucial today:

- Protection for data in use: encryption has long covered data at rest and in transit, but TEEs close the remaining gap by protecting data while it is actively being processed.
- Sensitive workloads on untrusted infrastructure: we now run incredibly sensitive operations on cloud servers and personal devices, and software-only security has often proven insufficient.
- Practical performance: unlike purely cryptographic alternatives, TEEs run code at near-native speed, so strong protection comes with relatively low overhead.

In summary, TEEs have evolved from a DRM tool into a cornerstone of privacy and security in the cloud and devices. They are novel in 2025 not because the idea is brand new, but because the context has shifted – we now rely on distributed computing and personal devices for incredibly sensitive operations, and TEEs are providing a solution at a time when software-only security often proves insufficient. The combination of strong protection during data use, with relatively low performance overhead, makes TEEs uniquely valuable.

How Are TEEs Used Today? (Everyday Examples)

To make TEEs more concrete, let's look at some common use cases and examples of companies/products using TEEs in 2025:

- Smartphone biometrics: your phone's FaceID or fingerprint data is processed inside a secure enclave, never exposed to the main OS.
- Payments and key storage: payment credentials and cryptographic keys are kept in the device's TEE or secure element.
- Media DRM: streaming devices decrypt protected content inside a TEE so that content keys never touch the normal OS.
- Confidential computing in the cloud: major cloud providers offer enclave-backed and confidential-VM instances so customers can process sensitive data on shared infrastructure.

In summary, TEEs are already all around us: in our phones, laptops, streaming devices, and increasingly in cloud servers. You might not notice them, because the user experience is just that things “are secure and just work” – you don’t get to see the enclave, but it’s working behind the scenes to keep secrets safe. The reason TEEs are trusted for these tasks is that they provide hardware-level security. Unlike a software sandbox (which can be broken if the OS is rootkitted), a properly designed TEE will hold strong even if the main OS is compromised. This makes them ideal for the most sensitive operations.

Who Builds TEEs? (Major Implementations and Players)

By 2025, virtually every major chip manufacturer and platform has some form of TEE:

- Intel: SGX for fine-grained application enclaves, and TDX for protecting whole virtual machines.
- AMD: SEV (and SEV-SNP) for encrypted VMs, built around the Platform Security Processor (PSP).
- ARM: TrustZone, the long-standing "secure world" on mobile and embedded chips, plus the newer Confidential Compute Architecture (CCA).
- Apple: the Secure Enclave coprocessor in iPhones, iPads, and Macs.
- Specialized secure elements: TPMs, smart-card chips, and similar dedicated security hardware.
- GPU vendors: confidential computing is coming to GPUs as well (NVIDIA's work here is discussed later).

In summary, the landscape is rich: Intel SGX/TDX, AMD SEV/PSP, ARM TrustZone/CCA, Apple Secure Enclave, and specialized secure elements all fall under the umbrella of TEEs or secure enclaves. Each has a slightly different threat model and use case (SGX/TrustZone for finer-grained apps, vs SEV/TDX for whole VMs, etc.), but they share the common goal of carving out a secure execution bubble in hardware. The fact that every major CPU vendor and even GPU vendors are on board highlights how important TEEs have become. It’s no longer a question of “does this platform have a TEE?” but rather “how do we use the TEE that’s there, and what are its limitations?”. And that brings us to how we at status.health leverage TEEs for our solution.

Why Status.Health Chose TEEs for Trustless Health Verification

Status.Health is building a trustless health verification layer – in plain terms, a system where users can prove certain health information (like completion of a health action, lab result, vaccination status, etc.) without ever revealing their actual private health data. Our mantra is to never collect or store users' personal health information (PHI) on our servers – instead, all the sensitive data stays on the user's device, and only a cryptographic proof of some health claim is sent out. This approach requires extreme security on the user's side, because if the user's device were compromised or if our software itself mishandled the data, that private health info could leak. That's where TEEs come in as an ideal solution for us.

Here are the key reasons we chose TEEs in our stack:

- Data never leaves the user's sphere: raw health data is processed locally inside the enclave, so our servers never collect or store PHI.
- Resilience to device compromise: hardware isolation protects the data and our verification code even if the device's OS is compromised.
- Verifiable integrity via attestation: the enclave can prove to a remote party exactly what code is running, so verifiers know results were produced in a genuine, untampered environment.
- Practical performance: enclaves execute at near-native speed, keeping verification fast enough for a consumer app on ordinary devices.

All these reasons led us to integrate TEEs at the core of our architecture. Concretely, our solution will run as a secure application inside whatever TEE is available on the user's device:

- On Intel platforms, an SGX enclave (or a TDX-protected environment on newer hardware).
- On AMD platforms, a SEV-protected environment.
- On ARM devices (most phones), the TrustZone secure world or, as it becomes available, a CCA realm.

In cases where a user’s device doesn’t have a known TEE, we will still isolate data as much as possible (using OS sandboxing or secure enclaves in software), but the gold standard will be to use hardware TEEs on Intel, AMD, or ARM. By targeting common TEEs across platforms, we want our technology to be broadly accessible.

In summary, we chose TEEs to ensure that “what happens on your device, stays on your device.” They allow us to create a health verification system that both users and verifiers can trust: users trust that their private info isn’t being leaked, and verifiers trust the results because they know they were produced in a secure, untampered environment.

TEEs vs Other Privacy-Preserving Technologies (Alternatives and Evolution)

Trusted Execution Environments are one approach to protecting data in use. There are a few other notable approaches to secure computation and privacy, each with its own pros and cons. Let’s compare TEEs with these alternatives and see how they’ve evolved:

1. Fully Homomorphic Encryption (FHE)

What it is: Homomorphic encryption allows computations to be performed directly on encrypted data, producing an encrypted result that can later be decrypted to reveal the answer. In theory, this means you could send encrypted health data to a cloud server, the server could run a program on it without ever decrypting it, and send you back encrypted results. The server learns nothing about your data – pretty magical! The idea has been around for decades, but the first fully homomorphic scheme was demonstrated by Craig Gentry in 2009 (and it was far from practical at the time).

FHE vs TEE: The big advantage of FHE is that you don’t have to trust any hardware or server – the security is purely mathematical. Even if the server is malicious, it learns nothing if the crypto holds. In contrast, with a TEE you have to trust the hardware manufacturer (and that there are no bugs). However, the downside is performance. Fully homomorphic encryption, even in 2025, is extremely computationally heavy for general-purpose tasks. Simple operations can be 1000× to 1,000,000× slower than normal computation (though it’s improving). For many complex operations, using FHE is currently not feasible – the computation might take days or require huge cloud resources for something a TEE could do in seconds. There are also limits on what algorithms can be efficiently transformed into homomorphic form.

State in 2025: FHE has evolved – there are libraries like Microsoft SEAL and Zama's TFHE-rs, and some cloud demos of FHE for specific tasks (like aggregations, or simple ML on encrypted data). It's an exciting area of research, but not yet practical for real-time or complex workflows like what we need (e.g., running OCR and AI on images is far beyond current FHE capabilities in a reasonable time). So while FHE might be the ultimate in privacy (no trust needed at all in hardware), it's not an alternative we can use today for our use case. TEEs, on the other hand, provide a very practical solution today – they run at near native speed since the data is decrypted inside the secure CPU area and computed normally. In essence, TEEs trade some trust (in hardware security) to achieve tractable performance, whereas FHE removes hardware trust but incurs a huge performance cost. As the technology evolves, FHE may someday be efficient enough to replace some TEE use cases, but we're not there yet for general computing.
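To make the "compute on ciphertexts" idea tangible, here is a toy Rust demo using unpadded textbook RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product. To be clear, this is neither FHE nor secure (tiny key, no padding) – it is only a minimal sketch of the property this section describes.

```rust
// Toy demo: textbook RSA is multiplicatively homomorphic,
// i.e. Enc(a) * Enc(b) mod n = Enc(a * b mod n).
// NOT secure and NOT fully homomorphic -- illustration only.

/// Modular exponentiation: base^exp mod modulus (square-and-multiply).
fn mod_pow(mut base: u128, mut exp: u128, modulus: u128) -> u128 {
    let mut result = 1;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    result
}

fn main() {
    // Tiny RSA key: n = 61 * 53 = 3233, e = 17, d = 413.
    let (n, e, d) = (3233u128, 17u128, 413u128);

    let (a, b) = (7u128, 6u128);
    let (ct_a, ct_b) = (mod_pow(a, e, n), mod_pow(b, e, n));

    // The "server" multiplies the ciphertexts without ever decrypting them.
    let ct_product = ct_a * ct_b % n;

    // Decrypting the combined ciphertext yields the product of the plaintexts.
    assert_eq!(mod_pow(ct_product, d, n), a * b % n);
    println!("Dec(Enc(7) * Enc(6)) = {}", mod_pow(ct_product, d, n));
}
```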

2. Secure Multi-Party Computation (MPC)

What it is: MPC involves splitting data or computations among multiple parties such that no single party can see the whole picture, but together they compute a result. For example, you could have 5 servers each hold a random piece of your health record, and they run a protocol where at the end they collectively compute “Is BP > X?” without any server seeing the actual BP. It’s like pieces of a puzzle – no one has the whole puzzle, but they can jointly compute a function of the complete image.
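As a minimal sketch of the splitting idea (not a full MPC protocol), here is additive secret sharing in Rust: the secret is split into three shares that individually look random but sum back to the secret modulo a prime. The share values are fixed here for reproducibility; a real system would draw them from a cryptographic RNG.

```rust
// Additive secret sharing: split a secret into 3 shares mod a prime.
// Any single share reveals nothing; all 3 together reconstruct the secret.

const P: u64 = 2_147_483_647; // prime modulus (2^31 - 1)

/// Split `secret` into 3 shares that sum to `secret` mod P.
/// r1 and r2 should come from a cryptographic RNG in practice.
fn share(secret: u64, r1: u64, r2: u64) -> [u64; 3] {
    let s3 = (secret + P - r1 % P + P - r2 % P) % P;
    [r1 % P, r2 % P, s3]
}

fn main() {
    let blood_pressure: u64 = 128; // the secret input

    let shares = share(blood_pressure, 1_234_567, 890_123);

    // Each "server" holds one share and learns nothing from it alone.
    for (i, s) in shares.iter().enumerate() {
        println!("server {} holds share {}", i + 1, s);
    }

    // Summing all shares mod P reconstructs the secret.
    let reconstructed = shares.iter().fold(0u64, |acc, &s| (acc + s) % P);
    assert_eq!(reconstructed, blood_pressure);
    println!("all three shares together reconstruct: {}", reconstructed);
}
```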

MPC vs TEE: MPC, like FHE, is cryptography-based and doesn’t rely on special hardware. It often requires at least two or more parties/servers that do not collude. The benefit is you aren’t putting all your trust in one hardware vendor; you assume at least one of the parties is honest or that they won’t all collude. The drawback is complexity and interaction. MPC protocols can be very complex to implement, and they usually incur significant communication overhead (lots of messages passed between parties). For a use case like ours (user’s device verifying data), MPC would require, say, splitting the data among multiple cloud nodes or multiple user devices – that’s complicated and would introduce latency and many points of failure. In our scenario (a single user’s device and one service), MPC doesn’t naturally fit, because MPC shines when you have multiple independent participants each with secret inputs. Here the user’s device has all the secret data; involving other parties just to help compute on it is overkill – and ironically would expand the “attack surface” (more places bits of health data might transit through). We specifically decided not to go the MPC route because it would “overcomplicate” our use case and introduce new vectors for data handling that we want to avoid entirely (our philosophy: keep data handling confined to the user’s own sphere as much as possible).

Evolution: MPC protocols have improved (there are now faster techniques, and even “MPC-as-a-service” offerings for things like private set intersection, etc.). But similar to FHE, there’s a trade-off: MPC can have heavy computation and communication costs, especially for complex computations. It might require, say, 3 different cloud providers to each run a part of the computation – not exactly user-friendly for a consumer product. TEEs in contrast let you do it all in one place, fast, but you trust that one place (the enclave). An interesting note: some solutions actually combine MPC with TEEs (each party uses a TEE to further secure their part), showing these techniques can complement rather than strictly compete.

3. Zero-Knowledge Proofs (ZKPs / zkVMs)

What it is: Zero-knowledge proofs allow someone to prove a statement is true without revealing why it's true or any additional info. For example, I could prove to you "I know a medical secret that hashes to value X" without telling you the secret. In the context of computations, there are zkVMs or circuits that can prove that a certain program was executed on some inputs and produced a certain output, without showing the inputs. This is very relevant to us – it's exactly how we plan to let users prove health facts without revealing the underlying data.

ZKPs vs TEE: On the surface, ZK proofs sound like another way to achieve what TEEs do: hide data but still get verifiable results. The difference is, ZKPs are purely mathematical and verifiable by anyone after the fact, whereas TEEs rely on hardware and often require trusting attestations from that hardware. The big issue with ZKPs has been performance and complexity of implementation. Proving arbitrary computations in zero-knowledge can be extremely slow and resource-intensive, especially on typical user devices. For instance, proving something like “this image has a QR code with text ‘COVID Negative’” might be insanely heavy to do as a pure ZK circuit (imagine representing a whole OCR algorithm as an arithmetic circuit). TEEs can do that image processing in milliseconds because they run it natively, whereas a ZK approach might take many minutes or hours of computation to generate a proof, or the proof might be gigabytes in size, etc.

However, ZKPs have evolved too – there are now zkVMs (like Risc0, which we’re targeting, and others like StarkWare’s Cairo, Polygon zkEVM, etc.) that make it easier to prove general computations. Still, ZKPs often require writing custom circuits or using special languages, and even then, for complex tasks they can be impractical. They really shine in scenarios like verifying simpler computations or as part of blockchain scaling (zkEVM verifies lots of transactions with one proof).

Our approach actually combines ZKPs with TEEs – we use the TEE to do the heavy lifting on sensitive data (fast, secure), and then generate a zero-knowledge proof of the result or of a summary of the computation. In other words, the TEE helps us deal with the raw data safely, and the ZKP helps us make the outcome verifiable to anyone without trust. This way, we mitigate the weaknesses of each: the TEE’s output can be fed into a zkVM to produce a succinct proof that can be checked independently of the hardware trust. (We’ll detail this in the Appendix workflow.) Many experts see TEEs and ZKPs as complementary rather than mutually exclusive. One can use a TEE to quickly do something and then a ZKP to prove it was done right. Alternatively, one might use a ZKP to verify what a TEE is claiming (for instance, some projects convert TEE remote attestation into a ZK proof so it can be verified on a blockchain cheaply).

So in summary: pure ZK is more trustless (no hardware assumptions) but expensive; TEE is efficient but requires hardware trust. Combining them often gives a sweet spot – and that’s what we aim for: TEEs for privacy + ZKPs for trustless verification.

4. Virtualization and “Software” Sandboxes vs TEEs

One might say: “Can’t you just run a secure process or container, isn’t that isolation?” There are software-based isolation techniques like sandboxes, VMs, containers, etc. These help, but they are ultimately enforced by software (the hypervisor or OS kernel). If those layers are compromised, the isolation can break. For example, a virtual machine hypervisor bug could let an attacker peek into a VM’s memory. TEEs add hardware enforcement – e.g., memory encryption tied to CPU keys that the hypervisor cannot bypass even if it wanted to. It’s a stronger guarantee.

Over the years, an evolution has been that virtualization and TEEs are coming together – e.g., AMD SEV essentially takes a standard VM and makes it a TEE by encrypting its memory and checking integrity. So the line is blurring. But a plain software container is not as strong as a TEE, and things like kernel attack surface exist. For our needs (handling raw health data), we treat anything not hardware-protected as potentially vulnerable. That’s why even though we could just run a normal background process on the device, we choose to leverage the hardware TEE features when available, because it’s a world of difference if the OS is compromised.

Evolution note: Newer CPU architectures are adding even more robust isolation (like Intel’s upcoming features to encrypt memory of general processes, etc.). We anticipate that over time, the “baseline” security of all computing will rise (perhaps all computing will be confidential computing by default down the road). Until then, TEEs provide an optional but powerful security layer.

In conclusion, TEEs are not the only tool in the privacy tech toolbox. We carefully considered alternatives like homomorphic encryption and MPC, but those introduced significant complexity or performance penalties for our scenario. Zero-knowledge proofs are a must for our trustlessness, but by pairing them with TEEs we avoid the pitfalls of ZK alone (which could be too slow on user devices if we tried to do everything in ZK). Each approach has its place: FHE/MPC for maximum trustlessness but currently limited/hard, ZK for provable correctness (with some performance cost), and TEEs for practical, high-performance privacy with minimal changes to code. Our strategy is to use each where it fits best: TEEs to securely execute the logic and ZK to prove the outcome in a trustless manner. This gives users and relying parties the best of both worlds – strong privacy and strong integrity.

Appendix: Deep Dive – How TEEs and zkVMs Work Together in Status.Health

For those interested in the nitty-gritty technical details, this appendix explores how TEEs provide isolation at a lower level, and how we integrate a zkVM (zero-knowledge virtual machine) into the pipeline for generating proofs of health actions. This section will cover TEE isolation mechanisms, enclave interaction (syscalls, etc.), and our proof generation workflow step-by-step.

TEE Isolation Mechanisms and Enclave Execution Model

A TEE's magic comes from hardware features that isolate memory and restrict control flow across the boundary. Here's how it typically works under the hood:

- Protected, encrypted memory: the CPU reserves a region of RAM for the enclave and encrypts it with keys held in hardware; code outside the enclave that reads this memory sees only ciphertext.
- Controlled entry and exit: code enters the enclave only through defined entry points (ECALLs, in SGX terminology), and the enclave requests OS services like file or network I/O through OCALLs, treating whatever comes back as untrusted input.
- No privileged backdoors: neither the OS kernel nor the hypervisor can read or modify enclave state; the boundary is enforced by the CPU itself.
- Measurement and attestation: at load time the hardware hashes the enclave's code, and it can later produce a signed attestation of that measurement so a remote party can verify exactly what code is running inside (a toy sketch of this flow follows below).

In short, TEEs create an isolated execution island. The hardware encrypts and walls off memory, strictly manages how code enters/exits the island, and provides attestation to verify what’s running inside. The parent OS is still needed for general resources, but it cannot violate the enclave’s integrity or confidentiality (absent bugs). This model has been formally studied and tested – though side-channel attacks (like observing access patterns or cache timing) have shown that enclaves are not perfect if an attacker can measure those. CPU vendors have been addressing these with mitigations (flush cache, data-oblivious techniques, etc.), and it’s an ongoing area of research. But for most practical purposes, breaking a modern TEE requires a very sophisticated hardware attack or an exploit in the enclave code itself.
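To illustrate the measurement-and-attestation flow in miniature, here is a toy Rust sketch. It uses the sha2 crate for hashing and fakes the hardware signature with a keyed hash; real TEEs use asymmetric signatures rooted in CPU-fused keys plus a vendor attestation service, so treat every name here as illustrative.

```rust
// Toy attestation sketch: hardware measures (hashes) the enclave code and
// "signs" the measurement; a verifier checks the signature and compares the
// measurement to the hash of the code it expects. Requires the sha2 crate.
use sha2::{Digest, Sha256};

/// Stand-in for a hardware signature (real TEEs use asymmetric crypto).
fn toy_sign(key: &[u8], msg: &[u8]) -> Vec<u8> {
    Sha256::digest([key, msg].concat()).to_vec()
}

fn main() {
    let enclave_code = b"verification logic v1.0";
    let hw_key = b"cpu-fused-secret"; // fused into the CPU in real hardware

    // 1. At load time, the hardware measures (hashes) the enclave code...
    let measurement = Sha256::digest(enclave_code);
    // 2. ...and signs that measurement, producing an attestation "quote".
    let quote = toy_sign(hw_key, &measurement);

    // 3. A remote verifier checks the signature and compares the measurement
    //    against the hash of the code it expects to be running.
    let expected = Sha256::digest(b"verification logic v1.0");
    assert_eq!(measurement, expected);
    assert_eq!(quote, toy_sign(hw_key, &expected));
    println!("attestation verified: enclave runs the expected code");
}
```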

Status.Health Proof Generation Workflow in the TEE + zkVM

Now let’s connect this with our specific use case: generating a proof of a health action or status. Here’s an example scenario and how data flows through our system:

Use Case Example: A user wants to prove to a third party (say, a university health office) that they have a valid negative COVID-19 test result from an approved lab, without revealing the actual lab report or any personal details on it.

Step 1: Data Ingestion into the TEE – The user either takes a photo of the lab result or downloads a digital copy. Our application, running on the user’s device, will pass this data into the TEE enclave. If it’s a photo, the image (maybe containing the test QR code or text) is transferred into the enclave via an ECALL when the enclave starts the verification function. The enclave now has the raw health data (image of the test result) inside its protected memory. From here on, all processing happens internally. The user’s device may also supply some reference data, like a list of authorized lab public keys (so the enclave knows what signature to verify). That reference info can be considered public and can come through the ECALL as well or be pre-loaded.

Step 2: Verification and Computation inside TEE – Inside the enclave, our code will:

- Decode the document: read the QR code or OCR the relevant text fields from the image.
- Verify authenticity: check the lab's digital signature against the list of approved lab public keys supplied in Step 1.
- Extract the claim data: lab ID, test date, and result (e.g., NEGATIVE).
- Apply policy checks: confirm the result meets the required criteria and that the test is recent enough.

During this process, any external help needed (like the current time, or an API call to check something) is handled carefully. For example, if the enclave needs the current date/time (to ensure the test is recent), it might call out (OCALL) to get the time. This could be a vulnerability if we're not careful: a malicious OS could lie about the time. In a robust design, we might use multiple sources or secure time from a trusted clock (some TEEs provide monotonic counters or secure time via a trusted source). For simplicity, assume we get the current timestamp from the device – we then double-check that it isn't outside expected bounds, as in the sketch below.
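For instance, a defensive timestamp check might look like the following sketch (the 72-hour validity window is a hypothetical policy chosen for illustration):

```rust
/// Returns true if the OS-supplied `now` (unix seconds) is plausible:
/// not before the signed test date and within the allowed validity window.
fn timestamp_plausible(now: u64, test_date: u64, max_age_secs: u64) -> bool {
    now >= test_date && now - test_date <= max_age_secs
}

fn main() {
    let test_date = 1_751_328_000; // 2025-07-01 00:00:00 UTC
    let claimed_now = 1_751_500_000; // timestamp reported by the untrusted OS
    let window = 72 * 3600; // hypothetical 72-hour validity policy
    println!("accept: {}", timestamp_plausible(claimed_now, test_date, window));
}
```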

The bottom line is, by the end of this step, the enclave has determined something like: “LabTest from XYZ Lab, ID 12345, for John Doe, on 2025-07-01, result = NEGATIVE, signature verified”. The enclave will then extract the relevant claim: in this case, perhaps “Test XYZ on 2025-07-01 is negative and from approved lab”. Importantly, it will drop any personal identifiers not needed (like the name “John Doe” if the proof doesn’t need to identify the person, or perhaps we hash it if needed for linking). The enclave essentially forms the statement to be proved, in a succinct form, and prepares to output only that.

Step 3: Generating a Zero-Knowledge Proof (zkVM) – Now comes the zkVM part. Inside the enclave, once the data is verified and we have the relevant info, we invoke a zkVM (Zero-Knowledge Virtual Machine), such as Risc0, also within the enclave. Risc0 is a system where you can run code and it produces a cryptographic receipt (a zero-knowledge proof) that the code ran correctly on some input. In our case, we will have a zkVM guest program that perhaps does a trivial computation: it takes as input the results of the enclave’s verification (like a boolean “valid_test” and maybe some public data like test date or an ID) and it does essentially assert(valid_test == true) and outputs the fact. The reason we use the zkVM is so that we can give the outside world a proof that this whole process was done correctly, without revealing the sensitive intermediate data. The zkVM’s proof will not include the private inputs (like the image or user’s name, etc.) – those remain secret, only the statements we allow will be revealed.
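As a sketch of what such a guest program might look like, here is a minimal Risc0 guest in Rust. It assumes a recent risc0_zkvm guest API (env::read / env::commit) plus serde; the HealthClaim struct and its fields are hypothetical illustrations, not our production format.

```rust
// guest/src/main.rs -- minimal Risc0 guest sketch (names are illustrative).
use risc0_zkvm::guest::env;
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct HealthClaim {
    valid_test: bool,      // set after signature + policy checks
    test_date: String,     // public output: date of the test
    lab_id_hash: [u8; 32], // hash of the approved lab's ID (no direct PII)
}

fn main() {
    // Private input: the claim prepared inside the enclave. It never appears
    // in the proof; only what we explicitly commit below is revealed.
    let claim: HealthClaim = env::read();

    // The proof can only be generated if this assertion holds.
    assert!(claim.valid_test, "test failed verification");

    // Commit only the public fields to the journal (the proof's public output).
    env::commit(&(claim.test_date, claim.lab_id_hash));
}
```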

Actually, in many cases, we might integrate the verification logic directly into the zkVM program. There's a design choice here:

- Option A: do all the heavy verification in the enclave and have the zkVM prove only a trivial final statement (essentially asserting the enclave's "valid" flag). This is fast to prove, but the proof's meaning then rests partly on trusting the enclave's work.
- Option B: run the full verification logic (signature check, field parsing) inside the zkVM guest itself, so the proof cryptographically covers those steps. This is more trustless, but much slower to prove.

We are leaning towards a hybrid: use enclave to handle data and prepare inputs, and let zkVM prove the critical parts that a verifier would care about (like that the signature was valid and result was negative). The Risc0 zkVM essentially acts as a mini RISC-V computer inside which we run our verification code one more time, but this time with cryptographic tracing. Risc0 will output a proof (and a hash of the code that was run) that we (or anyone) can verify later. Importantly, Risc0’s proof is zero-knowledge – it does not reveal the test data itself, only the conclusions we programmed it to reveal.

This proof generation inside the enclave ensures that even during proof generation, the sensitive data doesn’t leak. Without the enclave, running a zkVM on plaintext data could risk that if the device is compromised, those inputs might leak via memory or disk during the proving process. The TEE keeps it all in a shielded environment.

Step 4: Outputting the Proof – Once the zkVM finishes, we have a proof (sometimes called a receipt in Risc0 terms). The enclave can then output this proof to the outside world via an ECALL return or OCALL. The proof is just some bytes (maybe a few kilobytes), which is fine to share. It contains no sensitive personal data, just cryptographic assertions. Along with the proof, we include any public data that needs to accompany it (for example, the statement “Test result negative as of July 1, 2025” and perhaps a unique ID or something). This proof and statement can now leave the enclave – we send it to our server or directly to the verifier.

Notably, the verifier (the university in this case) doesn’t need to run a TEE or trust any server. They can independently verify the zero-knowledge proof using the public verification key of our zkVM. This will confirm that some approved program (our verification logic) ran and produced an output “negative test” given some hidden inputs, without ever revealing those inputs. Additionally, if we incorporate the TEE’s attestation in the proof (or as part of the statement), the verifier might also know that the data really came from the user’s device and not, say, a simulated environment. There are schemes to convert an enclave’s attestation (which says “this code ran on a real enclave”) into a form that can be included in a ZK proof. That gets complex, but it’s possible.

Even without that, the user’s proof is tied to our program logic, so if someone tried to cheat by writing their own program that outputs “negative” without real data, the proof’s program hash would differ and the verifier would reject it (they will only accept proofs generated by the hash of our genuine program). Risc0, for example, includes the code’s cryptographic hash in the proof, so you know which code was executed. We will publish the hash of our zkVM guest code that corresponds to legitimate verification. Thus, trust is anchored in the code (which can be audited) and the math, rather than any one person or system.

Step 5: Proof Verification by Verifier – The final step happens on the verifier’s side (which could be our server or a third-party service using our library). They take the proof and run a verification algorithm (this is usually very fast – milliseconds) to check its validity. If it checks out, they now have high assurance that the user had a valid health credential meeting the criteria. And they learned nothing else: not the user’s name (if we didn’t include it), not the exact document image, nothing – just the claim proven. If the verifier is our server aggregating results (say, for generating a health pass), even we don’t see the raw data, we just see proof outcomes. This significantly reduces liability and privacy concerns.
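A sketch of that verifier-side check, assuming the Risc0 host API (Receipt::verify and Journal::decode): the guest_id parameter is the published hash (image ID) of our genuine guest program, generated at build time in a Risc0 project.

```rust
// Verifier-side sketch (names and journal layout match the guest sketch above).
use risc0_zkvm::Receipt;

fn verify_health_proof(receipt: &Receipt, guest_id: [u32; 8]) -> bool {
    // verify() checks both the cryptographic proof and that it was produced
    // by exactly the code whose hash is `guest_id`.
    if receipt.verify(guest_id).is_err() {
        return false;
    }
    // Decode the public journal committed by the guest: (test_date, lab hash).
    match receipt.journal.decode::<(String, [u8; 32])>() {
        Ok((test_date, _lab_id_hash)) => {
            println!("verified: negative test dated {test_date}");
            true
        }
        Err(_) => false,
    }
}
```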

Syscall/Interaction considerations: In implementing this workflow, we have to handle some practical aspects:

- Enclave code cannot make system calls directly; every file, network, or clock access has to be proxied through OCALLs to the untrusted OS, so we minimize those exits and treat everything they return as untrusted input.
- Time is the classic example: as discussed in Step 2, any OS-supplied timestamp gets sanity-checked rather than trusted blindly.
- Enclave memory is limited, so large inputs (the document image) and the zkVM's working set have to be managed carefully within the protected region.

To illustrate the combined power: Using a TEE alone, the verifier would have to trust the attestation from Intel/ARM that “yes, this user’s enclave says it saw a negative test”. Using ZK alone, the user could prove a statement but might have trouble proving authenticity of the data (without a signature check inside ZK, which is heavy). By doing both, the heavy lifting (like RSA/ECDSA signature verification of the lab’s result) can be done in hardware, and then a succinct proof conveys that fact. This approach was even noted in industry research: combining faster TEE proofs with zkVM can yield better performance than doing either alone for certain tasks. Our system effectively beats the naive baseline by using the right tool for each part.

Diagram: Putting It All Together

App Enclaves and Confidential VMs on CPUs. This diagram (from Microsoft Azure docs) highlights two models: on the left, a fine-grained app enclave (like Intel SGX) where an untrusted app calls into a trusted enclave part (steps 1-7 show partition, create enclave, attest, call, execute, return); on the right, a whole virtual machine protected by encrypted memory (AMD SEV-SNP or Intel TDX). In our design, we focus on the left model – we partition our application so that the “Trusted part” (verification logic) runs in an enclave on the user’s device, isolated from the untrusted part (the regular app/UI). The enclave executes the sensitive operations and attests to its code, similar to the left diagram. Meanwhile, the right side is analogous to cloud scenarios which we could use if we ever process in a back-end (we could host a service in a confidential VM). This figure underscores the hardware separation (lower boxes: OS/VMM vs hardware). Our use of zkVM adds an extra layer of assurance on top of this enclave execution.

Future Considerations and Evolution

Our current approach is to develop on existing TEEs (Intel, ARM, etc.) and use a zkVM like Risc0. We anticipate optimizing the zkVM for use within enclaves. If needed, we might fork Risc0 to better suit our performance needs (for example, reducing proving time by specializing circuits for common health data checks). We are also keeping an eye on emerging tech like GPU TEEs (imagine offloading heavy AI tasks to a secure GPU enclave – NVIDIA is working on this). That could one day allow running a full AI model on health data in a TEE for advanced analysis, then proving the output.

We deliberately avoid approaches like multi-party or cloud-based splitting of data – those would complicate compliance and increase attack surface, which is contrary to our goal. Simplicity and strong local security are our guiding principles.

In summary, the combination of TEEs and zkVMs in status.health’s architecture provides end-to-end trustworthy computation: the data is protected at the source by the enclave, and the result is made trustworthy to others by the zero-knowledge proof. By isolating data from the host OS and from our own service, TEEs ensure that even if something goes wrong in those layers, user data remains safe. And by producing verifiable proofs, we remove the need for any party to simply “trust” the output – they can mathematically verify it. This layered approach – hardware security + cryptographic verification – is, we believe, the future for sensitive personal data applications, not just in health but beyond. It puts users in control of their data while still enabling them to use that data in beneficial ways (like proving they meet a requirement) without compromising privacy.

Thank you for reading this deep dive! If you have questions or want more details about any part of this architecture or about TEEs in general, feel free to reach out on our blog or forums. We're excited about the potential of TEEs and zk-proofs to change the game for health data privacy and trust.