Great project. I always try to post Keystone updates here :)
Can you maybe compare this work with Hex Five MultiZone Security?
Are there plans to integrate this into Rocket (BOOM), PULP, lowRISC, and other SoCs?
Can you do all that a modern iPhone Secure Enclave provides?
Keystone has been demonstrated to run on Rocket Chip on FPGAs using FireSim (https://fires.im). There are docs on how to do so here:
There's also a branch of BOOM (https://github.com/riscv-boom/riscv-boom/tree/secureboom) that supports Keystone, we're just waiting for it to be merged into master.
It is great to see interest in Keystone on here!
A couple comments that may help clarify what we are aiming for and where we are right now.
Keystone is a (still pretty early!) research project and enclave/TEE
framework for general use on RISC-V processors. We'll be using it
ourselves heavily as a platform for secure hardware and systems research.
There are a couple logical parts to Keystone:
- The security monitor (riscv-pk/sm) is the core of this, and is the
TCB for the entire system. It provides privacy by isolating enclave
memory from the rest of the system.
- The SDK/runtime provides one possible (minimal) example of how to
target and host applications in the enclave; we'll be making new
tools and runtimes as targets going forward.
- The demo, which uses our minimal sdk and runtime.
The demo isn't particularly flashy, but demonstrates how to accomplish
privacy and integrity against a malicious OS on a remote RISC-V
machine. Right now we don't have protection mechanisms against
side-channels and the like, but those are some of our next goals.
We're excited to build a community around open-source secure enclave
development on RISC-V and are actively looking for collaborators and
other researchers. Expect to see significantly more documentation and
information on keystone-enclave.org and docs.keystone-enclave.org in
the next week. Also check out the slides from our recent talk here:
> there's not much there
The Workshop on Building Open Source Secure Enclaves page has little information about what was presented. The Google slides are superficially interesting: https://keystone-enclave.org/workshop-website-2018/slides/Sc...
Mailing list: https://groups.google.com/forum/#!forum/keystone-enclave
I'm not saying there's anything missing, just that there isn't much in it; it's a mirror of the already-existing "tethered RISC-V" proxy kernel, with a trivial hello-world app on top of it (the app listens for messages over the "HTIF" interface, uses NaCl to encrypt/decrypt them, and implements essentially "hello world").
Compare to the object/capability schemes in modern L4s.
riscv-pk contains the Berkeley bootloader for several open-source RISC-V processors, and Keystone adds a security monitor that isolates physical memory using PMP (physical memory protection). riscv-pk/sm contains most of the code for the security monitor:
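For a sense of what PMP-based isolation involves: PMP entries can cover naturally aligned power-of-two (NAPOT) regions, where trailing 1 bits in the pmpaddr register encode the region size. This is a small Python model of that encoding (illustrative only, not the security monitor's actual code, which writes these values to CSRs in machine mode):

```python
def pmp_napot_encode(base: int, size: int) -> int:
    """Encode a naturally aligned power-of-two region as a PMP NAPOT
    pmpaddr value: the address shifted right by 2, with the low bits
    set to 1s to encode the size (t trailing ones => 2^(t+3) bytes)."""
    assert size >= 8 and size & (size - 1) == 0, "size must be a power of two >= 8"
    assert base % size == 0, "base must be aligned to its size"
    t = size.bit_length() - 1 - 3      # number of trailing 1 bits
    return (base >> 2) | ((1 << t) - 1)

def pmp_napot_decode(pmpaddr: int):
    """Recover (base, size) from a NAPOT pmpaddr value."""
    t = 0
    while (pmpaddr >> t) & 1:          # count trailing ones
        t += 1
    base = (pmpaddr & ~((1 << t) - 1)) << 2
    return base, 1 << (t + 3)

# e.g. an 8 KiB enclave region at 0x8020_0000 (addresses are illustrative)
addr = pmp_napot_encode(0x8020_0000, 0x2000)
assert pmp_napot_decode(addr) == (0x8020_0000, 0x2000)
```

The monitor's job is then to program entries like this so that enclave memory faults for every other context.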
> For the same application, Apple uses L4
Well, I don't know what they're doing applications-wise, but there's a seL4 port for RISC-V. Nothing seems to really be stopping anyone from putting enclave applications on seL4 in a RISC-V processor.
This appears to be the meat of it:
... there's not much there.
(For the same application, Apple uses L4).
DRM is certainly a use case and was widely maligned as the idea behind early TEE / keystore concepts like Intel TPM, but they’re much more broadly useful. Common use cases include trusted boot (verification of OS components in an environment an attacker can’t tamper with) and secure key management (private keys which aren’t accessible to even a root level attacker). Secure key management enables a variety of trust scenarios like machine level root identity for service to service authentication in the data center (see Google Titan). TEE with access to biometric hardware also provides strong user identity, as is the case in the iPhone where data can be encrypted using a key which is undiscoverable without access to the user’s biometrics like fingerprint or face print and theoretically impossible to exfiltrate for offline use.
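The key-management pattern described above can be illustrated with a toy sketch: the key is generated inside a boundary and never returned, and callers only get operations on it. Everything here is illustrative (HMAC stands in for real signing, and Python scoping stands in for hardware enforcement):

```python
import hashlib
import hmac
import os

class ToyEnclave:
    """Illustration of TEE-style key management: the private key is
    created inside and is never exposed; callers can only request
    operations. A real TEE enforces this boundary in hardware."""

    def __init__(self):
        self._key = os.urandom(32)   # never leaves the "enclave"

    def sign(self, msg: bytes) -> bytes:
        return hmac.new(self._key, msg, hashlib.sha256).digest()

    def verify(self, msg: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.sign(msg), tag)

e = ToyEnclave()
tag = e.sign(b"service-to-service request")
assert e.verify(b"service-to-service request", tag)
assert not e.verify(b"tampered request", tag)
```

Even a root-level attacker in this model can ask for signatures but cannot exfiltrate the key for offline use, which is the property the iPhone/Titan examples rely on.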
> DRM is certainly a use case and was widely maligned as the idea behind early TEE / keystore concepts like Intel TPM
As I recall from my skimming on the issue back then, and from what Wikipedia says about it, the criticism mostly centered on the possibility of a manufacturer like Intel requiring booted software to be trusted by them (IOW, trusting Windows but not Linux and others because money, further entrenching Microsoft's monopoly on consumer OSes):
> The concerns include the abuse of remote validation of software (where the manufacturer—and not the user who owns the computer system—decides what software is allowed to run
That's actually part of the "trusted boot" feature you mention, and not what allows for DRM.
There were also concerns of:
> possible ways to follow actions taken by the user being recorded in a database, in a manner that is completely undetectable to the user.
But that seems like an issue with using closed source stuff in general, not specifically of TEEs.
Anyway, I can see how DRM can always be maligned from a GNUish we-should-never-trust-closed-source perspective. While I very much appreciate the ideals that GNU/FSF promotes, I worry a bit about the current trend of software vendors preferring to provide software as a web service when it doesn't really provide any technical benefit from the user's perspective. I know there are other reasons for doing so, but it seems to me that a big reason for the trend is that it's the most effective and available way to avoid piracy.
I wonder if widespread availability of TEEs (ones that wouldn't have the trusted boot issue mentioned) and a standard procedure like what I've mentioned in this SE question could reverse that trend of further making the web an operating system, moving ownership/control of our data and processes from our own machines to various online entities.
> I worry a bit about the current trend of how software vendors prefer to provide software as a web service when it doesn't really provide any technical benefit from the user's perspective.
Given that I'm not going to have control over the software either way, I'd rather it were running on their hardware, safely isolated on the other side of an Internet link, than on hardware which I paid for and nominally own but which has been partitioned off for someone else's use.
TEEs and TPMs have legitimate uses, but only so long as they are fully controlled by the owner of the device. That implies that there are no pre-installed keys which the owner doesn't control: to a remote exploiter, an emulated TEE/TPM should be indistinguishable from an official hardware device. Unfortunately this is not something that can be designed into the hardware, short of omitting the feature altogether, since non-owner-controlled keys could be installed at any point prior to final delivery. Erasing them after delivery is no good; the mere expectation that the manufacturer's key is present is enough to make treacherous remote attestation practical. Devices controlled by their owners should be the norm, not second-class citizens.
The problem is ultimately a social one, not a technical one, but the technical capabilities of TEEs and TPMs are empowering the wrong side. From one point of view they may just be tools, but they're tools which are more readily used against the interests of device owners than for them.
Think cloud computing. You can potentially run your webserver or database inside an enclave in the cloud such that the cloud provider (e.g. Amazon) has no way of accessing the unencrypted data it processes.
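For that to work, the client needs a way to check what's running before handing over secrets, i.e. remote attestation. Here's a toy model of the flow (all names are illustrative, and HMAC with a shared device key stands in for the asymmetric signatures and hardware root of trust a real scheme uses):

```python
import hashlib
import hmac

DEVICE_KEY = b"burned-in-at-manufacture"   # stand-in for a hardware root of trust

def measure(enclave_code: bytes) -> bytes:
    """The 'measurement' is just a hash of the loaded enclave code."""
    return hashlib.sha256(enclave_code).digest()

def attest(enclave_code: bytes):
    """Device side: return the measurement plus a keyed tag over it
    (a real design signs with a key the client can verify but not use)."""
    m = measure(enclave_code)
    return m, hmac.new(DEVICE_KEY, m, hashlib.sha256).digest()

# Client side: only provision the database key if the measurement matches
# the exact enclave binary the client expects AND the tag verifies.
expected = measure(b"my-database-enclave-v1")
m, sig = attest(b"my-database-enclave-v1")
ok = hmac.compare_digest(m, expected) and hmac.compare_digest(
    sig, hmac.new(DEVICE_KEY, m, hashlib.sha256).digest())
assert ok
```

If the provider swaps in different code, the measurement changes, the check fails, and the client never sends the key, so the provider only ever sees ciphertext.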
This is the first I've read of TEEs, though I've heard mention of Enclaves on iPhones without really looking into that.
Do I understand correctly that these are basically a hardware execution environment (processor + ram?) where code and data need to be input encrypted with a public key and that the private key is unobtainable to the user? Is that the key feature? For users to be able to execute code without being able to know what that code is and therefore be unable to copy and execute it elsewhere (aka engage in piracy)?
AMD is doing a lot of open source work in this area as well.
I just posted a question to SE if anyone's interested.
The project appears to vastly predate reliable Rust on RISC-V, so I'm not really sure what other production-ready options they have.
Seems like a shame to write this in a memory unsafe language when it requires such a high-assurance level to be useful.
It's defined in machine/htif.c. HTIF is a RISC-V thing; see slide 12 of:
There's (I think) an analogous design in the Apple SEP; search "mailbox" in:
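Roughly: the tethered target can't do I/O itself, so it packs the syscall number and arguments into a buffer (magic_mem in riscv-pk's machine/htif.c), posts the buffer's address to the tohost register, and the host performs the syscall and writes the return value back. A toy Python model of that handshake (the names magic_mem/tohost/fromhost and SYS_write = 64 are from riscv-pk and the RISC-V Linux ABI; the host dispatch here is invented):

```python
SYS_write = 64          # RISC-V Linux syscall number for write()

memory = {}             # fake target memory: address -> 8-word buffer
tohost = fromhost = 0   # the two HTIF mailbox registers

def target_syscall(num, a0, a1, a2):
    """Target side: package the request, post it, wait for the reply."""
    global tohost, fromhost
    buf_addr = 0x1000
    memory[buf_addr] = [num, a0, a1, a2, 0, 0, 0, 0]   # 'magic_mem'
    tohost = buf_addr            # tell the host where the request lives
    host_poll()                  # in reality the host polls asynchronously
    while fromhost == 0:         # spin until the host signals completion
        pass
    fromhost = 0                 # acknowledge
    return memory[buf_addr][0]   # host wrote the return value here

def host_poll():
    """Host side: read the request, perform it, write the result back."""
    global tohost, fromhost
    buf = memory[tohost]
    num, a0, a1, a2 = buf[:4]
    if num == SYS_write:         # host does the I/O on the target's behalf
        buf[0] = a2              # pretend all a2 bytes were written
    tohost = 0
    fromhost = 1                 # signal completion

assert target_syscall(SYS_write, 1, 0x2000, 13) == 13
```

So "proxying to the debugging host" just means the host machine (Spike/FESVR, or the FPGA's frontend server) executes the syscall for the target over this shared-memory mailbox.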
Thanks, those slides have some great info.
What exactly does it mean to proxy a syscall to the debugging host?
Thanks for your interest!
We've just posted these slides: https://keystone-enclave.org/files/keystone-risc-v-summit.pd...
Keystone is at an early stage, and the first version contains a fairly minimal set of functioning components.
Can anyone explain a bit more context around where this would fit in a running system? Would it handle cryptographic operations like key generation, signing etc? How does the boot loader compare to other projects like libreboot?
All of the syscalls in the GitHub repo just set flags in something called magic_mem and then call htif_syscall, for which I can't find a definition.
Unix was born from the ashes of Multics, which was big and unusable.
As far as I know, Multics was an answer to Unix, not an ancestor... Multics was named after Unix to be its opposite, and evolved into Plan 9.
Wikipedia says otherwise. Multics was started in 1964, whereas Unix is from the early 1970s:
In 1964, Multics was developed initially for the GE-645 mainframe [..] Bell Labs pulled out of the project in 1969; some of the people who had worked on it there went on to create the Unix system.
Mh, I have to refresh my knowledge of the history then... Perhaps I mixed up ITS and Multics...
Mh, Solaris in the past offered something similar with Trusted Extensions... The final result was a big, bureaucratic, unusable mess...
A related project is https://cryptech.is, an open hardware HSM.
If they're going to the trouble of building a FOSH HSM, they ought to add a TCSRNG like EntropyKey -> egd.