anyway, your CFI approach is inferior to RAP, not sure why it'd deserve singling out among the other inferior works.
Does RAP protect access to the MMU? Specifically CR0. If not, then I’m sorry, you lose. But yeah, I love RAP. Way better than the policy enforced by KCoFI, which is obviously terribly imprecise for returns. And yes, I totally missed citations in the NK paper, sorry.
needless to say, i've got plans for addressing that (KERNSEAL from 2005 and some more from later years), but priorities have been elsewhere, so i've got nothing to present to you for now.
Have an approach that efficiently controls all aliases; I’d be interested to hear what you think. Basic idea: map all PTs as read-only and expose a small update interface while mediating access to CR0, CR3, CR4, and EFER. Overhead is less than 3% for kcompile. nestedkernel.org
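To make that interface a bit more concrete, here is a minimal C sketch of what a mediated page-table update path could look like under those assumptions (read-only PTs, a single trusted write path, CR0.WP toggled only inside it). The nk_* names, flag values, and checks are illustrative, not the actual Nested Kernel API.

#include <stdint.h>
#include <stdbool.h>

typedef uint64_t pte_t;

#define PTE_W        (1ULL << 1)            /* writable bit               */
#define PTE_NX       (1ULL << 63)           /* no-execute bit             */
#define PTE_PFN_MASK 0x000ffffffffff000ULL  /* physical frame number bits */

/* hypothetical helpers provided by the inner (nested) kernel */
extern bool nk_is_pt_frame(uint64_t pfn);   /* does this frame hold a page table? */
extern void nk_disable_wp(void);            /* clear CR0.WP inside the trusted region */
extern void nk_enable_wp(void);             /* restore CR0.WP before returning */

/* All PT pages are mapped read-only to the outer kernel; this entry point
 * is the only code path allowed to write them, and it refuses updates that
 * would create a W+X mapping or a writable alias of a page-table frame. */
bool nk_pt_update(pte_t *ptep, pte_t newval)
{
    uint64_t pfn = (newval & PTE_PFN_MASK) >> 12;

    if ((newval & PTE_W) && !(newval & PTE_NX))
        return false;                       /* no writable+executable mappings */
    if ((newval & PTE_W) && nk_is_pt_frame(pfn))
        return false;                       /* no writable aliases of PTs */

    nk_disable_wp();                        /* briefly re-enable writes to RO pages */
    *ptep = newval;                         /* the only place PTEs are ever written */
    nk_enable_wp();
    return true;
}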
it's a step in the right direction (self-protection FTW :) but i don't see how it can have an acceptable perf impact (try 'du -s' or iperf, which are not userland-dominated workloads). also, how are runtime codegen, large pages, etc. handled?
Perf: only bad on PT updates (fork, mmap, etc.), so the kernel sees zero overhead when it isn't doing that. Codegen: pages are mapped W+NX; the client then requests a code mapping, we scan the contents, and remap RO+X. Large pages: not bad, we only need 4 mapping types: RO+NX (const data + PTs), RO+X (code), W+NX (data), and USER (SMEP+SMAP).
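A rough sketch of that codegen lifecycle, again with hypothetical nk_* names: JIT pages start writable but non-executable, and only become executable (and read-only) after the mediating layer has scanned and approved their contents.

#include <stdbool.h>
#include <stddef.h>

enum nk_prot { NK_W_NX, NK_RO_NX, NK_RO_X, NK_USER };

extern bool nk_set_prot(void *page, enum nk_prot prot);  /* mediated remap request */
extern bool nk_scan_code(const void *page, size_t len);  /* code-scanning hook      */

/* Outer-kernel client asks for a code mapping once the JIT has finished
 * emitting into a W+NX page; the page is only ever W+NX or RO+X, never W+X. */
bool nk_make_executable(void *page, size_t len)
{
    if (!nk_scan_code(page, len))        /* reject contents the scanner won't approve */
        return false;
    return nk_set_prot(page, NK_RO_X);   /* flip W+NX -> RO+X through the mediator */
}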
how do you prevent JIT'ing some useful code sequence that's then executed as gadgets (code scanning presumably wouldn't flag it)? for large pages, how do you handle the very dynamic nature of page tables without breaking up large pages (1GB/2MB) in the direct map all the time?
DMAP: a fun one that I haven’t looked at. Maybe handle it by reserving large chunks of the address space for these data types? We roughly do that already for code and the heap. So take a few GBs of address space for each type. Or, actually, just allocate in chunks of 1GB/2MB?
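A sketch of the chunking idea floated above (purely illustrative names and sizes): carve the direct map into 2MB-aligned chunks that each hold exactly one mapping type, so a large page never has to be split just because two types share it.

#include <stdint.h>

#define NK_CHUNK_SIZE (2UL << 20)   /* one 2MB large page per chunk */

enum nk_prot { NK_W_NX, NK_RO_NX, NK_RO_X, NK_USER, NK_NR_TYPES };

struct nk_chunk_pool {
    uintptr_t next_free[NK_NR_TYPES];  /* bump pointer per mapping type   */
    uintptr_t limit[NK_NR_TYPES];      /* end of the VA region reserved   */
};

/* Hand out whole 2MB chunks; every page in a chunk shares one mapping type,
 * so the backing large-page mapping never needs to be broken up. Returns 0
 * when the per-type region is exhausted (the scaling concern raised below). */
static uintptr_t nk_chunk_alloc(struct nk_chunk_pool *pool, enum nk_prot type)
{
    if (pool->next_free[type] + NK_CHUNK_SIZE > pool->limit[type])
        return 0;
    uintptr_t va = pool->next_free[type];
    pool->next_free[type] += NK_CHUNK_SIZE;
    return va;
}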
static reservation (of physical memory; address space doesn't matter, we have enough of that on 64-bit) doesn't scale in real life: there's always a workload somewhere that runs out of memory. the solution has to be dynamic (runtime) partitioning with zero impact, and that's hard ;)
large page maps are contiguous in both the physical and virtual AS; that's what makes the TLB efficient and breaking them up not so good for performance. if you don't break them up, you'll soon litter the AS with read-only maps (needed for PTs), and then users will complain :).

