Tagged pointer prototype and ready to run interpreter integration study #2863

Draft - wants to merge 2 commits into base: `feature/CoreclrInterpreter`
151 changes: 151 additions & 0 deletions ReadyToInterpret.md
@@ -0,0 +1,151 @@
# Investigation on making interpreter work with ReadyToRun

## Status

This document is preliminary: it only covers the most basic case, and does not yet cover very common cases (e.g. virtual method calls).

Imagine I am doing an overnight hackathon trying to get something working, not yet designing something for the long term.

## Goals

- Figure out how the relevant parts of ReadyToRun work.
- Figure out how to hack it so that we can get into the CoreCLR interpreter.

## Non Goals

- Deliver a working prototype (I just don't have the time - and the CoreCLR interpreter is not the right target)
- Come up with an optimal design (Same, I just don't have the time)

## High-level observations

We already have a mechanism to call an arbitrary managed method from the native runtime, and this mechanism can be used to call a ReadyToRun-compiled method. So in general, interpreter -> ReadyToRun calls are not an issue.

The key challenge is to get ReadyToRun code to call into the interpreter.

## Understanding what happens when we are about to make an outgoing call from ReadyToRun

When ReadyToRun code makes a call to a static function, it:

- pushes the arguments onto the registers/stack as per the calling convention,
- calls into a redirection cell, and
- gets into the runtime.

Inside the runtime, I will eventually get to `ExternalMethodFixupWorker` defined in `prestub.cpp`.

At this point, I have:
- `transitionBlock` - no idea what it is,
- `pIndirection` - the address for storing the callee address,
- `sectionIndex` - a number pushed by the thunk, and
- `pModule` - a pointer to the module containing the call instruction.

Since the call comes from a ReadyToRun image, `pModule` must have a ReadyToRun image.

We can easily calculate the RVA of `pIndirection`.

If the call provided the `sectionIndex`, we just use it; otherwise we can still calculate the section index from the RVA.

The calculation simply scans the import sections sequentially; each section describes its own address range, so we can check whether the RVA falls within it.

The import section has a signature array; using the RVA minus the beginning RVA of the section, we can index into the signature array to find the signature.

The signature is then parsed into a `MethodDesc`, where method preparation continues as usual.

Last but not least, `pIndirection` is eventually patched with that entry point, and the call proceeds using the arguments already on the stack/restored registers.
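The section scan and signature indexing described above can be sketched as follows. This is a minimal illustration with hypothetical simplified structures (`ImportSection`, `FindImportSectionIndex`, and `GetSignatureSlot` are stand-ins, not the actual CoreCLR types):

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical, simplified view of a ReadyToRun import section.
// The real structure has more fields; only what the lookup needs is shown.
struct ImportSection
{
    uint32_t SectionRVA;    // where the section's indirection cells start
    uint32_t SectionSize;   // size of the cell array in bytes
    uint32_t EntrySize;     // size of one cell (e.g. sizeof(void*))
    uint32_t SignaturesRVA; // parallel array of signature RVAs
};

// Scan the sections sequentially; each one describes its own address
// range, so we can check whether the cell's RVA falls inside it.
int FindImportSectionIndex(const ImportSection* sections, size_t count, uint32_t cellRVA)
{
    for (size_t i = 0; i < count; i++)
    {
        const ImportSection& s = sections[i];
        if (cellRVA >= s.SectionRVA && cellRVA < s.SectionRVA + s.SectionSize)
            return (int)i;
    }
    return -1; // not an import cell
}

// Index into the parallel signature array:
// (cell RVA - beginning RVA of the section) / entry size.
uint32_t GetSignatureSlot(const ImportSection& s, uint32_t cellRVA)
{
    return (cellRVA - s.SectionRVA) / s.EntrySize;
}
```

The same index works for both the cell array and the signature array because they are parallel, which is what lets the runtime recover a signature from nothing but `pIndirection`.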

## What the potential hack looks like

We keep everything the same up to the method preparation part.

We know it is possible to produce an `InterpreterMethodInfo` given a `MethodDesc` when the system is ready to JIT, so we should be able to produce the `InterpreterMethodInfo` there.

The arguments are already in registers, but we can't dynamically generate the `InterpreterStub`, so the only reasonable option is to pre-generate the stubs in the ReadyToRun image itself.

> A stub per signature is necessary because each signature needs a different way to populate the arguments (and the interpreter method info). On the other hand, a stub per signature is sufficient because if we knew how to prepare the registers to begin with, we must know exactly what steps are needed to put them into a format `InterpretMethodBody` likes. As people have pointed out, this is going to be a large volume of stubs, so this is by no means optimal.

The stub generation code can 'mostly' be exactly the same as `GenerateInterpreterStub`, with two twists:

- We need an indirection to get to the `InterpreterMethodInfo` object. That involves having a slot that the `InterpreterMethodInfo` construction process needs to patch.
- What if the call signature involves an unknown struct size (e.g. a method in A.dll takes a struct from B.dll, where B.dll is not in the same version bubble)?

Next, we need the data structure that gets us to the address of the stub as well as the address of the cell storing the `InterpreterMethodInfo`. What we have is `pIndirection`, and therefore the `MethodDesc`.

To do that, we might want to mimic how the runtime locates ReadyToRun code.

Here is a call stack showing how ReadyToRun code discovery works:

```
coreclr!ReadyToRunInfo::GetEntryPoint+0x238 [C:\dev\runtime\src\coreclr\vm\readytoruninfo.cpp @ 1148]
coreclr!MethodDesc::GetPrecompiledR2RCode+0x24e [C:\dev\runtime\src\coreclr\vm\prestub.cpp @ 507]
coreclr!MethodDesc::GetPrecompiledCode+0x30 [C:\dev\runtime\src\coreclr\vm\prestub.cpp @ 443]
coreclr!MethodDesc::PrepareILBasedCode+0x5e6 [C:\dev\runtime\src\coreclr\vm\prestub.cpp @ 412]
coreclr!MethodDesc::PrepareCode+0x20f [C:\dev\runtime\src\coreclr\vm\prestub.cpp @ 319]
coreclr!CodeVersionManager::PublishVersionableCodeIfNecessary+0x5a1 [C:\dev\runtime\src\coreclr\vm\codeversion.cpp @ 1739]
coreclr!MethodDesc::DoPrestub+0x72d [C:\dev\runtime\src\coreclr\vm\prestub.cpp @ 2869]
coreclr!PreStubWorker+0x46d [C:\dev\runtime\src\coreclr\vm\prestub.cpp @ 2698]
coreclr!ThePreStub+0x55 [C:\dev\runtime\src\coreclr\vm\amd64\ThePreStubAMD64.asm @ 21]
coreclr!CallDescrWorkerInternal+0x83 [C:\dev\runtime\src\coreclr\vm\amd64\CallDescrWorkerAMD64.asm @ 74]
coreclr!CallDescrWorkerWithHandler+0x12b [C:\dev\runtime\src\coreclr\vm\callhelpers.cpp @ 66]
coreclr!MethodDescCallSite::CallTargetWorker+0xb79 [C:\dev\runtime\src\coreclr\vm\callhelpers.cpp @ 595]
coreclr!MethodDescCallSite::Call+0x24 [C:\dev\runtime\src\coreclr\vm\callhelpers.h @ 465]
```

The interesting part, of course, is how `GetEntryPoint` works. It turns out to be just a `NativeHashtable` lookup keyed by a `VersionResilientMethodHashCode`, so we should be able to encode the same kind of hash table for the stubs as well.

Note that `GetEntryPoint` also has a fixup concept; maybe we can use the same concept to patch the slot for the `InterpreterMethodInfo`.
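As an assumption-laden sketch of what such a table could look like: the image would store a hash table keyed by a version-resilient hash, and because distinct methods can share a hash, each bucket must hold candidate entries that are disambiguated against the method's actual identity. All names here (`StubEntry`, `LookupStub`, the token field) are hypothetical, not CoreCLR APIs:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Hypothetical entry: one pre-generated stub plus the slot the runtime
// patches with the InterpreterMethodInfo pointer at preparation time.
struct StubEntry
{
    uint32_t methodToken;    // stand-in for the real method identity check
    void*    stubCode;       // pre-generated interpreter stub
    void**   methodInfoSlot; // patched once InterpreterMethodInfo exists
};

// Buckets of candidates, keyed by a version-resilient hash.
using StubTable = std::unordered_map<uint32_t, std::vector<StubEntry>>;

void* LookupStub(const StubTable& table, uint32_t versionResilientHash, uint32_t methodToken)
{
    auto it = table.find(versionResilientHash);
    if (it == table.end())
        return nullptr;
    for (const StubEntry& e : it->second)
        if (e.methodToken == methodToken) // disambiguate hash collisions
            return e.stubCode;
    return nullptr;
}
```

The real `NativeHashtable` is a compact, persisted format rather than an in-memory map, but the lookup shape - hash, then verify each candidate - is the same.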

## How to implement the potential hack

From the compiler side:

### When do we need to generate the stubs?
When the ReadyToRun compiler generates a call, the JIT calls back into crossgen2 to create a slot for it. At that point, working with the dependency tracking engine, we should make sure a stub is available for the call.

### Actually generate the stubs

The stub generation should mostly work the same as `GenerateInterpreterStub` does today, with a couple of twists:
- We don't need to generate the `InterpreterMethodInfo`; that work is left until runtime.
- If the stub involves types with unknown sizes, we need to generate the right stub code for it (e.g. A.dll calls a function that involves a struct defined in `B.dll`, where they are not in the same version bubble).
- The stub needs an instance of `InterpreterMethodInfo`; it cannot be hardcoded, so its pointer must be read from somewhere else.
- Whenever we generate a stub, we need to store it somewhere so that we can follow the same logic as in `MethodEntryPointTableNode`.

From the runtime side:

### Locating the stub
- When we reach `ExternalMethodFixupWorker`, we need to use the table to get back to the generated stubs.

### Preparing the data
- We need to create the `InterpreterMethodInfo` and make sure the stub code will be able to read it.

## Alternative designs
Following the thinking behind the earlier prototype for tagged pointers, we could envision a solution that ditches all those stubs, e.g.:

1. Change the calling convention for every method so that it matches what the interpreter likes.

Pros:
- Consistent, easy to understand
- No need for stubs, efficient for interpreter calls

Cons:
- Lots of work to implement a different calling convention
- Inefficient for non-interpreter calls

2. Change the call site so that it detects tagged pointers and calls differently.

Pros:
- Similar to what we have in the tagged pointer prototype
- No need for stubs, efficient for interpreter calls

Cons:
- Every call site involves dual call paths

3. The approach described in this document (i.e. using stubs)

Pros:
- Probably cheapest to implement

Cons:
- Lots of stubs
- Inefficient for interpreter calls (involves stack rewriting)
- Unclear how it could work with virtual or interface calls

I haven't put more thought into these alternative solutions, but I am aware they exist.
71 changes: 71 additions & 0 deletions TaggedFunctionPrototype.md
@@ -0,0 +1,71 @@
# Tagged Function Prototype

This document describes my prototype, available [here](https://cloudbuild.microsoft.com/build?id=7a730686-9d69-fe04-56a7-2118a28196ea&bq=devdiv_DevDiv_DotNetFramework_QuickBuildNoDrops_AutoGen) as a PR. I am not planning to merge it.

The goal of this prototype is to investigate whether or not the tagged function concept is practically feasible in the CoreCLR code base.

## How does the CoreCLR interpreter work today?

This section covers a small portion of how the interpreter integrates with the runtime. It does NOT attempt to explain the full interpreter execution process.

The interpreter works by pretending to be jitted code; as such, it needs to:

1. Convert the incoming arguments from the registers/stack into something C++ understands,
2. Transfer control to `InterpretMethodBody`, where the byte code is interpreted,
3. Call any other callees as if they were jitted code as well, and
4. Put things back on the stack as if they had been produced by jitted code.

Step 1 requires specially generated code; right now, it is done by `GenerateInterpreterStub`. It is meant to be a tiny routine that takes arguments from the stack and rewrites the stack so that the values can be consumed by C++.
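Conceptually, the stub's job is just an argument-layout rewrite. The real `GenerateInterpreterStub` emits machine code per signature; the following is only a hypothetical C++ sketch of the equivalent data movement for one imaginary signature, `int F(int a, double b)` (the function name and 8-byte slot layout are assumptions for illustration):

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical sketch: copy each incoming argument into the flat buffer
// the interpreter consumes, in signature order, one 8-byte slot each.
void FlattenArgs_Int_Double(int a, double b, uint8_t* ilArgs)
{
    std::memcpy(ilArgs, &a, sizeof(a));      // slot 0: the int argument
    std::memcpy(ilArgs + 8, &b, sizeof(b));  // slot 1: the double argument
}
```

Because the slot offsets depend entirely on the signature, every distinct signature needs its own rewrite sequence, which is why the stubs cannot be shared.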

## What do we want?

We want to get rid of the concept of the interpreter stub and instead have the caller call the actual `InterpretMethodBody` directly.

`InterpretMethodBody` requires an `InterpreterMethodInfo` object, which is basically a representation that gives us easy access to the method's signature and byte code.

So the problem is reduced to:

1. Identify a caller that is currently calling using the standard calling convention,
2. Get that caller to access an `InterpreterMethodInfo` object, and
3. Make it call `InterpretMethodBody` instead.

## Wrong attempts

I tried 3 different approaches, and only the last one succeeded. The wrong attempts are documented here just so we don't try the same wrong ideas again.

### Idea 1

- Make `GenerateInterpreterStub` return a tagged pointer instead.

This approach failed because `GenerateInterpreterStub` is called as part of `ThePreStub`. `ThePreStub` works by leaving the call arguments on the stack, so the incoming arguments are already on the stack, and we at least need some code to get them back.

### Idea 2

Now we know we must call `InterpretMethodBody` earlier than `ThePreStub`, which means `ThePreStub` must be replaced by something else. In fact, how does `ThePreStub` know which `MethodDesc` to interpret? Upon investigation, I learned about the concept of a `Precode`.

Basically, every method has a `Precode`: a simple `jmp` instruction that goes somewhere else. This is the first instruction that gets executed. To begin with, that instruction jumps to `ThePreStub`, and that instruction is code-generated. Given the precode, we can get to the `MethodDesc`.

What that means is that we need to get rid of the code generation during Precode generation, which means we will no longer have the `jmp` instruction. Instead, we will put something there that allows us to get to the `InterpreterMethodInfo`.

A reasonable choice is to put a pointer to the `InterpreterMethodInfo` object right there. We tag its least significant bit so that we know it is not a normal function entry point.
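The tagging trick relies on the object being aligned, so a real code pointer never has its low bit set. A minimal sketch of the scheme (the helper names are hypothetical; only the low-bit convention comes from the prototype):

```cpp
#include <cstdint>

// Stand-in for the real InterpreterMethodInfo; the pointer member
// guarantees the object is at least pointer-aligned, so the low bit
// of its address is always free to use as a tag.
struct InterpreterMethodInfo
{
    void* byteCode;
};

inline uintptr_t TagMethodInfo(InterpreterMethodInfo* info)
{
    return (uintptr_t)info | 0x1; // set the low bit: "not a code pointer"
}

inline bool IsTaggedMethodInfo(uintptr_t entryPoint)
{
    return (entryPoint & 0x1) != 0;
}

inline InterpreterMethodInfo* UntagMethodInfo(uintptr_t entryPoint)
{
    return (InterpreterMethodInfo*)(entryPoint & ~(uintptr_t)0x1);
}
```

A caller that understands the convention checks the bit before dispatching; any caller that does not (e.g. unmodified ReadyToRun code) would jump to the tagged value and fault, which is exactly the limitation discussed below for Idea 3.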

To be more concrete, the precode is generated during `MethodDesc::EnsureTemporaryEntryPointCore`. We modify that code so that it translates the `MethodDesc` into an `InterpreterMethodInfo` there, tags it, and puts it into the method table.

The reason why this approach fails is more subtle. It turns out that the `InterpreterMethodInfo` construction process leverages the code that supports the JIT to extract the IL, and that code assumes the method tables are already properly constructed - but that is not true at the time `MethodDesc::EnsureTemporaryEntryPointCore` is called. So we must delay the construction of the `InterpreterMethodInfo` object.

## Working approach

### Idea 3

To get around the cyclic dependency issue above, I tagged the `MethodDesc` pointer instead. Only when we are about to call the function do we construct the `InterpreterMethodInfo`. This worked.

The downside of this approach, obviously, is that the pointer in the method table is no longer a valid entry point, so anything else that tries to call it will cause an access violation. This will work in a pure interpreted scenario, where the interpreter is the only thing that runs in the process.

Suppose we also want to let other code (e.g. ReadyToRun) run; that won't work unless we also change the ReadyToRun callers.

The code in this branch demonstrates this concept. It will execute some code under the interpreter (and fail pretty quickly, because I haven't implemented everything yet).

### Lowlights

This code still uses dynamic code generation for a couple of things: we are still generating code for the GC write barrier, and we are still generating some glue code for P/Invoke. Lastly, the calls made by the interpreter are not converted to use the new calling convention yet. These seem to be solvable problems.
7 changes: 7 additions & 0 deletions src/coreclr/minipal/Windows/doublemapping.cpp
@@ -184,8 +184,15 @@ void* VMToOSInterface::ReserveDoubleMappedMemory(void *mapperHandle, size_t offs
return pResult;
}

extern void andrew_debug();

void *VMToOSInterface::CommitDoubleMappedMemory(void* pStart, size_t size, bool isExecutable)
{
if (isExecutable)
{
// Whenever this is called, we are generating code.
andrew_debug();
}
return VirtualAlloc(pStart, size, MEM_COMMIT, isExecutable ? PAGE_EXECUTE_READ : PAGE_READWRITE);
}

43 changes: 43 additions & 0 deletions src/coreclr/vm/callhelpers.cpp
@@ -18,6 +18,8 @@
#include "invokeutil.h"
#include "argdestination.h"

void andrew_debug();

#if defined(FEATURE_MULTICOREJIT) && defined(_DEBUG)

// Allow system module for Appx
@@ -33,6 +35,10 @@ void AssertMulticoreJitAllowedModule(PCODE pTarget)

#endif

void* ToInterpreterMethodInfo(MethodDesc* pMd);

void CallInterpretMethod(void* interpreterMethodInfo, BYTE* ilArgs);

// For X86, INSTALL_COMPLUS_EXCEPTION_HANDLER grants us sufficient protection to call into
// managed code.
//
@@ -60,7 +66,44 @@ void CallDescrWorkerWithHandler(

BEGIN_CALL_TO_MANAGEDEX(fCriticalCall ? EEToManagedCriticalCall : EEToManagedDefault);

#ifdef FEATURE_INTERPRETER
uint64_t pCallTarget = (uint64_t)(pCallDescrData->pTarget);
if ((pCallTarget & 0x3) == 0x3)
{
//
// Experiment comment:
// Step 4: When we call a method, we simply redirect it to use CallInterpretMethod instead
// of calling a stub and then redirecting back to call InterpretMethod anyway.
//
// That involves first converting the MethodDesc to an InterpreterMethodInfo. We will store
// that on the MethodTable slot so we do not do repeated conversion.
//
MethodDesc* pMD = (MethodDesc*)(pCallTarget & (~0x3));

if (pMD->IsIL() && !pMD->IsUnboxingStub())
{
void* translated = ToInterpreterMethodInfo(pMD);
*(pMD->GetAddrOfSlot()) = pCallTarget = (PCODE)translated;
}
}
if ((pCallTarget & 0x3) == 0x1)
{
//
// Experiment comment:
// Step 5: Now we have an InterpreterMethodInfo, simply call CallInterpretMethod
//
// with the untagged InterpreterMethodInfo pointer and the flattened arguments.
//
CallInterpretMethod((void*)(pCallTarget & (~0x1)), (BYTE*)pCallDescrData->pSrc);
}
else
{
CallDescrWorker(pCallDescrData);
}
#else
CallDescrWorker(pCallDescrData);
#endif

END_CALL_TO_MANAGED();
}
9 changes: 9 additions & 0 deletions src/coreclr/vm/ecall.cpp
@@ -88,7 +88,16 @@ void ECall::PopulateManagedStringConstructors()
MethodDesc* pMD = CoreLibBinder::GetMethod((BinderMethodID)(METHOD__STRING__CTORF_FIRST + i));
_ASSERTE(pMD != NULL);

#ifdef FEATURE_INTERPRETER
//
// Experiment Comment:
// Step 3: GetMultiCallableAddrOfCode will eventually try to interpret the entry point
// as a Precode. For that, we simply ignore that work.
//
PCODE pDest = (PCODE)((uint64_t)pMD | 0x03); // tag the MethodDesc pointer, matching the 0x3 check in CallDescrWorkerWithHandler
#else
PCODE pDest = pMD->GetMultiCallableAddrOfCode();
#endif

ECall::DynamicallyAssignFCallImpl(pDest, ECallCtor_First + i);
}