Alloy Hosted Models
When Alloy hosts your custom model, you remain in complete control of your model code; however, the model will run in an Alloy-deployed AWS Lambda.
You may use any of the languages currently supported by AWS Lambda.
This custom model Lambda is isolated from both the internet and the rest of Alloy's infrastructure; however, certain exceptions can be made for accessing customer-controlled endpoints to obtain proprietary data.
Members of your organization will be invited to a shared GitHub repository containing a folder for each custom model. Your team can make changes to your custom models & submit them to Alloy for a security review.
The shared GitHub repository will also include mechanisms for deploying approved custom model code to dev & prod AWS Lambdas for use within Alloy workflows & journeys.
Alloy will ensure that the custom model Lambda is invoked correctly; however, any errors that occur in the custom model code will be treated as an external service failure.
Alloy will not review the custom model's functionality, but will verify that it meets our security requirements before deploying it on our infrastructure.
Custom Model IDs
Alloy will provide two custom model IDs for each of your custom models: one for the dev model version & one for the prod model version. You can use these model versions to test your model changes and roll them out in a staged manner.
Updating Model Code
The following steps may be applied to both new models & changes to existing models.
1. Development: Develop using your preferred tools.
2. Local testing: Test locally using Docker, following the steps documented below.
3. Code review: Open a pull request with the code changes & document the successful local testing in the pull request description. Alloy will review the pull request to ensure it meets our security standards.
4. Merge to main & deploy dev model: The dev model will be deployed automatically after code changes are merged.
5. Use testing endpoints: Both unit testing & testing using existing evaluations will be useful. See Developing & Testing Your Custom Model.
6. Test within Workflow and/or Journey: Work with your SA/TAM to ensure the model is included in the correct Workflows and/or Journeys.
7. Deploy latest model code to prod model: Use the provided GitHub action.
Local Testing
As a final step before submitting a Custom Model code change for review, it is important to run a local test of the model using Docker. This closely matches the AWS Lambda environment the model will run in & will therefore catch many issues that could be missed when testing only with other local development tools.
Local development environments, including Jupyter notebooks, may use different dependencies than the container image that is deployed to AWS Lambda. These local environments may also allow variables to persist across model invocations in a way that is not consistent with the isolated AWS Lambda invocation environment.
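As a minimal illustration of the pitfall (the counter below is hypothetical):

invocation_count = 0  # module-level state: may appear to persist in a notebook
                      # or long-lived local process, but AWS Lambda makes no
                      # such guarantee across invocations

def lambda_handler(event, context):
    global invocation_count
    invocation_count += 1  # unreliable in Lambda; keep per-invocation state inside the handler
    ...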
Steps
These steps follow the local testing instructions from AWS.
1. cd into the directory containing the model code & Dockerfile
2. Build the Docker image:
docker build -t IMAGE_NAME .
3. Start the Docker image:
docker run -p 9000:8080 IMAGE_NAME
4. From a new terminal window, POST a custom model payload using curl:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
5. Ensure that the model response is as expected
Example
The following example uses an image name of my_custom_model.
cd path_to_model_repo/my_custom_model
docker build -t my_custom_model .
docker run -p 9000:8080 my_custom_model
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" \
-d '{"Alloy Data Supplied":[{"attributeName":"name_first","attributeValue":"true"}]}'
Further Reading on AWS Lambda
Model Specifications
Input Payload Format
The custom model will receive service data attributes for services that have run prior to the Custom Models service in the workflow.
- The payload format consists of a serialized JSON string containing arrays of attributeName/attributeValue pairs for each service. You may need to deserialize this string in your custom model code (e.g. json.loads() in Python); a minimal parsing sketch follows the example payload below.
- If any data is passed to the evaluation under the entity's meta section, it will be included in the custom model payload.
- Any output attributes calculated before running the custom model will be included in the payload under the Output Attributes section.
  - It is important to make the custom model service dependent on the output attributes in some way to guarantee that they will be calculated before the custom model is run.
- The Data Supplied section indicates whether certain PII was supplied or not (true/false).
Example:
{
// Org name will be the organization name in the dashboard
"[Org Name] Data Supplied":
[
{
"attributeName": "birth_date",
"attributeValue": "true"
},
{
"attributeName": "name_last",
"attributeValue": "true"
},
{
"attributeName": "name_first",
"attributeValue": "true"
},
...
],
"[Org Name] Meta":
[
{
"attributeName": "income",
"attributeValue": "100000"
},
...
],
"[Org Name] Output Attributes":
[
{
"attributeName": "my_output_attribute",
"attributeValue": "100"
},
...
],
// These sections depend on which services run prior to the custom model in the workflow
"Experian":
[
{
"attributeName": "ex",
"attributeValue": "100"
},
...
],
"Lexis Nexis":
[
{
"attributeName": "ex",
"attributeValue": "100"
},
...
],
"Another workflow service":
[
{
"attributeName": "ex",
"attributeValue": "100"
},
...
],
...
}
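As a minimal sketch of deserializing this payload in Python (the section names, helper function, and scoring logic below are illustrative, and "[Org Name]" stands in for your organization's dashboard name):

import json

def lambda_handler(event, context):
    # The payload may arrive as a serialized JSON string; deserialize if so.
    payload = json.loads(event) if isinstance(event, str) else event

    # Flatten one section's attributeName/attributeValue pairs into a dict.
    def section(name):
        return {a["attributeName"]: a["attributeValue"] for a in payload.get(name, [])}

    experian = section("Experian")  # illustrative service section
    outputs = section("[Org Name] Output Attributes")  # substitute your org name

    score = 150  # hypothetical scoring logic
    return {"statusCode": 200, "body": json.dumps({"score": score})}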
Output Format
Success
If successful, the custom model endpoint should return the following JSON response:
{
"statusCode": 200,
"body": {
"score": number,
...Additional optional data...
}
}
Please also encode your response body as a JSON string. Example Python code:
import json

# Within your Lambda handler:
body = {
    "score": 150,
    "extraDataToUse": ["reason1", "reason2"],
}
response = {
    "statusCode": 200,
    "body": json.dumps(body),  # the body must be a JSON-encoded string
}
return response
Failure
If an error occurs within the custom model code and can be handled, it can be returned to Alloy using the following JSON response:
{
"statusCode": 500, // or some other non-200 response
"body": {
"message": "This is an optional error message"
}
}
Please also encode your response body as a JSON string. Example Python code:
import json

# Within your Lambda handler:
body = {
    "message": "Failed to compute score",
}
response = {
    "statusCode": 500,
    "body": json.dumps(body),  # the body must be a JSON-encoded string
}
return response
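Putting the two formats together, a handler might wrap its scoring logic as follows (a sketch; compute_score is a hypothetical function):

import json

def lambda_handler(event, context):
    try:
        score = compute_score(event)  # hypothetical scoring function
        return {"statusCode": 200, "body": json.dumps({"score": score})}
    except Exception:
        # Report handled errors to Alloy using the failure format above.
        return {"statusCode": 500, "body": json.dumps({"message": "Failed to compute score"})}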