Interacting with Tesseracts¶
Viewing Running Tesseracts¶
To view all running Tesseracts, run the following command:
$ tesseract ps
The output is a table of Tesseract containers with their ID, name, version, host port, project ID, and description:
┏━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ ID           ┃ Name      ┃ Version ┃ Host Port ┃ Project ID             ┃ Description                               ┃
┡━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ 997fca92ea37 │ vectoradd │ 1.2.3   │ 56434     │ tesseract-afn60xa27hih │ Simple tesseract that adds two vectors.   │
└──────────────┴───────────┴─────────┴───────────┴────────────────────────┴───────────────────────────────────────────┘
The host port is the port used in the Tesseract address to interact with the endpoints of the hosted Tesseract. The project ID is what you pass to the tesseract teardown command, which stops all Tesseracts associated with that project ID.
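For example, to stop the Tesseract listed in the table above (assuming the project ID is passed as the argument to teardown):
$ tesseract teardown tesseract-afn60xa27hih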
Invoking a Tesseract¶
The main operation which a Tesseract implements is called apply. This could be a forward pass of a neural network applied to some input data, a simulation of a quantity of interest which accepts its initial conditions as input, and so on.
$ tesseract run vectoradd apply @examples/vectoradd/example_inputs.json
{"result":{"object_type":"array","shape":[3],"dtype":"float64","data":{"buffer":[5.0,7.0,9.0],"encoding":"json"}}}
Here, the example_inputs.json file passed as input contains the following:
{
  "inputs": {
    "a": {
      "object_type": "array",
      "shape": [3],
      "dtype": "int64",
      "data": {
        "buffer": [1, 2, 3],
        "encoding": "json"
      }
    },
    "b": {
      "object_type": "array",
      "shape": [3],
      "dtype": "int64",
      "data": {
        "buffer": [4, 5, 6],
        "encoding": "json"
      }
    }
  }
}
Notice the @ before the filename in the command payload. This tells the CLI to read the file and use it as input to the Tesseract. You can also provide a JSON string in its place:
$ tesseract run vectoradd apply '{"inputs": {"a": ..., "b": ...}}'
This can be useful for small input payloads, but it quickly becomes cumbersome.
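If you are generating payloads programmatically, a sketch like the following can build the JSON shown above from NumPy arrays (the encode_array helper is purely illustrative and not part of the Tesseract API):
import json
import numpy as np

def encode_array(arr):
    # Illustrative helper mirroring the array encoding used in example_inputs.json.
    return {
        "object_type": "array",
        "shape": list(arr.shape),
        "dtype": str(arr.dtype),
        "data": {"buffer": arr.tolist(), "encoding": "json"},
    }

payload = json.dumps({
    "inputs": {
        "a": encode_array(np.array([1, 2, 3])),
        "b": encode_array(np.array([4, 5, 6])),
    }
})
# The resulting string can be passed inline to tesseract run, or written to a
# file and referenced with the @ prefix.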
Alternatively, you can interact with a served Tesseract over HTTP. Make sure that the Tesseract is running as a service (check with docker ps); otherwise, launch it via tesseract serve vectoradd.
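For example (output omitted; the assigned host port can then be looked up with tesseract ps, as shown above):
$ tesseract serve vectoradd
$ tesseract ps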
You can then curl its /apply endpoint (replace <tesseract-address> and <port> with the actual address of the served Tesseract):
$ curl http://<tesseract-address>:<port>/apply \
  -H "Content-Type: application/json" \
  -d @examples/vectoradd/example_inputs.json
{"result":{"object_type":"array","shape":[3],"dtype":"float64","data":{"buffer":[5.0,7.0,9.0],"encoding":"json"}}}
The payload example_inputs.json that we POST to the /apply endpoint is the same file shown above.
The same operation can be performed from Python using the Tesseract class:
>>> import numpy as np
>>> from tesseract_core import Tesseract
>>>
>>> a = np.array([1.0, 2.0, 3.0])
>>> b = np.array([4.0, 5.0, 6.0])
>>>
>>> with Tesseract.from_image(image="vectoradd") as vectoradd:
...     vectoradd.apply({"a": a, "b": b})
{'result': array([5., 7., 9.])}
The Tesseract context manager will spin up a Tesseract locally, and tear it down once the context is exited.
Tip: You can also instantiate a Tesseract object which connects to a remote Tesseract via Tesseract.from_url(...).
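For example (a minimal sketch, assuming from_url takes the base URL of the served Tesseract and reusing the arrays defined above):
>>> vectoradd = Tesseract.from_url("http://<tesseract-address>:<port>")
>>> result = vectoradd.apply({"a": a, "b": b})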
This Tesseract accepts two vectors and returns an object with the vector sum a + b in the result output field.
Optional endpoints and differentiation¶
If the Tesseract you are using is differentiable, the endpoints that return derivatives can be called in a similar fashion. For instance, here is how one would calculate the Jacobian of the result output field, taking derivatives only with respect to the a vector, at \(a = (1,2,3)\), \(b = (4,5,6)\):
$ tesseract run vectoradd jacobian @examples/vectoradd/example_jacobian_inputs.json
{"result":{"a":{"object_type":"array","shape":[3,3],"dtype":"float64","data":{"buffer":[[3.0,0.0,0.0],[0.0,3.0,0.0],[0.0,0.0,3.0]],"encoding":"json"}}}}
$ curl -d @examples/vectoradd/example_jacobian_inputs.json \
  -H "Content-Type: application/json" \
  http://<tesseract-address>:<port>/jacobian
{"result":{"a":{"object_type":"array","shape":[3,3],"dtype":"float64","data":{"buffer":[[1.0,0.0,0.0],[0.0,1.0,0.0],[0.0,0.0,1.0]],"encoding":"json"}}}}
The payload we posted contains information about which inputs and outputs we want to consider when computing derivatives:
{
  "inputs": {
    "a": {
      "object_type": "array",
      "shape": [3],
      "dtype": "int64",
      "data": {
        "buffer": [1, 2, 3],
        "encoding": "json"
      }
    },
    "b": {
      "object_type": "array",
      "shape": [3],
      "dtype": "int64",
      "data": {
        "buffer": [4, 5, 6],
        "encoding": "json"
      }
    }
  },
  "jac_inputs": ["a"],
  "jac_outputs": ["result"]
}
>>> import numpy as np
>>> from tesseract_core import Tesseract
>>>
>>> a = np.array([1.0, 2.0, 3.0])
>>> b = np.array([4.0, 5.0, 6.0])
>>>
>>> with Tesseract.from_image("vectoradd") as vectoradd:
...     vectoradd.jacobian({"a": a, "b": b}, jac_inputs=["a"], jac_outputs=["result"])
{'result': {'a': array([[1., 0., 0.],
                        [0., 1., 0.],
                        [0., 0., 1.]])}}
Now the output is a 3x3 matrix, as expected: since the result is simply \(a + b\), its Jacobian with respect to \(a\) is the 3x3 identity matrix.
To know which optional endpoints such as jacobian, jacobian_vector_product, or vector_jacobian_product are available in a given Tesseract, you can look at the docs at the /docs endpoint of a running Tesseract service, use tesseract apidoc, or use the Tesseract Python API:
>>> with Tesseract.from_image("vectoradd") as vectoradd:
... print(vectoradd.available_endpoints)
['apply', 'jacobian', 'health']
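For example, you can use this to guard calls to optional endpoints (a minimal sketch reusing the arrays defined above):
>>> with Tesseract.from_image("vectoradd") as vectoradd:
...     if "jacobian" in vectoradd.available_endpoints:
...         jac = vectoradd.jacobian({"a": a, "b": b}, jac_inputs=["a"], jac_outputs=["result"])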
OpenAPI schemas for programmatic parsing¶
Since Tesseracts wrap arbitrary computation, each one has a unique input/output signature. To make it easier to discover a specific Tesseract's input and output schema programmatically, you can use any of the following:
$ tesseract run vectoradd openapi-schema
$ curl <tesseract-address>:<port>/openapi.json
>>> from tesseract_core import Tesseract
>>> with Tesseract.from_image("vectoradd") as vectoradd:
...     schema = vectoradd.openapi_schema
Schemas are returned in the OpenAPI Schema format, which is intended for programmatic parsing rather than human readability. If you want a more human-readable version of the schema, your best option is to look at the /docs endpoint of a running Tesseract service, or to run tesseract apidoc <tesseract-name>.
The OpenAPI schema contains the schemas of all endpoints, even though they are all derived from the inputs / outputs of /apply.