This endpoint follows the OpenAI API format for generating vector embeddings from input text. The handler receives pre-processed metadata from middleware and forwards the request to the selected node.
Returns:
- Ok(Response) - The embeddings response from the processing node
- Err(AtomaProxyError) - An error status code if any step fails
  - INTERNAL_SERVER_ERROR - Processing or node communication failures

Authorization: Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Request object for creating embeddings (see the sketch after this list):
- input (string or array of strings): Input text to get embeddings for. Each input must not exceed the max input tokens for the model. Example: "The quick brown fox jumped over the lazy dog"
- model (string): ID of the model to use. Example: "intfloat/multilingual-e5-large-instruct"
- dimensions (integer, x >= 0): The number of dimensions the resulting output embeddings should have.
- encoding_format (string): The format to return the embeddings in. Can be "float" or "base64". Defaults to "float". Example: "float"
- user (string): A unique identifier representing your end-user, which can help monitor and detect abuse. Example: "user-1234"
Response (200): Embeddings generated successfully. The body is the response object from creating embeddings.
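
Since the endpoint follows the OpenAI API format, the response body is expected to contain a data array of embedding objects. The helper below is a minimal sketch under that assumption; the field names mirror the standard OpenAI embeddings response rather than anything stated in this document.

```python
def extract_vectors(body: dict) -> list:
    # Assumed OpenAI-format response shape:
    # {"object": "list",
    #  "data": [{"index": 0, "embedding": [...]}, ...],
    #  "model": "...", "usage": {...}}
    # Sort by index so each vector lines up with the order of the inputs.
    return [item["embedding"] for item in sorted(body["data"], key=lambda d: d["index"])]
```

For example, extract_vectors(resp.json()) would return one vector per input string, in the same order the inputs were sent.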