Welcome to mljet’s documentation!#
MLJET#
MLJET - a minimalistic tool for automatic deployment of machine learning models.
Key Features#
Light-weight, cross-platform
Simple pythonic interface
Ability to wrap models locally in a service or in a docker container
Support for multiple model formats
Support for different web frameworks
Independence of final projects from this tool
Pipeline#
First, we initialize the project directory for the next steps;
Next, we serialize your machine learning models (for example, with Joblib or Pickle);
Next, we generate a final .py file from templates; it contains the endpoint handlers. Handlers are chosen based on the models, and templates based on your preferences (templates are also .py files that use, for example, Sanic or Flask);
Then we copy or additionally generate the necessary files (e.g. a Dockerfile);
The next step is to compile the API documentation for your project;
After these steps, we build a Docker container or a Python package, or we simply leave the final directory as is, and your project can then be deployed to Kubernetes or Heroku.
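The serialization step above (step 2) can be sketched with Joblib; note that the file name model.joblib and the surrounding training code are purely illustrative here, not the artifacts MLJET itself produces.

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a simple model to have something to serialize
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier()
clf.fit(X, y)

# Serialize the fitted model; Pickle works the same way via pickle.dump
joblib.dump(clf, "model.joblib")

# A generated service can later restore the model like this
restored = joblib.load("model.joblib")
```

The restored estimator behaves identically to the original, which is what lets the generated endpoint handlers call `predict` without importing any of your training code.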
Code example#
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from mljet import cook
# Train a model on the Iris dataset
X, y = load_iris(return_X_y=True, as_frame=True)
clf = RandomForestClassifier()
clf.fit(X, y)
# Wrap the fitted model in a Docker container serving on port 5001
cook(strategy="docker", model=clf, port=5001)
After running the script you will see a new Docker container. To interact with the service, simply use curl:
curl -X POST "http://127.0.0.1:5001/predict" -H "accept: application/json" -H "Content-Type: application/json" -d '{"data": [[5.8, 2.7, 3.9, 1.2]]}'
Communication#
GitHub Issues for bug reports, feature requests, and questions.
License#
MIT License (see LICENSE).