Superai

Latest version: v0.3.0


0.1.5

**Full Changelog**: https://github.com/mysuperai/superai-sdk/compare/v0.1.4...v0.1.5

0.1.4

- Many bug fixes for Data Program and AI trainer functionality
- Adds metrics to AI trainer output
- Adds model download functionality
- Adds batch prediction entrypoint

**Full Changelog**: https://github.com/mysuperai/superai-sdk/compare/v.0.1.3...v0.1.4

0.1.0.beta5

Short Description

New Features:
- AI Training container creation, starting the training process from the SDK, loading local datasets
- Addition of post-process endpoint to DP server
- Starting, retrieving and deleting training
- Handling root id child versions
- Guard rails for workflow deletion
- Support for weight loading and tar files

Fixes:
- Enabled proper failure of CLI commands in the SDK
- Added compatibility of 3-hero methods with the new schema
- Improved training for the new schema
- Removed the image age check
- Improved credential and region specification for CI
- Installed the latest release of the superai SDK in AI base images and conda environments

Long description

Create a training using the BaseModel interface as follows:

```python
class MnistModel(BaseModel):
    def __init__(self, **kwargs):
        super(MnistModel, self).__init__(**kwargs)
        self.model = None

    def train(
        self,
        model_save_path,
        training_data,
        validation_data=None,
        test_data=None,
        production_data=None,
        encoder_trainable: bool = True,
        decoder_trainable: bool = True,
        hyperparameters: HyperParameterSpec = None,
        model_parameters: ModelParameters = None,
        callbacks=None,
        random_seed=default_random_seed,
    ):
        ...
```


Deploy a training job

The following instruction creates a training job on Kubernetes, where the `training_data` folder contains the training data. The hyperparameters and model parameters can be initialized as below:

```python
ai.training_deploy(
    orchestrator=TrainingOrchestrator.AWS_EKS,
    training_data_dir="./training_data",
    build_all_layers=False,
    training_parameters=TrainingParameters(
        hyperparameters=HyperParameterSpec(trainable=True, optimizer="adam", log_learning_rate=-3, epochs=10),
        model_parameter=ModelParameters(conv1_size=32, conv2_size=64, hidden1_size=500, dropout=0.8),
    ),
)
```


A CLI method was also added to run training locally using the SDK:

```bash
superai ai method train
```

0.1.0.beta4

Short description

- Enables showing a progress bar with rich.progress for tracking pushed Docker layers
- Checks whether the Docker daemon is running when creating a Docker client, raising an error otherwise
- Fixed an issue where local modules imported from the model file were not importable from the seldon-core-microservice
- Fixed port allocation to allow local testing of K8S containers using AI SDK functions
- Documentation: Python notebook for AI changes
- New `prediction view` CLI command
- Allows a DP to implement its own metric calculation logic and provides endpoints for other services
- Enables explicit model lineage between model versions via a shared root_id, allowing a version-sorted list to be queried
- Adds support for showing prediction exceptions and extended timestamps

Long description for major new features

Prediction CLI
New `prediction view` command:
```bash
$ superai ai prediction view cfa58bcf-679a-4934-9143-de424a233c09
{
    'id': 'cfa58bcf-679a-4934-9143-de424a233c09',
    'state': 'COMPLETED',
    'createdAt': '2022-01-14T15:18:29.336142+00:00',
    'model': {'id': 'ea3d1cc0-bc87-4140-b164-539cd78d24f1', 'name': 'visa_merchant_bert_matcher', 'version': 5},
    'instances': [
        {'id': 0, 'output': {'label': 'CIRCLE', 'score': 0.6789388656616211}, 'score': 0.678938865661621},
        {'id': 1, 'output': {'label': 'CIRCLE K', 'score': 0.6621372699737549}, 'score': 0.662137269973755},
        {'id': 2, 'output': {'label': 'CIRCLE CRAFT', 'score': 0.5772594213485718}, 'score': 0.577259421348572},
        {'id': 3, 'output': {'label': 'FULL CIRCLE', 'score': 0.5494794845581055}, 'score': 0.549479484558105},
        {'id': 4, 'output': {'label': 'RING CENTRAL', 'score': 0.5052887201309204}, 'score': 0.50528872013092},
        {'id': 5, 'output': {'label': 'YELLOW CARD SERVICES', 'score': 0.49029654264450073}, 'score': 0.490296542644501},
        {'id': 6, 'output': {'label': 'CIRCLE K SUNKUS', 'score': 0.48903560638427734}, 'score': 0.489035606384277},
        {'id': 7, 'output': {'label': 'GATEWAY', 'score': 0.4817895293235779}, 'score': 0.481789529323578},
        {'id': 8, 'output': {'label': '91 EXPRESS LANES', 'score': 0.4658350348472595}, 'score': 0.46583503484726},
        {'id': 9, 'output': {'label': 'HK EXPRESS', 'score': 0.45856574177742004}, 'score': 0.45856574177742}
    ]
}
```


Existing `deployment predict` command:
```bash
$ superai ai deployment predict ea3d1cc0-bc87-4140-b164-539cd78d24f1 --timeout 20 '{
    "merchant_name": "CIRCLE 7 EXPRESS",
    "line_of_business": "SERVICE STATIONS",
    "city": "Rio Grande City",
    "state": "TX",
    "zipcode": 78582
}'
[01/17/22 11:10:42] INFO Submitted prediction request with id: 60102ada-57f0-447c-a93a-72a1c9c1a9be - MainThread logger.py:84
[01/17/22 11:10:50] INFO Prediction 60102ada-57f0-447c-a93a-72a1c9c1a9be completed with state=meta_ai_prediction(state=COMPLETED) - MainThread logger.py:84
[
    ({'label': 'CIRCLE', 'score': 0.6789388656616211}, 0.678938865661621),
    ({'label': 'CIRCLE K', 'score': 0.6621372699737549}, 0.662137269973755),
    ({'label': 'CIRCLE CRAFT', 'score': 0.5772594213485718}, 0.577259421348572),
    ({'label': 'FULL CIRCLE', 'score': 0.5494794845581055}, 0.549479484558105),
    ({'label': 'RING CENTRAL', 'score': 0.5052887201309204}, 0.50528872013092),
    ({'label': 'YELLOW CARD SERVICES', 'score': 0.49029654264450073}, 0.490296542644501),
    ({'label': 'CIRCLE K SUNKUS', 'score': 0.48903560638427734}, 0.489035606384277),
    ({'label': 'GATEWAY', 'score': 0.4817895293235779}, 0.481789529323578),
    ({'label': '91 EXPRESS LANES', 'score': 0.4658350348472595}, 0.46583503484726),
    ({'label': 'HK EXPRESS', 'score': 0.45856574177742004}, 0.45856574177742)
]
```


Model Lineage
```python
client = Client()

# Create a new model
parent_id = client.add_model(name="my_model", version=1)

# Create a child model with parent_id as root_id
child_id = client.add_model(name="my_model", version=2, root_id=parent_id)

# Get the latest model in a lineage
latest_model = client.get_latest_model(parent_id)

# Get the root model in a lineage
root_model = client.get_root_model(child_id)

# List all models in a lineage
models_in_lineage = client.list_model_versions(child_id, sort_by_version=True)
```


View prediction exceptions

```bash
superai ai deployment predict ea3d1cc0-bc87-4140-b164-539cd78d24f1 --timeout 30 '{
    "merchant_name_BAD_KEY_CAUSING_EXCEPTION": "CIRCLE 7 EXPRESS",
    "line_of_business": "SERVICE STATIONS",
    "city": "Rio Grande City",
    "state": "TX",
    "zipcode": 78582
}'
[01/28/22 10:42:19] INFO Submitted prediction request with id: 082c6fc0-0dbf-4ac8-986f-3d760a0c068e -
[01/28/22 10:42:21] WARNING Prediction failed while waiting for completion:
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (503) from primary with message "{
    "code": 503,
    "type": "InternalServerException",
    "message": "Prediction failed"
}
". See https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#logEventViewer:group=/aws/sagemaker/Endpoints/model-dev-ea3d1cc0-bc87-4140-b164-539cd78d24f1-visa-merchant-be in account 185169359328 for more information.
Prediction failed. Check the logs for more information.
```

0.1.0.beta3

Features:
- Kubernetes Orchestrator
- Async Sagemaker Orchestrator
- Schema updates
- CLI and performance improvements

Long description for major new features

- Examples of Kubernetes Orchestrator

A new Orchestrator mode, `AWS_EKS`, has been introduced; a Kubernetes config can be passed as follows:

```python
ai_template = AITemplate(...)
ai = AI(...)

predictor: AWSPredictor = ai.deploy(
    orchestrator=Orchestrator.AWS_EKS,
    redeploy=True,
    properties={"kubernetes_config": {"cooldownPeriod": 300}},
)
```

More information about the usage of these properties will be added in the next release, together with the documentation.

- BaseModel updates
The BaseModel class now implements a `load` method that downloads weights from S3; the interface otherwise remains the same. Seldon uses this method during pod initialization.
The signature of the `predict` method in the BaseModel class has changed to `def predict(self, input, context=None)`; existing usages must be updated accordingly.
For Seldon, a `predict_raw` method was added that delegates to the `predict` implementation.
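As an illustration of the changed signature, a minimal sketch (the `BaseModel` stand-in and `EchoModel` below are hypothetical, shown only to make the example self-contained; the real class lives in the superai SDK):

```python
# Hypothetical stand-in for superai's BaseModel, for illustration only.
class BaseModel:
    def predict(self, input, context=None):
        raise NotImplementedError


class EchoModel(BaseModel):
    # Updated signature: `context` is a second, optional parameter.
    def predict(self, input, context=None):
        return {"output": input, "has_context": context is not None}


model = EchoModel()
result = model.predict({"merchant_name": "CIRCLE 7 EXPRESS"})
```

Callers that previously passed only the input continue to work, since `context` defaults to `None`.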

- Async Sagemaker Orchestrator
The async Sagemaker orchestrator can be used as follows:

```python
predictor: AWSPredictor = ai.deploy(
    orchestrator=Orchestrator.AWS_SAGEMAKER_ASYNC,
    enable_cuda=True,
    redeploy=True,
)
```


- CLI updates
New CLI commands have been added:

```bash
superai ai list --name <name> --version <version>
superai ai update id <uuid> --name <name> --description <description> --visibility <PRIVATE/PUBLIC>
```

0.1.0.beta2

New features
- The build process is now checked for success while building the container.
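
A build-success check like this can be sketched with the standard library (a minimal illustration; `run_build` is a hypothetical helper, not the SDK's actual implementation):

```python
import subprocess


def run_build(cmd):
    """Run a build command and raise if it does not succeed."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Surface the failure instead of silently continuing.
        raise RuntimeError(f"Build failed (exit {result.returncode}): {result.stderr}")
    return result.stdout


# For example, a container build might be invoked as:
# run_build(["docker", "build", "-t", "my-image", "."])
```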

Enhancements
- Fixed secret passing during the source-to-image build process
- Replaced print calls with logger calls
- Fixed conda environment creation and server process initialization during deployment of AI models
- Fixed the setup script run during deployment
- Fixed initialization of BaseModel during AI class creation; the model is now initialized only when the predict method is called explicitly
- Fixed a bug in orchestrator selection during deployment
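
The lazy-initialization pattern described here can be sketched as follows (the `LazyModel` wrapper and `CountingModel` are hypothetical, shown only for illustration; the SDK's actual implementation may differ):

```python
class LazyModel:
    """Defers constructing the wrapped model until predict() is first called."""

    def __init__(self, model_factory):
        self._factory = model_factory
        self._model = None

    def predict(self, input):
        # Construct the underlying model on first use only.
        if self._model is None:
            self._model = self._factory()
        return self._model.predict(input)


class CountingModel:
    """Toy model that records how many times it has been constructed."""

    instances = 0

    def __init__(self):
        CountingModel.instances += 1

    def predict(self, input):
        return input


lazy = LazyModel(CountingModel)
assert CountingModel.instances == 0  # nothing constructed yet
out = lazy.predict(42)               # first call triggers construction
```

This keeps AI class creation cheap, since the (potentially expensive) model construction only happens when a prediction is actually requested.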
