Python CLI

There are two major parts of the Python CLI:

  • YhatModel: used for constructing the model
  • Yhat: used for authentication, deployment, and sending requests for predictions

Before you begin building a model, be sure to import the Yhat package:

from yhat import Yhat, YhatModel, preprocess

YhatModel()

The YhatModel class is used to define the API endpoint for a model. It requires an execute method, which runs each time the model is called. This is the core of the API endpoint.

from yhat import Yhat, YhatModel, preprocess

#first define our class/name our model
class ReturnSentData(YhatModel):
    def execute(self, data):
        return {"data_sent": data}

To test locally and then deploy:

#instantiate the class and execute locally:
ReturnSentData().execute('{"data_key":"value"}')
Out[1]:
#{'data_sent': '{"data_key":"value"}'}

#specify the auth and then deploy
yh = Yhat("USERNAME", "API_KEY", "https://sandbox.yhathq.com/")
yh.deploy("SimpleModel", ReturnSentData, globals(), sure=True)

#Send the request to the model
yh.predict("SimpleModel",{"data_key":"value"})
Out[10]:
#{u'result': {u'data_sent': {u'data_key': u'value'}},
# u'version': 1,
# u'yhat_id': u'9fbcb56a89bf318d8b50c41654554e3c',
# u'yhat_model': u'SimpleModel'}

@preprocess

Preprocess is an optional decorator that can be useful for transforming data as it enters and exits the model.

The data argument of the execute method will be transformed. The default type for both input and output is dict.

Usage

@preprocess(in_type=dict, out_type=dict)

Arguments

  • in_type: either a dict or pd.DataFrame
  • out_type: either a dict or pd.DataFrame

from yhat import Yhat, YhatModel, preprocess
import pandas as pd

#first define our class/name our model
class ReturnSentData(YhatModel):
    @preprocess(in_type=pd.DataFrame, out_type=dict)
    def execute(self, data):
        return {"data_sent": data}

# If our example json looks like:
{
  "key1": "value1",
  "key2": "value2"
}


# out_type=DataFrame results in:
{
  "result": {
    "data_sent": [
      "value1",
      "value2"
    ]
  }
}

# out_type=dict results in:

{
  "result": {
    "data_sent": {
      "key1": "value1",
      "key2": "value2"
    }
  }
}
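The shape of those two responses can be sketched with pandas alone. This is a hypothetical reconstruction of the transformation, not the yhat source: it assumes a scalar JSON dict becomes a one-row DataFrame on the way in, and that serializing the row column-wise produces the list of values seen in the out_type=DataFrame response.

```python
import pandas as pd

# The incoming JSON payload from the example above
payload = {"key1": "value1", "key2": "value2"}

# With in_type=pd.DataFrame, a scalar dict plausibly becomes a
# one-row DataFrame (illustrative reconstruction only)
df = pd.DataFrame([payload])

# Serializing that row column-wise yields the bare list of values
# seen in the out_type=DataFrame response above
row_values = list(df.iloc[0])
print(row_values)  # ['value1', 'value2']

# out_type=dict, by contrast, keeps the key/value pairing intact
print(payload)  # {'key1': 'value1', 'key2': 'value2'}
```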

Setting the Auth

To deploy models, you'll need to pass your username, API key, and the URL of the ScienceOps master to a new instance of the Yhat class:

yh = Yhat("YOUR_USERNAME", "YOUR_APIKEY", "https://scienceops.url.com/")
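Rather than hard-coding credentials, one common pattern is to read them from environment variables at runtime. The variable names below are an assumption, not part of the yhat API:

```python
import os

# Hypothetical convention: keep credentials out of source control
# and read them from the environment when constructing the client
username = os.environ.get("SCIENCEOPS_USERNAME", "YOUR_USERNAME")
apikey = os.environ.get("SCIENCEOPS_APIKEY", "YOUR_APIKEY")
url = os.environ.get("SCIENCEOPS_URL", "https://scienceops.url.com/")

# yh = Yhat(username, apikey, url)
print(url)
```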

Yhat

The Yhat class has four methods:

Yhat.deploy()

Deploy a model to ScienceOps

This function captures the model.predict function and deploys a model on ScienceOps which can be called from any programming language via a REST API.

Usage

yhat.deploy(name, model, session, sure=False, packages=[], patch=None, dry_run=False, verbose=0, autodetect=True)

yh.deploy("HelloWorld", HelloWorld, globals())

Arguments

  • name (string): the name of the model to deploy to ScienceOps
  • model: the name of the YhatModel class
  • session (globals()): your Python session variables; must be set to globals()
  • sure (boolean, optional): if True, deploys without asking for confirmation (like -y in apt-get); if False, you will be prompted to confirm the deployment
  • packages (list, optional): Ubuntu packages to install when the model is built
  • patch (string, optional): a Python command (string) to prepend to the model deployment code
  • dry_run (boolean, optional): if True, tests that the model can be pickled; the model does not deploy
  • verbose (int, optional): determines the verbosity of logging during deployment (higher = more logs)
  • autodetect (boolean, optional): if False, only packages listed in the REQUIREMENTS array will be installed

Note: Arguments patch and verbose are not typically needed for most model deployments.

Examples

Deploy the "LPOptimizer_model" and don't require confirmation on deployment.

yh.deploy("LPOptimizer_model", LPOptimizer, globals(), sure=True)

Below, dry_run=True builds the model, prints the dependencies but does not deploy the model.

yh.deploy("LPOptimizer_model", LPOptimizer, globals(), dry_run=True)

#  extracting model
#  model specified requirements
#   [+] PuLP (warning: unversioned)
#  requirements automatically detected
#   [+] yhat==1.4.1
Out[12]: {'info': 'dry run complete', 'status': 'ok'}
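Since the dry run chiefly verifies that the model can be pickled, a rough stand-in for that check can be sketched with the standard library alone. The class below is illustrative (it has no yhat dependency):

```python
import pickle

class LPOptimizer:
    """Stand-in for a YhatModel subclass (illustrative only)."""
    def execute(self, data):
        return {"data_sent": data}

# A pickle round-trip approximates the serializability check
# that dry_run performs before any deployment happens
blob = pickle.dumps(LPOptimizer())
restored = pickle.loads(blob)
print(restored.execute({"x": 1}))  # {'data_sent': {'x': 1}}
```

If the class closes over something unpicklable (an open file handle, a database connection), pickle.dumps raises, which is exactly the failure a dry run is meant to surface before deployment.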

Install the Ubuntu packages "liblapack-dev" and "liblapack-doc-man" on model build

yh.deploy("LPOptimizer_model", LPOptimizer, globals(), packages=["liblapack-dev", "liblapack-doc-man"], dry_run=False)

Yhat.deploy_spark()

Deploy a Spark model to ScienceOps

Deploys a Spark model to a ScienceOps server. This is a special case of deploy.

Usage

Yhat.deploy_spark(name, model, session, sc, sure=False, packages=[], patch=None, dry_run=False, verbose=0, autodetect=True)

yh.deploy_spark("SparkModel", SparkModelClass, globals())

Arguments

  • name (string): name of your model
  • model (YhatModel): an instance of a Yhat model
  • session (globals()): your Python session variables (i.e. globals())
  • sc (SparkContext): your SparkContext; this is typically sc
  • packages (list): (deprecated in ScienceOps 2.7.x) being deprecated in favor of custom runtime images
  • sure (boolean): if True, forces the deployment (like -y in apt-get); if False or blank, the deployment must be confirmed
  • verbose (int): relative amount of logging info to display (higher = more logs)
  • autodetect (boolean): flag for the requirement auto-detection feature; if False, you should explicitly state the packages required for your model

Yhat.deploy_tensorflow()

Deploy a TensorFlow model to ScienceOps

Deploys a TensorFlow model to a ScienceOps server. This is a special case of deploy.

Usage

Yhat.deploy_tensorflow(name, model, session, sess, sure=False, packages=[], patch=None, dry_run=False, verbose=0, autodetect=True)

Arguments

  • name (string): name of your model
  • model (YhatModel): an instance of a Yhat model
  • session (globals()): your Python session variables (i.e. globals())
  • sess (tensorflow.Session, tensorflow.InteractiveSession): your tensorflow session variable. this is typically sess
  • packages (list): (deprecated in ScienceOps 2.7.x) this is being deprecated in favor of custom runtime images
  • sure (boolean): If true, then this will force a deployment (like -y in apt-get). If false or blank, deployment must be confirmed
  • verbose (int): Relative amount of logging info to display (higher = more logs)
  • autodetect (boolean): flag for using the requirement auto-detection feature. If False, you should explicitly state the packages required for your model.

Examples

See the TensorFlow example model

Yhat.predict()

Send data to a model via a REST API request from Python for a prediction.

Usage

Yhat.predict(model, data, model_owner=None, raw_input=False)

Arguments

  • model (string): the name of your model
  • data (dict or DataFrame): the data required to make a single prediction
  • model_owner (string, optional): username of the model owner, for shared models

Examples

yh.predict("LPOptimizer_model", {"activities": ["sleep", "work", "leisure"], "required_hours": [7, 10, 0], "happiness_per_hour": [1.5, 1, 2]})
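The payload above is a dict of parallel lists, so a mismatched length would only surface as a server-side error. A hypothetical pre-flight check before calling yh.predict:

```python
# The LPOptimizer payload is a dict of parallel lists; verify that
# every list has the same length before sending the request
payload = {
    "activities": ["sleep", "work", "leisure"],
    "required_hours": [7, 10, 0],
    "happiness_per_hour": [1.5, 1, 2],
}
lengths = {key: len(values) for key, values in payload.items()}
assert len(set(lengths.values())) == 1, f"ragged payload: {lengths}"
print("payload ok:", lengths)
```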

Request a prediction from the user 'Brandon'. Note that this user must have shared their model with you for this request to succeed.

yh.predict("HelloWorld", {"name":"Colin"}, model_owner='brandon')
