The default settings from TorchServe should be sufficient for most use cases. However, if you want to customize TorchServe, the configuration options described in this topic are available.

There are three ways to configure TorchServe. In order of priority, they are:

1. Environment variables
2. Command line arguments
3. Configuration file (config.properties)

For example, the value of an environment variable overrides both command line arguments and a property in the configuration file, and the value of a command line argument overrides a value in the configuration file.

Environment variables ¶

You can change TorchServe behavior by setting environment variables. Note: environment variables have higher priority than command line arguments or config.properties; the value of an environment variable overrides other property values.

Command line parameters ¶

Customize TorchServe behavior by using the following command line arguments when you call torchserve:

--ts-config: TorchServe loads the specified configuration file if the TS_CONFIG_FILE environment variable is not set.
--model-store: Overrides the model_store property in the config.properties file.
--models: Overrides the load_models property in config.properties.
--log-config: Overrides the default log4j2.xml.
--foreground: Runs TorchServe in the foreground. If this option is disabled, TorchServe runs in the background.

For more detailed information about torchserve command line options, see Serve Models with TorchServe.

config.properties file ¶

TorchServe uses a config.properties file to store configurations. TorchServe uses the following, in order of priority, to locate this config.properties file:

1. If the TS_CONFIG_FILE environment variable is set, TorchServe loads the configuration from the path specified by the environment variable.
2. If the --ts-config parameter is passed to torchserve, TorchServe loads the configuration from the path specified by the parameter.
3. If there is a config.properties file in the folder where you call torchserve, TorchServe loads it from the current working directory.
4. If none of the above is specified, TorchServe loads a built-in configuration with default values.

Customize JVM options ¶

To control the TorchServe frontend memory footprint, configure the vmargs property in the config.properties file. Adjust the JVM options to fit your memory requirements.

Load models at startup ¶

You can configure TorchServe to load models during startup by setting the model_store and load_models properties.

load_models:
standalone: default: N/A, no models are loaded at startup.
all: Load all models present in model_store.
model1.mar, model2.mar: Load models in the specified MAR files from model_store.
model1=model1.mar, model2=model2.mar: Load models with the specified names and MAR files from model_store.

model_store:
standalone: default: N/A, loading models from the local disk is disabled.
pathname: The model store location is specified by the value of pathname.

Note: The model_store and load_models properties are overridden by command line parameters, if specified.

Configure TorchServe listening address and port ¶

inference_address: Inference API binding address.
management_address: Management API binding address.
metrics_address: Metrics API binding address.

The inference API listens on port 8080 and the management API listens on port 8081. TorchServe doesn't support authentication natively, so to avoid unauthorized access it only allows localhost access by default. To run predictions on models from a public IP address, specify the IP address as 0.0.0.0. To run predictions on models on a specific IP address, specify that IP address and port.
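Putting the properties above together, a minimal config.properties might look like the following sketch. The two API addresses shown are the documented localhost defaults on ports 8080 and 8081; the metrics port (8082), the heap size, and the model store path are assumptions chosen for illustration.

```properties
# Bind the three APIs; 127.0.0.1 restricts access to localhost.
inference_address=http://127.0.0.1:8080
management_address=http://127.0.0.1:8081
metrics_address=http://127.0.0.1:8082

# JVM options for the TorchServe frontend (placeholder heap size).
vmargs=-Xmx1g

# Load every model found in the model store at startup (placeholder path).
model_store=/path/to/model-store
load_models=all
```

To expose the inference API beyond localhost, you would change its binding address to http://0.0.0.0:8080, keeping in mind that TorchServe adds no authentication of its own.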
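As a sketch of the command line parameters described above, a typical invocation might look like this; the configuration file path, model store path, and model name are placeholders, not defaults.

```shell
# Start TorchServe in the foreground with an explicit configuration file,
# a model store, and one named model (all paths are placeholders).
torchserve --start \
  --foreground \
  --ts-config /path/to/config.properties \
  --model-store /path/to/model-store \
  --models model1=model1.mar

# Stop the running instance.
torchserve --stop
```

Because these flags override the corresponding config.properties entries, the command above wins over any model_store or load_models values in the file it loads.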
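The precedence rules above (environment variable over command line argument over config.properties entry) can be sketched as follows. This is an illustration of the lookup order only, not TorchServe's actual implementation; the function and the sample values are hypothetical.

```python
import os

def resolve(name, cli_args, file_props, env=os.environ):
    """Pick a setting the way TorchServe prioritizes sources:
    environment variable > command line argument > config.properties."""
    if name.upper() in env:
        return env[name.upper()]
    if name in cli_args:
        return cli_args[name]
    return file_props.get(name)

# Hypothetical values from two of the three sources:
cli_args = {"model_store": "/cli/model-store"}
file_props = {"model_store": "/file/model-store", "load_models": "all"}

print(resolve("model_store", cli_args, file_props))  # command line wins over the file
print(resolve("load_models", cli_args, file_props))  # falls back to config.properties
```

Setting the environment variable MODEL_STORE would then override both of the other sources for that setting.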