Google Cloud Native is in preview. Google Cloud Classic is fully supported.
Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi
google-native.aiplatform/v1beta1.getModelDeploymentMonitoringJob
Gets a ModelDeploymentMonitoringJob.
Using getModelDeploymentMonitoringJob
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getModelDeploymentMonitoringJob(args: GetModelDeploymentMonitoringJobArgs, opts?: InvokeOptions): Promise<GetModelDeploymentMonitoringJobResult>
function getModelDeploymentMonitoringJobOutput(args: GetModelDeploymentMonitoringJobOutputArgs, opts?: InvokeOptions): Output<GetModelDeploymentMonitoringJobResult>
def get_model_deployment_monitoring_job(location: Optional[str] = None,
model_deployment_monitoring_job_id: Optional[str] = None,
project: Optional[str] = None,
opts: Optional[InvokeOptions] = None) -> GetModelDeploymentMonitoringJobResult
def get_model_deployment_monitoring_job_output(location: Optional[pulumi.Input[str]] = None,
model_deployment_monitoring_job_id: Optional[pulumi.Input[str]] = None,
project: Optional[pulumi.Input[str]] = None,
opts: Optional[InvokeOptions] = None) -> Output[GetModelDeploymentMonitoringJobResult]
func LookupModelDeploymentMonitoringJob(ctx *Context, args *LookupModelDeploymentMonitoringJobArgs, opts ...InvokeOption) (*LookupModelDeploymentMonitoringJobResult, error)
func LookupModelDeploymentMonitoringJobOutput(ctx *Context, args *LookupModelDeploymentMonitoringJobOutputArgs, opts ...InvokeOption) LookupModelDeploymentMonitoringJobResultOutput
> Note: This function is named LookupModelDeploymentMonitoringJob in the Go SDK.
public static class GetModelDeploymentMonitoringJob
{
public static Task<GetModelDeploymentMonitoringJobResult> InvokeAsync(GetModelDeploymentMonitoringJobArgs args, InvokeOptions? opts = null)
public static Output<GetModelDeploymentMonitoringJobResult> Invoke(GetModelDeploymentMonitoringJobInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetModelDeploymentMonitoringJobResult> getModelDeploymentMonitoringJob(GetModelDeploymentMonitoringJobArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
fn::invoke:
  function: google-native:aiplatform/v1beta1:getModelDeploymentMonitoringJob
  arguments:
    # arguments dictionary
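For reference, a filled-in fn::invoke call might look like the following. The location, job ID, and project values are placeholders, not values from this page:

```yaml
variables:
  monitoringJob:
    fn::invoke:
      function: google-native:aiplatform/v1beta1:getModelDeploymentMonitoringJob
      arguments:
        location: us-central1                        # placeholder region
        modelDeploymentMonitoringJobId: "1234567890" # placeholder job ID
        project: my-gcp-project                      # placeholder project
```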
The following arguments are supported:
- location (string)
- modelDeploymentMonitoringJobId (string)
- project (string)

In the Python SDK these are location, model_deployment_monitoring_job_id, and project; C# and Go use the PascalCase forms (Location, ModelDeploymentMonitoringJobId, Project).
getModelDeploymentMonitoringJob Result
The following output properties are available:
- AnalysisInstanceSchemaUri (string) - YAML schema file URI describing the format of a single instance that you want TensorFlow Data Validation (TFDV) to analyze. If this field is empty, all feature data types are inferred from predict_instance_schema_uri, meaning that TFDV will use the data in the exact format (data type) as the prediction request/response. If there are any data type differences between the predict instance and the TFDV instance, this field can be used to override the schema. For models trained with Vertex AI, this field must be set, with all fields in the predict instance formatted as string.
- BigqueryTables (List<GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringBigQueryTableResponse>) - The BigQuery tables created for the job under the customer project; customers can run their own queries and analysis against them. There can be at most four log tables: 1. training data logging predict request/response; 2. serving data logging predict request/response.
- CreateTime (string) - Timestamp when this ModelDeploymentMonitoringJob was created.
- DisplayName (string) - The user-defined name of the ModelDeploymentMonitoringJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- EnableMonitoringPipelineLogs (bool) - If true, the scheduled monitoring pipeline logs are sent to Google Cloud Logging, including pipeline status and detected anomalies. Note that these logs incur cost, subject to Cloud Logging pricing.
- EncryptionSpec (GoogleCloudAiplatformV1beta1EncryptionSpecResponse) - Customer-managed encryption key spec for a ModelDeploymentMonitoringJob. If set, this ModelDeploymentMonitoringJob and all of its sub-resources will be secured by this key.
- Endpoint (string) - Endpoint resource name. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
- Error (GoogleRpcStatusResponse) - Only populated when the job's state is JOB_STATE_FAILED or JOB_STATE_CANCELLED.
- Labels (Dictionary<string, string>) - The labels with user-defined metadata to organize your ModelDeploymentMonitoringJob. Label keys and values can be no longer than 64 characters (Unicode code points) and can only contain lowercase letters, numeric characters, underscores, and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- LatestMonitoringPipelineMetadata (GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse) - Latest triggered monitoring pipeline metadata.
- LogTtl (string) - The TTL of the BigQuery tables in user projects that store logs. A day is the basic unit of the TTL, and the service takes the ceiling of TTL/86400 (one day). For example, { seconds: 3600 } indicates a TTL of one day.
- LoggingSamplingStrategy (GoogleCloudAiplatformV1beta1SamplingStrategyResponse) - Sample strategy for logging.
- ModelDeploymentMonitoringObjectiveConfigs (List<GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfigResponse>) - The config for monitoring objectives. This is a per-DeployedModel config; each DeployedModel needs to be configured separately.
- ModelDeploymentMonitoringScheduleConfig (GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfigResponse) - Schedule config for running the monitoring job.
- ModelMonitoringAlertConfig (GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigResponse) - Alert config for model monitoring.
- Name (string) - Resource name of a ModelDeploymentMonitoringJob.
- NextScheduleTime (string) - Timestamp when this monitoring pipeline is next scheduled to run.
- PredictInstanceSchemaUri (string) - YAML schema file URI describing the format of a single instance given to this Endpoint's prediction (and explanation). If not set, the predict schema is generated from collected predict requests.
- SamplePredictInstance (object) - Sample predict instance, in the same format as PredictRequest.instances; this can be set as a replacement for ModelDeploymentMonitoringJob.predict_instance_schema_uri. If not set, the predict schema is generated from collected predict requests.
- ScheduleState (string) - Schedule state when the monitoring job is in the Running state.
- State (string) - The detailed state of the monitoring job. While the job is being created, the state is 'PENDING'. Once the job is successfully created, the state is 'RUNNING'. Pausing the job sets the state to 'PAUSED'; resuming it returns the state to 'RUNNING'.
- StatsAnomaliesBaseDirectory (GoogleCloudAiplatformV1beta1GcsDestinationResponse) - Stats anomalies base folder path.
- UpdateTime (string) - Timestamp when this ModelDeploymentMonitoringJob was most recently updated.

The same properties are available from every SDK, using each language's naming convention (PascalCase in C# and Go, camelCase in TypeScript and Java, snake_case in Python) and its idiomatic collection types.
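The log TTL rounding described above (the ceiling of TTL/86400) can be sketched in plain Python. This illustrates only the arithmetic from the field description; it is not a call into the SDK:

```python
import math

SECONDS_PER_DAY = 86400

def ttl_in_days(ttl_seconds: int) -> int:
    """Round a TTL in seconds up to whole days: a day is the basic unit,
    and the service takes ceil(TTL / 86400), per the field description."""
    return math.ceil(ttl_seconds / SECONDS_PER_DAY)

# { seconds: 3600 } -> a TTL of one day, matching the docs' example.
print(ttl_in_days(3600))    # 1
print(ttl_in_days(86400))   # 1
print(ttl_in_days(90000))   # 2
```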
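The State transitions described above (PENDING while creating, RUNNING once created, PAUSED on pause, RUNNING again on resume) can be sketched as a tiny lookup table. The state names are real JobState values, but this helper is purely illustrative; the actual service exposes pause/resume RPCs:

```python
# Documented (current state, action) -> next state pairs, per the
# State field description. Illustrative only, not an SDK call.
TRANSITIONS = {
    ("PENDING", "create_succeeded"): "RUNNING",
    ("RUNNING", "pause"): "PAUSED",
    ("PAUSED", "resume"): "RUNNING",
}

def next_state(current: str, action: str) -> str:
    """Return the documented next state, or raise for an undocumented move."""
    try:
        return TRANSITIONS[(current, action)]
    except KeyError:
        raise ValueError(f"no documented transition from {current} via {action}")

print(next_state("RUNNING", "pause"))   # PAUSED
```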
Supporting Types
GoogleCloudAiplatformV1beta1BigQueryDestinationResponse
- OutputUri string - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- OutputUri string - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri String - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri string - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- output_uri str - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
- outputUri String - BigQuery URI to a project or table, up to 2000 characters long. When only the project is specified, the Dataset and Table are created. When the full table reference is specified, the Dataset must exist and the table must not exist. Accepted forms: * BigQuery path. For example: bq://projectId or bq://projectId.bqDatasetId or bq://projectId.bqDatasetId.bqTableId.
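The three accepted `bq://` forms can be checked with a simple pattern. This is a hypothetical client-side sketch (`is_valid_output_uri` and `_BQ_URI` are illustrative names, not part of any SDK), and it is deliberately simpler than the service's actual validation.

```python
import re

# Matches the three accepted forms:
#   bq://projectId
#   bq://projectId.bqDatasetId
#   bq://projectId.bqDatasetId.bqTableId
_BQ_URI = re.compile(r"^bq://[\w-]+(\.[\w$-]+){0,2}$")

def is_valid_output_uri(uri: str) -> bool:
    """Rough check: correct scheme/shape and within the 2000-character limit."""
    return len(uri) <= 2000 and bool(_BQ_URI.match(uri))

print(is_valid_output_uri("bq://my-project"))                      # True
print(is_valid_output_uri("bq://my-project.my_dataset.my_table"))  # True
print(is_valid_output_uri("gs://my-bucket/path"))                  # False
```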
GoogleCloudAiplatformV1beta1BigQuerySourceResponse
- InputUri string - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- InputUri string - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- inputUri String - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- inputUri string - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- input_uri str - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
- inputUri String - BigQuery URI to a table, up to 2000 characters long. Accepted forms: * BigQuery path. For example: bq://projectId.bqDatasetId.bqTableId.
GoogleCloudAiplatformV1beta1EncryptionSpecResponse
- KmsKeyName string - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- KmsKeyName string - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName string - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kms_key_name str - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
- kmsKeyName String - The Cloud KMS resource identifier of the customer-managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
GoogleCloudAiplatformV1beta1GcsDestinationResponse
- OutputUriPrefix string - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- OutputUriPrefix string - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix string - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- output_uri_prefix str - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
- outputUriPrefix String - Google Cloud Storage URI to the output directory. If the URI doesn't end with '/', a '/' is automatically appended. The directory is created if it doesn't exist.
GoogleCloudAiplatformV1beta1GcsSourceResponse
- Uris List<string> - Google Cloud Storage URI(s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- Uris []string - Google Cloud Storage URI(s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris List<String> - Google Cloud Storage URI(s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris string[] - Google Cloud Storage URI(s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris Sequence[str] - Google Cloud Storage URI(s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
- uris List<String> - Google Cloud Storage URI(s) to the input file(s). May contain wildcards. For more information on wildcards, see https://cloud.google.com/storage/docs/gsutil/addlhelp/WildcardNames.
GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringBigQueryTableResponse
- BigqueryTablePath string - The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: bq://.model_deployment_monitoring_._
- LogSource string - The source of the log.
- LogType string - The type of the log.
- BigqueryTablePath string - The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: bq://.model_deployment_monitoring_._
- LogSource string - The source of the log.
- LogType string - The type of the log.
- bigqueryTablePath String - The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: bq://.model_deployment_monitoring_._
- logSource String - The source of the log.
- logType String - The type of the log.
- bigqueryTablePath string - The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: bq://.model_deployment_monitoring_._
- logSource string - The source of the log.
- logType string - The type of the log.
- bigquery_table_path str - The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: bq://.model_deployment_monitoring_._
- log_source str - The source of the log.
- log_type str - The type of the log.
- bigqueryTablePath String - The created BigQuery table to store logs. Customers can run their own queries and analysis. Format: bq://.model_deployment_monitoring_._
- logSource String - The source of the log.
- logType String - The type of the log.
GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringJobLatestMonitoringPipelineMetadataResponse
- RunTime string - The time of the most recent monitoring pipeline run related to this job.
- Status Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleRpcStatusResponse - The status of the most recent monitoring pipeline.
- RunTime string - The time of the most recent monitoring pipeline run related to this job.
- Status GoogleRpcStatusResponse - The status of the most recent monitoring pipeline.
- runTime String - The time of the most recent monitoring pipeline run related to this job.
- status GoogleRpcStatusResponse - The status of the most recent monitoring pipeline.
- runTime string - The time of the most recent monitoring pipeline run related to this job.
- status GoogleRpcStatusResponse - The status of the most recent monitoring pipeline.
- run_time str - The time of the most recent monitoring pipeline run related to this job.
- status GoogleRpcStatusResponse - The status of the most recent monitoring pipeline.
- runTime String - The time of the most recent monitoring pipeline run related to this job.
- status Property Map - The status of the most recent monitoring pipeline.
GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringObjectiveConfigResponse
- DeployedModelId string - The DeployedModel ID of the objective config.
- ObjectiveConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigResponse - The objective config for the model monitoring job of this deployed model.
- DeployedModelId string - The DeployedModel ID of the objective config.
- ObjectiveConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigResponse - The objective config for the model monitoring job of this deployed model.
- deployedModelId String - The DeployedModel ID of the objective config.
- objectiveConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigResponse - The objective config for the model monitoring job of this deployed model.
- deployedModelId string - The DeployedModel ID of the objective config.
- objectiveConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigResponse - The objective config for the model monitoring job of this deployed model.
- deployed_model_id str - The DeployedModel ID of the objective config.
- objective_config GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigResponse - The objective config for the model monitoring job of this deployed model.
- deployedModelId String - The DeployedModel ID of the objective config.
- objectiveConfig Property Map - The objective config for the model monitoring job of this deployed model.
GoogleCloudAiplatformV1beta1ModelDeploymentMonitoringScheduleConfigResponse
- MonitorInterval string - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- MonitorWindow string - The time window of the prediction data included in each prediction dataset. This window specifies how long data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. E.g., if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- MonitorInterval string - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- MonitorWindow string - The time window of the prediction data included in each prediction dataset. This window specifies how long data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. E.g., if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitorInterval String - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitorWindow String - The time window of the prediction data included in each prediction dataset. This window specifies how long data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. E.g., if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitorInterval string - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitorWindow string - The time window of the prediction data included in each prediction dataset. This window specifies how long data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. E.g., if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitor_interval str - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitor_window str - The time window of the prediction data included in each prediction dataset. This window specifies how long data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. E.g., if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
- monitorInterval String - The model monitoring job scheduling interval. It will be rounded up to the next full hour. This defines how often the monitoring jobs are triggered.
- monitorWindow String - The time window of the prediction data included in each prediction dataset. This window specifies how long data should be collected from historical model results for each run. If not set, ModelDeploymentMonitoringScheduleConfig.monitor_interval will be used. E.g., if the current cutoff time is 2022-01-08 14:30:00 and monitor_window is set to 3600, then data from 2022-01-08 13:30:00 to 2022-01-08 14:30:00 will be retrieved and aggregated to calculate the monitoring statistics.
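The monitor_window arithmetic described above can be sketched as follows; `monitor_window_bounds` is a hypothetical helper for illustration only, not part of any SDK.

```python
from datetime import datetime, timedelta

def monitor_window_bounds(cutoff: datetime, monitor_window_seconds: int):
    """Data for a run is drawn from [cutoff - monitor_window, cutoff]."""
    return (cutoff - timedelta(seconds=monitor_window_seconds), cutoff)

# Reproduces the example from the field description above:
# cutoff 2022-01-08 14:30:00 with a 3600-second window.
start, end = monitor_window_bounds(datetime(2022, 1, 8, 14, 30), 3600)
print(start, end)  # 2022-01-08 13:30:00 2022-01-08 14:30:00
```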
GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigResponse
- UserEmails List<string> - The email addresses to send the alert to.
- UserEmails []string - The email addresses to send the alert to.
- userEmails List<String> - The email addresses to send the alert to.
- userEmails string[] - The email addresses to send the alert to.
- user_emails Sequence[str] - The email addresses to send the alert to.
- userEmails List<String> - The email addresses to send the alert to.
GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigResponse
- EmailAlertConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigResponse - Email alert config.
- EnableLogging bool - Dump the anomalies to Cloud Logging. The anomalies are put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further routed to Pub/Sub or any other service supported by Cloud Logging.
- NotificationChannels List<string> - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- EmailAlertConfig GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigResponse - Email alert config.
- EnableLogging bool - Dump the anomalies to Cloud Logging. The anomalies are put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further routed to Pub/Sub or any other service supported by Cloud Logging.
- NotificationChannels []string - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- emailAlertConfig GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigResponse - Email alert config.
- enableLogging Boolean - Dump the anomalies to Cloud Logging. The anomalies are put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further routed to Pub/Sub or any other service supported by Cloud Logging.
- notificationChannels List<String> - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- emailAlertConfig GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigResponse - Email alert config.
- enableLogging boolean - Dump the anomalies to Cloud Logging. The anomalies are put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further routed to Pub/Sub or any other service supported by Cloud Logging.
- notificationChannels string[] - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- email_alert_config GoogleCloudAiplatformV1beta1ModelMonitoringAlertConfigEmailAlertConfigResponse - Email alert config.
- enable_logging bool - Dump the anomalies to Cloud Logging. The anomalies are put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further routed to Pub/Sub or any other service supported by Cloud Logging.
- notification_channels Sequence[str] - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
- emailAlertConfig Property Map - Email alert config.
- enableLogging Boolean - Dump the anomalies to Cloud Logging. The anomalies are put into a JSON payload encoded from the proto google.cloud.aiplatform.logging.ModelMonitoringAnomaliesLogEntry. This can be further routed to Pub/Sub or any other service supported by Cloud Logging.
- notificationChannels List<String> - Resource names of the NotificationChannels to send the alert to. Must be of the format projects//notificationChannels/
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse
- Bigquery Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1BigQueryDestinationResponse - BigQuery location for BatchExplain output.
- Gcs Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsDestinationResponse - Cloud Storage location for BatchExplain output.
- PredictionFormat string - The storage format of the predictions generated by the BatchPrediction job.
- Bigquery GoogleCloudAiplatformV1beta1BigQueryDestinationResponse - BigQuery location for BatchExplain output.
- Gcs GoogleCloudAiplatformV1beta1GcsDestinationResponse - Cloud Storage location for BatchExplain output.
- PredictionFormat string - The storage format of the predictions generated by the BatchPrediction job.
- bigquery GoogleCloudAiplatformV1beta1BigQueryDestinationResponse - BigQuery location for BatchExplain output.
- gcs GoogleCloudAiplatformV1beta1GcsDestinationResponse - Cloud Storage location for BatchExplain output.
- predictionFormat String - The storage format of the predictions generated by the BatchPrediction job.
- bigquery GoogleCloudAiplatformV1beta1BigQueryDestinationResponse - BigQuery location for BatchExplain output.
- gcs GoogleCloudAiplatformV1beta1GcsDestinationResponse - Cloud Storage location for BatchExplain output.
- predictionFormat string - The storage format of the predictions generated by the BatchPrediction job.
- bigquery GoogleCloudAiplatformV1beta1BigQueryDestinationResponse - BigQuery location for BatchExplain output.
- gcs GoogleCloudAiplatformV1beta1GcsDestinationResponse - Cloud Storage location for BatchExplain output.
- prediction_format str - The storage format of the predictions generated by the BatchPrediction job.
- bigquery Property Map - BigQuery location for BatchExplain output.
- gcs Property Map - Cloud Storage location for BatchExplain output.
- predictionFormat String - The storage format of the predictions generated by the BatchPrediction job.
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigResponse
- EnableFeatureAttributes bool - Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
- ExplanationBaseline Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse - Predictions generated by the BatchPredictionJob using the baseline dataset.
- EnableFeatureAttributes bool - Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
- ExplanationBaseline GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse - Predictions generated by the BatchPredictionJob using the baseline dataset.
- enableFeatureAttributes Boolean - Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
- explanationBaseline GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse - Predictions generated by the BatchPredictionJob using the baseline dataset.
- enableFeatureAttributes boolean - Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
- explanationBaseline GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse - Predictions generated by the BatchPredictionJob using the baseline dataset.
- enable_feature_attributes bool - Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
- explanation_baseline GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigExplanationBaselineResponse - Predictions generated by the BatchPredictionJob using the baseline dataset.
- enableFeatureAttributes Boolean - Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI will log the feature attributions from the explain response and run skew/drift detection on them.
- explanationBaseline Property Map - Predictions generated by the BatchPredictionJob using the baseline dataset.
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponse
- AttributionScoreDriftThresholds Dictionary<string, string> - Key is the feature name and value is the threshold. The threshold here is against the attribution score distance between different time windows.
- DefaultDriftThreshold Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- DriftThresholds Dictionary<string, string> - Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.
- AttributionScoreDriftThresholds map[string]string - Key is the feature name and value is the threshold. The threshold here is against the attribution score distance between different time windows.
- DefaultDriftThreshold GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- DriftThresholds map[string]string - Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.
- attributionScoreDriftThresholds Map<String,String> - Key is the feature name and value is the threshold. The threshold here is against the attribution score distance between different time windows.
- defaultDriftThreshold GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- driftThresholds Map<String,String> - Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.
- attributionScoreDriftThresholds {[key: string]: string} - Key is the feature name and value is the threshold. The threshold here is against the attribution score distance between different time windows.
- defaultDriftThreshold GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- driftThresholds {[key: string]: string} - Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.
- attribution_score_drift_thresholds Mapping[str, str] - Key is the feature name and value is the threshold. The threshold here is against the attribution score distance between different time windows.
- default_drift_threshold GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- drift_thresholds Mapping[str, str] - Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.
- attributionScoreDriftThresholds Map<String> - Key is the feature name and value is the threshold. The threshold here is against the attribution score distance between different time windows.
- defaultDriftThreshold Property Map - Drift anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- driftThresholds Map<String> - Key is the feature name and value is the threshold. If a feature needs to be monitored for drift, a value threshold must be configured for that feature. The threshold here is against the feature distribution distance between different time windows.
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigResponse
- Explanation
Config Pulumi.Google Native. Aiplatform. V1Beta1. Inputs. Google Cloud Aiplatform V1beta1Model Monitoring Objective Config Explanation Config Response - The config for integrating with Vertex Explainable AI.
- Prediction
Drift Pulumi.Detection Config Google Native. Aiplatform. V1Beta1. Inputs. Google Cloud Aiplatform V1beta1Model Monitoring Objective Config Prediction Drift Detection Config Response - The config for drift of prediction data.
- Training
Dataset Pulumi.Google Native. Aiplatform. V1Beta1. Inputs. Google Cloud Aiplatform V1beta1Model Monitoring Objective Config Training Dataset Response - Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- Training
Prediction Pulumi.Skew Detection Config Google Native. Aiplatform. V1Beta1. Inputs. Google Cloud Aiplatform V1beta1Model Monitoring Objective Config Training Prediction Skew Detection Config Response - The config for skew between training data and prediction data.
- ExplanationConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigResponse - The config for integrating with Vertex Explainable AI.
- PredictionDriftDetectionConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponse - The config for drift of prediction data.
- TrainingDataset GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDatasetResponse - Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- TrainingPredictionSkewDetectionConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse - The config for skew between training data and prediction data.
- explanationConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigResponse - The config for integrating with Vertex Explainable AI.
- predictionDriftDetectionConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponse - The config for drift of prediction data.
- trainingDataset GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDatasetResponse - Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- trainingPredictionSkewDetectionConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse - The config for skew between training data and prediction data.
- explanationConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigResponse - The config for integrating with Vertex Explainable AI.
- predictionDriftDetectionConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponse - The config for drift of prediction data.
- trainingDataset GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDatasetResponse - Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- trainingPredictionSkewDetectionConfig GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse - The config for skew between training data and prediction data.
- explanation_config GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigExplanationConfigResponse - The config for integrating with Vertex Explainable AI.
- prediction_drift_detection_config GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigPredictionDriftDetectionConfigResponse - The config for drift of prediction data.
- training_dataset GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDatasetResponse - Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- training_prediction_skew_detection_config GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse - The config for skew between training data and prediction data.
- explanationConfig Property Map - The config for integrating with Vertex Explainable AI.
- predictionDriftDetectionConfig Property Map - The config for drift of prediction data.
- trainingDataset Property Map - Training dataset for models. This field has to be set only if TrainingPredictionSkewDetectionConfig is specified.
- trainingPredictionSkewDetectionConfig Property Map - The config for skew between training data and prediction data.
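As the field descriptions note, trainingDataset only has to be set when trainingPredictionSkewDetectionConfig is specified. That constraint can be sketched as a small validation helper (hypothetical, not part of the SDK) operating on the objective config as a plain dict:

```python
def validate_objective_config(cfg: dict) -> dict:
    """Check the documented constraint: trainingDataset must be set
    whenever trainingPredictionSkewDetectionConfig is specified.
    Hypothetical helper; field names follow the schema above."""
    if "trainingPredictionSkewDetectionConfig" in cfg and "trainingDataset" not in cfg:
        raise ValueError(
            "trainingDataset must be set when "
            "trainingPredictionSkewDetectionConfig is specified"
        )
    return cfg
```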
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingDatasetResponse
- BigquerySource Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1BigQuerySourceResponse - The BigQuery table of the unmanaged Dataset used to train this Model.
- DataFormat string - Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
- Dataset string - The resource name of the Dataset used to train this Model.
- GcsSource Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1GcsSourceResponse - The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
- LoggingSamplingStrategy Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SamplingStrategyResponse - Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
- TargetField string - The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
- BigquerySource GoogleCloudAiplatformV1beta1BigQuerySourceResponse - The BigQuery table of the unmanaged Dataset used to train this Model.
- DataFormat string - Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
- Dataset string - The resource name of the Dataset used to train this Model.
- GcsSource GoogleCloudAiplatformV1beta1GcsSourceResponse - The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
- LoggingSamplingStrategy GoogleCloudAiplatformV1beta1SamplingStrategyResponse - Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
- TargetField string - The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
- bigquerySource GoogleCloudAiplatformV1beta1BigQuerySourceResponse - The BigQuery table of the unmanaged Dataset used to train this Model.
- dataFormat String - Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
- dataset String - The resource name of the Dataset used to train this Model.
- gcsSource GoogleCloudAiplatformV1beta1GcsSourceResponse - The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
- loggingSamplingStrategy GoogleCloudAiplatformV1beta1SamplingStrategyResponse - Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
- targetField String - The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
- bigquerySource GoogleCloudAiplatformV1beta1BigQuerySourceResponse - The BigQuery table of the unmanaged Dataset used to train this Model.
- dataFormat string - Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
- dataset string - The resource name of the Dataset used to train this Model.
- gcsSource GoogleCloudAiplatformV1beta1GcsSourceResponse - The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
- loggingSamplingStrategy GoogleCloudAiplatformV1beta1SamplingStrategyResponse - Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
- targetField string - The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
- bigquery_source GoogleCloudAiplatformV1beta1BigQuerySourceResponse - The BigQuery table of the unmanaged Dataset used to train this Model.
- data_format str - Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
- dataset str - The resource name of the Dataset used to train this Model.
- gcs_source GoogleCloudAiplatformV1beta1GcsSourceResponse - The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
- logging_sampling_strategy GoogleCloudAiplatformV1beta1SamplingStrategyResponse - Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
- target_field str - The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
- bigquerySource Property Map - The BigQuery table of the unmanaged Dataset used to train this Model.
- dataFormat String - Data format of the dataset, only applicable if the input is from Google Cloud Storage. The possible formats are: "tf-record" The source file is a TFRecord file. "csv" The source file is a CSV file. "jsonl" The source file is a JSONL file.
- dataset String - The resource name of the Dataset used to train this Model.
- gcsSource Property Map - The Google Cloud Storage uri of the unmanaged Dataset used to train this Model.
- loggingSamplingStrategy Property Map - Strategy to sample data from Training Dataset. If not set, we process the whole dataset.
- targetField String - The target field name the model is to predict. This field will be excluded when doing Predict and (or) Explain for the training data.
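Putting the fields above together, a training-dataset block might look like the following sketch. All resource names and bucket paths are hypothetical, and the `uris` field of GcsSource is an assumption about that type's shape, not documented in this section:

```python
# Hypothetical training-dataset config using the field names documented above.
training_dataset = {
    "dataset": "projects/my-project/locations/us-central1/datasets/1234",  # hypothetical resource name
    "dataFormat": "csv",  # only applies to GCS input: "tf-record", "csv", or "jsonl"
    "gcsSource": {"uris": ["gs://my-bucket/train.csv"]},  # hypothetical bucket; "uris" field assumed
    "targetField": "churned",  # excluded when running Predict/Explain on training data
    "loggingSamplingStrategy": {"randomSampleConfig": {"sampleRate": 0.2}},
}

assert training_dataset["dataFormat"] in ("tf-record", "csv", "jsonl")
```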
GoogleCloudAiplatformV1beta1ModelMonitoringObjectiveConfigTrainingPredictionSkewDetectionConfigResponse
- AttributionScoreSkewThresholds Dictionary<string, string> - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- DefaultSkewThreshold Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- SkewThresholds Dictionary<string, string> - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- AttributionScoreSkewThresholds map[string]string - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- DefaultSkewThreshold GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- SkewThresholds map[string]string - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attributionScoreSkewThresholds Map<String,String> - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- defaultSkewThreshold GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skewThresholds Map<String,String> - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attributionScoreSkewThresholds {[key: string]: string} - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- defaultSkewThreshold GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skewThresholds {[key: string]: string} - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attribution_score_skew_thresholds Mapping[str, str] - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- default_skew_threshold GoogleCloudAiplatformV1beta1ThresholdConfigResponse - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skew_thresholds Mapping[str, str] - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
- attributionScoreSkewThresholds Map<String> - Key is the feature name and value is the threshold. The threshold here is against attribution score distance between the training and prediction feature.
- defaultSkewThreshold Property Map - Skew anomaly detection threshold used by all features. When the per-feature thresholds are not set, this field can be used to specify a threshold for all features.
- skewThresholds Map<String> - Key is the feature name and value is the threshold. If a feature needs to be monitored for skew, a value threshold must be configured for that feature. The threshold here is against feature distribution distance between the training and prediction feature.
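The per-feature/default threshold semantics above can be sketched as a small helper (hypothetical, not part of the SDK): each monitored feature's measured distribution distance is compared against its per-feature threshold, falling back to defaultSkewThreshold when no per-feature value is configured.

```python
def breached_features(distances, skew_thresholds, default_threshold=None):
    """Return features whose training/prediction distance exceeds their threshold.

    distances: feature name -> measured distribution distance
    skew_thresholds: feature name -> per-feature threshold value
    default_threshold: fallback used when a feature has no per-feature threshold
    """
    breached = []
    for feature, distance in distances.items():
        threshold = skew_thresholds.get(feature, default_threshold)
        # A feature with no threshold at all is not monitored, so no alert fires.
        if threshold is not None and distance > threshold:
            breached.append(feature)
    return breached
```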
GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigResponse
- SampleRate double - Sample rate (0, 1]
- SampleRate float64 - Sample rate (0, 1]
- sampleRate Double - Sample rate (0, 1]
- sampleRate number - Sample rate (0, 1]
- sample_rate float - Sample rate (0, 1]
- sampleRate Number - Sample rate (0, 1]
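To illustrate the sample-rate semantics (a probability in (0, 1] of keeping each record), here is a minimal sketch of random sampling, not the service's actual implementation:

```python
import random

def sample_rows(rows, sample_rate, seed=None):
    """Keep each row independently with probability sample_rate.

    sample_rate must lie in (0, 1]; a rate of 1.0 keeps every row.
    """
    if not 0.0 < sample_rate <= 1.0:
        raise ValueError("sample_rate must be in (0, 1]")
    rng = random.Random(seed)
    # random() is in [0, 1), so a rate of exactly 1.0 keeps everything.
    return [row for row in rows if rng.random() < sample_rate]
```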
GoogleCloudAiplatformV1beta1SamplingStrategyResponse
- RandomSampleConfig Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigResponse - Random sample config. Will support more sampling strategies later.
- RandomSampleConfig GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigResponse - Random sample config. Will support more sampling strategies later.
- randomSampleConfig GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigResponse - Random sample config. Will support more sampling strategies later.
- randomSampleConfig GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigResponse - Random sample config. Will support more sampling strategies later.
- random_sample_config GoogleCloudAiplatformV1beta1SamplingStrategyRandomSampleConfigResponse - Random sample config. Will support more sampling strategies later.
- randomSampleConfig Property Map - Random sample config. Will support more sampling strategies later.
GoogleCloudAiplatformV1beta1ThresholdConfigResponse
- Value double
- Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored. Otherwise no alert will be triggered for that feature.
- Value float64
- Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored. Otherwise no alert will be triggered for that feature.
- value Double
- Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored. Otherwise no alert will be triggered for that feature.
- value number
- Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored. Otherwise no alert will be triggered for that feature.
- value float
- Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored. Otherwise no alert will be triggered for that feature.
- value Number
- Specify a threshold value that can trigger the alert. If this threshold config is for feature distribution distance: 1. For a categorical feature, the distribution distance is calculated by L-infinity norm. 2. For a numerical feature, the distribution distance is calculated by Jensen–Shannon divergence. Each feature must have a non-zero threshold if it needs to be monitored. Otherwise no alert will be triggered for that feature.
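The two distance measures named in the description can be sketched directly; these are textbook definitions for discrete distributions, not the service's internal code:

```python
import math

def linf_distance(p, q):
    """L-infinity norm between two categorical distributions,
    given as dicts mapping category -> probability."""
    keys = set(p) | set(q)
    return max(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions,
    given as aligned lists of probabilities (natural-log base)."""
    def kl(a, b):
        # Kullback-Leibler divergence; terms with zero mass contribute nothing.
        return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

An alert would fire when the measured distance exceeds the configured threshold value for that feature.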
GoogleRpcStatusResponse
- Code int
- The status code, which should be an enum value of google.rpc.Code.
- Details List<ImmutableDictionary<string, string>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- Code int
- The status code, which should be an enum value of google.rpc.Code.
- Details []map[string]string
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Integer
- The status code, which should be an enum value of google.rpc.Code.
- details List<Map<String,String>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code number
- The status code, which should be an enum value of google.rpc.Code.
- details {[key: string]: string}[]
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code int
- The status code, which should be an enum value of google.rpc.Code.
- details Sequence[Mapping[str, str]]
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message str
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Number
- The status code, which should be an enum value of google.rpc.Code.
- details List<Map<String>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
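As a sketch of how a google.rpc.Status is typically consumed on the client side (in google.rpc.Code, code 0 is OK; the helper names are hypothetical):

```python
def is_error(status: dict) -> bool:
    """In google.rpc.Status, code 0 (OK) means success; any other code is an error."""
    return status.get("code", 0) != 0

def summarize(status: dict) -> str:
    """Build a developer-facing one-liner from the code and message fields."""
    if not is_error(status):
        return "OK"
    return f"error {status['code']}: {status.get('message', '')}"
```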
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0