Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.aiplatform/v1beta1.PersistentResource
Creates a PersistentResource.
Create PersistentResource Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new PersistentResource(name: string, args: PersistentResourceArgs, opts?: CustomResourceOptions);
@overload
def PersistentResource(resource_name: str,
args: PersistentResourceArgs,
opts: Optional[ResourceOptions] = None)
@overload
def PersistentResource(resource_name: str,
opts: Optional[ResourceOptions] = None,
persistent_resource_id: Optional[str] = None,
resource_pools: Optional[Sequence[GoogleCloudAiplatformV1beta1ResourcePoolArgs]] = None,
display_name: Optional[str] = None,
encryption_spec: Optional[GoogleCloudAiplatformV1beta1EncryptionSpecArgs] = None,
labels: Optional[Mapping[str, str]] = None,
location: Optional[str] = None,
name: Optional[str] = None,
network: Optional[str] = None,
project: Optional[str] = None,
reserved_ip_ranges: Optional[Sequence[str]] = None,
resource_runtime_spec: Optional[GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs] = None)
func NewPersistentResource(ctx *Context, name string, args PersistentResourceArgs, opts ...ResourceOption) (*PersistentResource, error)
public PersistentResource(string name, PersistentResourceArgs args, CustomResourceOptions? opts = null)
public PersistentResource(String name, PersistentResourceArgs args)
public PersistentResource(String name, PersistentResourceArgs args, CustomResourceOptions options)
type: google-native:aiplatform/v1beta1:PersistentResource
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args PersistentResourceArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args PersistentResourceArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args PersistentResourceArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args PersistentResourceArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var persistentResourceResource = new GoogleNative.Aiplatform.V1Beta1.PersistentResource("persistentResourceResource", new()
{
PersistentResourceId = "string",
ResourcePools = new[]
{
new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ResourcePoolArgs
{
MachineSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpecArgs
{
AcceleratorCount = 0,
AcceleratorType = GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.AcceleratorTypeUnspecified,
MachineType = "string",
TpuTopology = "string",
},
AutoscalingSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecArgs
{
MaxReplicaCount = "string",
MinReplicaCount = "string",
},
DiskSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DiskSpecArgs
{
BootDiskSizeGb = 0,
BootDiskType = "string",
},
Id = "string",
ReplicaCount = "string",
},
},
DisplayName = "string",
EncryptionSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EncryptionSpecArgs
{
KmsKeyName = "string",
},
Labels =
{
{ "string", "string" },
},
Location = "string",
Name = "string",
Network = "string",
Project = "string",
ReservedIpRanges = new[]
{
"string",
},
ResourceRuntimeSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs
{
RaySpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1RaySpecArgs
{
HeadNodeResourcePoolId = "string",
ImageUri = "string",
ResourcePoolImages =
{
{ "string", "string" },
},
},
ServiceAccountSpec = new GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ServiceAccountSpecArgs
{
EnableCustomServiceAccount = false,
ServiceAccount = "string",
},
},
});
example, err := aiplatformv1beta1.NewPersistentResource(ctx, "persistentResourceResource", &aiplatformv1beta1.PersistentResourceArgs{
PersistentResourceId: pulumi.String("string"),
ResourcePools: aiplatform.GoogleCloudAiplatformV1beta1ResourcePoolArray{
&aiplatform.GoogleCloudAiplatformV1beta1ResourcePoolArgs{
MachineSpec: &aiplatform.GoogleCloudAiplatformV1beta1MachineSpecArgs{
AcceleratorCount: pulumi.Int(0),
AcceleratorType: aiplatformv1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeAcceleratorTypeUnspecified,
MachineType: pulumi.String("string"),
TpuTopology: pulumi.String("string"),
},
AutoscalingSpec: &aiplatform.GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecArgs{
MaxReplicaCount: pulumi.String("string"),
MinReplicaCount: pulumi.String("string"),
},
DiskSpec: &aiplatform.GoogleCloudAiplatformV1beta1DiskSpecArgs{
BootDiskSizeGb: pulumi.Int(0),
BootDiskType: pulumi.String("string"),
},
Id: pulumi.String("string"),
ReplicaCount: pulumi.String("string"),
},
},
DisplayName: pulumi.String("string"),
EncryptionSpec: &aiplatform.GoogleCloudAiplatformV1beta1EncryptionSpecArgs{
KmsKeyName: pulumi.String("string"),
},
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
Location: pulumi.String("string"),
Name: pulumi.String("string"),
Network: pulumi.String("string"),
Project: pulumi.String("string"),
ReservedIpRanges: pulumi.StringArray{
pulumi.String("string"),
},
ResourceRuntimeSpec: &aiplatform.GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs{
RaySpec: &aiplatform.GoogleCloudAiplatformV1beta1RaySpecArgs{
HeadNodeResourcePoolId: pulumi.String("string"),
ImageUri: pulumi.String("string"),
ResourcePoolImages: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
ServiceAccountSpec: &aiplatform.GoogleCloudAiplatformV1beta1ServiceAccountSpecArgs{
EnableCustomServiceAccount: pulumi.Bool(false),
ServiceAccount: pulumi.String("string"),
},
},
})
var persistentResourceResource = new PersistentResource("persistentResourceResource", PersistentResourceArgs.builder()
.persistentResourceId("string")
.resourcePools(GoogleCloudAiplatformV1beta1ResourcePoolArgs.builder()
.machineSpec(GoogleCloudAiplatformV1beta1MachineSpecArgs.builder()
.acceleratorCount(0)
.acceleratorType("ACCELERATOR_TYPE_UNSPECIFIED")
.machineType("string")
.tpuTopology("string")
.build())
.autoscalingSpec(GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecArgs.builder()
.maxReplicaCount("string")
.minReplicaCount("string")
.build())
.diskSpec(GoogleCloudAiplatformV1beta1DiskSpecArgs.builder()
.bootDiskSizeGb(0)
.bootDiskType("string")
.build())
.id("string")
.replicaCount("string")
.build())
.displayName("string")
.encryptionSpec(GoogleCloudAiplatformV1beta1EncryptionSpecArgs.builder()
.kmsKeyName("string")
.build())
.labels(Map.of("string", "string"))
.location("string")
.name("string")
.network("string")
.project("string")
.reservedIpRanges("string")
.resourceRuntimeSpec(GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs.builder()
.raySpec(GoogleCloudAiplatformV1beta1RaySpecArgs.builder()
.headNodeResourcePoolId("string")
.imageUri("string")
.resourcePoolImages(Map.of("string", "string"))
.build())
.serviceAccountSpec(GoogleCloudAiplatformV1beta1ServiceAccountSpecArgs.builder()
.enableCustomServiceAccount(false)
.serviceAccount("string")
.build())
.build())
.build());
persistent_resource_resource = google_native.aiplatform.v1beta1.PersistentResource("persistentResourceResource",
persistent_resource_id="string",
resource_pools=[google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1ResourcePoolArgs(
machine_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecArgs(
accelerator_count=0,
accelerator_type=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.ACCELERATOR_TYPE_UNSPECIFIED,
machine_type="string",
tpu_topology="string",
),
autoscaling_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecArgs(
max_replica_count="string",
min_replica_count="string",
),
disk_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1DiskSpecArgs(
boot_disk_size_gb=0,
boot_disk_type="string",
),
id="string",
replica_count="string",
)],
display_name="string",
encryption_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1EncryptionSpecArgs(
kms_key_name="string",
),
labels={
"string": "string",
},
location="string",
name="string",
network="string",
project="string",
reserved_ip_ranges=["string"],
resource_runtime_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs(
ray_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1RaySpecArgs(
head_node_resource_pool_id="string",
image_uri="string",
resource_pool_images={
"string": "string",
},
),
service_account_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1ServiceAccountSpecArgs(
enable_custom_service_account=False,
service_account="string",
),
))
const persistentResourceResource = new google_native.aiplatform.v1beta1.PersistentResource("persistentResourceResource", {
persistentResourceId: "string",
resourcePools: [{
machineSpec: {
acceleratorCount: 0,
acceleratorType: google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.AcceleratorTypeUnspecified,
machineType: "string",
tpuTopology: "string",
},
autoscalingSpec: {
maxReplicaCount: "string",
minReplicaCount: "string",
},
diskSpec: {
bootDiskSizeGb: 0,
bootDiskType: "string",
},
id: "string",
replicaCount: "string",
}],
displayName: "string",
encryptionSpec: {
kmsKeyName: "string",
},
labels: {
string: "string",
},
location: "string",
name: "string",
network: "string",
project: "string",
reservedIpRanges: ["string"],
resourceRuntimeSpec: {
raySpec: {
headNodeResourcePoolId: "string",
imageUri: "string",
resourcePoolImages: {
string: "string",
},
},
serviceAccountSpec: {
enableCustomServiceAccount: false,
serviceAccount: "string",
},
},
});
type: google-native:aiplatform/v1beta1:PersistentResource
properties:
  displayName: string
  encryptionSpec:
    kmsKeyName: string
  labels:
    string: string
  location: string
  name: string
  network: string
  persistentResourceId: string
  project: string
  reservedIpRanges:
    - string
  resourcePools:
    - autoscalingSpec:
        maxReplicaCount: string
        minReplicaCount: string
      diskSpec:
        bootDiskSizeGb: 0
        bootDiskType: string
      id: string
      machineSpec:
        acceleratorCount: 0
        acceleratorType: ACCELERATOR_TYPE_UNSPECIFIED
        machineType: string
        tpuTopology: string
      replicaCount: string
  resourceRuntimeSpec:
    raySpec:
      headNodeResourcePoolId: string
      imageUri: string
      resourcePoolImages:
        string: string
    serviceAccountSpec:
      enableCustomServiceAccount: false
      serviceAccount: string
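Beyond the placeholder values above, the following is a minimal sketch in Python of a Ray-enabled persistent resource. The project, location, pool ID, machine type, and replica count shown here are illustrative assumptions, not values taken from this page.
import pulumi_google_native as google_native

# Minimal sketch (illustrative values): one pool that also serves as the Ray head node.
ray_cluster = google_native.aiplatform.v1beta1.PersistentResource(
    "ray-cluster",
    persistent_resource_id="my-ray-cluster",      # assumed ID; must match the documented pattern
    location="us-central1",                       # assumed region
    project="my-project",                         # assumed project
    display_name="Ray cluster",
    resource_pools=[google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1ResourcePoolArgs(
        id="head-pool",
        replica_count="1",
        machine_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1MachineSpecArgs(
            machine_type="n1-standard-16",
        ),
    )],
    resource_runtime_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs(
        ray_spec=google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1RaySpecArgs(
            head_node_resource_pool_id="head-pool",
        ),
    ),
)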
PersistentResource Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The PersistentResource resource accepts the following input properties:
- PersistentResourceId string - Required. The ID to use for the PersistentResource, which becomes the final component of the PersistentResource's resource name. The maximum length is 63 characters, and valid characters are /^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$/.
- ResourcePools List<Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ResourcePool> - The spec of the pools of different resources.
- DisplayName string - Optional. The display name of the PersistentResource. The name can be up to 128 characters long and can consist of any UTF-8 characters.
- EncryptionSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1EncryptionSpec - Optional. Customer-managed encryption key spec for a PersistentResource. If set, this PersistentResource and all sub-resources of this PersistentResource will be secured by this key.
- Labels Dictionary<string, string> - Optional. The labels with user-defined metadata to organize PersistentResource. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
- Location string
- Name string - Immutable. Resource name of a PersistentResource.
- Network string - Optional. The full name of the Compute Engine network to be peered with Vertex AI to host the persistent resources. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. To specify this field, you must have already configured VPC Network Peering for Vertex AI. If this field is left unspecified, the resources aren't peered with any network.
- Project string
- ReservedIpRanges List<string> - Optional. A list of names for the reserved IP ranges under the VPC network that can be used for this persistent resource. If set, we will deploy the persistent resource within the provided IP ranges. Otherwise, the persistent resource is deployed to any IP ranges under the provided VPC network. Example: ['vertex-ai-ip-range'].
- ResourceRuntimeSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ResourceRuntimeSpec - Optional. Persistent Resource runtime spec. For example, used for Ray cluster configuration.
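The persistentResourceId constraint above can be checked locally before a deployment. A small sketch using only the documented pattern (the example IDs are assumptions):
import re

# Pattern taken from the persistentResourceId description above.
ID_PATTERN = re.compile(r"^[a-z]([a-z0-9-]{0,61}[a-z0-9])?$")

assert ID_PATTERN.match("my-ray-cluster")      # lowercase letters, digits, dashes are allowed
assert not ID_PATTERN.match("My_Cluster")      # uppercase letters and underscores are not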
Outputs
All input properties are implicitly available as output properties. Additionally, the PersistentResource resource produces the following output properties:
- CreateTime string - Time when the PersistentResource was created.
- Error Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleRpcStatusResponse - Only populated when the persistent resource's state is STOPPING or ERROR.
- Id string - The provider-assigned unique ID for this managed resource.
- ResourceRuntime Pulumi.GoogleNative.Aiplatform.V1Beta1.Outputs.GoogleCloudAiplatformV1beta1ResourceRuntimeResponse - Runtime information of the Persistent Resource.
- StartTime string - Time when the PersistentResource first entered the RUNNING state.
- State string - The detailed state of the PersistentResource.
- UpdateTime string - Time when the PersistentResource was most recently updated.
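These output properties can be exported from a Pulumi program once the resource exists. A short sketch in Python, assuming a resource named ray_cluster like the one in the earlier sketch on this page:
import pulumi

# Export a few of the documented output properties for inspection after `pulumi up`.
pulumi.export("persistentResourceState", ray_cluster.state)
pulumi.export("persistentResourceCreateTime", ray_cluster.create_time)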
Supporting Types
GoogleCloudAiplatformV1beta1DiskSpec, GoogleCloudAiplatformV1beta1DiskSpecArgs
- BootDiskSizeGb int - Size in GB of the boot disk (default is 100GB).
- BootDiskType string - Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
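As a sketch of how this type is used from Python, the block below builds a DiskSpec that spells out the documented defaults (100 GB pd-ssd boot disk); the values are illustrative.
import pulumi_google_native as google_native

# Explicitly set the documented defaults for a resource pool's boot disk.
disk_spec = google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1DiskSpecArgs(
    boot_disk_size_gb=100,
    boot_disk_type="pd-ssd",
)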
GoogleCloudAiplatformV1beta1DiskSpecResponse, GoogleCloudAiplatformV1beta1DiskSpecResponseArgs
- BootDiskSizeGb int - Size in GB of the boot disk (default is 100GB).
- BootDiskType string - Type of the boot disk (default is "pd-ssd"). Valid values: "pd-ssd" (Persistent Disk Solid State Drive) or "pd-standard" (Persistent Disk Hard Disk Drive).
GoogleCloudAiplatformV1beta1EncryptionSpec, GoogleCloudAiplatformV1beta1EncryptionSpecArgs
- KmsKeyName string - The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
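A sketch of supplying a customer-managed encryption key from Python; the key path follows the documented format and is a placeholder, not a real key.
import pulumi_google_native as google_native

# CMEK spec; the key must live in the same region as the persistent resource.
encryption_spec = google_native.aiplatform.v1beta1.GoogleCloudAiplatformV1beta1EncryptionSpecArgs(
    kms_key_name="projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key",
)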
GoogleCloudAiplatformV1beta1EncryptionSpecResponse, GoogleCloudAiplatformV1beta1EncryptionSpecResponseArgs
- KmsKeyName string - The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
GoogleCloudAiplatformV1beta1MachineSpec, GoogleCloudAiplatformV1beta1MachineSpecArgs
- AcceleratorCount int - The number of accelerators to attach to the machine.
- AcceleratorType Pulumi.GoogleNative.Aiplatform.V1Beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType - Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- MachineType string - Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- TpuTopology string - Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType, GoogleCloudAiplatformV1beta1MachineSpecAcceleratorTypeArgs
- AcceleratorTypeUnspecified (ACCELERATOR_TYPE_UNSPECIFIED) - Unspecified accelerator type, which means no accelerator.
- NvidiaTeslaK80 (NVIDIA_TESLA_K80) - Nvidia Tesla K80 GPU.
- NvidiaTeslaP100 (NVIDIA_TESLA_P100) - Nvidia Tesla P100 GPU.
- NvidiaTeslaV100 (NVIDIA_TESLA_V100) - Nvidia Tesla V100 GPU.
- NvidiaTeslaP4 (NVIDIA_TESLA_P4) - Nvidia Tesla P4 GPU.
- NvidiaTeslaT4 (NVIDIA_TESLA_T4) - Nvidia Tesla T4 GPU.
- NvidiaTeslaA100 (NVIDIA_TESLA_A100) - Nvidia Tesla A100 GPU.
- NvidiaA10080gb (NVIDIA_A100_80GB) - Nvidia A100 80GB GPU.
- NvidiaL4 (NVIDIA_L4) - Nvidia L4 GPU.
- NvidiaH10080gb (NVIDIA_H100_80GB) - Nvidia H100 80GB GPU.
- TpuV2 (TPU_V2) - TPU v2.
- TpuV3 (TPU_V3) - TPU v3.
- TpuV4Pod (TPU_V4_POD) - TPU v4.
- TpuV5Litepod (TPU_V5_LITEPOD) - TPU v5.
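A sketch of referencing the accelerator type enum from Python when building a machine spec; the machine type and the T4/count pairing are illustrative assumptions.
import pulumi_google_native as google_native

v1beta1 = google_native.aiplatform.v1beta1

# GPU machine spec using one of the enum values listed above.
machine_spec = v1beta1.GoogleCloudAiplatformV1beta1MachineSpecArgs(
    machine_type="n1-standard-16",
    accelerator_type=v1beta1.GoogleCloudAiplatformV1beta1MachineSpecAcceleratorType.NVIDIA_TESLA_T4,
    accelerator_count=1,
)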
GoogleCloudAiplatformV1beta1MachineSpecResponse, GoogleCloudAiplatformV1beta1MachineSpecResponseArgs
- AcceleratorCount int - The number of accelerators to attach to the machine.
- AcceleratorType string - Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
- MachineType string - Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
- TpuTopology string - Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
GoogleCloudAiplatformV1beta1RaySpec, GoogleCloudAiplatformV1beta1RaySpecArgs
- HeadNodeResourcePoolId string - Optional. Indicates which resource pool will serve as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
- ImageUri string - Optional. Default image for the user to choose a preferred ML framework (for example, TensorFlow or PyTorch) by choosing from Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if you need all the resource pools to have the same Ray image. Otherwise, use the resource_pool_images field.
- ResourcePoolImages Dictionary<string, string> - Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image, for users who need different images for different head/worker pools. This map needs to cover all the resource pool ids. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }.
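A sketch of the per-pool image mapping described above, written in Python; the pool IDs and image URIs are placeholders and must match the IDs of the resource pools you define.
import pulumi_google_native as google_native

v1beta1 = google_native.aiplatform.v1beta1

# Different Ray images per pool via resource_pool_images (covers every pool ID).
ray_spec = v1beta1.GoogleCloudAiplatformV1beta1RaySpecArgs(
    head_node_resource_pool_id="ray_head_node_pool",
    resource_pool_images={
        "ray_head_node_pool": "us-docker.pkg.dev/my-project/my-repo/ray-head:latest",
        "ray_worker_node_pool1": "us-docker.pkg.dev/my-project/my-repo/ray-worker:latest",
    },
)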
GoogleCloudAiplatformV1beta1RaySpecResponse, GoogleCloudAiplatformV1beta1RaySpecResponseArgs
- HeadNodeResourcePoolId string
- Optional. Indicates which resource pool will serve as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
- ImageUri string
- Optional. Default image for the preferred ML framework (for example, TensorFlow or PyTorch), chosen from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if all the resource pools should use the same Ray image; otherwise, use the resource_pool_images field.
- ResourcePoolImages Dictionary<string, string>
- Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image, used when different head/worker pools need different images. The map must cover all resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }
- HeadNodeResourcePoolId string
- Optional. Indicates which resource pool will serve as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
- ImageUri string
- Optional. Default image for the preferred ML framework (for example, TensorFlow or PyTorch), chosen from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if all the resource pools should use the same Ray image; otherwise, use the resource_pool_images field.
- ResourcePoolImages map[string]string
- Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image, used when different head/worker pools need different images. The map must cover all resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }
- headNodeResourcePoolId String
- Optional. Indicates which resource pool will serve as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
- imageUri String
- Optional. Default image for the preferred ML framework (for example, TensorFlow or PyTorch), chosen from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if all the resource pools should use the same Ray image; otherwise, use the resource_pool_images field.
- resourcePoolImages Map<String,String>
- Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image, used when different head/worker pools need different images. The map must cover all resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }
- headNodeResourcePoolId string
- Optional. Indicates which resource pool will serve as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
- imageUri string
- Optional. Default image for the preferred ML framework (for example, TensorFlow or PyTorch), chosen from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if all the resource pools should use the same Ray image; otherwise, use the resource_pool_images field.
- resourcePoolImages {[key: string]: string}
- Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image, used when different head/worker pools need different images. The map must cover all resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }
- head_node_resource_pool_id str
- Optional. Indicates which resource pool will serve as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
- image_uri str
- Optional. Default image for the preferred ML framework (for example, TensorFlow or PyTorch), chosen from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if all the resource pools should use the same Ray image; otherwise, use the resource_pool_images field.
- resource_pool_images Mapping[str, str]
- Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image, used when different head/worker pools need different images. The map must cover all resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }
- headNodeResourcePoolId String
- Optional. Indicates which resource pool will serve as the Ray head node (the first node within that pool). If this field isn't set, the machine from the first worker pool is used as the head node by default.
- imageUri String
- Optional. Default image for the preferred ML framework (for example, TensorFlow or PyTorch), chosen from the Vertex prebuilt images. Either this or resource_pool_images is required. Use this field if all the resource pools should use the same Ray image; otherwise, use the resource_pool_images field.
- resourcePoolImages Map<String>
- Optional. Required if image_uri isn't set. A map of resource_pool_id to prebuilt Ray image, used when different head/worker pools need different images. The map must cover all resource pool IDs. Example: { "ray_head_node_pool": "head image", "ray_worker_node_pool1": "worker image", "ray_worker_node_pool2": "another worker image" }
GoogleCloudAiplatformV1beta1ResourcePool, GoogleCloudAiplatformV1beta1ResourcePoolArgs
- MachineSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpec
- Immutable. The specification of a single machine.
- AutoscalingSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpec
- Optional. Spec to configure GKE autoscaling.
- DiskSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DiskSpec
- Optional. Disk spec for the machine in this node pool.
- Id string
- Immutable. The unique ID in a PersistentResource for referring to this resource pool. Users can specify it if necessary; otherwise, it's generated automatically.
- ReplicaCount string
- Optional. The total number of machines to use for this resource pool.
- MachineSpec GoogleCloudAiplatformV1beta1MachineSpec
- Immutable. The specification of a single machine.
- AutoscalingSpec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpec
- Optional. Spec to configure GKE autoscaling.
- DiskSpec GoogleCloudAiplatformV1beta1DiskSpec
- Optional. Disk spec for the machine in this node pool.
- Id string
- Immutable. The unique ID in a PersistentResource for referring to this resource pool. Users can specify it if necessary; otherwise, it's generated automatically.
- ReplicaCount string
- Optional. The total number of machines to use for this resource pool.
- machineSpec GoogleCloudAiplatformV1beta1MachineSpec
- Immutable. The specification of a single machine.
- autoscalingSpec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpec
- Optional. Spec to configure GKE autoscaling.
- diskSpec GoogleCloudAiplatformV1beta1DiskSpec
- Optional. Disk spec for the machine in this node pool.
- id String
- Immutable. The unique ID in a PersistentResource for referring to this resource pool. Users can specify it if necessary; otherwise, it's generated automatically.
- replicaCount String
- Optional. The total number of machines to use for this resource pool.
- machineSpec GoogleCloudAiplatformV1beta1MachineSpec
- Immutable. The specification of a single machine.
- autoscalingSpec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpec
- Optional. Spec to configure GKE autoscaling.
- diskSpec GoogleCloudAiplatformV1beta1DiskSpec
- Optional. Disk spec for the machine in this node pool.
- id string
- Immutable. The unique ID in a PersistentResource for referring to this resource pool. Users can specify it if necessary; otherwise, it's generated automatically.
- replicaCount string
- Optional. The total number of machines to use for this resource pool.
- machine_spec GoogleCloudAiplatformV1beta1MachineSpec
- Immutable. The specification of a single machine.
- autoscaling_spec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpec
- Optional. Spec to configure GKE autoscaling.
- disk_spec GoogleCloudAiplatformV1beta1DiskSpec
- Optional. Disk spec for the machine in this node pool.
- id str
- Immutable. The unique ID in a PersistentResource for referring to this resource pool. Users can specify it if necessary; otherwise, it's generated automatically.
- replica_count str
- Optional. The total number of machines to use for this resource pool.
- machineSpec Property Map
- Immutable. The specification of a single machine.
- autoscalingSpec Property Map
- Optional. Spec to configure GKE autoscaling.
- diskSpec Property Map
- Optional. Disk spec for the machine in this node pool.
- id String
- Immutable. The unique ID in a PersistentResource for referring to this resource pool. Users can specify it if necessary; otherwise, it's generated automatically.
- replicaCount String
- Optional. The total number of machines to use for this resource pool.
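To show how the ResourcePool fields fit together, the following is a hedged Python sketch of a single worker pool. The machine type, disk settings, and replica count are illustrative placeholders, and the machine_type / boot_disk_* field names are assumed from the MachineSpec and DiskSpec types documented elsewhere on this page.

import pulumi_google_native.aiplatform.v1beta1 as aiplatform

worker_pool = aiplatform.GoogleCloudAiplatformV1beta1ResourcePoolArgs(
    id="ray_worker_node_pool1",  # optional; generated automatically if omitted
    machine_spec=aiplatform.GoogleCloudAiplatformV1beta1MachineSpecArgs(
        machine_type="n1-standard-8",  # placeholder machine type
    ),
    disk_spec=aiplatform.GoogleCloudAiplatformV1beta1DiskSpecArgs(
        boot_disk_type="pd-ssd",       # placeholder disk settings
        boot_disk_size_gb=100,
    ),
    replica_count="4",                 # string-typed in this API
)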
GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpec, GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecArgs
- MaxReplicaCount string
- Optional. Maximum number of replicas in the node pool; must be ≥ replica_count and > min_replica_count, otherwise an error is thrown.
- MinReplicaCount string
- Optional. Minimum number of replicas in the node pool; must be ≤ replica_count and < max_replica_count, otherwise an error is thrown.
- MaxReplicaCount string
- Optional. Maximum number of replicas in the node pool; must be ≥ replica_count and > min_replica_count, otherwise an error is thrown.
- MinReplicaCount string
- Optional. Minimum number of replicas in the node pool; must be ≤ replica_count and < max_replica_count, otherwise an error is thrown.
- maxReplicaCount String
- Optional. Maximum number of replicas in the node pool; must be ≥ replica_count and > min_replica_count, otherwise an error is thrown.
- minReplicaCount String
- Optional. Minimum number of replicas in the node pool; must be ≤ replica_count and < max_replica_count, otherwise an error is thrown.
- maxReplicaCount string
- Optional. Maximum number of replicas in the node pool; must be ≥ replica_count and > min_replica_count, otherwise an error is thrown.
- minReplicaCount string
- Optional. Minimum number of replicas in the node pool; must be ≤ replica_count and < max_replica_count, otherwise an error is thrown.
- max_replica_count str
- Optional. Maximum number of replicas in the node pool; must be ≥ replica_count and > min_replica_count, otherwise an error is thrown.
- min_replica_count str
- Optional. Minimum number of replicas in the node pool; must be ≤ replica_count and < max_replica_count, otherwise an error is thrown.
- maxReplicaCount String
- Optional. Maximum number of replicas in the node pool; must be ≥ replica_count and > min_replica_count, otherwise an error is thrown.
- minReplicaCount String
- Optional. Minimum number of replicas in the node pool; must be ≤ replica_count and < max_replica_count, otherwise an error is thrown.
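The min/max constraints above are easiest to see in context. The sketch below uses placeholder values chosen so that min_replica_count ≤ replica_count ≤ max_replica_count holds for the worker pool sketched earlier; the typed *Args class names are assumed to follow this reference.

import pulumi_google_native.aiplatform.v1beta1 as aiplatform

# 1 <= 4 <= 8, satisfying the documented constraints.
autoscaling = aiplatform.GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecArgs(
    min_replica_count="1",
    max_replica_count="8",
)

autoscaled_pool = aiplatform.GoogleCloudAiplatformV1beta1ResourcePoolArgs(
    id="ray_worker_node_pool1",
    machine_spec=aiplatform.GoogleCloudAiplatformV1beta1MachineSpecArgs(
        machine_type="n1-standard-8",  # placeholder
    ),
    replica_count="4",
    autoscaling_spec=autoscaling,
)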
GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecResponse, GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecResponseArgs
- MaxReplicaCount string
- Optional. Maximum number of replicas in the node pool; must be ≥ replica_count and > min_replica_count, otherwise an error is thrown.
- MinReplicaCount string
- Optional. Minimum number of replicas in the node pool; must be ≤ replica_count and < max_replica_count, otherwise an error is thrown.
- MaxReplicaCount string
- Optional. Maximum number of replicas in the node pool; must be ≥ replica_count and > min_replica_count, otherwise an error is thrown.
- MinReplicaCount string
- Optional. Minimum number of replicas in the node pool; must be ≤ replica_count and < max_replica_count, otherwise an error is thrown.
- maxReplicaCount String
- Optional. Maximum number of replicas in the node pool; must be ≥ replica_count and > min_replica_count, otherwise an error is thrown.
- minReplicaCount String
- Optional. Minimum number of replicas in the node pool; must be ≤ replica_count and < max_replica_count, otherwise an error is thrown.
- maxReplicaCount string
- Optional. Maximum number of replicas in the node pool; must be ≥ replica_count and > min_replica_count, otherwise an error is thrown.
- minReplicaCount string
- Optional. Minimum number of replicas in the node pool; must be ≤ replica_count and < max_replica_count, otherwise an error is thrown.
- max_replica_count str
- Optional. Maximum number of replicas in the node pool; must be ≥ replica_count and > min_replica_count, otherwise an error is thrown.
- min_replica_count str
- Optional. Minimum number of replicas in the node pool; must be ≤ replica_count and < max_replica_count, otherwise an error is thrown.
- maxReplicaCount String
- Optional. Maximum number of replicas in the node pool; must be ≥ replica_count and > min_replica_count, otherwise an error is thrown.
- minReplicaCount String
- Optional. Minimum number of replicas in the node pool; must be ≤ replica_count and < max_replica_count, otherwise an error is thrown.
GoogleCloudAiplatformV1beta1ResourcePoolResponse, GoogleCloudAiplatformV1beta1ResourcePoolResponseArgs
- AutoscalingSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecResponse
- Optional. Spec to configure GKE autoscaling.
- DiskSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1DiskSpecResponse
- Optional. Disk spec for the machine in this node pool.
- MachineSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1MachineSpecResponse
- Immutable. The specification of a single machine.
- ReplicaCount string
- Optional. The total number of machines to use for this resource pool.
- UsedReplicaCount string
- The number of machines currently in use by training jobs for this resource pool. This will replace idle_replica_count.
- AutoscalingSpec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecResponse
- Optional. Spec to configure GKE autoscaling.
- DiskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
- Optional. Disk spec for the machine in this node pool.
- MachineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Immutable. The specification of a single machine.
- ReplicaCount string
- Optional. The total number of machines to use for this resource pool.
- UsedReplicaCount string
- The number of machines currently in use by training jobs for this resource pool. This will replace idle_replica_count.
- autoscalingSpec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecResponse
- Optional. Spec to configure GKE autoscaling.
- diskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
- Optional. Disk spec for the machine in this node pool.
- machineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Immutable. The specification of a single machine.
- replicaCount String
- Optional. The total number of machines to use for this resource pool.
- usedReplicaCount String
- The number of machines currently in use by training jobs for this resource pool. This will replace idle_replica_count.
- autoscalingSpec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecResponse
- Optional. Spec to configure GKE autoscaling.
- diskSpec GoogleCloudAiplatformV1beta1DiskSpecResponse
- Optional. Disk spec for the machine in this node pool.
- machineSpec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Immutable. The specification of a single machine.
- replicaCount string
- Optional. The total number of machines to use for this resource pool.
- usedReplicaCount string
- The number of machines currently in use by training jobs for this resource pool. This will replace idle_replica_count.
- autoscaling_spec GoogleCloudAiplatformV1beta1ResourcePoolAutoscalingSpecResponse
- Optional. Spec to configure GKE autoscaling.
- disk_spec GoogleCloudAiplatformV1beta1DiskSpecResponse
- Optional. Disk spec for the machine in this node pool.
- machine_spec GoogleCloudAiplatformV1beta1MachineSpecResponse
- Immutable. The specification of a single machine.
- replica_count str
- Optional. The total number of machines to use for this resource pool.
- used_replica_count str
- The number of machines currently in use by training jobs for this resource pool. This will replace idle_replica_count.
- autoscalingSpec Property Map
- Optional. Spec to configure GKE autoscaling.
- diskSpec Property Map
- Optional. Disk spec for the machine in this node pool.
- machineSpec Property Map
- Immutable. The specification of a single machine.
- replicaCount String
- Optional. The total number of machines to use for this resource pool.
- usedReplicaCount String
- The number of machines currently in use by training jobs for this resource pool. This will replace idle_replica_count.
GoogleCloudAiplatformV1beta1ResourceRuntimeResponse, GoogleCloudAiplatformV1beta1ResourceRuntimeResponseArgs
- AccessUris Dictionary<string, string>
- URIs for the user to connect to the cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001", "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" }
- NotebookRuntimeTemplate string
- The resource name of the NotebookRuntimeTemplate for the RoV Persistent Cluster. The NotebookRuntimeTemplate is created in the same VPC (if set), and with the same Ray and Python versions as the Persistent Cluster. Example: "projects/1000/locations/us-central1/notebookRuntimeTemplates/abc123"
- AccessUris map[string]string
- URIs for the user to connect to the cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001", "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" }
- NotebookRuntimeTemplate string
- The resource name of the NotebookRuntimeTemplate for the RoV Persistent Cluster. The NotebookRuntimeTemplate is created in the same VPC (if set), and with the same Ray and Python versions as the Persistent Cluster. Example: "projects/1000/locations/us-central1/notebookRuntimeTemplates/abc123"
- accessUris Map<String,String>
- URIs for the user to connect to the cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001", "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" }
- notebookRuntimeTemplate String
- The resource name of the NotebookRuntimeTemplate for the RoV Persistent Cluster. The NotebookRuntimeTemplate is created in the same VPC (if set), and with the same Ray and Python versions as the Persistent Cluster. Example: "projects/1000/locations/us-central1/notebookRuntimeTemplates/abc123"
- accessUris {[key: string]: string}
- URIs for the user to connect to the cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001", "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" }
- notebookRuntimeTemplate string
- The resource name of the NotebookRuntimeTemplate for the RoV Persistent Cluster. The NotebookRuntimeTemplate is created in the same VPC (if set), and with the same Ray and Python versions as the Persistent Cluster. Example: "projects/1000/locations/us-central1/notebookRuntimeTemplates/abc123"
- access_uris Mapping[str, str]
- URIs for the user to connect to the cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001", "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" }
- notebook_runtime_template str
- The resource name of the NotebookRuntimeTemplate for the RoV Persistent Cluster. The NotebookRuntimeTemplate is created in the same VPC (if set), and with the same Ray and Python versions as the Persistent Cluster. Example: "projects/1000/locations/us-central1/notebookRuntimeTemplates/abc123"
- accessUris Map<String>
- URIs for the user to connect to the cluster. Example: { "RAY_HEAD_NODE_INTERNAL_IP": "head-node-IP:10001", "RAY_DASHBOARD_URI": "ray-dashboard-address:8888" }
- notebookRuntimeTemplate String
- The resource name of the NotebookRuntimeTemplate for the RoV Persistent Cluster. The NotebookRuntimeTemplate is created in the same VPC (if set), and with the same Ray and Python versions as the Persistent Cluster. Example: "projects/1000/locations/us-central1/notebookRuntimeTemplates/abc123"
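Because access_uris and notebook_runtime_template are output-only, a stack would typically just export them. The sketch below assumes a PersistentResource named cluster defined elsewhere in the program and assumes the resource exposes a resource_runtime output matching the response type above; that property name is an assumption, not something confirmed by this section.

import pulumi

# `cluster` is assumed to be a PersistentResource created elsewhere;
# `resource_runtime` is assumed to be its output matching
# GoogleCloudAiplatformV1beta1ResourceRuntimeResponse.
pulumi.export(
    "ray_dashboard_uri",
    cluster.resource_runtime.apply(
        lambda rr: (rr.access_uris or {}).get("RAY_DASHBOARD_URI")),
)
pulumi.export(
    "notebook_runtime_template",
    cluster.resource_runtime.apply(lambda rr: rr.notebook_runtime_template),
)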
GoogleCloudAiplatformV1beta1ResourceRuntimeSpec, GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs
- RaySpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1RaySpec
- Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
- ServiceAccountSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ServiceAccountSpec
- Optional. Configures the use of workload identity on the PersistentResource.
- RaySpec GoogleCloudAiplatformV1beta1RaySpec
- Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
- ServiceAccountSpec GoogleCloudAiplatformV1beta1ServiceAccountSpec
- Optional. Configures the use of workload identity on the PersistentResource.
- raySpec GoogleCloudAiplatformV1beta1RaySpec
- Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
- serviceAccountSpec GoogleCloudAiplatformV1beta1ServiceAccountSpec
- Optional. Configures the use of workload identity on the PersistentResource.
- raySpec GoogleCloudAiplatformV1beta1RaySpec
- Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
- serviceAccountSpec GoogleCloudAiplatformV1beta1ServiceAccountSpec
- Optional. Configures the use of workload identity on the PersistentResource.
- ray_spec GoogleCloudAiplatformV1beta1RaySpec
- Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
- service_account_spec GoogleCloudAiplatformV1beta1ServiceAccountSpec
- Optional. Configures the use of workload identity on the PersistentResource.
- raySpec Property Map
- Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
- serviceAccountSpec Property Map
- Optional. Configures the use of workload identity on the PersistentResource.
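Tying the runtime spec to the resource itself, here is a hedged Python sketch that combines the earlier worker_pool and ray_spec placeholders into a PersistentResource; the resource name and persistent_resource_id are illustrative only.

import pulumi_google_native.aiplatform.v1beta1 as aiplatform

# `worker_pool` and `ray_spec` refer to the placeholder sketches above.
cluster = aiplatform.PersistentResource(
    "ray-cluster",
    persistent_resource_id="my-ray-cluster",
    resource_pools=[worker_pool],
    resource_runtime_spec=aiplatform.GoogleCloudAiplatformV1beta1ResourceRuntimeSpecArgs(
        ray_spec=ray_spec,
    ),
)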
GoogleCloudAiplatformV1beta1ResourceRuntimeSpecResponse, GoogleCloudAiplatformV1beta1ResourceRuntimeSpecResponseArgs
- RaySpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1RaySpecResponse
- Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
- ServiceAccountSpec Pulumi.GoogleNative.Aiplatform.V1Beta1.Inputs.GoogleCloudAiplatformV1beta1ServiceAccountSpecResponse
- Optional. Configures the use of workload identity on the PersistentResource.
- RaySpec GoogleCloudAiplatformV1beta1RaySpecResponse
- Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
- ServiceAccountSpec GoogleCloudAiplatformV1beta1ServiceAccountSpecResponse
- Optional. Configures the use of workload identity on the PersistentResource.
- raySpec GoogleCloudAiplatformV1beta1RaySpecResponse
- Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
- serviceAccountSpec GoogleCloudAiplatformV1beta1ServiceAccountSpecResponse
- Optional. Configures the use of workload identity on the PersistentResource.
- raySpec GoogleCloudAiplatformV1beta1RaySpecResponse
- Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
- serviceAccountSpec GoogleCloudAiplatformV1beta1ServiceAccountSpecResponse
- Optional. Configures the use of workload identity on the PersistentResource.
- ray_spec GoogleCloudAiplatformV1beta1RaySpecResponse
- Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
- service_account_spec GoogleCloudAiplatformV1beta1ServiceAccountSpecResponse
- Optional. Configures the use of workload identity on the PersistentResource.
- raySpec Property Map
- Optional. Ray cluster configuration. Required when creating a dedicated RayCluster on the PersistentResource.
- serviceAccountSpec Property Map
- Optional. Configures the use of workload identity on the PersistentResource.
GoogleCloudAiplatformV1beta1ServiceAccountSpec, GoogleCloudAiplatformV1beta1ServiceAccountSpecArgs
- EnableCustomServiceAccount bool
- If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used.
- ServiceAccount string
- Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. Only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
- EnableCustomServiceAccount bool
- If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used.
- ServiceAccount string
- Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. Only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
- enableCustomServiceAccount Boolean
- If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used.
- serviceAccount String
- Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. Only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
- enableCustomServiceAccount boolean
- If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used.
- serviceAccount string
- Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. Only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
- enable_custom_service_account bool
- If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used.
- service_account str
- Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. Only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
- enableCustomServiceAccount Boolean
- If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used.
- serviceAccount String
- Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. Only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
GoogleCloudAiplatformV1beta1ServiceAccountSpecResponse, GoogleCloudAiplatformV1beta1ServiceAccountSpecResponseArgs
- EnableCustomServiceAccount bool
- If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used.
- ServiceAccount string
- Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. Only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
- EnableCustomServiceAccount bool
- If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used.
- ServiceAccount string
- Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. Only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
- enableCustomServiceAccount Boolean
- If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used.
- serviceAccount String
- Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. Only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
- enableCustomServiceAccount boolean
- If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used.
- serviceAccount string
- Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. Only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
- enable_custom_service_account bool
- If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used.
- service_account str
- Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. Only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
- enableCustomServiceAccount Boolean
- If true, a custom user-managed service account is enforced to run any workloads (for example, Vertex Jobs) on the resource. Otherwise, the Vertex AI Custom Code Service Agent is used.
- serviceAccount String
- Optional. Default service account that this PersistentResource's workloads run as. The workloads include: * Any runtime specified via ResourceRuntimeSpec at creation time, for example, Ray. * Jobs submitted to the PersistentResource, if no other service account is specified in the job specs. Only works when the custom service account is enabled and users have the iam.serviceAccounts.actAs permission on this service account. Required if any containers are specified in ResourceRuntimeSpec.
GoogleRpcStatusResponse, GoogleRpcStatusResponseArgs
- Code int
- The status code, which should be an enum value of google.rpc.Code.
- Details List<ImmutableDictionary<string, string>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- Code int
- The status code, which should be an enum value of google.rpc.Code.
- Details []map[string]string
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Integer
- The status code, which should be an enum value of google.rpc.Code.
- details List<Map<String,String>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code number
- The status code, which should be an enum value of google.rpc.Code.
- details {[key: string]: string}[]
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code int
- The status code, which should be an enum value of google.rpc.Code.
- details Sequence[Mapping[str, str]]
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message str
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Number
- The status code, which should be an enum value of google.rpc.Code.
- details List<Map<String>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0