Google Cloud Native is in preview. Google Cloud Classic is fully supported.
Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi
google-native.storagetransfer/v1.getTransferJob
Gets a transfer job.
Using getTransferJob
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getTransferJob(args: GetTransferJobArgs, opts?: InvokeOptions): Promise<GetTransferJobResult>
function getTransferJobOutput(args: GetTransferJobOutputArgs, opts?: InvokeOptions): Output<GetTransferJobResult>
def get_transfer_job(project_id: Optional[str] = None,
transfer_job_id: Optional[str] = None,
opts: Optional[InvokeOptions] = None) -> GetTransferJobResult
def get_transfer_job_output(project_id: Optional[pulumi.Input[str]] = None,
transfer_job_id: Optional[pulumi.Input[str]] = None,
opts: Optional[InvokeOptions] = None) -> Output[GetTransferJobResult]
func LookupTransferJob(ctx *Context, args *LookupTransferJobArgs, opts ...InvokeOption) (*LookupTransferJobResult, error)
func LookupTransferJobOutput(ctx *Context, args *LookupTransferJobOutputArgs, opts ...InvokeOption) LookupTransferJobResultOutput
> Note: This function is named LookupTransferJob in the Go SDK.
public static class GetTransferJob
{
public static Task<GetTransferJobResult> InvokeAsync(GetTransferJobArgs args, InvokeOptions? opts = null)
public static Output<GetTransferJobResult> Invoke(GetTransferJobInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetTransferJobResult> getTransferJob(GetTransferJobArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
fn::invoke:
  function: google-native:storagetransfer/v1:getTransferJob
  arguments:
    # arguments dictionary
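For example, here is a minimal TypeScript sketch of both forms; the project ID and transfer job name below are placeholder values, and the exported stack output name is arbitrary:

import * as google_native from "@pulumi/google-native";

// Direct form: returns a Promise<GetTransferJobResult>.
const job = google_native.storagetransfer.v1.getTransferJob({
    projectId: "my-project",                 // placeholder project ID
    transferJobId: "transferJobs/123456789", // placeholder job name
});

// Output form: accepts Input-wrapped arguments and returns an Output<GetTransferJobResult>.
const jobOutput = google_native.storagetransfer.v1.getTransferJobOutput({
    projectId: "my-project",
    transferJobId: "transferJobs/123456789",
});

export const jobStatus = jobOutput.status;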
The following arguments are supported:
- ProjectId string
- TransferJobId string
- ProjectId string
- TransferJobId string
- projectId String
- transferJobId String
- projectId string
- transferJobId string
- project_id str
- transfer_job_id str
- projectId String
- transferJobId String
getTransferJob Result
The following output properties are available:
- CreationTime string
- The time that the transfer job was created.
- DeletionTime string
- The time that the transfer job was deleted.
- Description string
- A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
- EventStream Pulumi.GoogleNative.StorageTransfer.V1.Outputs.EventStreamResponse
- Specifies the event stream for the transfer job for event-driven transfers. When EventStream is specified, the Schedule fields are ignored.
- LastModificationTime string
- The time that the transfer job was last modified.
- LatestOperationName string
- The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
- LoggingConfig Pulumi.GoogleNative.StorageTransfer.V1.Outputs.LoggingConfigResponse
- Logging configuration.
- Name string
- A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with "transferJobs/" prefix and end with a letter or a number, and should be no more than 128 characters. For transfers involving PosixFilesystem, this name must start with transferJobs/OPI specifically. For all other transfer types, this name must not start with transferJobs/OPI. Non-PosixFilesystem example: "transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$". PosixFilesystem example: "transferJobs/OPI^[A-Za-z0-9-._~]*[A-Za-z0-9]$". Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an INVALID_ARGUMENT error.
- NotificationConfig Pulumi.GoogleNative.StorageTransfer.V1.Outputs.NotificationConfigResponse
- Notification configuration. This is not supported for transfers involving PosixFilesystem.
- Project string
- The ID of the Google Cloud project that owns the job.
- Schedule Pulumi.GoogleNative.StorageTransfer.V1.Outputs.ScheduleResponse
- Specifies schedule for the transfer job. This is an optional field. When the field is not set, the job never executes a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule.
- Status string
- Status of the job. This value MUST be specified for CreateTransferJobRequests. Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.
- TransferSpec Pulumi.GoogleNative.StorageTransfer.V1.Outputs.TransferSpecResponse
- Transfer specification.
- CreationTime string
- The time that the transfer job was created.
- DeletionTime string
- The time that the transfer job was deleted.
- Description string
- A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
- EventStream EventStreamResponse
- Specifies the event stream for the transfer job for event-driven transfers. When EventStream is specified, the Schedule fields are ignored.
- LastModificationTime string
- The time that the transfer job was last modified.
- LatestOperationName string
- The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
- LoggingConfig LoggingConfigResponse
- Logging configuration.
- Name string
- A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with "transferJobs/" prefix and end with a letter or a number, and should be no more than 128 characters. For transfers involving PosixFilesystem, this name must start with transferJobs/OPI specifically. For all other transfer types, this name must not start with transferJobs/OPI. Non-PosixFilesystem example: "transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$". PosixFilesystem example: "transferJobs/OPI^[A-Za-z0-9-._~]*[A-Za-z0-9]$". Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an INVALID_ARGUMENT error.
- NotificationConfig NotificationConfigResponse
- Notification configuration. This is not supported for transfers involving PosixFilesystem.
- Project string
- The ID of the Google Cloud project that owns the job.
- Schedule ScheduleResponse
- Specifies schedule for the transfer job. This is an optional field. When the field is not set, the job never executes a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule.
- Status string
- Status of the job. This value MUST be specified for CreateTransferJobRequests. Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.
- TransferSpec TransferSpecResponse
- Transfer specification.
- creationTime String
- The time that the transfer job was created.
- deletionTime String
- The time that the transfer job was deleted.
- description String
- A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
- eventStream EventStreamResponse
- Specifies the event stream for the transfer job for event-driven transfers. When EventStream is specified, the Schedule fields are ignored.
- lastModificationTime String
- The time that the transfer job was last modified.
- latestOperationName String
- The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
- loggingConfig LoggingConfigResponse
- Logging configuration.
- name String
- A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with "transferJobs/" prefix and end with a letter or a number, and should be no more than 128 characters. For transfers involving PosixFilesystem, this name must start with transferJobs/OPI specifically. For all other transfer types, this name must not start with transferJobs/OPI. Non-PosixFilesystem example: "transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$". PosixFilesystem example: "transferJobs/OPI^[A-Za-z0-9-._~]*[A-Za-z0-9]$". Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an INVALID_ARGUMENT error.
- notificationConfig NotificationConfigResponse
- Notification configuration. This is not supported for transfers involving PosixFilesystem.
- project String
- The ID of the Google Cloud project that owns the job.
- schedule ScheduleResponse
- Specifies schedule for the transfer job. This is an optional field. When the field is not set, the job never executes a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule.
- status String
- Status of the job. This value MUST be specified for CreateTransferJobRequests. Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.
- transferSpec TransferSpecResponse
- Transfer specification.
- creationTime string
- The time that the transfer job was created.
- deletionTime string
- The time that the transfer job was deleted.
- description string
- A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
- eventStream EventStreamResponse
- Specifies the event stream for the transfer job for event-driven transfers. When EventStream is specified, the Schedule fields are ignored.
- lastModificationTime string
- The time that the transfer job was last modified.
- latestOperationName string
- The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
- loggingConfig LoggingConfigResponse
- Logging configuration.
- name string
- A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with "transferJobs/" prefix and end with a letter or a number, and should be no more than 128 characters. For transfers involving PosixFilesystem, this name must start with transferJobs/OPI specifically. For all other transfer types, this name must not start with transferJobs/OPI. Non-PosixFilesystem example: "transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$". PosixFilesystem example: "transferJobs/OPI^[A-Za-z0-9-._~]*[A-Za-z0-9]$". Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an INVALID_ARGUMENT error.
- notificationConfig NotificationConfigResponse
- Notification configuration. This is not supported for transfers involving PosixFilesystem.
- project string
- The ID of the Google Cloud project that owns the job.
- schedule ScheduleResponse
- Specifies schedule for the transfer job. This is an optional field. When the field is not set, the job never executes a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule.
- status string
- Status of the job. This value MUST be specified for CreateTransferJobRequests. Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.
- transferSpec TransferSpecResponse
- Transfer specification.
- creation_time str
- The time that the transfer job was created.
- deletion_time str
- The time that the transfer job was deleted.
- description str
- A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
- event_stream EventStreamResponse
- Specifies the event stream for the transfer job for event-driven transfers. When EventStream is specified, the Schedule fields are ignored.
- last_modification_time str
- The time that the transfer job was last modified.
- latest_operation_name str
- The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
- logging_config LoggingConfigResponse
- Logging configuration.
- name str
- A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with "transferJobs/" prefix and end with a letter or a number, and should be no more than 128 characters. For transfers involving PosixFilesystem, this name must start with transferJobs/OPI specifically. For all other transfer types, this name must not start with transferJobs/OPI. Non-PosixFilesystem example: "transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$". PosixFilesystem example: "transferJobs/OPI^[A-Za-z0-9-._~]*[A-Za-z0-9]$". Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an INVALID_ARGUMENT error.
- notification_config NotificationConfigResponse
- Notification configuration. This is not supported for transfers involving PosixFilesystem.
- project str
- The ID of the Google Cloud project that owns the job.
- schedule ScheduleResponse
- Specifies schedule for the transfer job. This is an optional field. When the field is not set, the job never executes a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule.
- status str
- Status of the job. This value MUST be specified for CreateTransferJobRequests. Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.
- transfer_spec TransferSpecResponse
- Transfer specification.
- creationTime String
- The time that the transfer job was created.
- deletionTime String
- The time that the transfer job was deleted.
- description String
- A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
- eventStream Property Map
- Specifies the event stream for the transfer job for event-driven transfers. When EventStream is specified, the Schedule fields are ignored.
- lastModificationTime String
- The time that the transfer job was last modified.
- latestOperationName String
- The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
- loggingConfig Property Map
- Logging configuration.
- name String
- A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an ALREADY_EXISTS error. This name must start with "transferJobs/" prefix and end with a letter or a number, and should be no more than 128 characters. For transfers involving PosixFilesystem, this name must start with transferJobs/OPI specifically. For all other transfer types, this name must not start with transferJobs/OPI. Non-PosixFilesystem example: "transferJobs/^(?!OPI)[A-Za-z0-9-._~]*[A-Za-z0-9]$". PosixFilesystem example: "transferJobs/OPI^[A-Za-z0-9-._~]*[A-Za-z0-9]$". Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an INVALID_ARGUMENT error.
- notificationConfig Property Map
- Notification configuration. This is not supported for transfers involving PosixFilesystem.
- project String
- The ID of the Google Cloud project that owns the job.
- schedule Property Map
- Specifies schedule for the transfer job. This is an optional field. When the field is not set, the job never executes a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule.
- status String
- Status of the job. This value MUST be specified for CreateTransferJobRequests. Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from ENABLED to DISABLED, and an operation spawned by the transfer is running, the status change would not affect the current operation.
- transferSpec Property Map
- Transfer specification.
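As a brief, hedged illustration of consuming these outputs in TypeScript (continuing the getTransferJobOutput sketch shown earlier; the nested gcsDataSink access is illustrative of the TransferSpec shape, and the export names are arbitrary):

// jobOutput is the Output<GetTransferJobResult> from the earlier sketch.
export const lastModified = jobOutput.lastModificationTime;
// Nested response objects are plain properties on the result and can be unwrapped with apply.
export const sinkBucket = jobOutput.transferSpec.apply(spec => spec.gcsDataSink?.bucketName);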
Supporting Types
AwsAccessKeyResponse
- AccessKeyId string
- AWS access key ID.
- SecretAccessKey string
- AWS secret access key. This field is not returned in RPC responses.
- AccessKeyId string
- AWS access key ID.
- SecretAccessKey string
- AWS secret access key. This field is not returned in RPC responses.
- accessKeyId String
- AWS access key ID.
- secretAccessKey String
- AWS secret access key. This field is not returned in RPC responses.
- accessKeyId string
- AWS access key ID.
- secretAccessKey string
- AWS secret access key. This field is not returned in RPC responses.
- access_key_id str
- AWS access key ID.
- secret_access_key str
- AWS secret access key. This field is not returned in RPC responses.
- accessKeyId String
- AWS access key ID.
- secretAccessKey String
- AWS secret access key. This field is not returned in RPC responses.
AwsS3CompatibleDataResponse
- BucketName string
- Specifies the name of the bucket.
- Endpoint string
- Specifies the endpoint of the storage service.
- Path string
- Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- Region string
- Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.
- S3Metadata Pulumi.GoogleNative.StorageTransfer.V1.Inputs.S3CompatibleMetadataResponse
- A S3 compatible metadata.
- BucketName string
- Specifies the name of the bucket.
- Endpoint string
- Specifies the endpoint of the storage service.
- Path string
- Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- Region string
- Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.
- S3Metadata S3CompatibleMetadataResponse
- A S3 compatible metadata.
- bucketName String
- Specifies the name of the bucket.
- endpoint String
- Specifies the endpoint of the storage service.
- path String
- Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- region String
- Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.
- s3Metadata S3CompatibleMetadataResponse
- A S3 compatible metadata.
- bucketName string
- Specifies the name of the bucket.
- endpoint string
- Specifies the endpoint of the storage service.
- path string
- Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- region string
- Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.
- s3Metadata S3CompatibleMetadataResponse
- A S3 compatible metadata.
- bucket_name str
- Specifies the name of the bucket.
- endpoint str
- Specifies the endpoint of the storage service.
- path str
- Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- region str
- Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.
- s3_metadata S3CompatibleMetadataResponse
- A S3 compatible metadata.
- bucketName String
- Specifies the name of the bucket.
- endpoint String
- Specifies the endpoint of the storage service.
- path String
- Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- region String
- Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.
- s3Metadata Property Map
- A S3 compatible metadata.
AwsS3DataResponse
- AwsAccessKey Pulumi.GoogleNative.StorageTransfer.V1.Inputs.AwsAccessKeyResponse
- Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key. For information on our data retention policy for user credentials, see User credentials.
- BucketName string
- S3 Bucket name (see Creating a bucket).
- CloudfrontDomain string
- Optional. Cloudfront domain name pointing to this bucket (as origin), to use when fetching. Format: https://{id}.cloudfront.net or any valid custom domain https://...
- CredentialsSecret string
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage](https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- Path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- RoleArn string
- The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity. For more information about ARNs, see IAM ARNs. When a role ARN is provided, Transfer Service fetches temporary credentials for the session using a AssumeRoleWithWebIdentity call for the provided role using the GoogleServiceAccount for this project.
- AwsAccessKey AwsAccessKeyResponse
- Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key. For information on our data retention policy for user credentials, see User credentials.
- BucketName string
- S3 Bucket name (see Creating a bucket).
- CloudfrontDomain string
- Optional. Cloudfront domain name pointing to this bucket (as origin), to use when fetching. Format: https://{id}.cloudfront.net or any valid custom domain https://...
- CredentialsSecret string
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage](https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- Path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- RoleArn string
- The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity. For more information about ARNs, see IAM ARNs. When a role ARN is provided, Transfer Service fetches temporary credentials for the session using a AssumeRoleWithWebIdentity call for the provided role using the GoogleServiceAccount for this project.
- awsAccessKey AwsAccessKeyResponse
- Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key. For information on our data retention policy for user credentials, see User credentials.
- bucketName String
- S3 Bucket name (see Creating a bucket).
- cloudfrontDomain String
- Optional. Cloudfront domain name pointing to this bucket (as origin), to use when fetching. Format: https://{id}.cloudfront.net or any valid custom domain https://...
- credentialsSecret String
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage](https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path String
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- roleArn String
- The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity. For more information about ARNs, see IAM ARNs. When a role ARN is provided, Transfer Service fetches temporary credentials for the session using a AssumeRoleWithWebIdentity call for the provided role using the GoogleServiceAccount for this project.
- awsAccessKey AwsAccessKeyResponse
- Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key. For information on our data retention policy for user credentials, see User credentials.
- bucketName string
- S3 Bucket name (see Creating a bucket).
- cloudfrontDomain string
- Optional. Cloudfront domain name pointing to this bucket (as origin), to use when fetching. Format: https://{id}.cloudfront.net or any valid custom domain https://...
- credentialsSecret string
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage](https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- roleArn string
- The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity. For more information about ARNs, see IAM ARNs. When a role ARN is provided, Transfer Service fetches temporary credentials for the session using a AssumeRoleWithWebIdentity call for the provided role using the GoogleServiceAccount for this project.
- aws_access_key AwsAccessKeyResponse
- Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key. For information on our data retention policy for user credentials, see User credentials.
- bucket_name str
- S3 Bucket name (see Creating a bucket).
- cloudfront_domain str
- Optional. Cloudfront domain name pointing to this bucket (as origin), to use when fetching. Format: https://{id}.cloudfront.net or any valid custom domain https://...
- credentials_secret str
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage](https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path str
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- role_arn str
- The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity. For more information about ARNs, see IAM ARNs. When a role ARN is provided, Transfer Service fetches temporary credentials for the session using a AssumeRoleWithWebIdentity call for the provided role using the GoogleServiceAccount for this project.
- awsAccessKey Property Map
- Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key. For information on our data retention policy for user credentials, see User credentials.
- bucketName String
- S3 Bucket name (see Creating a bucket).
- cloudfrontDomain String
- Optional. Cloudfront domain name pointing to this bucket (as origin), to use when fetching. Format: https://{id}.cloudfront.net or any valid custom domain https://...
- credentialsSecret String
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage](https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path String
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- roleArn String
- The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity. For more information about ARNs, see IAM ARNs. When a role ARN is provided, Transfer Service fetches temporary credentials for the session using a AssumeRoleWithWebIdentity call for the provided role using the GoogleServiceAccount for this project.
AzureBlobStorageDataResponse
- AzureCredentials Pulumi.GoogleNative.StorageTransfer.V1.Inputs.AzureCredentialsResponse
- Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see User credentials.
- Container string
- The container to transfer from the Azure Storage account.
- CredentialsSecret string
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage](https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- Path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- StorageAccount string
- The name of the Azure Storage account.
- AzureCredentials AzureCredentialsResponse
- Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see User credentials.
- Container string
- The container to transfer from the Azure Storage account.
- CredentialsSecret string
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage](https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- Path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- StorageAccount string
- The name of the Azure Storage account.
- azureCredentials AzureCredentialsResponse
- Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see User credentials.
- container String
- The container to transfer from the Azure Storage account.
- credentialsSecret String
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage](https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path String
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- storageAccount String
- The name of the Azure Storage account.
- azureCredentials AzureCredentialsResponse
- Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see User credentials.
- container string
- The container to transfer from the Azure Storage account.
- credentialsSecret string
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage](https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- storageAccount string
- The name of the Azure Storage account.
- azure_credentials AzureCredentialsResponse
- Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see User credentials.
- container str
- The container to transfer from the Azure Storage account.
- credentials_secret str
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage](https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path str
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- storage_account str
- The name of the Azure Storage account.
- azureCredentials Property Map
- Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see User credentials.
- container String
- The container to transfer from the Azure Storage account.
- credentialsSecret String
- Optional. The Resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } GoogleServiceAccount must be granted roles/secretmanager.secretAccessor for the resource. See [Configure access to a source: Microsoft Azure Blob Storage](https://cloud.google.com/storage-transfer/docs/source-microsoft-azure#secret_manager) for more information. If credentials_secret is specified, do not specify azure_credentials. This feature is in preview. Format: projects/{project_number}/secrets/{secret_name}
- path String
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
- storageAccount String
- The name of the Azure Storage account.
AzureCredentialsResponse
- SasToken string
- Azure shared access signature (SAS). For more information about SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
- SasToken string
- Azure shared access signature (SAS). For more information about SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
- sasToken String
- Azure shared access signature (SAS). For more information about SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
- sasToken string
- Azure shared access signature (SAS). For more information about SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
- sas_token str
- Azure shared access signature (SAS). For more information about SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
- sasToken String
- Azure shared access signature (SAS). For more information about SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS).
DateResponse
- Day int
- Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
- Month int
- Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
- Year int
- Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
- Day int
- Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
- Month int
- Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
- Year int
- Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
- day Integer
- Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
- month Integer
- Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
- year Integer
- Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
- day number
- Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
- month number
- Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
- year number
- Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
- day int
- Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
- month int
- Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
- year int
- Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
- day Number
- Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn't significant.
- month Number
- Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
- year Number
- Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
EventStreamResponse
- EventStreamExpirationTime string
- Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated.
- EventStreamStartTime string
- Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified or start time is in the past, Storage Transfer Service starts listening immediately.
- Name string
- Specifies a unique name of the resource such as AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'.
- EventStreamExpirationTime string
- Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated.
- EventStreamStartTime string
- Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified or start time is in the past, Storage Transfer Service starts listening immediately.
- Name string
- Specifies a unique name of the resource such as AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'.
- eventStreamExpirationTime String
- Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated.
- eventStreamStartTime String
- Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified or start time is in the past, Storage Transfer Service starts listening immediately.
- name String
- Specifies a unique name of the resource such as AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'.
- eventStreamExpirationTime string
- Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated.
- eventStreamStartTime string
- Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified or start time is in the past, Storage Transfer Service starts listening immediately.
- name string
- Specifies a unique name of the resource such as AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'.
- event_stream_expiration_time str
- Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated.
- event_stream_start_time str
- Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified or start time is in the past, Storage Transfer Service starts listening immediately.
- name str
- Specifies a unique name of the resource such as AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'.
- eventStreamExpirationTime String
- Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated.
- eventStreamStartTime String
- Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified or start time is in the past, Storage Transfer Service starts listening immediately.
- name String
- Specifies a unique name of the resource such as AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'.
GcsDataResponse
- BucketName string
- Cloud Storage bucket name. Must meet Bucket Name Requirements.
- Path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
- BucketName string
- Cloud Storage bucket name. Must meet Bucket Name Requirements.
- Path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
- bucketName String
- Cloud Storage bucket name. Must meet Bucket Name Requirements.
- path String
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
- bucketName string
- Cloud Storage bucket name. Must meet Bucket Name Requirements.
- path string
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
- bucket_name str
- Cloud Storage bucket name. Must meet Bucket Name Requirements.
- path str
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
- bucketName String
- Cloud Storage bucket name. Must meet Bucket Name Requirements.
- path String
- Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.
HttpDataResponse
- ListUrl string
- The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
- ListUrl string
- The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
- listUrl String
- The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
- listUrl string
- The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
- list_url str
- The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
- listUrl String
- The URL that points to the file that stores the object list entries. This file must allow public access. Currently, only URLs with HTTP and HTTPS schemes are supported.
LoggingConfigResponse
- EnableOnpremGcsTransferLogs bool
- For transfers with a PosixFilesystem source, this option enables the Cloud Storage transfer logs for this transfer.
- LogActionStates List<string>
- States in which log_actions are logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- LogActions List<string>
- Specifies the actions to be logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- EnableOnpremGcsTransferLogs bool
- For transfers with a PosixFilesystem source, this option enables the Cloud Storage transfer logs for this transfer.
- LogActionStates []string
- States in which log_actions are logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- LogActions []string
- Specifies the actions to be logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- enableOnpremGcsTransferLogs Boolean
- For transfers with a PosixFilesystem source, this option enables the Cloud Storage transfer logs for this transfer.
- logActionStates List<String>
- States in which log_actions are logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- logActions List<String>
- Specifies the actions to be logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- enableOnpremGcsTransferLogs boolean
- For transfers with a PosixFilesystem source, this option enables the Cloud Storage transfer logs for this transfer.
- logActionStates string[]
- States in which log_actions are logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- logActions string[]
- Specifies the actions to be logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- enable_onprem_gcs_transfer_logs bool
- For transfers with a PosixFilesystem source, this option enables the Cloud Storage transfer logs for this transfer.
- log_action_states Sequence[str]
- States in which log_actions are logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- log_actions Sequence[str]
- Specifies the actions to be logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- enableOnpremGcsTransferLogs Boolean
- For transfers with a PosixFilesystem source, this option enables the Cloud Storage transfer logs for this transfer.
- logActionStates List<String>
- States in which log_actions are logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
- logActions List<String>
- Specifies the actions to be logged. If empty, no logs are generated. Not supported for transfers with PosixFilesystem data sources; use enable_onprem_gcs_transfer_logs instead.
MetadataOptionsResponse
- Acl string
- Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT.
- Gid string
- Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- KmsKey string
- Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT.
- Mode string
- Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- StorageClass string
- Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT.
- Symlink string
- Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- TemporaryHold string
- Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE.
- TimeCreated string
- Specifies how each object's timeCreated metadata is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TIME_CREATED_SKIP.
- Uid string
- Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- Acl string
- Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT.
- Gid string
- Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- KmsKey string
- Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT.
- Mode string
- Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- StorageClass string
- Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT.
- Symlink string
- Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- TemporaryHold string
- Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE.
- TimeCreated string
- Specifies how each object's timeCreated metadata is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TIME_CREATED_SKIP.
- Uid string
- Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- acl String
- Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT.
- gid String
- Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- kmsKey String
- Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT.
- mode String
- Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- storageClass String
- Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT.
- symlink String
- Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- temporaryHold String
- Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE.
- timeCreated String
- Specifies how each object's timeCreated metadata is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TIME_CREATED_SKIP.
- uid String
- Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- acl string
- Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT.
- gid string
- Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- kms
Key string - Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT.
- mode string
- Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- storage
Class string - Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT.
- symlink string
- Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- temporary
Hold string - Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE.
- time
Created string - Specifies how each object's
timeCreated
metadata is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TIME_CREATED_SKIP. - uid string
- Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- acl str
- Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT.
- gid str
- Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- kms_
key str - Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT.
- mode str
- Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- storage_
class str - Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT.
- symlink str
- Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- temporary_
hold str - Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE.
- time_
created str - Specifies how each object's
timeCreated
metadata is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TIME_CREATED_SKIP. - uid str
- Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- acl String
- Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT.
- gid String
- Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- kms
Key String - Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT.
- mode String
- Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- storage
Class String - Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT.
- symlink String
- Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
- temporary
Hold String - Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE.
- time
Created String - Specifies how each object's
timeCreated
metadata is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TIME_CREATED_SKIP. - uid String
- Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers.
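As a rough illustration of where these metadata options appear on a fetched job, the sketch below (same hypothetical job as above) reads a few of them from transferSpec.transferOptions.metadataOptions, which is where the v1 API nests MetadataOptions; treat the exact nesting as an assumption to verify against your SDK version.

import * as google_native from "@pulumi/google-native";

const job = google_native.storagetransfer.v1.getTransferJobOutput({
    projectId: "my-project",                  // hypothetical
    transferJobId: "transferJobs/1234567890", // hypothetical
});

// How ACLs, storage class, and temporary holds are carried over between buckets.
export const metadataHandling = job.transferSpec.apply(spec => ({
    acl: spec?.transferOptions?.metadataOptions?.acl,
    storageClass: spec?.transferOptions?.metadataOptions?.storageClass,
    temporaryHold: spec?.transferOptions?.metadataOptions?.temporaryHold,
}));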
NotificationConfigResponse
- Event
Types List<string> - Event types for which a notification is desired. If empty, send notifications for all event types.
- Payload
Format string - The desired format of the notification message payloads.
- Pubsub
Topic string - The
Topic.name
of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}
. Not matching this format results in an INVALID_ARGUMENT error.
- Event
Types []string - Event types for which a notification is desired. If empty, send notifications for all event types.
- Payload
Format string - The desired format of the notification message payloads.
- Pubsub
Topic string - The
Topic.name
of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}
. Not matching this format results in an INVALID_ARGUMENT error.
- event
Types List<String> - Event types for which a notification is desired. If empty, send notifications for all event types.
- payload
Format String - The desired format of the notification message payloads.
- pubsub
Topic String - The
Topic.name
of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}
. Not matching this format results in an INVALID_ARGUMENT error.
- event
Types string[] - Event types for which a notification is desired. If empty, send notifications for all event types.
- payload
Format string - The desired format of the notification message payloads.
- pubsub
Topic string - The
Topic.name
of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}
. Not matching this format results in an INVALID_ARGUMENT error.
- event_
types Sequence[str] - Event types for which a notification is desired. If empty, send notifications for all event types.
- payload_
format str - The desired format of the notification message payloads.
- pubsub_
topic str - The
Topic.name
of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}
. Not matching this format results in an INVALID_ARGUMENT error.
- event
Types List<String> - Event types for which a notification is desired. If empty, send notifications for all event types.
- payload
Format String - The desired format of the notification message payloads.
- pubsub
Topic String - The
Topic.name
of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}
. Not matching this format results in an INVALID_ARGUMENT error.
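The only strict requirement above is the topic name format; a small TypeScript sketch (with hypothetical project and topic IDs) shows how one might build and sanity-check that value before wiring it into a notification config.

// Build the fully qualified Pub/Sub topic name that notification_config expects.
const projectId = "my-project";    // hypothetical
const topicId = "transfer-events"; // hypothetical
const pubsubTopic = `projects/${projectId}/topics/${topicId}`;

// Anything that does not match projects/{project}/topics/{topic} is rejected
// by Storage Transfer Service with an INVALID_ARGUMENT error.
if (!/^projects\/[^/]+\/topics\/[^/]+$/.test(pubsubTopic)) {
    throw new Error(`unexpected Pub/Sub topic format: ${pubsubTopic}`);
}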
ObjectConditionsResponse
- Exclude
Prefixes List<string> - If you specify
exclude_prefixes
, Storage Transfer Service uses the items in the exclude_prefixes array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes for inclusion in a transfer. The following are requirements of exclude_prefixes: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes. The max size of exclude_prefixes
is 1000. For more information, see Filtering objects from transfers. - Include
Prefixes List<string> - If you specify
include_prefixes
, Storage Transfer Service uses the items in the include_prefixes array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the exclude_prefixes specified for inclusion in the transfer. The following are requirements of include_prefixes: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of include_prefixes
is 1000. For more information, see Filtering objects from transfers. - Last
Modified stringBefore - If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
- Last
Modified stringSince - If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The
last_modified_since
and last_modified_before fields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: * last_modified_since to the start of the day * last_modified_before
to the end of the day - Max
Time stringElapsed Since Last Modification - Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the
TransferOperation
and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification`. Objects that do not have a "last modification time" are also transferred. - Min
Time stringElapsed Since Last Modification - Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the
TransferOperation
and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification`. Objects that do not have a "last modification time" are also transferred.
- Exclude
Prefixes []string - If you specify
exclude_prefixes
, Storage Transfer Service uses the items in the exclude_prefixes array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes for inclusion in a transfer. The following are requirements of exclude_prefixes: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes. The max size of exclude_prefixes
is 1000. For more information, see Filtering objects from transfers. - Include
Prefixes []string - If you specify
include_prefixes
, Storage Transfer Service uses the items in the include_prefixes array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the exclude_prefixes specified for inclusion in the transfer. The following are requirements of include_prefixes: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of include_prefixes
is 1000. For more information, see Filtering objects from transfers. - Last
Modified stringBefore - If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
- Last
Modified stringSince - If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The
last_modified_since
and last_modified_before fields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: * last_modified_since to the start of the day * last_modified_before
to the end of the day - Max
Time stringElapsed Since Last Modification - Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the
TransferOperation
and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification`. Objects that do not have a "last modification time" are also transferred. - Min
Time stringElapsed Since Last Modification - Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the
TransferOperation
and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification`. Objects that do not have a "last modification time" are also transferred.
- exclude
Prefixes List<String> - If you specify
exclude_prefixes
, Storage Transfer Service uses the items in the exclude_prefixes array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes for inclusion in a transfer. The following are requirements of exclude_prefixes: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes. The max size of exclude_prefixes
is 1000. For more information, see Filtering objects from transfers. - include
Prefixes List<String> - If you specify
include_prefixes
, Storage Transfer Service uses the items in the include_prefixes array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the exclude_prefixes specified for inclusion in the transfer. The following are requirements of include_prefixes: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of include_prefixes
is 1000. For more information, see Filtering objects from transfers. - last
Modified StringBefore - If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
- last
Modified StringSince - If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The
last_modified_since
and last_modified_before fields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: * last_modified_since to the start of the day * last_modified_before
to the end of the day - max
Time StringElapsed Since Last Modification - Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the
TransferOperation
and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification`. Objects that do not have a "last modification time" are also transferred. - min
Time StringElapsed Since Last Modification - Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the
TransferOperation
and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification`. Objects that do not have a "last modification time" are also transferred.
- exclude
Prefixes string[] - If you specify
exclude_prefixes
, Storage Transfer Service uses the items in the exclude_prefixes array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes for inclusion in a transfer. The following are requirements of exclude_prefixes: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes. The max size of exclude_prefixes
is 1000. For more information, see Filtering objects from transfers. - include
Prefixes string[] - If you specify
include_prefixes
, Storage Transfer Service uses the items in the include_prefixes array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the exclude_prefixes specified for inclusion in the transfer. The following are requirements of include_prefixes: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of include_prefixes
is 1000. For more information, see Filtering objects from transfers. - last
Modified stringBefore - If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
- last
Modified stringSince - If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The
last_modified_since
and last_modified_before fields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: * last_modified_since to the start of the day * last_modified_before
to the end of the day - max
Time stringElapsed Since Last Modification - Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the
TransferOperation
and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification`. Objects that do not have a "last modification time" are also transferred. - min
Time stringElapsed Since Last Modification - Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the
TransferOperation
and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification`. Objects that do not have a "last modification time" are also transferred.
- exclude_
prefixes Sequence[str] - If you specify
exclude_prefixes
, Storage Transfer Service uses the items in the exclude_prefixes array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes for inclusion in a transfer. The following are requirements of exclude_prefixes: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes. The max size of exclude_prefixes
is 1000. For more information, see Filtering objects from transfers. - include_
prefixes Sequence[str] - If you specify
include_prefixes
, Storage Transfer Service uses the items in the include_prefixes array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the exclude_prefixes specified for inclusion in the transfer. The following are requirements of include_prefixes: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of include_prefixes
is 1000. For more information, see Filtering objects from transfers. - last_
modified_ strbefore - If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
- last_
modified_ strsince - If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The
last_modified_since
and last_modified_before fields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: * last_modified_since to the start of the day * last_modified_before
to the end of the day - max_
time_ strelapsed_ since_ last_ modification - Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the
TransferOperation
and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification`. Objects that do not have a "last modification time" are also transferred. - min_
time_ strelapsed_ since_ last_ modification - Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the
TransferOperation
and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification`. Objects that do not have a "last modification time" are also transferred.
- exclude
Prefixes List<String> - If you specify
exclude_prefixes
, Storage Transfer Service uses the items in the exclude_prefixes array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes for inclusion in a transfer. The following are requirements of exclude_prefixes: * Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz. * None of the exclude-prefix values can be empty, if specified. * Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix. * If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes. The max size of exclude_prefixes
is 1000. For more information, see Filtering objects from transfers. - include
Prefixes List<String> - If you specify
include_prefixes
, Storage Transfer Service uses the items in the include_prefixes array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes for inclusion in the transfer. If exclude_prefixes is specified, objects must not start with any of the exclude_prefixes specified for inclusion in the transfer. The following are requirements of include_prefixes: * Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported. * Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz. * None of the include-prefix values can be empty, if specified. * Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix. The max size of include_prefixes
is 1000. For more information, see Filtering objects from transfers. - last
Modified StringBefore - If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
- last
Modified StringSince - If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred. The
last_modified_since
and last_modified_before fields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows: * last_modified_since to the start of the day * last_modified_before
to the end of the day - max
Time StringElapsed Since Last Modification - Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the
TransferOperation
and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification`. Objects that do not have a "last modification time" are also transferred. - min
Time StringElapsed Since Last Modification - Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time of the
TransferOperation
and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification`. Objects that do not have a "last modification time" are also transferred.
PosixFilesystemResponse
- Root
Directory string - Root directory path to the filesystem.
- Root
Directory string - Root directory path to the filesystem.
- root
Directory String - Root directory path to the filesystem.
- root
Directory string - Root directory path to the filesystem.
- root_
directory str - Root directory path to the filesystem.
- root
Directory String - Root directory path to the filesystem.
S3CompatibleMetadataResponse
- Auth
Method string - Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service will attempt to determine the right auth method to use.
- List
Api string - The Listing API to use for discovering objects. When not specified, Transfer Service will attempt to determine the right API to use.
- Protocol string
- Specifies the network protocol of the agent. When not specified, the default value of NetworkProtocol NETWORK_PROTOCOL_HTTPS is used.
- Request
Model string - Specifies the API request model used to call the storage service. When not specified, the default value of RequestModel REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
- Auth
Method string - Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service will attempt to determine the right auth method to use.
- List
Api string - The Listing API to use for discovering objects. When not specified, Transfer Service will attempt to determine the right API to use.
- Protocol string
- Specifies the network protocol of the agent. When not specified, the default value of NetworkProtocol NETWORK_PROTOCOL_HTTPS is used.
- Request
Model string - Specifies the API request model used to call the storage service. When not specified, the default value of RequestModel REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
- auth
Method String - Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service will attempt to determine the right auth method to use.
- list
Api String - The Listing API to use for discovering objects. When not specified, Transfer Service will attempt to determine the right API to use.
- protocol String
- Specifies the network protocol of the agent. When not specified, the default value of NetworkProtocol NETWORK_PROTOCOL_HTTPS is used.
- request
Model String - Specifies the API request model used to call the storage service. When not specified, the default value of RequestModel REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
- auth
Method string - Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service will attempt to determine the right auth method to use.
- list
Api string - The Listing API to use for discovering objects. When not specified, Transfer Service will attempt to determine the right API to use.
- protocol string
- Specifies the network protocol of the agent. When not specified, the default value of NetworkProtocol NETWORK_PROTOCOL_HTTPS is used.
- request
Model string - Specifies the API request model used to call the storage service. When not specified, the default value of RequestModel REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
- auth_
method str - Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service will attempt to determine the right auth method to use.
- list_
api str - The Listing API to use for discovering objects. When not specified, Transfer Service will attempt to determine the right API to use.
- protocol str
- Specifies the network protocol of the agent. When not specified, the default value of NetworkProtocol NETWORK_PROTOCOL_HTTPS is used.
- request_
model str - Specifies the API request model used to call the storage service. When not specified, the default value of RequestModel REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
- auth
Method String - Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service will attempt to determine the right auth method to use.
- list
Api String - The Listing API to use for discovering objects. When not specified, Transfer Service will attempt to determine the right API to use.
- protocol String
- Specifies the network protocol of the agent. When not specified, the default value of NetworkProtocol NETWORK_PROTOCOL_HTTPS is used.
- request
Model String - Specifies the API request model used to call the storage service. When not specified, the default value of RequestModel REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
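Since only two of these fields have documented defaults, a fetched job may leave the others unset; the TypeScript sketch below (its shape mirrors the response above) fills in just the documented defaults and leaves the service-inferred fields alone.

// Simplified shape of the S3-compatible metadata returned with a job.
type S3CompatibleMetadata = {
    authMethod?: string;
    listApi?: string;
    protocol?: string;
    requestModel?: string;
};

function withDocumentedDefaults(m: S3CompatibleMetadata): S3CompatibleMetadata {
    return {
        ...m,
        // Documented defaults when unspecified:
        protocol: m.protocol ?? "NETWORK_PROTOCOL_HTTPS",
        requestModel: m.requestModel ?? "REQUEST_MODEL_VIRTUAL_HOSTED_STYLE",
        // authMethod and listApi have no fixed default; the service infers them.
    };
}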
ScheduleResponse
- End
Time Pulumi.Of Day Google Native. Storage Transfer. V1. Inputs. Time Of Day Response - The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date,
end_time_of_day
specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date and start_time_of_day, and is subject to the following: * If end_time_of_day is not set and schedule_end_date is set, then a default value of 23:59:59 is used for end_time_of_day. * If end_time_of_day is set and schedule_end_date
is not set, then INVALID_ARGUMENT is returned. - Repeat
Interval string - Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
- Schedule
End Pulumi.Date Google Native. Storage Transfer. V1. Inputs. Date Response - The last day a transfer runs. Date boundaries are determined relative to UTC time. A job runs once per 24 hours within the following guidelines: * If
schedule_end_date
and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If schedule_end_date is later than schedule_start_date and schedule_end_date is in the future relative to UTC, the job runs each day at start_time_of_day through schedule_end_date
. - Schedule
Start Pulumi.Date Google Native. Storage Transfer. V1. Inputs. Date Response - The start date of a transfer. Date boundaries are determined relative to UTC time. If
schedule_start_date
and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. Note: When starting jobs at or near midnight UTC it is possible that a job starts later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it creates a TransferJob with schedule_start_date set to June 2 and a start_time_of_day
set to midnight UTC. The first scheduled TransferOperation takes place on June 3 at midnight UTC. - Start
Time Pulumi.Of Day Google Native. Storage Transfer. V1. Inputs. Time Of Day Response - The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time. If
start_time_of_day
is not specified: * One-time transfers run immediately. * Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date. If start_time_of_day is specified: * One-time transfers run at the specified time. * Recurring transfers run at the specified time each day, through schedule_end_date
.
- End
Time TimeOf Day Of Day Response - The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date,
end_time_of_day
specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date and start_time_of_day, and is subject to the following: * If end_time_of_day is not set and schedule_end_date is set, then a default value of 23:59:59 is used for end_time_of_day. * If end_time_of_day is set and schedule_end_date
is not set, then INVALID_ARGUMENT is returned. - Repeat
Interval string - Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
- Schedule
End DateDate Response - The last day a transfer runs. Date boundaries are determined relative to UTC time. A job runs once per 24 hours within the following guidelines: * If
schedule_end_date
and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If schedule_end_date is later than schedule_start_date and schedule_end_date is in the future relative to UTC, the job runs each day at start_time_of_day through schedule_end_date
. - Schedule
Start DateDate Response - The start date of a transfer. Date boundaries are determined relative to UTC time. If
schedule_start_date
and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. Note: When starting jobs at or near midnight UTC it is possible that a job starts later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it creates a TransferJob with schedule_start_date set to June 2 and a start_time_of_day
set to midnight UTC. The first scheduled TransferOperation takes place on June 3 at midnight UTC. - Start
Time TimeOf Day Of Day Response - The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time. If
start_time_of_day
is not specified: * One-time transfers run immediately. * Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date. If start_time_of_day is specified: * One-time transfers run at the specified time. * Recurring transfers run at the specified time each day, through schedule_end_date
.
- end
Time TimeOf Day Of Day Response - The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date,
end_time_of_day
specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date and start_time_of_day, and is subject to the following: * If end_time_of_day is not set and schedule_end_date is set, then a default value of 23:59:59 is used for end_time_of_day. * If end_time_of_day is set and schedule_end_date
is not set, then INVALID_ARGUMENT is returned. - repeat
Interval String - Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
- schedule
End DateDate Response - The last day a transfer runs. Date boundaries are determined relative to UTC time. A job runs once per 24 hours within the following guidelines: * If
schedule_end_date
and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If schedule_end_date is later than schedule_start_date and schedule_end_date is in the future relative to UTC, the job runs each day at start_time_of_day through schedule_end_date
. - schedule
Start DateDate Response - The start date of a transfer. Date boundaries are determined relative to UTC time. If
schedule_start_date
and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. Note: When starting jobs at or near midnight UTC it is possible that a job starts later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it creates a TransferJob with schedule_start_date set to June 2 and a start_time_of_day
set to midnight UTC. The first scheduled TransferOperation takes place on June 3 at midnight UTC. - start
Time TimeOf Day Of Day Response - The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time. If
start_time_of_day
is not specified: * One-time transfers run immediately. * Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date. If start_time_of_day is specified: * One-time transfers run at the specified time. * Recurring transfers run at the specified time each day, through schedule_end_date
.
- end
Time TimeOf Day Of Day Response - The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date,
end_time_of_day
specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date and start_time_of_day, and is subject to the following: * If end_time_of_day is not set and schedule_end_date is set, then a default value of 23:59:59 is used for end_time_of_day. * If end_time_of_day is set and schedule_end_date
is not set, then INVALID_ARGUMENT is returned. - repeat
Interval string - Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
- schedule
End DateDate Response - The last day a transfer runs. Date boundaries are determined relative to UTC time. A job runs once per 24 hours within the following guidelines: * If
schedule_end_date
and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If schedule_end_date is later than schedule_start_date and schedule_end_date is in the future relative to UTC, the job runs each day at start_time_of_day through schedule_end_date
. - schedule
Start DateDate Response - The start date of a transfer. Date boundaries are determined relative to UTC time. If
schedule_start_date
and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. Note: When starting jobs at or near midnight UTC it is possible that a job starts later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it creates a TransferJob with schedule_start_date set to June 2 and a start_time_of_day
set to midnight UTC. The first scheduled TransferOperation takes place on June 3 at midnight UTC. - start
Time TimeOf Day Of Day Response - The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time. If
start_time_of_day
is not specified: * One-time transfers run immediately. * Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date. If start_time_of_day is specified: * One-time transfers run at the specified time. * Recurring transfers run at the specified time each day, through schedule_end_date
.
- end_
time_ Timeof_ day Of Day Response - The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date,
end_time_of_day
specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date and start_time_of_day, and is subject to the following: * If end_time_of_day is not set and schedule_end_date is set, then a default value of 23:59:59 is used for end_time_of_day. * If end_time_of_day is set and schedule_end_date
is not set, then INVALID_ARGUMENT is returned. - repeat_
interval str - Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
- schedule_
end_ Datedate Response - The last day a transfer runs. Date boundaries are determined relative to UTC time. A job runs once per 24 hours within the following guidelines: * If
schedule_end_date
and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If schedule_end_date is later than schedule_start_date and schedule_end_date is in the future relative to UTC, the job runs each day at start_time_of_day through schedule_end_date
. - schedule_
start_ Datedate Response - The start date of a transfer. Date boundaries are determined relative to UTC time. If
schedule_start_date
and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. Note: When starting jobs at or near midnight UTC it is possible that a job starts later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it creates a TransferJob with schedule_start_date set to June 2 and a start_time_of_day
set to midnight UTC. The first scheduled TransferOperation takes place on June 3 at midnight UTC. - start_
time_ Timeof_ day Of Day Response - The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time. If
start_time_of_day
is not specified: * One-time transfers run immediately. * Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date. If start_time_of_day is specified: * One-time transfers run at the specified time. * Recurring transfers run at the specified time each day, through schedule_end_date
.
- end
Time Property MapOf Day - The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date,
end_time_of_day
specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date and start_time_of_day, and is subject to the following: * If end_time_of_day is not set and schedule_end_date is set, then a default value of 23:59:59 is used for end_time_of_day. * If end_time_of_day is set and schedule_end_date
is not set, then INVALID_ARGUMENT is returned. - repeat
Interval String - Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
- schedule
End Property MapDate - The last day a transfer runs. Date boundaries are determined relative to UTC time. A job runs once per 24 hours within the following guidelines: * If
schedule_end_date
and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time. * If schedule_end_date is later than schedule_start_date and schedule_end_date is in the future relative to UTC, the job runs each day at start_time_of_day through schedule_end_date
. - schedule
Start Property MapDate - The start date of a transfer. Date boundaries are determined relative to UTC time. If
schedule_start_date
and start_time_of_day are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request. Note: When starting jobs at or near midnight UTC it is possible that a job starts later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it creates a TransferJob with schedule_start_date set to June 2 and a start_time_of_day
set to midnight UTC. The first scheduled TransferOperation takes place on June 3 at midnight UTC. - start
Time Property MapOf Day - The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time. If
start_time_of_day
is not specified: * One-time transfers run immediately. * Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date. If start_time_of_day is specified: * One-time transfers run at the specified time. * Recurring transfers run at the specified time each day, through schedule_end_date
.
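One practical consequence of the rules above is that a schedule whose start and end dates coincide describes a one-time run; this TypeScript sketch (with simplified, assumed Date and Schedule shapes) classifies a fetched schedule accordingly.

// Simplified shapes; the real responses carry the same year/month/day fields.
type DateResponse = { year: number; month: number; day: number };
type ScheduleResponse = {
    scheduleStartDate: DateResponse;
    scheduleEndDate?: DateResponse;
    repeatInterval?: string;
};

// Same start and end date (in the future, per the rules above) => a single run.
function isOneTimeTransfer(s: ScheduleResponse): boolean {
    const start = s.scheduleStartDate;
    const end = s.scheduleEndDate;
    return !!end && start.year === end.year && start.month === end.month && start.day === end.day;
}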
TimeOfDayResponse
- Hours int - Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
- Minutes int - Minutes of hour of day. Must be from 0 to 59.
- Nanos int - Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
- Seconds int - Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
- Hours int - Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
- Minutes int - Minutes of hour of day. Must be from 0 to 59.
- Nanos int - Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
- Seconds int - Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
- hours Integer - Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
- minutes Integer - Minutes of hour of day. Must be from 0 to 59.
- nanos Integer - Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
- seconds Integer - Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
- hours number - Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
- minutes number - Minutes of hour of day. Must be from 0 to 59.
- nanos number - Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
- seconds number - Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
- hours int - Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
- minutes int - Minutes of hour of day. Must be from 0 to 59.
- nanos int - Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
- seconds int - Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
- hours Number - Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value "24:00:00" for scenarios like business closing time.
- minutes Number - Minutes of hour of day. Must be from 0 to 59.
- nanos Number - Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
- seconds Number - Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds. (An example value follows this list.)
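As an illustration of the ranges above, a hypothetical TimeOfDay value for 17:30:00 UTC could look like this in TypeScript (the field names follow the listing above; the literal itself is an assumption, not a value from this page):
// 17:30:00 UTC with no fractional seconds; hours 0-23, minutes/seconds 0-59, nanos 0-999,999,999.
const startTimeOfDay: { hours: number; minutes: number; seconds: number; nanos: number } =
    { hours: 17, minutes: 30, seconds: 0, nanos: 0 };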
TransferManifestResponse
- Location string - Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have storage.objects.get permission for this object. An example path is gs://bucket_name/path/manifest.csv.
- Location string - Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have storage.objects.get permission for this object. An example path is gs://bucket_name/path/manifest.csv.
- location String - Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have storage.objects.get permission for this object. An example path is gs://bucket_name/path/manifest.csv.
- location string - Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have storage.objects.get permission for this object. An example path is gs://bucket_name/path/manifest.csv.
- location str - Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have storage.objects.get permission for this object. An example path is gs://bucket_name/path/manifest.csv.
- location String - Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have storage.objects.get permission for this object. An example path is gs://bucket_name/path/manifest.csv. (See the example below.)
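For orientation, this is what the field might look like on a returned spec in TypeScript; the bucket and object names are placeholders, not values from this page.
// Hypothetical manifest reference; the transfer's Google-managed service account
// must hold storage.objects.get on this object for the manifest to be readable.
const transferManifest = { location: "gs://my-bucket/manifests/objects.csv" };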
TransferOptionsResponse
- DeleteObjectsFromSourceAfterTransfer bool - Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and delete_objects_unique_in_sink are mutually exclusive.
- DeleteObjectsUniqueInSink bool - Whether objects that exist only in the sink should be deleted. Note: This option and delete_objects_from_source_after_transfer are mutually exclusive.
- MetadataOptions Pulumi.GoogleNative.StorageTransfer.V1.Inputs.MetadataOptionsResponse - Represents the selected metadata options for a transfer job.
- OverwriteObjectsAlreadyExistingInSink bool - When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
- OverwriteWhen string - When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by overwrite_objects_already_existing_in_sink.
- DeleteObjectsFromSourceAfterTransfer bool - Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and delete_objects_unique_in_sink are mutually exclusive.
- DeleteObjectsUniqueInSink bool - Whether objects that exist only in the sink should be deleted. Note: This option and delete_objects_from_source_after_transfer are mutually exclusive.
- MetadataOptions MetadataOptionsResponse - Represents the selected metadata options for a transfer job.
- OverwriteObjectsAlreadyExistingInSink bool - When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
- OverwriteWhen string - When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by overwrite_objects_already_existing_in_sink.
- deleteObjectsFromSourceAfterTransfer Boolean - Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and delete_objects_unique_in_sink are mutually exclusive.
- deleteObjectsUniqueInSink Boolean - Whether objects that exist only in the sink should be deleted. Note: This option and delete_objects_from_source_after_transfer are mutually exclusive.
- metadataOptions MetadataOptionsResponse - Represents the selected metadata options for a transfer job.
- overwriteObjectsAlreadyExistingInSink Boolean - When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
- overwriteWhen String - When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by overwrite_objects_already_existing_in_sink.
- deleteObjectsFromSourceAfterTransfer boolean - Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and delete_objects_unique_in_sink are mutually exclusive.
- deleteObjectsUniqueInSink boolean - Whether objects that exist only in the sink should be deleted. Note: This option and delete_objects_from_source_after_transfer are mutually exclusive.
- metadataOptions MetadataOptionsResponse - Represents the selected metadata options for a transfer job.
- overwriteObjectsAlreadyExistingInSink boolean - When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
- overwriteWhen string - When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by overwrite_objects_already_existing_in_sink.
- delete_objects_from_source_after_transfer bool - Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and delete_objects_unique_in_sink are mutually exclusive.
- delete_objects_unique_in_sink bool - Whether objects that exist only in the sink should be deleted. Note: This option and delete_objects_from_source_after_transfer are mutually exclusive.
- metadata_options MetadataOptionsResponse - Represents the selected metadata options for a transfer job.
- overwrite_objects_already_existing_in_sink bool - When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
- overwrite_when str - When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by overwrite_objects_already_existing_in_sink.
- deleteObjectsFromSourceAfterTransfer Boolean - Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and delete_objects_unique_in_sink are mutually exclusive.
- deleteObjectsUniqueInSink Boolean - Whether objects that exist only in the sink should be deleted. Note: This option and delete_objects_from_source_after_transfer are mutually exclusive (see the sketch following this list).
- metadataOptions Property Map - Represents the selected metadata options for a transfer job.
- overwriteObjectsAlreadyExistingInSink Boolean - When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
- overwriteWhen String - When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by overwrite_objects_already_existing_in_sink.
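A minimal TypeScript sketch (not SDK code) of the mutual-exclusion rule noted above: the two delete options cannot both be enabled on the same transfer. The interface and helper names are assumptions introduced for illustration.
// Only the fields needed for this check are declared here.
interface TransferOptionsLike {
    deleteObjectsFromSourceAfterTransfer?: boolean;
    deleteObjectsUniqueInSink?: boolean;
}
// Throws if both mutually exclusive delete options are set.
function assertDeleteOptionsValid(opts: TransferOptionsLike): void {
    if (opts.deleteObjectsFromSourceAfterTransfer && opts.deleteObjectsUniqueInSink) {
        throw new Error("delete_objects_from_source_after_transfer and " +
            "delete_objects_unique_in_sink are mutually exclusive");
    }
}
// Usage: assertDeleteOptionsValid({ deleteObjectsUniqueInSink: true }); // passes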
TransferSpecResponse
- AwsS3CompatibleDataSource Pulumi.GoogleNative.StorageTransfer.V1.Inputs.AwsS3CompatibleDataResponse - An AWS S3 compatible data source.
- AwsS3DataSource Pulumi.GoogleNative.StorageTransfer.V1.Inputs.AwsS3DataResponse - An AWS S3 data source.
- AzureBlobStorageDataSource Pulumi.GoogleNative.StorageTransfer.V1.Inputs.AzureBlobStorageDataResponse - An Azure Blob Storage data source.
- GcsDataSink Pulumi.GoogleNative.StorageTransfer.V1.Inputs.GcsDataResponse - A Cloud Storage data sink.
- GcsDataSource Pulumi.GoogleNative.StorageTransfer.V1.Inputs.GcsDataResponse - A Cloud Storage data source.
- GcsIntermediateDataLocation Pulumi.GoogleNative.StorageTransfer.V1.Inputs.GcsDataResponse - For transfers between file systems, specifies a Cloud Storage bucket to be used as an intermediate location through which to transfer data. See Transfer data between file systems for more information.
- HttpDataSource Pulumi.GoogleNative.StorageTransfer.V1.Inputs.HttpDataResponse - An HTTP URL data source.
- ObjectConditions Pulumi.GoogleNative.StorageTransfer.V1.Inputs.ObjectConditionsResponse - Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
- PosixDataSink Pulumi.GoogleNative.StorageTransfer.V1.Inputs.PosixFilesystemResponse - A POSIX Filesystem data sink.
- PosixDataSource Pulumi.GoogleNative.StorageTransfer.V1.Inputs.PosixFilesystemResponse - A POSIX Filesystem data source.
- SinkAgentPoolName string - Specifies the agent pool name associated with the posix data sink. When unspecified, the default name is used.
- SourceAgentPoolName string - Specifies the agent pool name associated with the posix data source. When unspecified, the default name is used.
- TransferManifest Pulumi.GoogleNative.StorageTransfer.V1.Inputs.TransferManifestResponse - A manifest file provides a list of objects to be transferred from the data source. This field points to the location of the manifest file. Otherwise, the entire source bucket is used. ObjectConditions still apply.
- TransferOptions Pulumi.GoogleNative.StorageTransfer.V1.Inputs.TransferOptionsResponse - If the option delete_objects_unique_in_sink is true and time-based object conditions such as 'last modification time' are specified, the request fails with an INVALID_ARGUMENT error.
- AwsS3CompatibleDataSource AwsS3CompatibleDataResponse - An AWS S3 compatible data source.
- AwsS3DataSource AwsS3DataResponse - An AWS S3 data source.
- AzureBlobStorageDataSource AzureBlobStorageDataResponse - An Azure Blob Storage data source.
- GcsDataSink GcsDataResponse - A Cloud Storage data sink.
- GcsDataSource GcsDataResponse - A Cloud Storage data source.
- GcsIntermediateDataLocation GcsDataResponse - For transfers between file systems, specifies a Cloud Storage bucket to be used as an intermediate location through which to transfer data. See Transfer data between file systems for more information.
- HttpDataSource HttpDataResponse - An HTTP URL data source.
- ObjectConditions ObjectConditionsResponse - Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
- PosixDataSink PosixFilesystemResponse - A POSIX Filesystem data sink.
- PosixDataSource PosixFilesystemResponse - A POSIX Filesystem data source.
- SinkAgentPoolName string - Specifies the agent pool name associated with the posix data sink. When unspecified, the default name is used.
- SourceAgentPoolName string - Specifies the agent pool name associated with the posix data source. When unspecified, the default name is used.
- TransferManifest TransferManifestResponse - A manifest file provides a list of objects to be transferred from the data source. This field points to the location of the manifest file. Otherwise, the entire source bucket is used. ObjectConditions still apply.
- TransferOptions TransferOptionsResponse - If the option delete_objects_unique_in_sink is true and time-based object conditions such as 'last modification time' are specified, the request fails with an INVALID_ARGUMENT error.
- awsS3CompatibleDataSource AwsS3CompatibleDataResponse - An AWS S3 compatible data source.
- awsS3DataSource AwsS3DataResponse - An AWS S3 data source.
- azureBlobStorageDataSource AzureBlobStorageDataResponse - An Azure Blob Storage data source.
- gcsDataSink GcsDataResponse - A Cloud Storage data sink.
- gcsDataSource GcsDataResponse - A Cloud Storage data source.
- gcsIntermediateDataLocation GcsDataResponse - For transfers between file systems, specifies a Cloud Storage bucket to be used as an intermediate location through which to transfer data. See Transfer data between file systems for more information.
- httpDataSource HttpDataResponse - An HTTP URL data source.
- objectConditions ObjectConditionsResponse - Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
- posixDataSink PosixFilesystemResponse - A POSIX Filesystem data sink.
- posixDataSource PosixFilesystemResponse - A POSIX Filesystem data source.
- sinkAgentPoolName String - Specifies the agent pool name associated with the posix data sink. When unspecified, the default name is used.
- sourceAgentPoolName String - Specifies the agent pool name associated with the posix data source. When unspecified, the default name is used.
- transferManifest TransferManifestResponse - A manifest file provides a list of objects to be transferred from the data source. This field points to the location of the manifest file. Otherwise, the entire source bucket is used. ObjectConditions still apply.
- transferOptions TransferOptionsResponse - If the option delete_objects_unique_in_sink is true and time-based object conditions such as 'last modification time' are specified, the request fails with an INVALID_ARGUMENT error.
- awsS3CompatibleDataSource AwsS3CompatibleDataResponse - An AWS S3 compatible data source.
- awsS3DataSource AwsS3DataResponse - An AWS S3 data source.
- azureBlobStorageDataSource AzureBlobStorageDataResponse - An Azure Blob Storage data source.
- gcsDataSink GcsDataResponse - A Cloud Storage data sink.
- gcsDataSource GcsDataResponse - A Cloud Storage data source.
- gcsIntermediateDataLocation GcsDataResponse - For transfers between file systems, specifies a Cloud Storage bucket to be used as an intermediate location through which to transfer data. See Transfer data between file systems for more information.
- httpDataSource HttpDataResponse - An HTTP URL data source.
- objectConditions ObjectConditionsResponse - Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
- posixDataSink PosixFilesystemResponse - A POSIX Filesystem data sink.
- posixDataSource PosixFilesystemResponse - A POSIX Filesystem data source.
- sinkAgentPoolName string - Specifies the agent pool name associated with the posix data sink. When unspecified, the default name is used.
- sourceAgentPoolName string - Specifies the agent pool name associated with the posix data source. When unspecified, the default name is used.
- transferManifest TransferManifestResponse - A manifest file provides a list of objects to be transferred from the data source. This field points to the location of the manifest file. Otherwise, the entire source bucket is used. ObjectConditions still apply.
- transferOptions TransferOptionsResponse - If the option delete_objects_unique_in_sink is true and time-based object conditions such as 'last modification time' are specified, the request fails with an INVALID_ARGUMENT error.
- aws_s3_compatible_data_source AwsS3CompatibleDataResponse - An AWS S3 compatible data source.
- aws_s3_data_source AwsS3DataResponse - An AWS S3 data source.
- azure_blob_storage_data_source AzureBlobStorageDataResponse - An Azure Blob Storage data source.
- gcs_data_sink GcsDataResponse - A Cloud Storage data sink.
- gcs_data_source GcsDataResponse - A Cloud Storage data source.
- gcs_intermediate_data_location GcsDataResponse - For transfers between file systems, specifies a Cloud Storage bucket to be used as an intermediate location through which to transfer data. See Transfer data between file systems for more information.
- http_data_source HttpDataResponse - An HTTP URL data source.
- object_conditions ObjectConditionsResponse - Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
- posix_data_sink PosixFilesystemResponse - A POSIX Filesystem data sink.
- posix_data_source PosixFilesystemResponse - A POSIX Filesystem data source.
- sink_agent_pool_name str - Specifies the agent pool name associated with the posix data sink. When unspecified, the default name is used.
- source_agent_pool_name str - Specifies the agent pool name associated with the posix data source. When unspecified, the default name is used.
- transfer_manifest TransferManifestResponse - A manifest file provides a list of objects to be transferred from the data source. This field points to the location of the manifest file. Otherwise, the entire source bucket is used. ObjectConditions still apply.
- transfer_options TransferOptionsResponse - If the option delete_objects_unique_in_sink is true and time-based object conditions such as 'last modification time' are specified, the request fails with an INVALID_ARGUMENT error.
- awsS3CompatibleDataSource Property Map - An AWS S3 compatible data source.
- awsS3DataSource Property Map - An AWS S3 data source.
- azureBlobStorageDataSource Property Map - An Azure Blob Storage data source.
- gcsDataSink Property Map - A Cloud Storage data sink.
- gcsDataSource Property Map - A Cloud Storage data source.
- gcsIntermediateDataLocation Property Map - For transfers between file systems, specifies a Cloud Storage bucket to be used as an intermediate location through which to transfer data. See Transfer data between file systems for more information.
- httpDataSource Property Map - An HTTP URL data source.
- objectConditions Property Map - Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
- posixDataSink Property Map - A POSIX Filesystem data sink.
- posixDataSource Property Map - A POSIX Filesystem data source.
- sinkAgentPoolName String - Specifies the agent pool name associated with the posix data sink. When unspecified, the default name is used.
- sourceAgentPoolName String - Specifies the agent pool name associated with the posix data source. When unspecified, the default name is used.
- transferManifest Property Map - A manifest file provides a list of objects to be transferred from the data source. This field points to the location of the manifest file. Otherwise, the entire source bucket is used. ObjectConditions still apply.
- transferOptions Property Map - If the option delete_objects_unique_in_sink is true and time-based object conditions such as 'last modification time' are specified, the request fails with an INVALID_ARGUMENT error. (A sketch for identifying which data source field is populated follows this list.)
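A minimal TypeScript sketch, assuming the camelCase field names listed above, that reports which data source a returned transfer spec uses; the interface and helper names are introduced here for illustration only.
// Only the fields needed for this check are declared.
interface TransferSpecLike {
    awsS3CompatibleDataSource?: object;
    awsS3DataSource?: object;
    azureBlobStorageDataSource?: object;
    gcsDataSource?: object;
    httpDataSource?: object;
    posixDataSource?: object;
}
// A transfer spec is expected to carry exactly one populated data source field.
function describeSource(spec: TransferSpecLike): string {
    if (spec.gcsDataSource) return "Cloud Storage bucket";
    if (spec.awsS3DataSource) return "AWS S3 bucket";
    if (spec.awsS3CompatibleDataSource) return "S3-compatible storage";
    if (spec.azureBlobStorageDataSource) return "Azure Blob Storage container";
    if (spec.httpDataSource) return "HTTP URL list";
    if (spec.posixDataSource) return "POSIX filesystem";
    return "unknown source";
}
// Usage: describeSource({ gcsDataSource: {} }) returns "Cloud Storage bucket".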
Package Details
- Repository: Google Cloud Native pulumi/pulumi-google-native
- License: Apache-2.0