Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.bigquery/v2.Table
Creates a new, empty table in the dataset. Auto-naming is currently not supported for this resource.
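Before the full constructor reference below, here is a minimal sketch of creating a table with an explicit schema. The project, dataset, table, and column names are illustrative placeholders, not values taken from this reference.

import * as google_native from "@pulumi/google-native";

// Minimal sketch: a table with a two-column schema.
// "my-project", "analytics_ds", and the column names are placeholders.
const eventsTable = new google_native.bigquery.v2.Table("eventsTable", {
    datasetId: "analytics_ds",
    project: "my-project",
    // Auto-naming is not supported, so the table ID must be given explicitly.
    tableReference: {
        datasetId: "analytics_ds",
        project: "my-project",
        tableId: "events",
    },
    description: "Raw application events",
    schema: {
        fields: [
            { name: "event_id", type: "STRING", mode: "REQUIRED" },
            { name: "occurred_at", type: "TIMESTAMP", mode: "NULLABLE" },
        ],
    },
});

// Outputs such as selfLink become available once the table is created.
export const eventsTableSelfLink = eventsTable.selfLink;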
Create Table Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Table(name: string, args: TableArgs, opts?: CustomResourceOptions);
@overload
def Table(resource_name: str,
args: TableArgs,
opts: Optional[ResourceOptions] = None)
@overload
def Table(resource_name: str,
opts: Optional[ResourceOptions] = None,
dataset_id: Optional[str] = None,
max_staleness: Optional[str] = None,
table_reference: Optional[TableReferenceArgs] = None,
description: Optional[str] = None,
encryption_configuration: Optional[EncryptionConfigurationArgs] = None,
expiration_time: Optional[str] = None,
external_data_configuration: Optional[ExternalDataConfigurationArgs] = None,
friendly_name: Optional[str] = None,
model: Optional[ModelDefinitionArgs] = None,
view: Optional[ViewDefinitionArgs] = None,
clustering: Optional[ClusteringArgs] = None,
labels: Optional[Mapping[str, str]] = None,
project: Optional[str] = None,
range_partitioning: Optional[RangePartitioningArgs] = None,
require_partition_filter: Optional[bool] = None,
resource_tags: Optional[Mapping[str, str]] = None,
schema: Optional[TableSchemaArgs] = None,
table_constraints: Optional[TableConstraintsArgs] = None,
biglake_configuration: Optional[BigLakeConfigurationArgs] = None,
time_partitioning: Optional[TimePartitioningArgs] = None,
materialized_view: Optional[MaterializedViewDefinitionArgs] = None)
func NewTable(ctx *Context, name string, args TableArgs, opts ...ResourceOption) (*Table, error)
public Table(string name, TableArgs args, CustomResourceOptions? opts = null)
type: google-native:bigquery/v2:Table
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args TableArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args TableArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args TableArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args TableArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args TableArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var google_nativeTableResource = new GoogleNative.BigQuery.V2.Table("google-nativeTableResource", new()
{
DatasetId = "string",
MaxStaleness = "string",
TableReference = new GoogleNative.BigQuery.V2.Inputs.TableReferenceArgs
{
DatasetId = "string",
Project = "string",
TableId = "string",
},
Description = "string",
EncryptionConfiguration = new GoogleNative.BigQuery.V2.Inputs.EncryptionConfigurationArgs
{
KmsKeyName = "string",
},
ExpirationTime = "string",
ExternalDataConfiguration = new GoogleNative.BigQuery.V2.Inputs.ExternalDataConfigurationArgs
{
Autodetect = false,
AvroOptions = new GoogleNative.BigQuery.V2.Inputs.AvroOptionsArgs
{
UseAvroLogicalTypes = false,
},
BigtableOptions = new GoogleNative.BigQuery.V2.Inputs.BigtableOptionsArgs
{
ColumnFamilies = new[]
{
new GoogleNative.BigQuery.V2.Inputs.BigtableColumnFamilyArgs
{
Columns = new[]
{
new GoogleNative.BigQuery.V2.Inputs.BigtableColumnArgs
{
Encoding = "string",
FieldName = "string",
OnlyReadLatest = false,
QualifierEncoded = "string",
QualifierString = "string",
Type = "string",
},
},
Encoding = "string",
FamilyId = "string",
OnlyReadLatest = false,
Type = "string",
},
},
IgnoreUnspecifiedColumnFamilies = false,
ReadRowkeyAsString = false,
},
Compression = "string",
ConnectionId = "string",
CsvOptions = new GoogleNative.BigQuery.V2.Inputs.CsvOptionsArgs
{
AllowJaggedRows = false,
AllowQuotedNewlines = false,
Encoding = "string",
FieldDelimiter = "string",
NullMarker = "string",
PreserveAsciiControlCharacters = false,
Quote = "string",
SkipLeadingRows = "string",
},
DecimalTargetTypes = new[]
{
"string",
},
FileSetSpecType = "string",
GoogleSheetsOptions = new GoogleNative.BigQuery.V2.Inputs.GoogleSheetsOptionsArgs
{
Range = "string",
SkipLeadingRows = "string",
},
HivePartitioningOptions = new GoogleNative.BigQuery.V2.Inputs.HivePartitioningOptionsArgs
{
Mode = "string",
RequirePartitionFilter = false,
SourceUriPrefix = "string",
},
IgnoreUnknownValues = false,
JsonOptions = new GoogleNative.BigQuery.V2.Inputs.JsonOptionsArgs
{
Encoding = "string",
},
MaxBadRecords = 0,
MetadataCacheMode = "string",
ObjectMetadata = "string",
ParquetOptions = new GoogleNative.BigQuery.V2.Inputs.ParquetOptionsArgs
{
EnableListInference = false,
EnumAsString = false,
},
ReferenceFileSchemaUri = "string",
Schema = new GoogleNative.BigQuery.V2.Inputs.TableSchemaArgs
{
Fields = new[]
{
new GoogleNative.BigQuery.V2.Inputs.TableFieldSchemaArgs
{
Categories = new GoogleNative.BigQuery.V2.Inputs.TableFieldSchemaCategoriesArgs
{
Names = new[]
{
"string",
},
},
Collation = "string",
DefaultValueExpression = "string",
Description = "string",
Fields = new[]
{
tableFieldSchema,
},
MaxLength = "string",
Mode = "string",
Name = "string",
PolicyTags = new GoogleNative.BigQuery.V2.Inputs.TableFieldSchemaPolicyTagsArgs
{
Names = new[]
{
"string",
},
},
Precision = "string",
RangeElementType = new GoogleNative.BigQuery.V2.Inputs.TableFieldSchemaRangeElementTypeArgs
{
Type = "string",
},
RoundingMode = "string",
Scale = "string",
Type = "string",
},
},
},
SourceFormat = "string",
SourceUris = new[]
{
"string",
},
},
FriendlyName = "string",
Model = new GoogleNative.BigQuery.V2.Inputs.ModelDefinitionArgs
{
ModelOptions = new GoogleNative.BigQuery.V2.Inputs.ModelDefinitionModelOptionsArgs
{
Labels = new[]
{
"string",
},
LossType = "string",
ModelType = "string",
},
TrainingRuns = new[]
{
new GoogleNative.BigQuery.V2.Inputs.BqmlTrainingRunArgs
{
IterationResults = new[]
{
new GoogleNative.BigQuery.V2.Inputs.BqmlIterationResultArgs
{
DurationMs = "string",
EvalLoss = 0,
Index = 0,
LearnRate = 0,
TrainingLoss = 0,
},
},
StartTime = "string",
State = "string",
TrainingOptions = new GoogleNative.BigQuery.V2.Inputs.BqmlTrainingRunTrainingOptionsArgs
{
EarlyStop = false,
L1Reg = 0,
L2Reg = 0,
LearnRate = 0,
LearnRateStrategy = "string",
LineSearchInitLearnRate = 0,
MaxIteration = "string",
MinRelProgress = 0,
WarmStart = false,
},
},
},
},
View = new GoogleNative.BigQuery.V2.Inputs.ViewDefinitionArgs
{
Query = "string",
UseExplicitColumnNames = false,
UseLegacySql = false,
UserDefinedFunctionResources = new[]
{
new GoogleNative.BigQuery.V2.Inputs.UserDefinedFunctionResourceArgs
{
InlineCode = "string",
ResourceUri = "string",
},
},
},
Clustering = new GoogleNative.BigQuery.V2.Inputs.ClusteringArgs
{
Fields = new[]
{
"string",
},
},
Labels =
{
{ "string", "string" },
},
Project = "string",
RangePartitioning = new GoogleNative.BigQuery.V2.Inputs.RangePartitioningArgs
{
Field = "string",
Range = new GoogleNative.BigQuery.V2.Inputs.RangePartitioningRangeArgs
{
End = "string",
Interval = "string",
Start = "string",
},
},
RequirePartitionFilter = false,
ResourceTags =
{
{ "string", "string" },
},
Schema = new GoogleNative.BigQuery.V2.Inputs.TableSchemaArgs
{
Fields = new[]
{
tableFieldSchema,
},
},
TableConstraints = new GoogleNative.BigQuery.V2.Inputs.TableConstraintsArgs
{
ForeignKeys = new[]
{
new GoogleNative.BigQuery.V2.Inputs.TableConstraintsForeignKeysItemArgs
{
ColumnReferences = new[]
{
new GoogleNative.BigQuery.V2.Inputs.TableConstraintsForeignKeysItemColumnReferencesItemArgs
{
ReferencedColumn = "string",
ReferencingColumn = "string",
},
},
Name = "string",
ReferencedTable = new GoogleNative.BigQuery.V2.Inputs.TableConstraintsForeignKeysItemReferencedTableArgs
{
DatasetId = "string",
Project = "string",
TableId = "string",
},
},
},
PrimaryKey = new GoogleNative.BigQuery.V2.Inputs.TableConstraintsPrimaryKeyArgs
{
Columns = new[]
{
"string",
},
},
},
BiglakeConfiguration = new GoogleNative.BigQuery.V2.Inputs.BigLakeConfigurationArgs
{
ConnectionId = "string",
FileFormat = "string",
StorageUri = "string",
TableFormat = "string",
},
TimePartitioning = new GoogleNative.BigQuery.V2.Inputs.TimePartitioningArgs
{
ExpirationMs = "string",
Field = "string",
RequirePartitionFilter = false,
Type = "string",
},
MaterializedView = new GoogleNative.BigQuery.V2.Inputs.MaterializedViewDefinitionArgs
{
AllowNonIncrementalDefinition = false,
EnableRefresh = false,
MaxStaleness = "string",
Query = "string",
RefreshIntervalMs = "string",
},
});
example, err := bigquery.NewTable(ctx, "google-nativeTableResource", &bigquery.TableArgs{
DatasetId: pulumi.String("string"),
MaxStaleness: pulumi.String("string"),
TableReference: &bigquery.TableReferenceArgs{
DatasetId: pulumi.String("string"),
Project: pulumi.String("string"),
TableId: pulumi.String("string"),
},
Description: pulumi.String("string"),
EncryptionConfiguration: &bigquery.EncryptionConfigurationArgs{
KmsKeyName: pulumi.String("string"),
},
ExpirationTime: pulumi.String("string"),
ExternalDataConfiguration: &bigquery.ExternalDataConfigurationArgs{
Autodetect: pulumi.Bool(false),
AvroOptions: &bigquery.AvroOptionsArgs{
UseAvroLogicalTypes: pulumi.Bool(false),
},
BigtableOptions: &bigquery.BigtableOptionsArgs{
ColumnFamilies: bigquery.BigtableColumnFamilyArray{
&bigquery.BigtableColumnFamilyArgs{
Columns: bigquery.BigtableColumnArray{
&bigquery.BigtableColumnArgs{
Encoding: pulumi.String("string"),
FieldName: pulumi.String("string"),
OnlyReadLatest: pulumi.Bool(false),
QualifierEncoded: pulumi.String("string"),
QualifierString: pulumi.String("string"),
Type: pulumi.String("string"),
},
},
Encoding: pulumi.String("string"),
FamilyId: pulumi.String("string"),
OnlyReadLatest: pulumi.Bool(false),
Type: pulumi.String("string"),
},
},
IgnoreUnspecifiedColumnFamilies: pulumi.Bool(false),
ReadRowkeyAsString: pulumi.Bool(false),
},
Compression: pulumi.String("string"),
ConnectionId: pulumi.String("string"),
CsvOptions: &bigquery.CsvOptionsArgs{
AllowJaggedRows: pulumi.Bool(false),
AllowQuotedNewlines: pulumi.Bool(false),
Encoding: pulumi.String("string"),
FieldDelimiter: pulumi.String("string"),
NullMarker: pulumi.String("string"),
PreserveAsciiControlCharacters: pulumi.Bool(false),
Quote: pulumi.String("string"),
SkipLeadingRows: pulumi.String("string"),
},
DecimalTargetTypes: pulumi.StringArray{
pulumi.String("string"),
},
FileSetSpecType: pulumi.String("string"),
GoogleSheetsOptions: &bigquery.GoogleSheetsOptionsArgs{
Range: pulumi.String("string"),
SkipLeadingRows: pulumi.String("string"),
},
HivePartitioningOptions: &bigquery.HivePartitioningOptionsArgs{
Mode: pulumi.String("string"),
RequirePartitionFilter: pulumi.Bool(false),
SourceUriPrefix: pulumi.String("string"),
},
IgnoreUnknownValues: pulumi.Bool(false),
JsonOptions: &bigquery.JsonOptionsArgs{
Encoding: pulumi.String("string"),
},
MaxBadRecords: pulumi.Int(0),
MetadataCacheMode: pulumi.String("string"),
ObjectMetadata: pulumi.String("string"),
ParquetOptions: &bigquery.ParquetOptionsArgs{
EnableListInference: pulumi.Bool(false),
EnumAsString: pulumi.Bool(false),
},
ReferenceFileSchemaUri: pulumi.String("string"),
Schema: &bigquery.TableSchemaArgs{
Fields: bigquery.TableFieldSchemaArray{
&bigquery.TableFieldSchemaArgs{
Categories: &bigquery.TableFieldSchemaCategoriesArgs{
Names: pulumi.StringArray{
pulumi.String("string"),
},
},
Collation: pulumi.String("string"),
DefaultValueExpression: pulumi.String("string"),
Description: pulumi.String("string"),
Fields: bigquery.TableFieldSchemaArray{
tableFieldSchema,
},
MaxLength: pulumi.String("string"),
Mode: pulumi.String("string"),
Name: pulumi.String("string"),
PolicyTags: &bigquery.TableFieldSchemaPolicyTagsArgs{
Names: pulumi.StringArray{
pulumi.String("string"),
},
},
Precision: pulumi.String("string"),
RangeElementType: &bigquery.TableFieldSchemaRangeElementTypeArgs{
Type: pulumi.String("string"),
},
RoundingMode: pulumi.String("string"),
Scale: pulumi.String("string"),
Type: pulumi.String("string"),
},
},
},
SourceFormat: pulumi.String("string"),
SourceUris: pulumi.StringArray{
pulumi.String("string"),
},
},
FriendlyName: pulumi.String("string"),
Model: &bigquery.ModelDefinitionArgs{
ModelOptions: &bigquery.ModelDefinitionModelOptionsArgs{
Labels: pulumi.StringArray{
pulumi.String("string"),
},
LossType: pulumi.String("string"),
ModelType: pulumi.String("string"),
},
TrainingRuns: bigquery.BqmlTrainingRunArray{
&bigquery.BqmlTrainingRunArgs{
IterationResults: bigquery.BqmlIterationResultArray{
&bigquery.BqmlIterationResultArgs{
DurationMs: pulumi.String("string"),
EvalLoss: pulumi.Float64(0),
Index: pulumi.Int(0),
LearnRate: pulumi.Float64(0),
TrainingLoss: pulumi.Float64(0),
},
},
StartTime: pulumi.String("string"),
State: pulumi.String("string"),
TrainingOptions: &bigquery.BqmlTrainingRunTrainingOptionsArgs{
EarlyStop: pulumi.Bool(false),
L1Reg: pulumi.Float64(0),
L2Reg: pulumi.Float64(0),
LearnRate: pulumi.Float64(0),
LearnRateStrategy: pulumi.String("string"),
LineSearchInitLearnRate: pulumi.Float64(0),
MaxIteration: pulumi.String("string"),
MinRelProgress: pulumi.Float64(0),
WarmStart: pulumi.Bool(false),
},
},
},
},
View: &bigquery.ViewDefinitionArgs{
Query: pulumi.String("string"),
UseExplicitColumnNames: pulumi.Bool(false),
UseLegacySql: pulumi.Bool(false),
UserDefinedFunctionResources: bigquery.UserDefinedFunctionResourceArray{
&bigquery.UserDefinedFunctionResourceArgs{
InlineCode: pulumi.String("string"),
ResourceUri: pulumi.String("string"),
},
},
},
Clustering: &bigquery.ClusteringArgs{
Fields: pulumi.StringArray{
pulumi.String("string"),
},
},
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
Project: pulumi.String("string"),
RangePartitioning: &bigquery.RangePartitioningArgs{
Field: pulumi.String("string"),
Range: &bigquery.RangePartitioningRangeArgs{
End: pulumi.String("string"),
Interval: pulumi.String("string"),
Start: pulumi.String("string"),
},
},
RequirePartitionFilter: pulumi.Bool(false),
ResourceTags: pulumi.StringMap{
"string": pulumi.String("string"),
},
Schema: &bigquery.TableSchemaArgs{
Fields: bigquery.TableFieldSchemaArray{
tableFieldSchema,
},
},
TableConstraints: &bigquery.TableConstraintsArgs{
ForeignKeys: bigquery.TableConstraintsForeignKeysItemArray{
&bigquery.TableConstraintsForeignKeysItemArgs{
ColumnReferences: bigquery.TableConstraintsForeignKeysItemColumnReferencesItemArray{
&bigquery.TableConstraintsForeignKeysItemColumnReferencesItemArgs{
ReferencedColumn: pulumi.String("string"),
ReferencingColumn: pulumi.String("string"),
},
},
Name: pulumi.String("string"),
ReferencedTable: &bigquery.TableConstraintsForeignKeysItemReferencedTableArgs{
DatasetId: pulumi.String("string"),
Project: pulumi.String("string"),
TableId: pulumi.String("string"),
},
},
},
PrimaryKey: &bigquery.TableConstraintsPrimaryKeyArgs{
Columns: pulumi.StringArray{
pulumi.String("string"),
},
},
},
BiglakeConfiguration: &bigquery.BigLakeConfigurationArgs{
ConnectionId: pulumi.String("string"),
FileFormat: pulumi.String("string"),
StorageUri: pulumi.String("string"),
TableFormat: pulumi.String("string"),
},
TimePartitioning: &bigquery.TimePartitioningArgs{
ExpirationMs: pulumi.String("string"),
Field: pulumi.String("string"),
RequirePartitionFilter: pulumi.Bool(false),
Type: pulumi.String("string"),
},
MaterializedView: &bigquery.MaterializedViewDefinitionArgs{
AllowNonIncrementalDefinition: pulumi.Bool(false),
EnableRefresh: pulumi.Bool(false),
MaxStaleness: pulumi.String("string"),
Query: pulumi.String("string"),
RefreshIntervalMs: pulumi.String("string"),
},
})
var google_nativeTableResource = new Table("google-nativeTableResource", TableArgs.builder()
.datasetId("string")
.maxStaleness("string")
.tableReference(TableReferenceArgs.builder()
.datasetId("string")
.project("string")
.tableId("string")
.build())
.description("string")
.encryptionConfiguration(EncryptionConfigurationArgs.builder()
.kmsKeyName("string")
.build())
.expirationTime("string")
.externalDataConfiguration(ExternalDataConfigurationArgs.builder()
.autodetect(false)
.avroOptions(AvroOptionsArgs.builder()
.useAvroLogicalTypes(false)
.build())
.bigtableOptions(BigtableOptionsArgs.builder()
.columnFamilies(BigtableColumnFamilyArgs.builder()
.columns(BigtableColumnArgs.builder()
.encoding("string")
.fieldName("string")
.onlyReadLatest(false)
.qualifierEncoded("string")
.qualifierString("string")
.type("string")
.build())
.encoding("string")
.familyId("string")
.onlyReadLatest(false)
.type("string")
.build())
.ignoreUnspecifiedColumnFamilies(false)
.readRowkeyAsString(false)
.build())
.compression("string")
.connectionId("string")
.csvOptions(CsvOptionsArgs.builder()
.allowJaggedRows(false)
.allowQuotedNewlines(false)
.encoding("string")
.fieldDelimiter("string")
.nullMarker("string")
.preserveAsciiControlCharacters(false)
.quote("string")
.skipLeadingRows("string")
.build())
.decimalTargetTypes("string")
.fileSetSpecType("string")
.googleSheetsOptions(GoogleSheetsOptionsArgs.builder()
.range("string")
.skipLeadingRows("string")
.build())
.hivePartitioningOptions(HivePartitioningOptionsArgs.builder()
.mode("string")
.requirePartitionFilter(false)
.sourceUriPrefix("string")
.build())
.ignoreUnknownValues(false)
.jsonOptions(JsonOptionsArgs.builder()
.encoding("string")
.build())
.maxBadRecords(0)
.metadataCacheMode("string")
.objectMetadata("string")
.parquetOptions(ParquetOptionsArgs.builder()
.enableListInference(false)
.enumAsString(false)
.build())
.referenceFileSchemaUri("string")
.schema(TableSchemaArgs.builder()
.fields(TableFieldSchemaArgs.builder()
.categories(TableFieldSchemaCategoriesArgs.builder()
.names("string")
.build())
.collation("string")
.defaultValueExpression("string")
.description("string")
.fields(tableFieldSchema)
.maxLength("string")
.mode("string")
.name("string")
.policyTags(TableFieldSchemaPolicyTagsArgs.builder()
.names("string")
.build())
.precision("string")
.rangeElementType(TableFieldSchemaRangeElementTypeArgs.builder()
.type("string")
.build())
.roundingMode("string")
.scale("string")
.type("string")
.build())
.build())
.sourceFormat("string")
.sourceUris("string")
.build())
.friendlyName("string")
.model(ModelDefinitionArgs.builder()
.modelOptions(ModelDefinitionModelOptionsArgs.builder()
.labels("string")
.lossType("string")
.modelType("string")
.build())
.trainingRuns(BqmlTrainingRunArgs.builder()
.iterationResults(BqmlIterationResultArgs.builder()
.durationMs("string")
.evalLoss(0)
.index(0)
.learnRate(0)
.trainingLoss(0)
.build())
.startTime("string")
.state("string")
.trainingOptions(BqmlTrainingRunTrainingOptionsArgs.builder()
.earlyStop(false)
.l1Reg(0)
.l2Reg(0)
.learnRate(0)
.learnRateStrategy("string")
.lineSearchInitLearnRate(0)
.maxIteration("string")
.minRelProgress(0)
.warmStart(false)
.build())
.build())
.build())
.view(ViewDefinitionArgs.builder()
.query("string")
.useExplicitColumnNames(false)
.useLegacySql(false)
.userDefinedFunctionResources(UserDefinedFunctionResourceArgs.builder()
.inlineCode("string")
.resourceUri("string")
.build())
.build())
.clustering(ClusteringArgs.builder()
.fields("string")
.build())
.labels(Map.of("string", "string"))
.project("string")
.rangePartitioning(RangePartitioningArgs.builder()
.field("string")
.range(RangePartitioningRangeArgs.builder()
.end("string")
.interval("string")
.start("string")
.build())
.build())
.requirePartitionFilter(false)
.resourceTags(Map.of("string", "string"))
.schema(TableSchemaArgs.builder()
.fields(tableFieldSchema)
.build())
.tableConstraints(TableConstraintsArgs.builder()
.foreignKeys(TableConstraintsForeignKeysItemArgs.builder()
.columnReferences(TableConstraintsForeignKeysItemColumnReferencesItemArgs.builder()
.referencedColumn("string")
.referencingColumn("string")
.build())
.name("string")
.referencedTable(TableConstraintsForeignKeysItemReferencedTableArgs.builder()
.datasetId("string")
.project("string")
.tableId("string")
.build())
.build())
.primaryKey(TableConstraintsPrimaryKeyArgs.builder()
.columns("string")
.build())
.build())
.biglakeConfiguration(BigLakeConfigurationArgs.builder()
.connectionId("string")
.fileFormat("string")
.storageUri("string")
.tableFormat("string")
.build())
.timePartitioning(TimePartitioningArgs.builder()
.expirationMs("string")
.field("string")
.requirePartitionFilter(false)
.type("string")
.build())
.materializedView(MaterializedViewDefinitionArgs.builder()
.allowNonIncrementalDefinition(false)
.enableRefresh(false)
.maxStaleness("string")
.query("string")
.refreshIntervalMs("string")
.build())
.build());
google_native_table_resource = google_native.bigquery.v2.Table("google-nativeTableResource",
dataset_id="string",
max_staleness="string",
table_reference=google_native.bigquery.v2.TableReferenceArgs(
dataset_id="string",
project="string",
table_id="string",
),
description="string",
encryption_configuration=google_native.bigquery.v2.EncryptionConfigurationArgs(
kms_key_name="string",
),
expiration_time="string",
external_data_configuration=google_native.bigquery.v2.ExternalDataConfigurationArgs(
autodetect=False,
avro_options=google_native.bigquery.v2.AvroOptionsArgs(
use_avro_logical_types=False,
),
bigtable_options=google_native.bigquery.v2.BigtableOptionsArgs(
column_families=[google_native.bigquery.v2.BigtableColumnFamilyArgs(
columns=[google_native.bigquery.v2.BigtableColumnArgs(
encoding="string",
field_name="string",
only_read_latest=False,
qualifier_encoded="string",
qualifier_string="string",
type="string",
)],
encoding="string",
family_id="string",
only_read_latest=False,
type="string",
)],
ignore_unspecified_column_families=False,
read_rowkey_as_string=False,
),
compression="string",
connection_id="string",
csv_options=google_native.bigquery.v2.CsvOptionsArgs(
allow_jagged_rows=False,
allow_quoted_newlines=False,
encoding="string",
field_delimiter="string",
null_marker="string",
preserve_ascii_control_characters=False,
quote="string",
skip_leading_rows="string",
),
decimal_target_types=["string"],
file_set_spec_type="string",
google_sheets_options=google_native.bigquery.v2.GoogleSheetsOptionsArgs(
range="string",
skip_leading_rows="string",
),
hive_partitioning_options=google_native.bigquery.v2.HivePartitioningOptionsArgs(
mode="string",
require_partition_filter=False,
source_uri_prefix="string",
),
ignore_unknown_values=False,
json_options=google_native.bigquery.v2.JsonOptionsArgs(
encoding="string",
),
max_bad_records=0,
metadata_cache_mode="string",
object_metadata="string",
parquet_options=google_native.bigquery.v2.ParquetOptionsArgs(
enable_list_inference=False,
enum_as_string=False,
),
reference_file_schema_uri="string",
schema=google_native.bigquery.v2.TableSchemaArgs(
fields=[google_native.bigquery.v2.TableFieldSchemaArgs(
categories=google_native.bigquery.v2.TableFieldSchemaCategoriesArgs(
names=["string"],
),
collation="string",
default_value_expression="string",
description="string",
fields=[table_field_schema],
max_length="string",
mode="string",
name="string",
policy_tags=google_native.bigquery.v2.TableFieldSchemaPolicyTagsArgs(
names=["string"],
),
precision="string",
range_element_type=google_native.bigquery.v2.TableFieldSchemaRangeElementTypeArgs(
type="string",
),
rounding_mode="string",
scale="string",
type="string",
)],
),
source_format="string",
source_uris=["string"],
),
friendly_name="string",
model=google_native.bigquery.v2.ModelDefinitionArgs(
model_options=google_native.bigquery.v2.ModelDefinitionModelOptionsArgs(
labels=["string"],
loss_type="string",
model_type="string",
),
training_runs=[google_native.bigquery.v2.BqmlTrainingRunArgs(
iteration_results=[google_native.bigquery.v2.BqmlIterationResultArgs(
duration_ms="string",
eval_loss=0,
index=0,
learn_rate=0,
training_loss=0,
)],
start_time="string",
state="string",
training_options=google_native.bigquery.v2.BqmlTrainingRunTrainingOptionsArgs(
early_stop=False,
l1_reg=0,
l2_reg=0,
learn_rate=0,
learn_rate_strategy="string",
line_search_init_learn_rate=0,
max_iteration="string",
min_rel_progress=0,
warm_start=False,
),
)],
),
view=google_native.bigquery.v2.ViewDefinitionArgs(
query="string",
use_explicit_column_names=False,
use_legacy_sql=False,
user_defined_function_resources=[google_native.bigquery.v2.UserDefinedFunctionResourceArgs(
inline_code="string",
resource_uri="string",
)],
),
clustering=google_native.bigquery.v2.ClusteringArgs(
fields=["string"],
),
labels={
"string": "string",
},
project="string",
range_partitioning=google_native.bigquery.v2.RangePartitioningArgs(
field="string",
range=google_native.bigquery.v2.RangePartitioningRangeArgs(
end="string",
interval="string",
start="string",
),
),
require_partition_filter=False,
resource_tags={
"string": "string",
},
schema=google_native.bigquery.v2.TableSchemaArgs(
fields=[table_field_schema],
),
table_constraints=google_native.bigquery.v2.TableConstraintsArgs(
foreign_keys=[google_native.bigquery.v2.TableConstraintsForeignKeysItemArgs(
column_references=[google_native.bigquery.v2.TableConstraintsForeignKeysItemColumnReferencesItemArgs(
referenced_column="string",
referencing_column="string",
)],
name="string",
referenced_table=google_native.bigquery.v2.TableConstraintsForeignKeysItemReferencedTableArgs(
dataset_id="string",
project="string",
table_id="string",
),
)],
primary_key=google_native.bigquery.v2.TableConstraintsPrimaryKeyArgs(
columns=["string"],
),
),
biglake_configuration=google_native.bigquery.v2.BigLakeConfigurationArgs(
connection_id="string",
file_format="string",
storage_uri="string",
table_format="string",
),
time_partitioning=google_native.bigquery.v2.TimePartitioningArgs(
expiration_ms="string",
field="string",
require_partition_filter=False,
type="string",
),
materialized_view=google_native.bigquery.v2.MaterializedViewDefinitionArgs(
allow_non_incremental_definition=False,
enable_refresh=False,
max_staleness="string",
query="string",
refresh_interval_ms="string",
))
const google_nativeTableResource = new google_native.bigquery.v2.Table("google-nativeTableResource", {
datasetId: "string",
maxStaleness: "string",
tableReference: {
datasetId: "string",
project: "string",
tableId: "string",
},
description: "string",
encryptionConfiguration: {
kmsKeyName: "string",
},
expirationTime: "string",
externalDataConfiguration: {
autodetect: false,
avroOptions: {
useAvroLogicalTypes: false,
},
bigtableOptions: {
columnFamilies: [{
columns: [{
encoding: "string",
fieldName: "string",
onlyReadLatest: false,
qualifierEncoded: "string",
qualifierString: "string",
type: "string",
}],
encoding: "string",
familyId: "string",
onlyReadLatest: false,
type: "string",
}],
ignoreUnspecifiedColumnFamilies: false,
readRowkeyAsString: false,
},
compression: "string",
connectionId: "string",
csvOptions: {
allowJaggedRows: false,
allowQuotedNewlines: false,
encoding: "string",
fieldDelimiter: "string",
nullMarker: "string",
preserveAsciiControlCharacters: false,
quote: "string",
skipLeadingRows: "string",
},
decimalTargetTypes: ["string"],
fileSetSpecType: "string",
googleSheetsOptions: {
range: "string",
skipLeadingRows: "string",
},
hivePartitioningOptions: {
mode: "string",
requirePartitionFilter: false,
sourceUriPrefix: "string",
},
ignoreUnknownValues: false,
jsonOptions: {
encoding: "string",
},
maxBadRecords: 0,
metadataCacheMode: "string",
objectMetadata: "string",
parquetOptions: {
enableListInference: false,
enumAsString: false,
},
referenceFileSchemaUri: "string",
schema: {
fields: [{
categories: {
names: ["string"],
},
collation: "string",
defaultValueExpression: "string",
description: "string",
fields: [tableFieldSchema],
maxLength: "string",
mode: "string",
name: "string",
policyTags: {
names: ["string"],
},
precision: "string",
rangeElementType: {
type: "string",
},
roundingMode: "string",
scale: "string",
type: "string",
}],
},
sourceFormat: "string",
sourceUris: ["string"],
},
friendlyName: "string",
model: {
modelOptions: {
labels: ["string"],
lossType: "string",
modelType: "string",
},
trainingRuns: [{
iterationResults: [{
durationMs: "string",
evalLoss: 0,
index: 0,
learnRate: 0,
trainingLoss: 0,
}],
startTime: "string",
state: "string",
trainingOptions: {
earlyStop: false,
l1Reg: 0,
l2Reg: 0,
learnRate: 0,
learnRateStrategy: "string",
lineSearchInitLearnRate: 0,
maxIteration: "string",
minRelProgress: 0,
warmStart: false,
},
}],
},
view: {
query: "string",
useExplicitColumnNames: false,
useLegacySql: false,
userDefinedFunctionResources: [{
inlineCode: "string",
resourceUri: "string",
}],
},
clustering: {
fields: ["string"],
},
labels: {
string: "string",
},
project: "string",
rangePartitioning: {
field: "string",
range: {
end: "string",
interval: "string",
start: "string",
},
},
requirePartitionFilter: false,
resourceTags: {
string: "string",
},
schema: {
fields: [tableFieldSchema],
},
tableConstraints: {
foreignKeys: [{
columnReferences: [{
referencedColumn: "string",
referencingColumn: "string",
}],
name: "string",
referencedTable: {
datasetId: "string",
project: "string",
tableId: "string",
},
}],
primaryKey: {
columns: ["string"],
},
},
biglakeConfiguration: {
connectionId: "string",
fileFormat: "string",
storageUri: "string",
tableFormat: "string",
},
timePartitioning: {
expirationMs: "string",
field: "string",
requirePartitionFilter: false,
type: "string",
},
materializedView: {
allowNonIncrementalDefinition: false,
enableRefresh: false,
maxStaleness: "string",
query: "string",
refreshIntervalMs: "string",
},
});
type: google-native:bigquery/v2:Table
properties:
biglakeConfiguration:
connectionId: string
fileFormat: string
storageUri: string
tableFormat: string
clustering:
fields:
- string
datasetId: string
description: string
encryptionConfiguration:
kmsKeyName: string
expirationTime: string
externalDataConfiguration:
autodetect: false
avroOptions:
useAvroLogicalTypes: false
bigtableOptions:
columnFamilies:
- columns:
- encoding: string
fieldName: string
onlyReadLatest: false
qualifierEncoded: string
qualifierString: string
type: string
encoding: string
familyId: string
onlyReadLatest: false
type: string
ignoreUnspecifiedColumnFamilies: false
readRowkeyAsString: false
compression: string
connectionId: string
csvOptions:
allowJaggedRows: false
allowQuotedNewlines: false
encoding: string
fieldDelimiter: string
nullMarker: string
preserveAsciiControlCharacters: false
quote: string
skipLeadingRows: string
decimalTargetTypes:
- string
fileSetSpecType: string
googleSheetsOptions:
range: string
skipLeadingRows: string
hivePartitioningOptions:
mode: string
requirePartitionFilter: false
sourceUriPrefix: string
ignoreUnknownValues: false
jsonOptions:
encoding: string
maxBadRecords: 0
metadataCacheMode: string
objectMetadata: string
parquetOptions:
enableListInference: false
enumAsString: false
referenceFileSchemaUri: string
schema:
fields:
- categories:
names:
- string
collation: string
defaultValueExpression: string
description: string
fields:
- ${tableFieldSchema}
maxLength: string
mode: string
name: string
policyTags:
names:
- string
precision: string
rangeElementType:
type: string
roundingMode: string
scale: string
type: string
sourceFormat: string
sourceUris:
- string
friendlyName: string
labels:
string: string
materializedView:
allowNonIncrementalDefinition: false
enableRefresh: false
maxStaleness: string
query: string
refreshIntervalMs: string
maxStaleness: string
model:
modelOptions:
labels:
- string
lossType: string
modelType: string
trainingRuns:
- iterationResults:
- durationMs: string
evalLoss: 0
index: 0
learnRate: 0
trainingLoss: 0
startTime: string
state: string
trainingOptions:
earlyStop: false
l1Reg: 0
l2Reg: 0
learnRate: 0
learnRateStrategy: string
lineSearchInitLearnRate: 0
maxIteration: string
minRelProgress: 0
warmStart: false
project: string
rangePartitioning:
field: string
range:
end: string
interval: string
start: string
requirePartitionFilter: false
resourceTags:
string: string
schema:
fields:
- ${tableFieldSchema}
tableConstraints:
foreignKeys:
- columnReferences:
- referencedColumn: string
referencingColumn: string
name: string
referencedTable:
datasetId: string
project: string
tableId: string
primaryKey:
columns:
- string
tableReference:
datasetId: string
project: string
tableId: string
timePartitioning:
expirationMs: string
field: string
requirePartitionFilter: false
type: string
view:
query: string
useExplicitColumnNames: false
useLegacySql: false
userDefinedFunctionResources:
- inlineCode: string
resourceUri: string
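The reference examples above exercise every input with placeholder values. As a more focused sketch, the following TypeScript program defines an external table over CSV files in Cloud Storage via externalDataConfiguration; the bucket path, project, and dataset IDs are assumptions for illustration only.

import * as google_native from "@pulumi/google-native";

// Sketch: an external table backed by CSV objects in Cloud Storage.
// Bucket, project, and dataset identifiers are placeholders.
const externalCsvTable = new google_native.bigquery.v2.Table("externalCsvTable", {
    datasetId: "analytics_ds",
    project: "my-project",
    tableReference: {
        datasetId: "analytics_ds",
        project: "my-project",
        tableId: "raw_csv_events",
    },
    externalDataConfiguration: {
        sourceFormat: "CSV",
        sourceUris: ["gs://my-bucket/events/*.csv"],
        autodetect: true, // let BigQuery infer the schema from the files
        csvOptions: {
            skipLeadingRows: "1", // header row; note the API models this as a string
            fieldDelimiter: ",",
        },
        ignoreUnknownValues: true,
    },
});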
Table Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The Table resource accepts the following input properties:
- DatasetId string
- BiglakeConfiguration Pulumi.GoogleNative.BigQuery.V2.Inputs.BigLakeConfiguration
- [Optional] Specifies the configuration of a BigLake managed table.
- Clustering Pulumi.GoogleNative.BigQuery.V2.Inputs.Clustering
- [Beta] Clustering specification for the table. Must be specified with partitioning, data in the table will be first partitioned and subsequently clustered.
- Description string
- [Optional] A user-friendly description of this table.
- EncryptionConfiguration Pulumi.GoogleNative.BigQuery.V2.Inputs.EncryptionConfiguration
- Custom encryption configuration (e.g., Cloud KMS keys).
- ExpirationTime string
- [Optional] The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables.
- ExternalDataConfiguration Pulumi.GoogleNative.BigQuery.V2.Inputs.ExternalDataConfiguration
- [Optional] Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table.
- FriendlyName string
- [Optional] A descriptive name for this table.
- Labels Dictionary<string, string>
- The labels associated with this table. You can use these to organize and group your tables. Label keys and values can be no longer than 63 characters, can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter and each label in the list must have a different key.
- MaterializedView Pulumi.GoogleNative.BigQuery.V2.Inputs.MaterializedViewDefinition
- [Optional] Materialized view definition.
- MaxStaleness string
- [Optional] Max staleness of data that could be returned when table or materialized view is queried (formatted as Google SQL Interval type).
- Model Pulumi.GoogleNative.BigQuery.V2.Inputs.ModelDefinition
- [Output-only, Beta] Present iff this table represents a ML model. Describes the training information for the model, and it is required to run 'PREDICT' queries.
- Project string
- RangePartitioning Pulumi.GoogleNative.BigQuery.V2.Inputs.RangePartitioning
- [TrustedTester] Range partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.
- RequirePartitionFilter bool
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.
- ResourceTags Dictionary<string, string>
- [Optional] The tags associated with this table. Tag keys are globally unique. See additional information on tags. An object containing a list of "key": value pairs. The key is the namespaced friendly name of the tag key, e.g. "12345/environment" where 12345 is parent id. The value is the friendly short name of the tag value, e.g. "production".
- Schema Pulumi.GoogleNative.BigQuery.V2.Inputs.TableSchema
- [Optional] Describes the schema of this table.
- TableConstraints Pulumi.GoogleNative.BigQuery.V2.Inputs.TableConstraints
- [Optional] The table constraints on the table.
- TableReference Pulumi.GoogleNative.BigQuery.V2.Inputs.TableReference
- [Required] Reference describing the ID of this table.
- TimePartitioning Pulumi.GoogleNative.BigQuery.V2.Inputs.TimePartitioning
- Time-based partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.
- View Pulumi.GoogleNative.BigQuery.V2.Inputs.ViewDefinition
- [Optional] The view definition.
- DatasetId string
- BiglakeConfiguration BigLakeConfigurationArgs
- [Optional] Specifies the configuration of a BigLake managed table.
- Clustering ClusteringArgs
- [Beta] Clustering specification for the table. Must be specified with partitioning, data in the table will be first partitioned and subsequently clustered.
- Description string
- [Optional] A user-friendly description of this table.
- EncryptionConfiguration EncryptionConfigurationArgs
- Custom encryption configuration (e.g., Cloud KMS keys).
- ExpirationTime string
- [Optional] The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables.
- ExternalDataConfiguration ExternalDataConfigurationArgs
- [Optional] Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table.
- FriendlyName string
- [Optional] A descriptive name for this table.
- Labels map[string]string
- The labels associated with this table. You can use these to organize and group your tables. Label keys and values can be no longer than 63 characters, can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter and each label in the list must have a different key.
- MaterializedView MaterializedViewDefinitionArgs
- [Optional] Materialized view definition.
- MaxStaleness string
- [Optional] Max staleness of data that could be returned when table or materialized view is queried (formatted as Google SQL Interval type).
- Model ModelDefinitionArgs
- [Output-only, Beta] Present iff this table represents a ML model. Describes the training information for the model, and it is required to run 'PREDICT' queries.
- Project string
- RangePartitioning RangePartitioningArgs
- [TrustedTester] Range partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.
- RequirePartitionFilter bool
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.
- ResourceTags map[string]string
- [Optional] The tags associated with this table. Tag keys are globally unique. See additional information on tags. An object containing a list of "key": value pairs. The key is the namespaced friendly name of the tag key, e.g. "12345/environment" where 12345 is parent id. The value is the friendly short name of the tag value, e.g. "production".
- Schema TableSchemaArgs
- [Optional] Describes the schema of this table.
- TableConstraints TableConstraintsArgs
- [Optional] The table constraints on the table.
- TableReference TableReferenceArgs
- [Required] Reference describing the ID of this table.
- TimePartitioning TimePartitioningArgs
- Time-based partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.
- View ViewDefinitionArgs
- [Optional] The view definition.
- datasetId String
- biglakeConfiguration BigLakeConfiguration
- [Optional] Specifies the configuration of a BigLake managed table.
- clustering Clustering
- [Beta] Clustering specification for the table. Must be specified with partitioning, data in the table will be first partitioned and subsequently clustered.
- description String
- [Optional] A user-friendly description of this table.
- encryptionConfiguration EncryptionConfiguration
- Custom encryption configuration (e.g., Cloud KMS keys).
- expirationTime String
- [Optional] The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables.
- externalDataConfiguration ExternalDataConfiguration
- [Optional] Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table.
- friendlyName String
- [Optional] A descriptive name for this table.
- labels Map<String,String>
- The labels associated with this table. You can use these to organize and group your tables. Label keys and values can be no longer than 63 characters, can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter and each label in the list must have a different key.
- materializedView MaterializedViewDefinition
- [Optional] Materialized view definition.
- maxStaleness String
- [Optional] Max staleness of data that could be returned when table or materialized view is queried (formatted as Google SQL Interval type).
- model ModelDefinition
- [Output-only, Beta] Present iff this table represents a ML model. Describes the training information for the model, and it is required to run 'PREDICT' queries.
- project String
- rangePartitioning RangePartitioning
- [TrustedTester] Range partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.
- requirePartitionFilter Boolean
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.
- resourceTags Map<String,String>
- [Optional] The tags associated with this table. Tag keys are globally unique. See additional information on tags. An object containing a list of "key": value pairs. The key is the namespaced friendly name of the tag key, e.g. "12345/environment" where 12345 is parent id. The value is the friendly short name of the tag value, e.g. "production".
- schema TableSchema
- [Optional] Describes the schema of this table.
- tableConstraints TableConstraints
- [Optional] The table constraints on the table.
- tableReference TableReference
- [Required] Reference describing the ID of this table.
- timePartitioning TimePartitioning
- Time-based partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.
- view ViewDefinition
- [Optional] The view definition.
- datasetId string
- biglakeConfiguration BigLakeConfiguration
- [Optional] Specifies the configuration of a BigLake managed table.
- clustering Clustering
- [Beta] Clustering specification for the table. Must be specified with partitioning, data in the table will be first partitioned and subsequently clustered.
- description string
- [Optional] A user-friendly description of this table.
- encryptionConfiguration EncryptionConfiguration
- Custom encryption configuration (e.g., Cloud KMS keys).
- expirationTime string
- [Optional] The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables.
- externalDataConfiguration ExternalDataConfiguration
- [Optional] Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table.
- friendlyName string
- [Optional] A descriptive name for this table.
- labels {[key: string]: string}
- The labels associated with this table. You can use these to organize and group your tables. Label keys and values can be no longer than 63 characters, can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter and each label in the list must have a different key.
- materializedView MaterializedViewDefinition
- [Optional] Materialized view definition.
- maxStaleness string
- [Optional] Max staleness of data that could be returned when table or materialized view is queried (formatted as Google SQL Interval type).
- model ModelDefinition
- [Output-only, Beta] Present iff this table represents a ML model. Describes the training information for the model, and it is required to run 'PREDICT' queries.
- project string
- rangePartitioning RangePartitioning
- [TrustedTester] Range partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.
- requirePartitionFilter boolean
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.
- resourceTags {[key: string]: string}
- [Optional] The tags associated with this table. Tag keys are globally unique. See additional information on tags. An object containing a list of "key": value pairs. The key is the namespaced friendly name of the tag key, e.g. "12345/environment" where 12345 is parent id. The value is the friendly short name of the tag value, e.g. "production".
- schema TableSchema
- [Optional] Describes the schema of this table.
- tableConstraints TableConstraints
- [Optional] The table constraints on the table.
- tableReference TableReference
- [Required] Reference describing the ID of this table.
- timePartitioning TimePartitioning
- Time-based partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.
- view ViewDefinition
- [Optional] The view definition.
- dataset_id str
- biglake_configuration BigLakeConfigurationArgs
- [Optional] Specifies the configuration of a BigLake managed table.
- clustering ClusteringArgs
- [Beta] Clustering specification for the table. Must be specified with partitioning, data in the table will be first partitioned and subsequently clustered.
- description str
- [Optional] A user-friendly description of this table.
- encryption_configuration EncryptionConfigurationArgs
- Custom encryption configuration (e.g., Cloud KMS keys).
- expiration_time str
- [Optional] The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables.
- external_data_configuration ExternalDataConfigurationArgs
- [Optional] Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table.
- friendly_name str
- [Optional] A descriptive name for this table.
- labels Mapping[str, str]
- The labels associated with this table. You can use these to organize and group your tables. Label keys and values can be no longer than 63 characters, can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter and each label in the list must have a different key.
- materialized_view MaterializedViewDefinitionArgs
- [Optional] Materialized view definition.
- max_staleness str
- [Optional] Max staleness of data that could be returned when table or materialized view is queried (formatted as Google SQL Interval type).
- model ModelDefinitionArgs
- [Output-only, Beta] Present iff this table represents a ML model. Describes the training information for the model, and it is required to run 'PREDICT' queries.
- project str
- range_partitioning RangePartitioningArgs
- [TrustedTester] Range partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.
- require_partition_filter bool
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.
- resource_tags Mapping[str, str]
- [Optional] The tags associated with this table. Tag keys are globally unique. See additional information on tags. An object containing a list of "key": value pairs. The key is the namespaced friendly name of the tag key, e.g. "12345/environment" where 12345 is parent id. The value is the friendly short name of the tag value, e.g. "production".
- schema TableSchemaArgs
- [Optional] Describes the schema of this table.
- table_constraints TableConstraintsArgs
- [Optional] The table constraints on the table.
- table_reference TableReferenceArgs
- [Required] Reference describing the ID of this table.
- time_partitioning TimePartitioningArgs
- Time-based partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.
- view ViewDefinitionArgs
- [Optional] The view definition.
- datasetId String
- biglakeConfiguration Property Map
- [Optional] Specifies the configuration of a BigLake managed table.
- clustering Property Map
- [Beta] Clustering specification for the table. Must be specified with partitioning, data in the table will be first partitioned and subsequently clustered.
- description String
- [Optional] A user-friendly description of this table.
- encryptionConfiguration Property Map
- Custom encryption configuration (e.g., Cloud KMS keys).
- expirationTime String
- [Optional] The time when this table expires, in milliseconds since the epoch. If not present, the table will persist indefinitely. Expired tables will be deleted and their storage reclaimed. The defaultTableExpirationMs property of the encapsulating dataset can be used to set a default expirationTime on newly created tables.
- externalDataConfiguration Property Map
- [Optional] Describes the data format, location, and other properties of a table stored outside of BigQuery. By defining these properties, the data source can then be queried as if it were a standard BigQuery table.
- friendlyName String
- [Optional] A descriptive name for this table.
- labels Map<String>
- The labels associated with this table. You can use these to organize and group your tables. Label keys and values can be no longer than 63 characters, can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter and each label in the list must have a different key.
- materializedView Property Map
- [Optional] Materialized view definition.
- maxStaleness String
- [Optional] Max staleness of data that could be returned when table or materialized view is queried (formatted as Google SQL Interval type).
- model Property Map
- [Output-only, Beta] Present iff this table represents a ML model. Describes the training information for the model, and it is required to run 'PREDICT' queries.
- project String
- rangePartitioning Property Map
- [TrustedTester] Range partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified.
- requirePartitionFilter Boolean
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified.
- resourceTags Map<String>
- [Optional] The tags associated with this table. Tag keys are globally unique. See additional information on tags. An object containing a list of "key": value pairs. The key is the namespaced friendly name of the tag key, e.g. "12345/environment" where 12345 is parent id. The value is the friendly short name of the tag value, e.g. "production".
- schema Property Map
- [Optional] Describes the schema of this table.
- tableConstraints Property Map
- [Optional] The table constraints on the table.
- tableReference Property Map
- [Required] Reference describing the ID of this table.
- timePartitioning Property Map
- Time-based partitioning specification for this table. Only one of timePartitioning and rangePartitioning should be specified (see the usage sketch after this list).
- view Property Map
- [Optional] The view definition.
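Several of the inputs above are commonly combined: timePartitioning and clustering shape how data is laid out, and requirePartitionFilter forces queries to prune partitions. The TypeScript sketch below shows one such combination; the identifiers and the 90-day partition expiration are illustrative choices, not defaults.

import * as google_native from "@pulumi/google-native";

// Sketch: a day-partitioned, clustered table that requires a partition filter.
// Project, dataset, table, and column names are placeholders.
const clickstreamTable = new google_native.bigquery.v2.Table("clickstreamTable", {
    datasetId: "analytics_ds",
    project: "my-project",
    tableReference: {
        datasetId: "analytics_ds",
        project: "my-project",
        tableId: "clickstream",
    },
    schema: {
        fields: [
            { name: "user_id", type: "STRING", mode: "REQUIRED" },
            { name: "page_url", type: "STRING", mode: "NULLABLE" },
            { name: "clicked_at", type: "TIMESTAMP", mode: "REQUIRED" },
        ],
    },
    // Partition by day on the TIMESTAMP column, then cluster rows within each partition.
    timePartitioning: {
        type: "DAY",
        field: "clicked_at",
        expirationMs: "7776000000", // ~90 days; the API expects this as a string
    },
    clustering: {
        fields: ["user_id"],
    },
    requirePartitionFilter: true,
});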
Outputs
All input properties are implicitly available as output properties. Additionally, the Table resource produces the following output properties:
- CloneDefinition Pulumi.GoogleNative.BigQuery.V2.Outputs.CloneDefinitionResponse
- Clone definition.
- CreationTime string
- The time when this table was created, in milliseconds since the epoch.
- DefaultCollation string
- The default collation of the table.
- DefaultRoundingMode string
- The default rounding mode of the table.
- Etag string
- A hash of the table metadata. Used to ensure there were no concurrent modifications to the resource when attempting an update. Not guaranteed to change when the table contents or the fields numRows, numBytes, numLongTermBytes or lastModifiedTime change.
- Id string
- The provider-assigned unique ID for this managed resource.
- Kind string
- The type of the resource.
- LastModifiedTime string
- The time when this table was last modified, in milliseconds since the epoch.
- Location string
- The geographic location where the table resides. This value is inherited from the dataset.
- NumActiveLogicalBytes string
- Number of logical bytes that are less than 90 days old.
- NumActivePhysicalBytes string
- Number of physical bytes less than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- NumBytes string
- The size of this table in bytes, excluding any data in the streaming buffer.
- NumLongTermBytes string
- The number of bytes in the table that are considered "long-term storage".
- NumLongTermLogicalBytes string
- Number of logical bytes that are more than 90 days old.
- NumLongTermPhysicalBytes string
- Number of physical bytes more than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- NumPartitions string
- The number of partitions present in the table or materialized view. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- NumPhysicalBytes string
- [TrustedTester] The physical size of this table in bytes, excluding any data in the streaming buffer. This includes compression and storage used for time travel.
- NumRows string
- The number of rows of data in this table, excluding any data in the streaming buffer.
- NumTimeTravelPhysicalBytes string
- Number of physical bytes used by time travel storage (deleted or changed data). This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- NumTotalLogicalBytes string
- Total number of logical bytes in the table or materialized view.
- NumTotalPhysicalBytes string
- The physical size of this table in bytes. This also includes storage used for time travel. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- SelfLink string
- A URL that can be used to access this resource again.
- SnapshotDefinition Pulumi.GoogleNative.BigQuery.V2.Outputs.SnapshotDefinitionResponse
- Snapshot definition.
- StreamingBuffer Pulumi.GoogleNative.BigQuery.V2.Outputs.StreamingbufferResponse
- Contains information regarding this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.
- Type string
- Describes the table type. The following values are supported: TABLE: A normal BigQuery table. VIEW: A virtual table defined by a SQL query. SNAPSHOT: An immutable, read-only table that is a copy of another table. [TrustedTester] MATERIALIZED_VIEW: SQL query whose result is persisted. EXTERNAL: A table that references data stored in an external storage system, such as Google Cloud Storage. The default value is TABLE.
- CloneDefinition CloneDefinitionResponse - Clone definition.
- CreationTime string - The time when this table was created, in milliseconds since the epoch.
- DefaultCollation string - The default collation of the table.
- DefaultRoundingMode string - The default rounding mode of the table.
- Etag string - A hash of the table metadata. Used to ensure there were no concurrent modifications to the resource when attempting an update. Not guaranteed to change when the table contents or the fields numRows, numBytes, numLongTermBytes or lastModifiedTime change.
- Id string - The provider-assigned unique ID for this managed resource.
- Kind string - The type of the resource.
- LastModifiedTime string - The time when this table was last modified, in milliseconds since the epoch.
- Location string - The geographic location where the table resides. This value is inherited from the dataset.
- NumActiveLogicalBytes string - Number of logical bytes that are less than 90 days old.
- NumActivePhysicalBytes string - Number of physical bytes less than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- NumBytes string - The size of this table in bytes, excluding any data in the streaming buffer.
- NumLongTermBytes string - The number of bytes in the table that are considered "long-term storage".
- NumLongTermLogicalBytes string - Number of logical bytes that are more than 90 days old.
- NumLongTermPhysicalBytes string - Number of physical bytes more than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- NumPartitions string - The number of partitions present in the table or materialized view. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- NumPhysicalBytes string - [TrustedTester] The physical size of this table in bytes, excluding any data in the streaming buffer. This includes compression and storage used for time travel.
- NumRows string - The number of rows of data in this table, excluding any data in the streaming buffer.
- NumTimeTravelPhysicalBytes string - Number of physical bytes used by time travel storage (deleted or changed data). This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- NumTotalLogicalBytes string - Total number of logical bytes in the table or materialized view.
- NumTotalPhysicalBytes string - The physical size of this table in bytes. This also includes storage used for time travel. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- SelfLink string - A URL that can be used to access this resource again.
- SnapshotDefinition SnapshotDefinitionResponse - Snapshot definition.
- StreamingBuffer StreamingbufferResponse - Contains information regarding this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.
- Type string - Describes the table type. The following values are supported: TABLE: A normal BigQuery table. VIEW: A virtual table defined by a SQL query. SNAPSHOT: An immutable, read-only table that is a copy of another table. [TrustedTester] MATERIALIZED_VIEW: SQL query whose result is persisted. EXTERNAL: A table that references data stored in an external storage system, such as Google Cloud Storage. The default value is TABLE.
- cloneDefinition CloneDefinitionResponse - Clone definition.
- creationTime String - The time when this table was created, in milliseconds since the epoch.
- defaultCollation String - The default collation of the table.
- defaultRoundingMode String - The default rounding mode of the table.
- etag String - A hash of the table metadata. Used to ensure there were no concurrent modifications to the resource when attempting an update. Not guaranteed to change when the table contents or the fields numRows, numBytes, numLongTermBytes or lastModifiedTime change.
- id String - The provider-assigned unique ID for this managed resource.
- kind String - The type of the resource.
- lastModifiedTime String - The time when this table was last modified, in milliseconds since the epoch.
- location String - The geographic location where the table resides. This value is inherited from the dataset.
- numActiveLogicalBytes String - Number of logical bytes that are less than 90 days old.
- numActivePhysicalBytes String - Number of physical bytes less than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- numBytes String - The size of this table in bytes, excluding any data in the streaming buffer.
- numLongTermBytes String - The number of bytes in the table that are considered "long-term storage".
- numLongTermLogicalBytes String - Number of logical bytes that are more than 90 days old.
- numLongTermPhysicalBytes String - Number of physical bytes more than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- numPartitions String - The number of partitions present in the table or materialized view. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- numPhysicalBytes String - [TrustedTester] The physical size of this table in bytes, excluding any data in the streaming buffer. This includes compression and storage used for time travel.
- numRows String - The number of rows of data in this table, excluding any data in the streaming buffer.
- numTimeTravelPhysicalBytes String - Number of physical bytes used by time travel storage (deleted or changed data). This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- numTotalLogicalBytes String - Total number of logical bytes in the table or materialized view.
- numTotalPhysicalBytes String - The physical size of this table in bytes. This also includes storage used for time travel. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- selfLink String - A URL that can be used to access this resource again.
- snapshotDefinition SnapshotDefinitionResponse - Snapshot definition.
- streamingBuffer StreamingbufferResponse - Contains information regarding this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.
- type String - Describes the table type. The following values are supported: TABLE: A normal BigQuery table. VIEW: A virtual table defined by a SQL query. SNAPSHOT: An immutable, read-only table that is a copy of another table. [TrustedTester] MATERIALIZED_VIEW: SQL query whose result is persisted. EXTERNAL: A table that references data stored in an external storage system, such as Google Cloud Storage. The default value is TABLE.
- cloneDefinition CloneDefinitionResponse - Clone definition.
- creationTime string - The time when this table was created, in milliseconds since the epoch.
- defaultCollation string - The default collation of the table.
- defaultRoundingMode string - The default rounding mode of the table.
- etag string - A hash of the table metadata. Used to ensure there were no concurrent modifications to the resource when attempting an update. Not guaranteed to change when the table contents or the fields numRows, numBytes, numLongTermBytes or lastModifiedTime change.
- id string - The provider-assigned unique ID for this managed resource.
- kind string - The type of the resource.
- lastModifiedTime string - The time when this table was last modified, in milliseconds since the epoch.
- location string - The geographic location where the table resides. This value is inherited from the dataset.
- numActiveLogicalBytes string - Number of logical bytes that are less than 90 days old.
- numActivePhysicalBytes string - Number of physical bytes less than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- numBytes string - The size of this table in bytes, excluding any data in the streaming buffer.
- numLongTermBytes string - The number of bytes in the table that are considered "long-term storage".
- numLongTermLogicalBytes string - Number of logical bytes that are more than 90 days old.
- numLongTermPhysicalBytes string - Number of physical bytes more than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- numPartitions string - The number of partitions present in the table or materialized view. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- numPhysicalBytes string - [TrustedTester] The physical size of this table in bytes, excluding any data in the streaming buffer. This includes compression and storage used for time travel.
- numRows string - The number of rows of data in this table, excluding any data in the streaming buffer.
- numTimeTravelPhysicalBytes string - Number of physical bytes used by time travel storage (deleted or changed data). This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- numTotalLogicalBytes string - Total number of logical bytes in the table or materialized view.
- numTotalPhysicalBytes string - The physical size of this table in bytes. This also includes storage used for time travel. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- selfLink string - A URL that can be used to access this resource again.
- snapshotDefinition SnapshotDefinitionResponse - Snapshot definition.
- streamingBuffer StreamingbufferResponse - Contains information regarding this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.
- type string - Describes the table type. The following values are supported: TABLE: A normal BigQuery table. VIEW: A virtual table defined by a SQL query. SNAPSHOT: An immutable, read-only table that is a copy of another table. [TrustedTester] MATERIALIZED_VIEW: SQL query whose result is persisted. EXTERNAL: A table that references data stored in an external storage system, such as Google Cloud Storage. The default value is TABLE.
- clone_definition CloneDefinitionResponse - Clone definition.
- creation_time str - The time when this table was created, in milliseconds since the epoch.
- default_collation str - The default collation of the table.
- default_rounding_mode str - The default rounding mode of the table.
- etag str - A hash of the table metadata. Used to ensure there were no concurrent modifications to the resource when attempting an update. Not guaranteed to change when the table contents or the fields numRows, numBytes, numLongTermBytes or lastModifiedTime change.
- id str - The provider-assigned unique ID for this managed resource.
- kind str - The type of the resource.
- last_modified_time str - The time when this table was last modified, in milliseconds since the epoch.
- location str - The geographic location where the table resides. This value is inherited from the dataset.
- num_active_logical_bytes str - Number of logical bytes that are less than 90 days old.
- num_active_physical_bytes str - Number of physical bytes less than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- num_bytes str - The size of this table in bytes, excluding any data in the streaming buffer.
- num_long_term_bytes str - The number of bytes in the table that are considered "long-term storage".
- num_long_term_logical_bytes str - Number of logical bytes that are more than 90 days old.
- num_long_term_physical_bytes str - Number of physical bytes more than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- num_partitions str - The number of partitions present in the table or materialized view. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- num_physical_bytes str - [TrustedTester] The physical size of this table in bytes, excluding any data in the streaming buffer. This includes compression and storage used for time travel.
- num_rows str - The number of rows of data in this table, excluding any data in the streaming buffer.
- num_time_travel_physical_bytes str - Number of physical bytes used by time travel storage (deleted or changed data). This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- num_total_logical_bytes str - Total number of logical bytes in the table or materialized view.
- num_total_physical_bytes str - The physical size of this table in bytes. This also includes storage used for time travel. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- self_link str - A URL that can be used to access this resource again.
- snapshot_definition SnapshotDefinitionResponse - Snapshot definition.
- streaming_buffer StreamingbufferResponse - Contains information regarding this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.
- type str - Describes the table type. The following values are supported: TABLE: A normal BigQuery table. VIEW: A virtual table defined by a SQL query. SNAPSHOT: An immutable, read-only table that is a copy of another table. [TrustedTester] MATERIALIZED_VIEW: SQL query whose result is persisted. EXTERNAL: A table that references data stored in an external storage system, such as Google Cloud Storage. The default value is TABLE.
- cloneDefinition Property Map - Clone definition.
- creationTime String - The time when this table was created, in milliseconds since the epoch.
- defaultCollation String - The default collation of the table.
- defaultRoundingMode String - The default rounding mode of the table.
- etag String - A hash of the table metadata. Used to ensure there were no concurrent modifications to the resource when attempting an update. Not guaranteed to change when the table contents or the fields numRows, numBytes, numLongTermBytes or lastModifiedTime change.
- id String - The provider-assigned unique ID for this managed resource.
- kind String - The type of the resource.
- lastModifiedTime String - The time when this table was last modified, in milliseconds since the epoch.
- location String - The geographic location where the table resides. This value is inherited from the dataset.
- numActiveLogicalBytes String - Number of logical bytes that are less than 90 days old.
- numActivePhysicalBytes String - Number of physical bytes less than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- numBytes String - The size of this table in bytes, excluding any data in the streaming buffer.
- numLongTermBytes String - The number of bytes in the table that are considered "long-term storage".
- numLongTermLogicalBytes String - Number of logical bytes that are more than 90 days old.
- numLongTermPhysicalBytes String - Number of physical bytes more than 90 days old. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- numPartitions String - The number of partitions present in the table or materialized view. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- numPhysicalBytes String - [TrustedTester] The physical size of this table in bytes, excluding any data in the streaming buffer. This includes compression and storage used for time travel.
- numRows String - The number of rows of data in this table, excluding any data in the streaming buffer.
- numTimeTravelPhysicalBytes String - Number of physical bytes used by time travel storage (deleted or changed data). This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- numTotalLogicalBytes String - Total number of logical bytes in the table or materialized view.
- numTotalPhysicalBytes String - The physical size of this table in bytes. This also includes storage used for time travel. This data is not kept in real time, and might be delayed by a few seconds to a few minutes.
- selfLink String - A URL that can be used to access this resource again.
- snapshotDefinition Property Map - Snapshot definition.
- streamingBuffer Property Map - Contains information regarding this table's streaming buffer, if one is present. This field will be absent if the table is not being streamed to or if there is no data in the streaming buffer.
- type String - Describes the table type. The following values are supported: TABLE: A normal BigQuery table. VIEW: A virtual table defined by a SQL query. SNAPSHOT: An immutable, read-only table that is a copy of another table. [TrustedTester] MATERIALIZED_VIEW: SQL query whose result is persisted. EXTERNAL: A table that references data stored in an external storage system, such as Google Cloud Storage. The default value is TABLE.
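The properties above are output-only and resolve once the table has been created. The following TypeScript sketch exports a few of them; it is a minimal example, and the project, dataset, and table names are placeholders:

import * as google_native from "@pulumi/google-native";

// Minimal table; "my-project", "my_dataset", and "example_table" are placeholders.
const table = new google_native.bigquery.v2.Table("example-table", {
    datasetId: "my_dataset",
    tableReference: {
        project: "my-project",
        datasetId: "my_dataset",
        tableId: "example_table",
    },
});

// Output-only properties such as selfLink, numRows, and type are
// populated by the service after creation.
export const selfLink = table.selfLink;
export const numRows = table.numRows;
export const tableType = table.type;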
Supporting Types
AvroOptions, AvroOptionsArgs
- UseAvroLogicalTypes bool - [Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER).
- UseAvroLogicalTypes bool - [Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER).
- useAvroLogicalTypes Boolean - [Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER).
- useAvroLogicalTypes boolean - [Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER).
- use_avro_logical_types bool - [Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER).
- useAvroLogicalTypes Boolean - [Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER).
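AvroOptions takes effect when the table is defined over external Avro data (sourceFormat set to "AVRO" in the external data configuration). A minimal TypeScript sketch; the Cloud Storage path and the project/dataset names are placeholders, and the externalDataConfiguration fields follow the BigQuery API's external-table shape:

import * as google_native from "@pulumi/google-native";

// External table over Avro files; bucket and object prefix are placeholders.
const avroTable = new google_native.bigquery.v2.Table("avro-table", {
    datasetId: "my_dataset",
    tableReference: {
        project: "my-project",
        datasetId: "my_dataset",
        tableId: "avro_events",
    },
    externalDataConfiguration: {
        sourceFormat: "AVRO",
        sourceUris: ["gs://my-bucket/events/*.avro"],
        avroOptions: {
            // Interpret Avro logical types (e.g. timestamp-micros) as the
            // corresponding BigQuery type instead of the raw INTEGER value.
            useAvroLogicalTypes: true,
        },
    },
});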
AvroOptionsResponse, AvroOptionsResponseArgs
- UseAvroLogicalTypes bool - [Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER).
- UseAvroLogicalTypes bool - [Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER).
- useAvroLogicalTypes Boolean - [Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER).
- useAvroLogicalTypes boolean - [Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER).
- use_avro_logical_types bool - [Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER).
- useAvroLogicalTypes Boolean - [Optional] If sourceFormat is set to "AVRO", indicates whether to interpret logical types as the corresponding BigQuery data type (for example, TIMESTAMP), instead of using the raw type (for example, INTEGER).
BigLakeConfiguration, BigLakeConfigurationArgs
- ConnectionId string - [Required] Required and immutable. Credential reference for accessing external storage system. Normalized as project_id.location_id.connection_id.
- FileFormat string - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- StorageUri string - [Required] Required and immutable. Fully qualified location prefix of the external folder where data is stored. Normalized to standard format: "gs:////". Starts with "gs://" rather than "/bigstore/". Ends with "/". Does not contain "*". See also BigLakeStorageMetadata on how it is used.
- TableFormat string - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- ConnectionId string - [Required] Required and immutable. Credential reference for accessing external storage system. Normalized as project_id.location_id.connection_id.
- FileFormat string - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- StorageUri string - [Required] Required and immutable. Fully qualified location prefix of the external folder where data is stored. Normalized to standard format: "gs:////". Starts with "gs://" rather than "/bigstore/". Ends with "/". Does not contain "*". See also BigLakeStorageMetadata on how it is used.
- TableFormat string - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- connectionId String - [Required] Required and immutable. Credential reference for accessing external storage system. Normalized as project_id.location_id.connection_id.
- fileFormat String - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- storageUri String - [Required] Required and immutable. Fully qualified location prefix of the external folder where data is stored. Normalized to standard format: "gs:////". Starts with "gs://" rather than "/bigstore/". Ends with "/". Does not contain "*". See also BigLakeStorageMetadata on how it is used.
- tableFormat String - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- connectionId string - [Required] Required and immutable. Credential reference for accessing external storage system. Normalized as project_id.location_id.connection_id.
- fileFormat string - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- storageUri string - [Required] Required and immutable. Fully qualified location prefix of the external folder where data is stored. Normalized to standard format: "gs:////". Starts with "gs://" rather than "/bigstore/". Ends with "/". Does not contain "*". See also BigLakeStorageMetadata on how it is used.
- tableFormat string - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- connection_id str - [Required] Required and immutable. Credential reference for accessing external storage system. Normalized as project_id.location_id.connection_id.
- file_format str - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- storage_uri str - [Required] Required and immutable. Fully qualified location prefix of the external folder where data is stored. Normalized to standard format: "gs:////". Starts with "gs://" rather than "/bigstore/". Ends with "/". Does not contain "*". See also BigLakeStorageMetadata on how it is used.
- table_format str - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- connectionId String - [Required] Required and immutable. Credential reference for accessing external storage system. Normalized as project_id.location_id.connection_id.
- fileFormat String - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- storageUri String - [Required] Required and immutable. Fully qualified location prefix of the external folder where data is stored. Normalized to standard format: "gs:////". Starts with "gs://" rather than "/bigstore/". Ends with "/". Does not contain "*". See also BigLakeStorageMetadata on how it is used.
- tableFormat String - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
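BigLakeConfiguration makes the table a BigLake table backed by an external connection and a Cloud Storage prefix. A hedged TypeScript sketch follows; the connection, bucket, and schema values are placeholders, and the format strings simply follow the field descriptions above:

import * as google_native from "@pulumi/google-native";

// BigLake-managed table; connection, bucket, and names are placeholders.
const biglakeTable = new google_native.bigquery.v2.Table("biglake-table", {
    datasetId: "my_dataset",
    tableReference: {
        project: "my-project",
        datasetId: "my_dataset",
        tableId: "biglake_orders",
    },
    biglakeConfiguration: {
        // Normalized as project_id.location_id.connection_id.
        connectionId: "my-project.us.my-connection",
        // Fully qualified storage prefix; must end with "/".
        storageUri: "gs://my-bucket/biglake/orders/",
        fileFormat: "PARQUET",
        tableFormat: "PARQUET",
    },
    schema: {
        fields: [
            { name: "order_id", type: "STRING" },
            { name: "amount", type: "NUMERIC" },
        ],
    },
});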
BigLakeConfigurationResponse, BigLakeConfigurationResponseArgs
- ConnectionId string - [Required] Required and immutable. Credential reference for accessing external storage system. Normalized as project_id.location_id.connection_id.
- FileFormat string - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- StorageUri string - [Required] Required and immutable. Fully qualified location prefix of the external folder where data is stored. Normalized to standard format: "gs:////". Starts with "gs://" rather than "/bigstore/". Ends with "/". Does not contain "*". See also BigLakeStorageMetadata on how it is used.
- TableFormat string - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- ConnectionId string - [Required] Required and immutable. Credential reference for accessing external storage system. Normalized as project_id.location_id.connection_id.
- FileFormat string - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- StorageUri string - [Required] Required and immutable. Fully qualified location prefix of the external folder where data is stored. Normalized to standard format: "gs:////". Starts with "gs://" rather than "/bigstore/". Ends with "/". Does not contain "*". See also BigLakeStorageMetadata on how it is used.
- TableFormat string - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- connectionId String - [Required] Required and immutable. Credential reference for accessing external storage system. Normalized as project_id.location_id.connection_id.
- fileFormat String - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- storageUri String - [Required] Required and immutable. Fully qualified location prefix of the external folder where data is stored. Normalized to standard format: "gs:////". Starts with "gs://" rather than "/bigstore/". Ends with "/". Does not contain "*". See also BigLakeStorageMetadata on how it is used.
- tableFormat String - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- connectionId string - [Required] Required and immutable. Credential reference for accessing external storage system. Normalized as project_id.location_id.connection_id.
- fileFormat string - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- storageUri string - [Required] Required and immutable. Fully qualified location prefix of the external folder where data is stored. Normalized to standard format: "gs:////". Starts with "gs://" rather than "/bigstore/". Ends with "/". Does not contain "*". See also BigLakeStorageMetadata on how it is used.
- tableFormat string - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- connection_id str - [Required] Required and immutable. Credential reference for accessing external storage system. Normalized as project_id.location_id.connection_id.
- file_format str - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- storage_uri str - [Required] Required and immutable. Fully qualified location prefix of the external folder where data is stored. Normalized to standard format: "gs:////". Starts with "gs://" rather than "/bigstore/". Ends with "/". Does not contain "*". See also BigLakeStorageMetadata on how it is used.
- table_format str - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- connectionId String - [Required] Required and immutable. Credential reference for accessing external storage system. Normalized as project_id.location_id.connection_id.
- fileFormat String - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
- storageUri String - [Required] Required and immutable. Fully qualified location prefix of the external folder where data is stored. Normalized to standard format: "gs:////". Starts with "gs://" rather than "/bigstore/". Ends with "/". Does not contain "*". See also BigLakeStorageMetadata on how it is used.
- tableFormat String - [Required] Required and immutable. Open source file format that the table data is stored in. Currently only PARQUET is supported.
BigtableColumn, BigtableColumnArgs
- Encoding string - [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. 'encoding' can also be set at the column family level. However, the setting at this level takes precedence if 'encoding' is set at both levels.
- FieldName string - [Optional] If the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field name and is used as field name in queries.
- OnlyReadLatest bool - [Optional] If this is set, only the latest version of value in this column are exposed. 'onlyReadLatest' can also be set at the column family level. However, the setting at this level takes precedence if 'onlyReadLatest' is set at both levels.
- QualifierEncoded string - [Required] Qualifier of the column. Columns in the parent column family that has this exact qualifier are exposed as . field. If the qualifier is valid UTF-8 string, it can be specified in the qualifier_string field. Otherwise, a base-64 encoded value must be set to qualifier_encoded. The column field name is the same as the column qualifier. However, if the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as field_name.
- QualifierString string
- Type string - [Optional] The type to convert the value in cells of this column. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. 'type' can also be set at the column family level. However, the setting at this level takes precedence if 'type' is set at both levels.
- Encoding string - [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. 'encoding' can also be set at the column family level. However, the setting at this level takes precedence if 'encoding' is set at both levels.
- FieldName string - [Optional] If the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field name and is used as field name in queries.
- OnlyReadLatest bool - [Optional] If this is set, only the latest version of value in this column are exposed. 'onlyReadLatest' can also be set at the column family level. However, the setting at this level takes precedence if 'onlyReadLatest' is set at both levels.
- QualifierEncoded string - [Required] Qualifier of the column. Columns in the parent column family that has this exact qualifier are exposed as . field. If the qualifier is valid UTF-8 string, it can be specified in the qualifier_string field. Otherwise, a base-64 encoded value must be set to qualifier_encoded. The column field name is the same as the column qualifier. However, if the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as field_name.
- QualifierString string
- Type string - [Optional] The type to convert the value in cells of this column. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. 'type' can also be set at the column family level. However, the setting at this level takes precedence if 'type' is set at both levels.
- encoding String - [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. 'encoding' can also be set at the column family level. However, the setting at this level takes precedence if 'encoding' is set at both levels.
- fieldName String - [Optional] If the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field name and is used as field name in queries.
- onlyReadLatest Boolean - [Optional] If this is set, only the latest version of value in this column are exposed. 'onlyReadLatest' can also be set at the column family level. However, the setting at this level takes precedence if 'onlyReadLatest' is set at both levels.
- qualifierEncoded String - [Required] Qualifier of the column. Columns in the parent column family that has this exact qualifier are exposed as . field. If the qualifier is valid UTF-8 string, it can be specified in the qualifier_string field. Otherwise, a base-64 encoded value must be set to qualifier_encoded. The column field name is the same as the column qualifier. However, if the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as field_name.
- qualifierString String
- type String - [Optional] The type to convert the value in cells of this column. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. 'type' can also be set at the column family level. However, the setting at this level takes precedence if 'type' is set at both levels.
- encoding string - [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. 'encoding' can also be set at the column family level. However, the setting at this level takes precedence if 'encoding' is set at both levels.
- fieldName string - [Optional] If the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field name and is used as field name in queries.
- onlyReadLatest boolean - [Optional] If this is set, only the latest version of value in this column are exposed. 'onlyReadLatest' can also be set at the column family level. However, the setting at this level takes precedence if 'onlyReadLatest' is set at both levels.
- qualifierEncoded string - [Required] Qualifier of the column. Columns in the parent column family that has this exact qualifier are exposed as . field. If the qualifier is valid UTF-8 string, it can be specified in the qualifier_string field. Otherwise, a base-64 encoded value must be set to qualifier_encoded. The column field name is the same as the column qualifier. However, if the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as field_name.
- qualifierString string
- type string - [Optional] The type to convert the value in cells of this column. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. 'type' can also be set at the column family level. However, the setting at this level takes precedence if 'type' is set at both levels.
- encoding str - [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. 'encoding' can also be set at the column family level. However, the setting at this level takes precedence if 'encoding' is set at both levels.
- field_name str - [Optional] If the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field name and is used as field name in queries.
- only_read_latest bool - [Optional] If this is set, only the latest version of value in this column are exposed. 'onlyReadLatest' can also be set at the column family level. However, the setting at this level takes precedence if 'onlyReadLatest' is set at both levels.
- qualifier_encoded str - [Required] Qualifier of the column. Columns in the parent column family that has this exact qualifier are exposed as . field. If the qualifier is valid UTF-8 string, it can be specified in the qualifier_string field. Otherwise, a base-64 encoded value must be set to qualifier_encoded. The column field name is the same as the column qualifier. However, if the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as field_name.
- qualifier_string str
- type str - [Optional] The type to convert the value in cells of this column. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. 'type' can also be set at the column family level. However, the setting at this level takes precedence if 'type' is set at both levels.
- encoding String - [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. 'encoding' can also be set at the column family level. However, the setting at this level takes precedence if 'encoding' is set at both levels.
- fieldName String - [Optional] If the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field name and is used as field name in queries.
- onlyReadLatest Boolean - [Optional] If this is set, only the latest version of value in this column are exposed. 'onlyReadLatest' can also be set at the column family level. However, the setting at this level takes precedence if 'onlyReadLatest' is set at both levels.
- qualifierEncoded String - [Required] Qualifier of the column. Columns in the parent column family that has this exact qualifier are exposed as . field. If the qualifier is valid UTF-8 string, it can be specified in the qualifier_string field. Otherwise, a base-64 encoded value must be set to qualifier_encoded. The column field name is the same as the column qualifier. However, if the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as field_name.
- qualifierString String
- type String - [Optional] The type to convert the value in cells of this column. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. 'type' can also be set at the column family level. However, the setting at this level takes precedence if 'type' is set at both levels.
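A BigtableColumn maps a single Bigtable column qualifier to a named BigQuery field. A minimal TypeScript sketch of one such mapping follows; the qualifier and field name are placeholders, and the sketch after the BigtableColumnFamily section below shows where such an object is embedded:

// A single Bigtable column mapping; this plain object follows the
// BigtableColumn inputs listed above (placeholder names).
const temperatureColumn = {
    // Base64 encoding of the raw column qualifier bytes, here "temp".
    qualifierEncoded: Buffer.from("temp").toString("base64"),
    // Exposed as this BigQuery field name in queries.
    fieldName: "temperature",
    type: "FLOAT",
    encoding: "BINARY",
    onlyReadLatest: true,
};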
BigtableColumnFamily, BigtableColumnFamilyArgs
- Columns List<Pulumi.GoogleNative.BigQuery.V2.Inputs.BigtableColumn> - [Optional] Lists of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs. All columns whose qualifier matches a qualifier in this list can be accessed as .. Other columns can be accessed as a list through .Column field.
- Encoding string - [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.
- FamilyId string - Identifier of the column family.
- OnlyReadLatest bool - [Optional] If this is set only the latest version of value are exposed for all columns in this column family. This can be overridden for a specific column by listing that column in 'columns' and specifying a different setting for that column.
- Type string - [Optional] The type to convert the value in cells of this column family. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. This can be overridden for a specific column by listing that column in 'columns' and specifying a type for it.
- Columns []BigtableColumn - [Optional] Lists of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs. All columns whose qualifier matches a qualifier in this list can be accessed as .. Other columns can be accessed as a list through .Column field.
- Encoding string - [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.
- FamilyId string - Identifier of the column family.
- OnlyReadLatest bool - [Optional] If this is set only the latest version of value are exposed for all columns in this column family. This can be overridden for a specific column by listing that column in 'columns' and specifying a different setting for that column.
- Type string - [Optional] The type to convert the value in cells of this column family. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. This can be overridden for a specific column by listing that column in 'columns' and specifying a type for it.
- columns List<BigtableColumn> - [Optional] Lists of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs. All columns whose qualifier matches a qualifier in this list can be accessed as .. Other columns can be accessed as a list through .Column field.
- encoding String - [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.
- familyId String - Identifier of the column family.
- onlyReadLatest Boolean - [Optional] If this is set only the latest version of value are exposed for all columns in this column family. This can be overridden for a specific column by listing that column in 'columns' and specifying a different setting for that column.
- type String - [Optional] The type to convert the value in cells of this column family. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. This can be overridden for a specific column by listing that column in 'columns' and specifying a type for it.
- columns BigtableColumn[] - [Optional] Lists of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs. All columns whose qualifier matches a qualifier in this list can be accessed as .. Other columns can be accessed as a list through .Column field.
- encoding string - [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.
- familyId string - Identifier of the column family.
- onlyReadLatest boolean - [Optional] If this is set only the latest version of value are exposed for all columns in this column family. This can be overridden for a specific column by listing that column in 'columns' and specifying a different setting for that column.
- type string - [Optional] The type to convert the value in cells of this column family. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. This can be overridden for a specific column by listing that column in 'columns' and specifying a type for it.
- columns Sequence[BigtableColumn] - [Optional] Lists of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs. All columns whose qualifier matches a qualifier in this list can be accessed as .. Other columns can be accessed as a list through .Column field.
- encoding str - [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.
- family_id str - Identifier of the column family.
- only_read_latest bool - [Optional] If this is set only the latest version of value are exposed for all columns in this column family. This can be overridden for a specific column by listing that column in 'columns' and specifying a different setting for that column.
- type str - [Optional] The type to convert the value in cells of this column family. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. This can be overridden for a specific column by listing that column in 'columns' and specifying a type for it.
- columns List<Property Map> - [Optional] Lists of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs. All columns whose qualifier matches a qualifier in this list can be accessed as .. Other columns can be accessed as a list through .Column field.
- encoding String - [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.
- familyId String - Identifier of the column family.
- onlyReadLatest Boolean - [Optional] If this is set only the latest version of value are exposed for all columns in this column family. This can be overridden for a specific column by listing that column in 'columns' and specifying a different setting for that column.
- type String - [Optional] The type to convert the value in cells of this column family. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. This can be overridden for a specific column by listing that column in 'columns' and specifying a type for it.
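Column families group BigtableColumn entries and supply family-level defaults. The following hedged TypeScript sketch assumes the bigtableOptions/columnFamilies nesting under externalDataConfiguration that the underlying BigQuery API uses; the Bigtable instance, table names, and the embedded column (which mirrors the column sketch above) are all placeholders:

import * as google_native from "@pulumi/google-native";

// External table backed by a Bigtable table; all identifiers are placeholders.
const bigtableBackedTable = new google_native.bigquery.v2.Table("bigtable-table", {
    datasetId: "my_dataset",
    tableReference: {
        project: "my-project",
        datasetId: "my_dataset",
        tableId: "sensor_readings",
    },
    externalDataConfiguration: {
        sourceFormat: "BIGTABLE",
        sourceUris: [
            // Placeholder Bigtable table URI.
            "https://googleapis.com/bigtable/projects/my-project/instances/my-instance/tables/sensors",
        ],
        bigtableOptions: {
            columnFamilies: [{
                familyId: "measurements",
                encoding: "BINARY",
                onlyReadLatest: true,
                type: "FLOAT",
                // Expose one qualifier as its own field; remaining columns in the
                // family stay available as a (column, value) list.
                columns: [{
                    qualifierEncoded: Buffer.from("temp").toString("base64"),
                    fieldName: "temperature",
                    type: "FLOAT",
                }],
            }],
        },
    },
});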
BigtableColumnFamilyResponse, BigtableColumnFamilyResponseArgs
- Columns
List<Pulumi.
Google Native. Big Query. V2. Inputs. Bigtable Column Response> - [Optional] Lists of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs. All columns whose qualifier matches a qualifier in this list can be accessed as .. Other columns can be accessed as a list through .Column field.
- Encoding string
- [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.
- Family
Id string - Identifier of the column family.
- Only
Read boolLatest - [Optional] If this is set only the latest version of value are exposed for all columns in this column family. This can be overridden for a specific column by listing that column in 'columns' and specifying a different setting for that column.
- Type string
- [Optional] The type to convert the value in cells of this column family. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. This can be overridden for a specific column by listing that column in 'columns' and specifying a type for it.
- Columns
[]Bigtable
Column Response - [Optional] Lists of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs. All columns whose qualifier matches a qualifier in this list can be accessed as .. Other columns can be accessed as a list through .Column field.
- Encoding string
- [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.
- Family
Id string - Identifier of the column family.
- Only
Read boolLatest - [Optional] If this is set only the latest version of value are exposed for all columns in this column family. This can be overridden for a specific column by listing that column in 'columns' and specifying a different setting for that column.
- Type string
- [Optional] The type to convert the value in cells of this column family. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. This can be overridden for a specific column by listing that column in 'columns' and specifying a type for it.
- columns
List<Bigtable
Column Response> - [Optional] Lists of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs. All columns whose qualifier matches a qualifier in this list can be accessed as .. Other columns can be accessed as a list through .Column field.
- encoding String
- [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.
- family
Id String - Identifier of the column family.
- only
Read BooleanLatest - [Optional] If this is set only the latest version of value are exposed for all columns in this column family. This can be overridden for a specific column by listing that column in 'columns' and specifying a different setting for that column.
- type String
- [Optional] The type to convert the value in cells of this column family. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. This can be overridden for a specific column by listing that column in 'columns' and specifying a type for it.
- columns
Bigtable
Column Response[] - [Optional] Lists of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs. All columns whose qualifier matches a qualifier in this list can be accessed as .. Other columns can be accessed as a list through .Column field.
- encoding string
- [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.
- family
Id string - Identifier of the column family.
- only
Read booleanLatest - [Optional] If this is set only the latest version of value are exposed for all columns in this column family. This can be overridden for a specific column by listing that column in 'columns' and specifying a different setting for that column.
- type string
- [Optional] The type to convert the value in cells of this column family. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. This can be overridden for a specific column by listing that column in 'columns' and specifying a type for it.
- columns
Sequence[Bigtable
Column Response] - [Optional] Lists of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs. All columns whose qualifier matches a qualifier in this list can be accessed as .. Other columns can be accessed as a list through .Column field.
- encoding str
- [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.
- family_
id str - Identifier of the column family.
- only_
read_ boollatest - [Optional] If this is set only the latest version of value are exposed for all columns in this column family. This can be overridden for a specific column by listing that column in 'columns' and specifying a different setting for that column.
- type str
- [Optional] The type to convert the value in cells of this column family. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. This can be overridden for a specific column by listing that column in 'columns' and specifying a type for it.
- columns List<Property Map>
- [Optional] Lists of columns that should be exposed as individual fields as opposed to a list of (column name, value) pairs. All columns whose qualifier matches a qualifier in this list can be accessed as .. Other columns can be accessed as a list through .Column field.
- encoding String
- [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. This can be overridden for a specific column by listing that column in 'columns' and specifying an encoding for it.
- familyId String
- Identifier of the column family.
- onlyReadLatest Boolean
- [Optional] If this is set, only the latest version of values is exposed for all columns in this column family. This can be overridden for a specific column by listing that column in 'columns' and specifying a different setting for that column.
- type String
- [Optional] The type to convert the value in cells of this column family. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. This can be overridden for a specific column by listing that column in 'columns' and specifying a type for it.
BigtableColumnResponse, BigtableColumnResponseArgs
- Encoding string
- [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. 'encoding' can also be set at the column family level. However, the setting at this level takes precedence if 'encoding' is set at both levels.
- FieldName string
- [Optional] If the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field name and is used as field name in queries.
- OnlyReadLatest bool
- [Optional] If this is set, only the latest version of values in this column is exposed. 'onlyReadLatest' can also be set at the column family level. However, the setting at this level takes precedence if 'onlyReadLatest' is set at both levels.
- QualifierEncoded string
- [Required] Qualifier of the column. Columns in the parent column family that have this exact qualifier are exposed as . field. If the qualifier is a valid UTF-8 string, it can be specified in the qualifier_string field. Otherwise, a base-64 encoded value must be set to qualifier_encoded. The column field name is the same as the column qualifier. However, if the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as field_name.
- QualifierString string
- Type string
- [Optional] The type to convert the value in cells of this column. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. 'type' can also be set at the column family level. However, the setting at this level takes precedence if 'type' is set at both levels.
- Encoding string
- [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. 'encoding' can also be set at the column family level. However, the setting at this level takes precedence if 'encoding' is set at both levels.
- FieldName string
- [Optional] If the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field name and is used as field name in queries.
- OnlyReadLatest bool
- [Optional] If this is set, only the latest version of values in this column is exposed. 'onlyReadLatest' can also be set at the column family level. However, the setting at this level takes precedence if 'onlyReadLatest' is set at both levels.
- QualifierEncoded string
- [Required] Qualifier of the column. Columns in the parent column family that have this exact qualifier are exposed as . field. If the qualifier is a valid UTF-8 string, it can be specified in the qualifier_string field. Otherwise, a base-64 encoded value must be set to qualifier_encoded. The column field name is the same as the column qualifier. However, if the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as field_name.
- QualifierString string
- Type string
- [Optional] The type to convert the value in cells of this column. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. 'type' can also be set at the column family level. However, the setting at this level takes precedence if 'type' is set at both levels.
- encoding String
- [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. 'encoding' can also be set at the column family level. However, the setting at this level takes precedence if 'encoding' is set at both levels.
- fieldName String
- [Optional] If the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field name and is used as field name in queries.
- onlyReadLatest Boolean
- [Optional] If this is set, only the latest version of values in this column is exposed. 'onlyReadLatest' can also be set at the column family level. However, the setting at this level takes precedence if 'onlyReadLatest' is set at both levels.
- qualifierEncoded String
- [Required] Qualifier of the column. Columns in the parent column family that have this exact qualifier are exposed as . field. If the qualifier is a valid UTF-8 string, it can be specified in the qualifier_string field. Otherwise, a base-64 encoded value must be set to qualifier_encoded. The column field name is the same as the column qualifier. However, if the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as field_name.
- qualifierString String
- type String
- [Optional] The type to convert the value in cells of this column. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. 'type' can also be set at the column family level. However, the setting at this level takes precedence if 'type' is set at both levels.
- encoding string
- [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. 'encoding' can also be set at the column family level. However, the setting at this level takes precedence if 'encoding' is set at both levels.
- fieldName string
- [Optional] If the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field name and is used as field name in queries.
- onlyReadLatest boolean
- [Optional] If this is set, only the latest version of values in this column is exposed. 'onlyReadLatest' can also be set at the column family level. However, the setting at this level takes precedence if 'onlyReadLatest' is set at both levels.
- qualifierEncoded string
- [Required] Qualifier of the column. Columns in the parent column family that have this exact qualifier are exposed as . field. If the qualifier is a valid UTF-8 string, it can be specified in the qualifier_string field. Otherwise, a base-64 encoded value must be set to qualifier_encoded. The column field name is the same as the column qualifier. However, if the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as field_name.
- qualifierString string
- type string
- [Optional] The type to convert the value in cells of this column. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. 'type' can also be set at the column family level. However, the setting at this level takes precedence if 'type' is set at both levels.
- encoding str
- [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. 'encoding' can also be set at the column family level. However, the setting at this level takes precedence if 'encoding' is set at both levels.
- field_name str
- [Optional] If the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field name and is used as field name in queries.
- only_read_latest bool
- [Optional] If this is set, only the latest version of values in this column is exposed. 'onlyReadLatest' can also be set at the column family level. However, the setting at this level takes precedence if 'onlyReadLatest' is set at both levels.
- qualifier_encoded str
- [Required] Qualifier of the column. Columns in the parent column family that have this exact qualifier are exposed as . field. If the qualifier is a valid UTF-8 string, it can be specified in the qualifier_string field. Otherwise, a base-64 encoded value must be set to qualifier_encoded. The column field name is the same as the column qualifier. However, if the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as field_name.
- qualifier_string str
- type str
- [Optional] The type to convert the value in cells of this column. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. 'type' can also be set at the column family level. However, the setting at this level takes precedence if 'type' is set at both levels.
- encoding String
- [Optional] The encoding of the values when the type is not STRING. Acceptable encoding values are: TEXT - indicates values are alphanumeric text strings. BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions. 'encoding' can also be set at the column family level. However, the setting at this level takes precedence if 'encoding' is set at both levels.
- fieldName String
- [Optional] If the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field name and is used as field name in queries.
- onlyReadLatest Boolean
- [Optional] If this is set, only the latest version of values in this column is exposed. 'onlyReadLatest' can also be set at the column family level. However, the setting at this level takes precedence if 'onlyReadLatest' is set at both levels.
- qualifierEncoded String
- [Required] Qualifier of the column. Columns in the parent column family that have this exact qualifier are exposed as . field. If the qualifier is a valid UTF-8 string, it can be specified in the qualifier_string field. Otherwise, a base-64 encoded value must be set to qualifier_encoded. The column field name is the same as the column qualifier. However, if the qualifier is not a valid BigQuery field identifier i.e. does not match [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as field_name.
- qualifierString String
- type String
- [Optional] The type to convert the value in cells of this column. The values are expected to be encoded using HBase Bytes.toBytes function when using the BINARY encoding value. Following BigQuery types are allowed (case-sensitive) - BYTES STRING INTEGER FLOAT BOOLEAN Default type is BYTES. 'type' can also be set at the column family level. However, the setting at this level takes precedence if 'type' is set at both levels.
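Where a Bigtable qualifier is not valid UTF-8 (or not a valid BigQuery field identifier), the base-64 form goes into qualifierEncoded and fieldName supplies the queryable name. A minimal Python sketch, assuming the BigtableColumnArgs input type from pulumi_google_native.bigquery.v2 and a hypothetical qualifier value:
import base64
import pulumi_google_native.bigquery.v2 as bigquery
# Raw qualifier bytes that are not valid UTF-8, so qualifier_string cannot be used.
raw_qualifier = b"\xff\x01metric"
column = bigquery.BigtableColumnArgs(
    qualifier_encoded=base64.b64encode(raw_qualifier).decode("ascii"),
    field_name="metric_value",  # valid identifier used as the field name in queries
    type="INTEGER",             # cells are decoded from HBase Bytes.toBytes output
    encoding="BINARY",
)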
BigtableOptions, BigtableOptionsArgs
- ColumnFamilies List<Pulumi.GoogleNative.BigQuery.V2.Inputs.BigtableColumnFamily>
- [Optional] List of column families to expose in the table schema along with their types. This list restricts the column families that can be referenced in queries and specifies their value types. You can use this list to do type conversions - see the 'type' field for more details. If you leave this list empty, all column families are present in the table schema and their values are read as BYTES. During a query only the column families referenced in that query are read from Bigtable.
- IgnoreUnspecifiedColumnFamilies bool
- [Optional] If field is true, then the column families that are not specified in columnFamilies list are not exposed in the table schema. Otherwise, they are read with BYTES type values. The default value is false.
- ReadRowkeyAsString bool
- [Optional] If field is true, then the rowkey column families will be read and converted to string. Otherwise they are read with BYTES type values and users need to manually cast them with CAST if necessary. The default value is false.
- ColumnFamilies []BigtableColumnFamily
- [Optional] List of column families to expose in the table schema along with their types. This list restricts the column families that can be referenced in queries and specifies their value types. You can use this list to do type conversions - see the 'type' field for more details. If you leave this list empty, all column families are present in the table schema and their values are read as BYTES. During a query only the column families referenced in that query are read from Bigtable.
- IgnoreUnspecifiedColumnFamilies bool
- [Optional] If field is true, then the column families that are not specified in columnFamilies list are not exposed in the table schema. Otherwise, they are read with BYTES type values. The default value is false.
- ReadRowkeyAsString bool
- [Optional] If field is true, then the rowkey column families will be read and converted to string. Otherwise they are read with BYTES type values and users need to manually cast them with CAST if necessary. The default value is false.
- columnFamilies List<BigtableColumnFamily>
- [Optional] List of column families to expose in the table schema along with their types. This list restricts the column families that can be referenced in queries and specifies their value types. You can use this list to do type conversions - see the 'type' field for more details. If you leave this list empty, all column families are present in the table schema and their values are read as BYTES. During a query only the column families referenced in that query are read from Bigtable.
- ignoreUnspecifiedColumnFamilies Boolean
- [Optional] If field is true, then the column families that are not specified in columnFamilies list are not exposed in the table schema. Otherwise, they are read with BYTES type values. The default value is false.
- readRowkeyAsString Boolean
- [Optional] If field is true, then the rowkey column families will be read and converted to string. Otherwise they are read with BYTES type values and users need to manually cast them with CAST if necessary. The default value is false.
- columnFamilies BigtableColumnFamily[]
- [Optional] List of column families to expose in the table schema along with their types. This list restricts the column families that can be referenced in queries and specifies their value types. You can use this list to do type conversions - see the 'type' field for more details. If you leave this list empty, all column families are present in the table schema and their values are read as BYTES. During a query only the column families referenced in that query are read from Bigtable.
- ignoreUnspecifiedColumnFamilies boolean
- [Optional] If field is true, then the column families that are not specified in columnFamilies list are not exposed in the table schema. Otherwise, they are read with BYTES type values. The default value is false.
- readRowkeyAsString boolean
- [Optional] If field is true, then the rowkey column families will be read and converted to string. Otherwise they are read with BYTES type values and users need to manually cast them with CAST if necessary. The default value is false.
- column_families Sequence[BigtableColumnFamily]
- [Optional] List of column families to expose in the table schema along with their types. This list restricts the column families that can be referenced in queries and specifies their value types. You can use this list to do type conversions - see the 'type' field for more details. If you leave this list empty, all column families are present in the table schema and their values are read as BYTES. During a query only the column families referenced in that query are read from Bigtable.
- ignore_unspecified_column_families bool
- [Optional] If field is true, then the column families that are not specified in columnFamilies list are not exposed in the table schema. Otherwise, they are read with BYTES type values. The default value is false.
- read_rowkey_as_string bool
- [Optional] If field is true, then the rowkey column families will be read and converted to string. Otherwise they are read with BYTES type values and users need to manually cast them with CAST if necessary. The default value is false.
- columnFamilies List<Property Map>
- [Optional] List of column families to expose in the table schema along with their types. This list restricts the column families that can be referenced in queries and specifies their value types. You can use this list to do type conversions - see the 'type' field for more details. If you leave this list empty, all column families are present in the table schema and their values are read as BYTES. During a query only the column families referenced in that query are read from Bigtable.
- ignoreUnspecifiedColumnFamilies Boolean
- [Optional] If field is true, then the column families that are not specified in columnFamilies list are not exposed in the table schema. Otherwise, they are read with BYTES type values. The default value is false.
- readRowkeyAsString Boolean
- [Optional] If field is true, then the rowkey column families will be read and converted to string. Otherwise they are read with BYTES type values and users need to manually cast them with CAST if necessary. The default value is false.
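As a usage illustration (not part of the generated reference), the Python sketch below wires BigtableOptions into a table's externalDataConfiguration, exposing one column family with typed, latest-version-only values. The ExternalDataConfigurationArgs, BigtableOptionsArgs, and BigtableColumnFamilyArgs names follow the pulumi_google_native.bigquery.v2 SDK naming, and every project, instance, and dataset identifier is a placeholder.
import pulumi_google_native.bigquery.v2 as bigquery
# External table backed by a Bigtable instance (all identifiers are placeholders).
bigtable_backed = bigquery.Table(
    "bigtable-backed-table",
    dataset_id="my_dataset",
    table_reference=bigquery.TableReferenceArgs(
        project="my-project",
        dataset_id="my_dataset",
        table_id="bigtable_events",
    ),
    external_data_configuration=bigquery.ExternalDataConfigurationArgs(
        source_format="BIGTABLE",
        source_uris=["https://googleapis.com/bigtable/projects/my-project/instances/my-instance/tables/events"],
        bigtable_options=bigquery.BigtableOptionsArgs(
            read_rowkey_as_string=True,               # expose the rowkey as STRING instead of BYTES
            ignore_unspecified_column_families=True,  # hide families not listed below
            column_families=[
                bigquery.BigtableColumnFamilyArgs(
                    family_id="stats",
                    type="INTEGER",         # convert cell values for the whole family
                    only_read_latest=True,  # expose only the latest cell version
                ),
            ],
        ),
    ),
)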
BigtableOptionsResponse, BigtableOptionsResponseArgs
- ColumnFamilies List<Pulumi.GoogleNative.BigQuery.V2.Inputs.BigtableColumnFamilyResponse>
- [Optional] List of column families to expose in the table schema along with their types. This list restricts the column families that can be referenced in queries and specifies their value types. You can use this list to do type conversions - see the 'type' field for more details. If you leave this list empty, all column families are present in the table schema and their values are read as BYTES. During a query only the column families referenced in that query are read from Bigtable.
- IgnoreUnspecifiedColumnFamilies bool
- [Optional] If field is true, then the column families that are not specified in columnFamilies list are not exposed in the table schema. Otherwise, they are read with BYTES type values. The default value is false.
- ReadRowkeyAsString bool
- [Optional] If field is true, then the rowkey column families will be read and converted to string. Otherwise they are read with BYTES type values and users need to manually cast them with CAST if necessary. The default value is false.
- ColumnFamilies []BigtableColumnFamilyResponse
- [Optional] List of column families to expose in the table schema along with their types. This list restricts the column families that can be referenced in queries and specifies their value types. You can use this list to do type conversions - see the 'type' field for more details. If you leave this list empty, all column families are present in the table schema and their values are read as BYTES. During a query only the column families referenced in that query are read from Bigtable.
- IgnoreUnspecifiedColumnFamilies bool
- [Optional] If field is true, then the column families that are not specified in columnFamilies list are not exposed in the table schema. Otherwise, they are read with BYTES type values. The default value is false.
- ReadRowkeyAsString bool
- [Optional] If field is true, then the rowkey column families will be read and converted to string. Otherwise they are read with BYTES type values and users need to manually cast them with CAST if necessary. The default value is false.
- columnFamilies List<BigtableColumnFamilyResponse>
- [Optional] List of column families to expose in the table schema along with their types. This list restricts the column families that can be referenced in queries and specifies their value types. You can use this list to do type conversions - see the 'type' field for more details. If you leave this list empty, all column families are present in the table schema and their values are read as BYTES. During a query only the column families referenced in that query are read from Bigtable.
- ignoreUnspecifiedColumnFamilies Boolean
- [Optional] If field is true, then the column families that are not specified in columnFamilies list are not exposed in the table schema. Otherwise, they are read with BYTES type values. The default value is false.
- readRowkeyAsString Boolean
- [Optional] If field is true, then the rowkey column families will be read and converted to string. Otherwise they are read with BYTES type values and users need to manually cast them with CAST if necessary. The default value is false.
- columnFamilies BigtableColumnFamilyResponse[]
- [Optional] List of column families to expose in the table schema along with their types. This list restricts the column families that can be referenced in queries and specifies their value types. You can use this list to do type conversions - see the 'type' field for more details. If you leave this list empty, all column families are present in the table schema and their values are read as BYTES. During a query only the column families referenced in that query are read from Bigtable.
- ignoreUnspecifiedColumnFamilies boolean
- [Optional] If field is true, then the column families that are not specified in columnFamilies list are not exposed in the table schema. Otherwise, they are read with BYTES type values. The default value is false.
- readRowkeyAsString boolean
- [Optional] If field is true, then the rowkey column families will be read and converted to string. Otherwise they are read with BYTES type values and users need to manually cast them with CAST if necessary. The default value is false.
- column_families Sequence[BigtableColumnFamilyResponse]
- [Optional] List of column families to expose in the table schema along with their types. This list restricts the column families that can be referenced in queries and specifies their value types. You can use this list to do type conversions - see the 'type' field for more details. If you leave this list empty, all column families are present in the table schema and their values are read as BYTES. During a query only the column families referenced in that query are read from Bigtable.
- ignore_unspecified_column_families bool
- [Optional] If field is true, then the column families that are not specified in columnFamilies list are not exposed in the table schema. Otherwise, they are read with BYTES type values. The default value is false.
- read_rowkey_as_string bool
- [Optional] If field is true, then the rowkey column families will be read and converted to string. Otherwise they are read with BYTES type values and users need to manually cast them with CAST if necessary. The default value is false.
- columnFamilies List<Property Map>
- [Optional] List of column families to expose in the table schema along with their types. This list restricts the column families that can be referenced in queries and specifies their value types. You can use this list to do type conversions - see the 'type' field for more details. If you leave this list empty, all column families are present in the table schema and their values are read as BYTES. During a query only the column families referenced in that query are read from Bigtable.
- ignoreUnspecifiedColumnFamilies Boolean
- [Optional] If field is true, then the column families that are not specified in columnFamilies list are not exposed in the table schema. Otherwise, they are read with BYTES type values. The default value is false.
- readRowkeyAsString Boolean
- [Optional] If field is true, then the rowkey column families will be read and converted to string. Otherwise they are read with BYTES type values and users need to manually cast them with CAST if necessary. The default value is false.
BqmlIterationResult, BqmlIterationResultArgs
- DurationMs string
- [Output-only, Beta] Time taken to run the training iteration in milliseconds.
- EvalLoss double
- [Output-only, Beta] Eval loss computed on the eval data at the end of the iteration. The eval loss is used for early stopping to avoid overfitting. No eval loss if eval_split_method option is specified as no_split or auto_split with input data size less than 500 rows.
- Index int
- [Output-only, Beta] Index of the ML training iteration, starting from zero for each training run.
- LearnRate double
- [Output-only, Beta] Learning rate used for this iteration; it varies for different training iterations if learn_rate_strategy option is not constant.
- TrainingLoss double
- [Output-only, Beta] Training loss computed on the training data at the end of the iteration. The training loss function is defined by model type.
- DurationMs string
- [Output-only, Beta] Time taken to run the training iteration in milliseconds.
- EvalLoss float64
- [Output-only, Beta] Eval loss computed on the eval data at the end of the iteration. The eval loss is used for early stopping to avoid overfitting. No eval loss if eval_split_method option is specified as no_split or auto_split with input data size less than 500 rows.
- Index int
- [Output-only, Beta] Index of the ML training iteration, starting from zero for each training run.
- LearnRate float64
- [Output-only, Beta] Learning rate used for this iteration; it varies for different training iterations if learn_rate_strategy option is not constant.
- TrainingLoss float64
- [Output-only, Beta] Training loss computed on the training data at the end of the iteration. The training loss function is defined by model type.
- durationMs String
- [Output-only, Beta] Time taken to run the training iteration in milliseconds.
- evalLoss Double
- [Output-only, Beta] Eval loss computed on the eval data at the end of the iteration. The eval loss is used for early stopping to avoid overfitting. No eval loss if eval_split_method option is specified as no_split or auto_split with input data size less than 500 rows.
- index Integer
- [Output-only, Beta] Index of the ML training iteration, starting from zero for each training run.
- learnRate Double
- [Output-only, Beta] Learning rate used for this iteration; it varies for different training iterations if learn_rate_strategy option is not constant.
- trainingLoss Double
- [Output-only, Beta] Training loss computed on the training data at the end of the iteration. The training loss function is defined by model type.
- durationMs string
- [Output-only, Beta] Time taken to run the training iteration in milliseconds.
- evalLoss number
- [Output-only, Beta] Eval loss computed on the eval data at the end of the iteration. The eval loss is used for early stopping to avoid overfitting. No eval loss if eval_split_method option is specified as no_split or auto_split with input data size less than 500 rows.
- index number
- [Output-only, Beta] Index of the ML training iteration, starting from zero for each training run.
- learnRate number
- [Output-only, Beta] Learning rate used for this iteration; it varies for different training iterations if learn_rate_strategy option is not constant.
- trainingLoss number
- [Output-only, Beta] Training loss computed on the training data at the end of the iteration. The training loss function is defined by model type.
- duration_ms str
- [Output-only, Beta] Time taken to run the training iteration in milliseconds.
- eval_loss float
- [Output-only, Beta] Eval loss computed on the eval data at the end of the iteration. The eval loss is used for early stopping to avoid overfitting. No eval loss if eval_split_method option is specified as no_split or auto_split with input data size less than 500 rows.
- index int
- [Output-only, Beta] Index of the ML training iteration, starting from zero for each training run.
- learn_rate float
- [Output-only, Beta] Learning rate used for this iteration; it varies for different training iterations if learn_rate_strategy option is not constant.
- training_loss float
- [Output-only, Beta] Training loss computed on the training data at the end of the iteration. The training loss function is defined by model type.
- durationMs String
- [Output-only, Beta] Time taken to run the training iteration in milliseconds.
- evalLoss Number
- [Output-only, Beta] Eval loss computed on the eval data at the end of the iteration. The eval loss is used for early stopping to avoid overfitting. No eval loss if eval_split_method option is specified as no_split or auto_split with input data size less than 500 rows.
- index Number
- [Output-only, Beta] Index of the ML training iteration, starting from zero for each training run.
- learnRate Number
- [Output-only, Beta] Learning rate used for this iteration; it varies for different training iterations if learn_rate_strategy option is not constant.
- trainingLoss Number
- [Output-only, Beta] Training loss computed on the training data at the end of the iteration. The training loss function is defined by model type.
BqmlIterationResultResponse, BqmlIterationResultResponseArgs
- DurationMs string
- [Output-only, Beta] Time taken to run the training iteration in milliseconds.
- EvalLoss double
- [Output-only, Beta] Eval loss computed on the eval data at the end of the iteration. The eval loss is used for early stopping to avoid overfitting. No eval loss if eval_split_method option is specified as no_split or auto_split with input data size less than 500 rows.
- Index int
- [Output-only, Beta] Index of the ML training iteration, starting from zero for each training run.
- LearnRate double
- [Output-only, Beta] Learning rate used for this iteration; it varies for different training iterations if learn_rate_strategy option is not constant.
- TrainingLoss double
- [Output-only, Beta] Training loss computed on the training data at the end of the iteration. The training loss function is defined by model type.
- DurationMs string
- [Output-only, Beta] Time taken to run the training iteration in milliseconds.
- EvalLoss float64
- [Output-only, Beta] Eval loss computed on the eval data at the end of the iteration. The eval loss is used for early stopping to avoid overfitting. No eval loss if eval_split_method option is specified as no_split or auto_split with input data size less than 500 rows.
- Index int
- [Output-only, Beta] Index of the ML training iteration, starting from zero for each training run.
- LearnRate float64
- [Output-only, Beta] Learning rate used for this iteration; it varies for different training iterations if learn_rate_strategy option is not constant.
- TrainingLoss float64
- [Output-only, Beta] Training loss computed on the training data at the end of the iteration. The training loss function is defined by model type.
- durationMs String
- [Output-only, Beta] Time taken to run the training iteration in milliseconds.
- evalLoss Double
- [Output-only, Beta] Eval loss computed on the eval data at the end of the iteration. The eval loss is used for early stopping to avoid overfitting. No eval loss if eval_split_method option is specified as no_split or auto_split with input data size less than 500 rows.
- index Integer
- [Output-only, Beta] Index of the ML training iteration, starting from zero for each training run.
- learnRate Double
- [Output-only, Beta] Learning rate used for this iteration; it varies for different training iterations if learn_rate_strategy option is not constant.
- trainingLoss Double
- [Output-only, Beta] Training loss computed on the training data at the end of the iteration. The training loss function is defined by model type.
- durationMs string
- [Output-only, Beta] Time taken to run the training iteration in milliseconds.
- evalLoss number
- [Output-only, Beta] Eval loss computed on the eval data at the end of the iteration. The eval loss is used for early stopping to avoid overfitting. No eval loss if eval_split_method option is specified as no_split or auto_split with input data size less than 500 rows.
- index number
- [Output-only, Beta] Index of the ML training iteration, starting from zero for each training run.
- learnRate number
- [Output-only, Beta] Learning rate used for this iteration; it varies for different training iterations if learn_rate_strategy option is not constant.
- trainingLoss number
- [Output-only, Beta] Training loss computed on the training data at the end of the iteration. The training loss function is defined by model type.
- duration_ms str
- [Output-only, Beta] Time taken to run the training iteration in milliseconds.
- eval_loss float
- [Output-only, Beta] Eval loss computed on the eval data at the end of the iteration. The eval loss is used for early stopping to avoid overfitting. No eval loss if eval_split_method option is specified as no_split or auto_split with input data size less than 500 rows.
- index int
- [Output-only, Beta] Index of the ML training iteration, starting from zero for each training run.
- learn_rate float
- [Output-only, Beta] Learning rate used for this iteration; it varies for different training iterations if learn_rate_strategy option is not constant.
- training_loss float
- [Output-only, Beta] Training loss computed on the training data at the end of the iteration. The training loss function is defined by model type.
- durationMs String
- [Output-only, Beta] Time taken to run the training iteration in milliseconds.
- evalLoss Number
- [Output-only, Beta] Eval loss computed on the eval data at the end of the iteration. The eval loss is used for early stopping to avoid overfitting. No eval loss if eval_split_method option is specified as no_split or auto_split with input data size less than 500 rows.
- index Number
- [Output-only, Beta] Index of the ML training iteration, starting from zero for each training run.
- learnRate Number
- [Output-only, Beta] Learning rate used for this iteration; it varies for different training iterations if learn_rate_strategy option is not constant.
- trainingLoss Number
- [Output-only, Beta] Training loss computed on the training data at the end of the iteration. The training loss function is defined by model type.
BqmlTrainingRun, BqmlTrainingRunArgs
- IterationResults List<Pulumi.GoogleNative.BigQuery.V2.Inputs.BqmlIterationResult>
- [Output-only, Beta] List of results for each iteration.
- StartTime string
- [Output-only, Beta] Training run start time in milliseconds since the epoch.
- State string
- [Output-only, Beta] Different state applicable for a training run. IN PROGRESS: Training run is in progress. FAILED: Training run ended due to a non-retryable failure. SUCCEEDED: Training run successfully completed. CANCELLED: Training run cancelled by the user.
- TrainingOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.BqmlTrainingRunTrainingOptions
- [Output-only, Beta] Training options used by this training run. These options are mutable for subsequent training runs. Default values are explicitly stored for options not specified in the input query of the first training run. For subsequent training runs, any option not explicitly specified in the input query will be copied from the previous training run.
- IterationResults []BqmlIterationResult
- [Output-only, Beta] List of results for each iteration.
- StartTime string
- [Output-only, Beta] Training run start time in milliseconds since the epoch.
- State string
- [Output-only, Beta] Different state applicable for a training run. IN PROGRESS: Training run is in progress. FAILED: Training run ended due to a non-retryable failure. SUCCEEDED: Training run successfully completed. CANCELLED: Training run cancelled by the user.
- TrainingOptions BqmlTrainingRunTrainingOptions
- [Output-only, Beta] Training options used by this training run. These options are mutable for subsequent training runs. Default values are explicitly stored for options not specified in the input query of the first training run. For subsequent training runs, any option not explicitly specified in the input query will be copied from the previous training run.
- iterationResults List<BqmlIterationResult>
- [Output-only, Beta] List of results for each iteration.
- startTime String
- [Output-only, Beta] Training run start time in milliseconds since the epoch.
- state String
- [Output-only, Beta] Different state applicable for a training run. IN PROGRESS: Training run is in progress. FAILED: Training run ended due to a non-retryable failure. SUCCEEDED: Training run successfully completed. CANCELLED: Training run cancelled by the user.
- trainingOptions BqmlTrainingRunTrainingOptions
- [Output-only, Beta] Training options used by this training run. These options are mutable for subsequent training runs. Default values are explicitly stored for options not specified in the input query of the first training run. For subsequent training runs, any option not explicitly specified in the input query will be copied from the previous training run.
- iterationResults BqmlIterationResult[]
- [Output-only, Beta] List of results for each iteration.
- startTime string
- [Output-only, Beta] Training run start time in milliseconds since the epoch.
- state string
- [Output-only, Beta] Different state applicable for a training run. IN PROGRESS: Training run is in progress. FAILED: Training run ended due to a non-retryable failure. SUCCEEDED: Training run successfully completed. CANCELLED: Training run cancelled by the user.
- trainingOptions BqmlTrainingRunTrainingOptions
- [Output-only, Beta] Training options used by this training run. These options are mutable for subsequent training runs. Default values are explicitly stored for options not specified in the input query of the first training run. For subsequent training runs, any option not explicitly specified in the input query will be copied from the previous training run.
- iteration_results Sequence[BqmlIterationResult]
- [Output-only, Beta] List of results for each iteration.
- start_time str
- [Output-only, Beta] Training run start time in milliseconds since the epoch.
- state str
- [Output-only, Beta] Different state applicable for a training run. IN PROGRESS: Training run is in progress. FAILED: Training run ended due to a non-retryable failure. SUCCEEDED: Training run successfully completed. CANCELLED: Training run cancelled by the user.
- training_options BqmlTrainingRunTrainingOptions
- [Output-only, Beta] Training options used by this training run. These options are mutable for subsequent training runs. Default values are explicitly stored for options not specified in the input query of the first training run. For subsequent training runs, any option not explicitly specified in the input query will be copied from the previous training run.
- iterationResults List<Property Map>
- [Output-only, Beta] List of results for each iteration.
- startTime String
- [Output-only, Beta] Training run start time in milliseconds since the epoch.
- state String
- [Output-only, Beta] Different state applicable for a training run. IN PROGRESS: Training run is in progress. FAILED: Training run ended due to a non-retryable failure. SUCCEEDED: Training run successfully completed. CANCELLED: Training run cancelled by the user.
- trainingOptions Property Map
- [Output-only, Beta] Training options used by this training run. These options are mutable for subsequent training runs. Default values are explicitly stored for options not specified in the input query of the first training run. For subsequent training runs, any option not explicitly specified in the input query will be copied from the previous training run.
BqmlTrainingRunResponse, BqmlTrainingRunResponseArgs
- IterationResults List<Pulumi.GoogleNative.BigQuery.V2.Inputs.BqmlIterationResultResponse>
- [Output-only, Beta] List of results for each iteration.
- StartTime string
- [Output-only, Beta] Training run start time in milliseconds since the epoch.
- State string
- [Output-only, Beta] Different state applicable for a training run. IN PROGRESS: Training run is in progress. FAILED: Training run ended due to a non-retryable failure. SUCCEEDED: Training run successfully completed. CANCELLED: Training run cancelled by the user.
- TrainingOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.BqmlTrainingRunTrainingOptionsResponse
- [Output-only, Beta] Training options used by this training run. These options are mutable for subsequent training runs. Default values are explicitly stored for options not specified in the input query of the first training run. For subsequent training runs, any option not explicitly specified in the input query will be copied from the previous training run.
- IterationResults []BqmlIterationResultResponse
- [Output-only, Beta] List of results for each iteration.
- StartTime string
- [Output-only, Beta] Training run start time in milliseconds since the epoch.
- State string
- [Output-only, Beta] Different state applicable for a training run. IN PROGRESS: Training run is in progress. FAILED: Training run ended due to a non-retryable failure. SUCCEEDED: Training run successfully completed. CANCELLED: Training run cancelled by the user.
- TrainingOptions BqmlTrainingRunTrainingOptionsResponse
- [Output-only, Beta] Training options used by this training run. These options are mutable for subsequent training runs. Default values are explicitly stored for options not specified in the input query of the first training run. For subsequent training runs, any option not explicitly specified in the input query will be copied from the previous training run.
- iterationResults List<BqmlIterationResultResponse>
- [Output-only, Beta] List of results for each iteration.
- startTime String
- [Output-only, Beta] Training run start time in milliseconds since the epoch.
- state String
- [Output-only, Beta] Different state applicable for a training run. IN PROGRESS: Training run is in progress. FAILED: Training run ended due to a non-retryable failure. SUCCEEDED: Training run successfully completed. CANCELLED: Training run cancelled by the user.
- trainingOptions BqmlTrainingRunTrainingOptionsResponse
- [Output-only, Beta] Training options used by this training run. These options are mutable for subsequent training runs. Default values are explicitly stored for options not specified in the input query of the first training run. For subsequent training runs, any option not explicitly specified in the input query will be copied from the previous training run.
- iterationResults BqmlIterationResultResponse[]
- [Output-only, Beta] List of results for each iteration.
- startTime string
- [Output-only, Beta] Training run start time in milliseconds since the epoch.
- state string
- [Output-only, Beta] Different state applicable for a training run. IN PROGRESS: Training run is in progress. FAILED: Training run ended due to a non-retryable failure. SUCCEEDED: Training run successfully completed. CANCELLED: Training run cancelled by the user.
- trainingOptions BqmlTrainingRunTrainingOptionsResponse
- [Output-only, Beta] Training options used by this training run. These options are mutable for subsequent training runs. Default values are explicitly stored for options not specified in the input query of the first training run. For subsequent training runs, any option not explicitly specified in the input query will be copied from the previous training run.
- iteration_results Sequence[BqmlIterationResultResponse]
- [Output-only, Beta] List of results for each iteration.
- start_time str
- [Output-only, Beta] Training run start time in milliseconds since the epoch.
- state str
- [Output-only, Beta] Different state applicable for a training run. IN PROGRESS: Training run is in progress. FAILED: Training run ended due to a non-retryable failure. SUCCEEDED: Training run successfully completed. CANCELLED: Training run cancelled by the user.
- training_options BqmlTrainingRunTrainingOptionsResponse
- [Output-only, Beta] Training options used by this training run. These options are mutable for subsequent training runs. Default values are explicitly stored for options not specified in the input query of the first training run. For subsequent training runs, any option not explicitly specified in the input query will be copied from the previous training run.
- iterationResults List<Property Map>
- [Output-only, Beta] List of results for each iteration.
- startTime String
- [Output-only, Beta] Training run start time in milliseconds since the epoch.
- state String
- [Output-only, Beta] Different state applicable for a training run. IN PROGRESS: Training run is in progress. FAILED: Training run ended due to a non-retryable failure. SUCCEEDED: Training run successfully completed. CANCELLED: Training run cancelled by the user.
- trainingOptions Property Map
- [Output-only, Beta] Training options used by this training run. These options are mutable for subsequent training runs. Default values are explicitly stored for options not specified in the input query of the first training run. For subsequent training runs, any option not explicitly specified in the input query will be copied from the previous training run.
BqmlTrainingRunTrainingOptions, BqmlTrainingRunTrainingOptionsArgs
- EarlyStop bool
- L1Reg double
- L2Reg double
- LearnRate double
- LearnRateStrategy string
- LineSearchInitLearnRate double
- MaxIteration string
- MinRelProgress double
- WarmStart bool
- EarlyStop bool
- L1Reg float64
- L2Reg float64
- LearnRate float64
- LearnRateStrategy string
- LineSearchInitLearnRate float64
- MaxIteration string
- MinRelProgress float64
- WarmStart bool
- earlyStop Boolean
- l1Reg Double
- l2Reg Double
- learnRate Double
- learnRateStrategy String
- lineSearchInitLearnRate Double
- maxIteration String
- minRelProgress Double
- warmStart Boolean
- earlyStop boolean
- l1Reg number
- l2Reg number
- learnRate number
- learnRateStrategy string
- lineSearchInitLearnRate number
- maxIteration string
- minRelProgress number
- warmStart boolean
- early_stop bool
- l1_reg float
- l2_reg float
- learn_rate float
- learn_rate_strategy str
- line_search_init_learn_rate float
- max_iteration str
- min_rel_progress float
- warm_start bool
- earlyStop Boolean
- l1Reg Number
- l2Reg Number
- learnRate Number
- learnRateStrategy String
- lineSearchInitLearnRate Number
- maxIteration String
- minRelProgress Number
- warmStart Boolean
BqmlTrainingRunTrainingOptionsResponse, BqmlTrainingRunTrainingOptionsResponseArgs
- EarlyStop bool
- L1Reg double
- L2Reg double
- LearnRate double
- LearnRateStrategy string
- LineSearchInitLearnRate double
- MaxIteration string
- MinRelProgress double
- WarmStart bool
- EarlyStop bool
- L1Reg float64
- L2Reg float64
- LearnRate float64
- LearnRateStrategy string
- LineSearchInitLearnRate float64
- MaxIteration string
- MinRelProgress float64
- WarmStart bool
- earlyStop Boolean
- l1Reg Double
- l2Reg Double
- learnRate Double
- learnRateStrategy String
- lineSearchInitLearnRate Double
- maxIteration String
- minRelProgress Double
- warmStart Boolean
- earlyStop boolean
- l1Reg number
- l2Reg number
- learnRate number
- learnRateStrategy string
- lineSearchInitLearnRate number
- maxIteration string
- minRelProgress number
- warmStart boolean
- early_stop bool
- l1_reg float
- l2_reg float
- learn_rate float
- learn_rate_strategy str
- line_search_init_learn_rate float
- max_iteration str
- min_rel_progress float
- warm_start bool
- earlyStop Boolean
- l1Reg Number
- l2Reg Number
- learnRate Number
- learnRateStrategy String
- lineSearchInitLearnRate Number
- maxIteration String
- minRelProgress Number
- warmStart Boolean
CloneDefinitionResponse, CloneDefinitionResponseArgs
- BaseTableReference Pulumi.GoogleNative.BigQuery.V2.Inputs.TableReferenceResponse
- [Required] Reference describing the ID of the table that was cloned.
- CloneTime string
- [Required] The time at which the base table was cloned. This value is reported in the JSON response using RFC3339 format.
- BaseTableReference TableReferenceResponse
- [Required] Reference describing the ID of the table that was cloned.
- CloneTime string
- [Required] The time at which the base table was cloned. This value is reported in the JSON response using RFC3339 format.
- baseTableReference TableReferenceResponse
- [Required] Reference describing the ID of the table that was cloned.
- cloneTime String
- [Required] The time at which the base table was cloned. This value is reported in the JSON response using RFC3339 format.
- baseTableReference TableReferenceResponse
- [Required] Reference describing the ID of the table that was cloned.
- cloneTime string
- [Required] The time at which the base table was cloned. This value is reported in the JSON response using RFC3339 format.
- base_table_reference TableReferenceResponse
- [Required] Reference describing the ID of the table that was cloned.
- clone_time str
- [Required] The time at which the base table was cloned. This value is reported in the JSON response using RFC3339 format.
- baseTableReference Property Map
- [Required] Reference describing the ID of the table that was cloned.
- cloneTime String
- [Required] The time at which the base table was cloned. This value is reported in the JSON response using RFC3339 format.
Clustering, ClusteringArgs
- Fields List<string>
- [Repeated] One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
- Fields []string
- [Repeated] One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
- fields List<String>
- [Repeated] One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
- fields string[]
- [Repeated] One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
- fields Sequence[str]
- [Repeated] One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
- fields List<String>
- [Repeated] One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
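As an input-side illustration (not part of the generated reference), the Python sketch below creates a day-partitioned table clustered on two columns; because cluster column order determines the sort order, rows are sorted by customer_id first and event_type second. The TableFieldSchemaArgs name and all project/dataset identifiers are assumptions for this sketch.
import pulumi_google_native.bigquery.v2 as bigquery
clustered = bigquery.Table(
    "clustered-table",
    dataset_id="my_dataset",
    table_reference=bigquery.TableReferenceArgs(
        project="my-project",
        dataset_id="my_dataset",
        table_id="events_clustered",
    ),
    time_partitioning=bigquery.TimePartitioningArgs(type="DAY"),
    # Order matters: data is sorted by customer_id, then event_type.
    clustering=bigquery.ClusteringArgs(fields=["customer_id", "event_type"]),
    schema=bigquery.TableSchemaArgs(fields=[
        bigquery.TableFieldSchemaArgs(name="customer_id", type="STRING"),
        bigquery.TableFieldSchemaArgs(name="event_type", type="STRING"),
        bigquery.TableFieldSchemaArgs(name="ts", type="TIMESTAMP"),
    ]),
)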
ClusteringResponse, ClusteringResponseArgs
- Fields List<string>
- [Repeated] One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
- Fields []string
- [Repeated] One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
- fields List<String>
- [Repeated] One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
- fields string[]
- [Repeated] One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
- fields Sequence[str]
- [Repeated] One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
- fields List<String>
- [Repeated] One or more fields on which data should be clustered. Only top-level, non-repeated, simple-type fields are supported. When you cluster a table using multiple columns, the order of columns you specify is important. The order of the specified columns determines the sort order of the data.
CsvOptions, CsvOptionsArgs
- AllowJaggedRows bool
- [Optional] Indicates if BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- AllowQuotedNewlines bool
- [Optional] Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- Encoding string
- [Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
- FieldDelimiter string
- [Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
- NullMarker string
- [Optional] A custom string that will represent a NULL value in CSV import data.
- PreserveAsciiControlCharacters bool
- [Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII-table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV, ignored for other formats.
- Quote string
- [Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
- SkipLeadingRows string
- [Optional] The number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- AllowJaggedRows bool
- [Optional] Indicates if BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- AllowQuotedNewlines bool
- [Optional] Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- Encoding string
- [Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
- FieldDelimiter string
- [Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
- NullMarker string
- [Optional] A custom string that represents a NULL value in CSV import data.
- PreserveAsciiControlCharacters bool
- [Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV; ignored for other formats.
- Quote string
- [Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
- SkipLeadingRows string
- [Optional] The number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- allowJaggedRows Boolean
- [Optional] Indicates if BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- allowQuotedNewlines Boolean
- [Optional] Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- encoding String
- [Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
- fieldDelimiter String
- [Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
- nullMarker String
- [Optional] A custom string that represents a NULL value in CSV import data.
- preserveAsciiControlCharacters Boolean
- [Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV; ignored for other formats.
- quote String
- [Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
- skipLeadingRows String
- [Optional] The number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- allowJaggedRows boolean
- [Optional] Indicates if BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- allowQuotedNewlines boolean
- [Optional] Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- encoding string
- [Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
- fieldDelimiter string
- [Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
- nullMarker string
- [Optional] A custom string that represents a NULL value in CSV import data.
- preserveAsciiControlCharacters boolean
- [Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV; ignored for other formats.
- quote string
- [Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
- skipLeadingRows string
- [Optional] The number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- allow_jagged_rows bool
- [Optional] Indicates if BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- allow_quoted_newlines bool
- [Optional] Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- encoding str
- [Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
- field_delimiter str
- [Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
- null_marker str
- [Optional] A custom string that represents a NULL value in CSV import data.
- preserve_ascii_control_characters bool
- [Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV; ignored for other formats.
- quote str
- [Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
- skip_leading_rows str
- [Optional] The number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- allowJaggedRows Boolean
- [Optional] Indicates if BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- allowQuotedNewlines Boolean
- [Optional] Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- encoding String
- [Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
- fieldDelimiter String
- [Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
- nullMarker String
- [Optional] A custom string that represents a NULL value in CSV import data.
- preserveAsciiControlCharacters Boolean
- [Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV; ignored for other formats.
- quote String
- [Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
- skipLeadingRows String
- [Optional] The number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
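For illustration, here is a hedged Python sketch of CsvOptions used inside an ExternalDataConfiguration; the bucket, dataset, and table names are placeholders, not values from this reference.
import pulumi_google_native as google_native

# Placeholder bucket/dataset/table names; shown only to illustrate CsvOptions.
csv_backed_table = google_native.bigquery.v2.Table(
    "csv-backed-table",
    dataset_id="my_dataset",
    table_reference=google_native.bigquery.v2.TableReferenceArgs(
        project="my-project",
        dataset_id="my_dataset",
        table_id="raw_csv",
    ),
    external_data_configuration=google_native.bigquery.v2.ExternalDataConfigurationArgs(
        source_format="CSV",
        source_uris=["gs://my-bucket/exports/*.csv"],
        autodetect=True,
        csv_options=google_native.bigquery.v2.CsvOptionsArgs(
            field_delimiter=",",
            quote="\"",
            skip_leading_rows="1",  # header row; note the API models this count as a string
            allow_quoted_newlines=True,
            allow_jagged_rows=False,
            encoding="UTF-8",
        ),
    ),
)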
CsvOptionsResponse, CsvOptionsResponseArgs
- AllowJaggedRows bool
- [Optional] Indicates if BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- AllowQuotedNewlines bool
- [Optional] Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- Encoding string
- [Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
- FieldDelimiter string
- [Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
- NullMarker string
- [Optional] A custom string that represents a NULL value in CSV import data.
- PreserveAsciiControlCharacters bool
- [Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV; ignored for other formats.
- Quote string
- [Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
- SkipLeadingRows string
- [Optional] The number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- AllowJaggedRows bool
- [Optional] Indicates if BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- AllowQuotedNewlines bool
- [Optional] Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- Encoding string
- [Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
- FieldDelimiter string
- [Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
- NullMarker string
- [Optional] A custom string that represents a NULL value in CSV import data.
- PreserveAsciiControlCharacters bool
- [Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV; ignored for other formats.
- Quote string
- [Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
- SkipLeadingRows string
- [Optional] The number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- allowJaggedRows Boolean
- [Optional] Indicates if BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- allowQuotedNewlines Boolean
- [Optional] Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- encoding String
- [Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
- fieldDelimiter String
- [Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
- nullMarker String
- [Optional] A custom string that represents a NULL value in CSV import data.
- preserveAsciiControlCharacters Boolean
- [Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV; ignored for other formats.
- quote String
- [Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
- skipLeadingRows String
- [Optional] The number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- allowJaggedRows boolean
- [Optional] Indicates if BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- allowQuotedNewlines boolean
- [Optional] Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- encoding string
- [Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
- fieldDelimiter string
- [Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
- nullMarker string
- [Optional] A custom string that represents a NULL value in CSV import data.
- preserveAsciiControlCharacters boolean
- [Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV; ignored for other formats.
- quote string
- [Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
- skipLeadingRows string
- [Optional] The number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- allow_jagged_rows bool
- [Optional] Indicates if BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- allow_quoted_newlines bool
- [Optional] Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- encoding str
- [Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
- field_delimiter str
- [Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
- null_marker str
- [Optional] A custom string that represents a NULL value in CSV import data.
- preserve_ascii_control_characters bool
- [Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV; ignored for other formats.
- quote str
- [Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
- skip_leading_rows str
- [Optional] The number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- allowJaggedRows Boolean
- [Optional] Indicates if BigQuery should accept rows that are missing trailing optional columns. If true, BigQuery treats missing trailing columns as null values. If false, records with missing trailing columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false.
- allowQuotedNewlines Boolean
- [Optional] Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file. The default value is false.
- encoding String
- [Optional] The character encoding of the data. The supported values are UTF-8 or ISO-8859-1. The default value is UTF-8. BigQuery decodes the data after the raw, binary data has been split using the values of the quote and fieldDelimiter properties.
- fieldDelimiter String
- [Optional] The separator for fields in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. BigQuery also supports the escape sequence "\t" to specify a tab separator. The default value is a comma (',').
- nullMarker String
- [Optional] A custom string that represents a NULL value in CSV import data.
- preserveAsciiControlCharacters Boolean
- [Optional] Preserves the embedded ASCII control characters (the first 32 characters in the ASCII table, from '\x00' to '\x1F') when loading from CSV. Only applicable to CSV; ignored for other formats.
- quote String
- [Optional] The value that is used to quote data sections in a CSV file. BigQuery converts the string to ISO-8859-1 encoding, and then uses the first byte of the encoded string to split the data in its raw, binary state. The default value is a double-quote ('"'). If your data does not contain quoted sections, set the property value to an empty string. If your data contains quoted newline characters, you must also set the allowQuotedNewlines property to true.
- skipLeadingRows String
- [Optional] The number of rows at the top of a CSV file that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows in the file that should be skipped. When autodetect is on, the behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
EncryptionConfiguration, EncryptionConfigurationArgs
- KmsKeyName string
- Optional. Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
- KmsKeyName string
- Optional. Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
- kmsKeyName String
- Optional. Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
- kmsKeyName string
- Optional. Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
- kms_key_name str
- Optional. Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
- kmsKeyName String
- Optional. Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
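A minimal Python sketch of customer-managed encryption follows; the KMS key path is a placeholder, and it assumes the BigQuery service account has already been granted access to that key.
import pulumi_google_native as google_native

encrypted_table = google_native.bigquery.v2.Table(
    "encrypted-table",
    dataset_id="my_dataset",
    table_reference=google_native.bigquery.v2.TableReferenceArgs(
        project="my-project",
        dataset_id="my_dataset",
        table_id="sensitive_data",
    ),
    # Placeholder key resource name; grant the BigQuery service account access to it first.
    encryption_configuration=google_native.bigquery.v2.EncryptionConfigurationArgs(
        kms_key_name="projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key",
    ),
)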
EncryptionConfigurationResponse, EncryptionConfigurationResponseArgs
- KmsKeyName string
- Optional. Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
- KmsKeyName string
- Optional. Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
- kmsKeyName String
- Optional. Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
- kmsKeyName string
- Optional. Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
- kms_key_name str
- Optional. Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
- kmsKeyName String
- Optional. Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key.
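Because the response type is output-only, it is read back rather than set. Assuming the encrypted_table resource from the previous sketch, the applied key name could be exported like this.
import pulumi

# Reads the response-typed output back from the resource defined above.
pulumi.export(
    "tableKmsKey",
    encrypted_table.encryption_configuration.apply(lambda ec: ec.kms_key_name),
)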
ExternalDataConfiguration, ExternalDataConfigurationArgs
- Autodetect bool
- Try to detect schema and format options automatically. Any option specified explicitly will be honored.
- AvroOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.AvroOptions
- Additional properties to set if sourceFormat is set to Avro.
- BigtableOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.BigtableOptions
- [Optional] Additional options if sourceFormat is set to BIGTABLE.
- Compression string
- [Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- ConnectionId string
- [Optional, Trusted Tester] Connection for external data source.
- CsvOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.CsvOptions
- Additional properties to set if sourceFormat is set to CSV.
- DecimalTargetTypes List<string>
- [Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
- FileSetSpecType string
- [Optional] Specifies how source URIs are interpreted for constructing the file set to load. By default source URIs are expanded against the underlying storage. Other options include specifying manifest files. Only applicable to object storage systems.
- GoogleSheetsOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.GoogleSheetsOptions
- [Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.
- HivePartitioningOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.HivePartitioningOptions
- [Optional] Options to configure hive partitioning support.
- IgnoreUnknownValues bool
- [Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns. JSON: Named values that don't match any column names. Google Cloud Bigtable: This setting is ignored. Google Cloud Datastore backups: This setting is ignored. Avro: This setting is ignored.
- JsonOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.JsonOptions
- Additional properties to set if sourceFormat is set to NEWLINE_DELIMITED_JSON.
- MaxBadRecords int
- [Optional] The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV, JSON, and Google Sheets. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- MetadataCacheMode string
- [Optional] Metadata Cache Mode for the table. Set this to enable caching of metadata from the external data source.
- ObjectMetadata string
- ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is the only supported Object Metadata type.
- ParquetOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.ParquetOptions
- Additional properties to set if sourceFormat is set to Parquet.
- ReferenceFileSchemaUri string
- [Optional] Provide a referencing file with the expected table schema. Enabled for the formats: AVRO, PARQUET, ORC.
- Schema Pulumi.GoogleNative.BigQuery.V2.Inputs.TableSchema
- [Optional] The schema for the data. Schema is required for CSV and JSON formats. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats.
- SourceFormat string
- [Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
- SourceUris List<string>
- [Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
- Autodetect bool
- Try to detect schema and format options automatically. Any option specified explicitly will be honored.
- AvroOptions AvroOptions
- Additional properties to set if sourceFormat is set to Avro.
- BigtableOptions BigtableOptions
- [Optional] Additional options if sourceFormat is set to BIGTABLE.
- Compression string
- [Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- ConnectionId string
- [Optional, Trusted Tester] Connection for external data source.
- CsvOptions CsvOptions
- Additional properties to set if sourceFormat is set to CSV.
- DecimalTargetTypes []string
- [Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
- FileSetSpecType string
- [Optional] Specifies how source URIs are interpreted for constructing the file set to load. By default source URIs are expanded against the underlying storage. Other options include specifying manifest files. Only applicable to object storage systems.
- GoogleSheetsOptions GoogleSheetsOptions
- [Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.
- HivePartitioningOptions HivePartitioningOptions
- [Optional] Options to configure hive partitioning support.
- IgnoreUnknownValues bool
- [Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns. JSON: Named values that don't match any column names. Google Cloud Bigtable: This setting is ignored. Google Cloud Datastore backups: This setting is ignored. Avro: This setting is ignored.
- JsonOptions JsonOptions
- Additional properties to set if sourceFormat is set to NEWLINE_DELIMITED_JSON.
- MaxBadRecords int
- [Optional] The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV, JSON, and Google Sheets. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- MetadataCacheMode string
- [Optional] Metadata Cache Mode for the table. Set this to enable caching of metadata from the external data source.
- ObjectMetadata string
- ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is the only supported Object Metadata type.
- ParquetOptions ParquetOptions
- Additional properties to set if sourceFormat is set to Parquet.
- ReferenceFileSchemaUri string
- [Optional] Provide a referencing file with the expected table schema. Enabled for the formats: AVRO, PARQUET, ORC.
- Schema TableSchema
- [Optional] The schema for the data. Schema is required for CSV and JSON formats. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats.
- SourceFormat string
- [Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
- SourceUris []string
- [Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
- autodetect Boolean
- Try to detect schema and format options automatically. Any option specified explicitly will be honored.
- avroOptions AvroOptions
- Additional properties to set if sourceFormat is set to Avro.
- bigtableOptions BigtableOptions
- [Optional] Additional options if sourceFormat is set to BIGTABLE.
- compression String
- [Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- connectionId String
- [Optional, Trusted Tester] Connection for external data source.
- csvOptions CsvOptions
- Additional properties to set if sourceFormat is set to CSV.
- decimalTargetTypes List<String>
- [Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
- fileSetSpecType String
- [Optional] Specifies how source URIs are interpreted for constructing the file set to load. By default source URIs are expanded against the underlying storage. Other options include specifying manifest files. Only applicable to object storage systems.
- googleSheetsOptions GoogleSheetsOptions
- [Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.
- hivePartitioningOptions HivePartitioningOptions
- [Optional] Options to configure hive partitioning support.
- ignoreUnknownValues Boolean
- [Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns. JSON: Named values that don't match any column names. Google Cloud Bigtable: This setting is ignored. Google Cloud Datastore backups: This setting is ignored. Avro: This setting is ignored.
- jsonOptions JsonOptions
- Additional properties to set if sourceFormat is set to NEWLINE_DELIMITED_JSON.
- maxBadRecords Integer
- [Optional] The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV, JSON, and Google Sheets. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- metadataCacheMode String
- [Optional] Metadata Cache Mode for the table. Set this to enable caching of metadata from the external data source.
- objectMetadata String
- ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is the only supported Object Metadata type.
- parquetOptions ParquetOptions
- Additional properties to set if sourceFormat is set to Parquet.
- referenceFileSchemaUri String
- [Optional] Provide a referencing file with the expected table schema. Enabled for the formats: AVRO, PARQUET, ORC.
- schema TableSchema
- [Optional] The schema for the data. Schema is required for CSV and JSON formats. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats.
- sourceFormat String
- [Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
- sourceUris List<String>
- [Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
- autodetect boolean
- Try to detect schema and format options automatically. Any option specified explicitly will be honored.
- avroOptions AvroOptions
- Additional properties to set if sourceFormat is set to Avro.
- bigtableOptions BigtableOptions
- [Optional] Additional options if sourceFormat is set to BIGTABLE.
- compression string
- [Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- connectionId string
- [Optional, Trusted Tester] Connection for external data source.
- csvOptions CsvOptions
- Additional properties to set if sourceFormat is set to CSV.
- decimalTargetTypes string[]
- [Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
- fileSetSpecType string
- [Optional] Specifies how source URIs are interpreted for constructing the file set to load. By default source URIs are expanded against the underlying storage. Other options include specifying manifest files. Only applicable to object storage systems.
- googleSheetsOptions GoogleSheetsOptions
- [Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.
- hivePartitioningOptions HivePartitioningOptions
- [Optional] Options to configure hive partitioning support.
- ignoreUnknownValues boolean
- [Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns. JSON: Named values that don't match any column names. Google Cloud Bigtable: This setting is ignored. Google Cloud Datastore backups: This setting is ignored. Avro: This setting is ignored.
- jsonOptions JsonOptions
- Additional properties to set if sourceFormat is set to NEWLINE_DELIMITED_JSON.
- maxBadRecords number
- [Optional] The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV, JSON, and Google Sheets. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- metadataCacheMode string
- [Optional] Metadata Cache Mode for the table. Set this to enable caching of metadata from the external data source.
- objectMetadata string
- ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is the only supported Object Metadata type.
- parquetOptions ParquetOptions
- Additional properties to set if sourceFormat is set to Parquet.
- referenceFileSchemaUri string
- [Optional] Provide a referencing file with the expected table schema. Enabled for the formats: AVRO, PARQUET, ORC.
- schema TableSchema
- [Optional] The schema for the data. Schema is required for CSV and JSON formats. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats.
- sourceFormat string
- [Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
- sourceUris string[]
- [Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
- autodetect bool
- Try to detect schema and format options automatically. Any option specified explicitly will be honored.
- avro_options AvroOptions - Additional properties to set if sourceFormat is set to Avro.
- bigtable_options BigtableOptions - [Optional] Additional options if sourceFormat is set to BIGTABLE.
- compression str - [Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- connection_id str - [Optional, Trusted Tester] Connection for external data source.
- csv_options CsvOptions - Additional properties to set if sourceFormat is set to CSV.
- decimal_target_types Sequence[str] - [Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
- file_set_spec_type str - [Optional] Specifies how source URIs are interpreted for constructing the file set to load. By default source URIs are expanded against the underlying storage. Other options include specifying manifest files. Only applicable to object storage systems.
- google_sheets_options GoogleSheetsOptions - [Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.
- hive_partitioning_options HivePartitioningOptions - [Optional] Options to configure hive partitioning support.
- ignore_unknown_values bool - [Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don't match any column names Google Cloud Bigtable: This setting is ignored. Google Cloud Datastore backups: This setting is ignored. Avro: This setting is ignored.
- json_options JsonOptions - Additional properties to set if sourceFormat is set to NEWLINE_DELIMITED_JSON.
- max_bad_records int - [Optional] The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV, JSON, and Google Sheets. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- metadata_cache_mode str - [Optional] Metadata Cache Mode for the table. Set this to enable caching of metadata from external data source.
- object_metadata str - ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is the only supported Object Metadata type.
- parquet_options ParquetOptions - Additional properties to set if sourceFormat is set to Parquet.
- reference_file_schema_uri str - [Optional] Provide a referencing file with the expected table schema. Enabled for the format: AVRO, PARQUET, ORC.
- schema TableSchema - [Optional] The schema for the data. Schema is required for CSV and JSON formats. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats.
- source_format str - [Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
- source_uris Sequence[str] - [Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
- autodetect Boolean
- Try to detect schema and format options automatically. Any option specified explicitly will be honored.
- avroOptions Property Map - Additional properties to set if sourceFormat is set to Avro.
- bigtableOptions Property Map - [Optional] Additional options if sourceFormat is set to BIGTABLE.
- compression String - [Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- connectionId String - [Optional, Trusted Tester] Connection for external data source.
- csvOptions Property Map - Additional properties to set if sourceFormat is set to CSV.
- decimalTargetTypes List<String> - [Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
- fileSetSpecType String - [Optional] Specifies how source URIs are interpreted for constructing the file set to load. By default source URIs are expanded against the underlying storage. Other options include specifying manifest files. Only applicable to object storage systems.
- googleSheetsOptions Property Map - [Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.
- hivePartitioningOptions Property Map - [Optional] Options to configure hive partitioning support.
- ignoreUnknownValues Boolean - [Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don't match any column names Google Cloud Bigtable: This setting is ignored. Google Cloud Datastore backups: This setting is ignored. Avro: This setting is ignored.
- jsonOptions Property Map - Additional properties to set if sourceFormat is set to NEWLINE_DELIMITED_JSON.
- maxBadRecords Number - [Optional] The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV, JSON, and Google Sheets. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- metadataCacheMode String - [Optional] Metadata Cache Mode for the table. Set this to enable caching of metadata from external data source.
- objectMetadata String - ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is the only supported Object Metadata type.
- parquetOptions Property Map - Additional properties to set if sourceFormat is set to Parquet.
- referenceFileSchemaUri String - [Optional] Provide a referencing file with the expected table schema. Enabled for the format: AVRO, PARQUET, ORC.
- schema Property Map - [Optional] The schema for the data. Schema is required for CSV and JSON formats. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats.
- sourceFormat String - [Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
- sourceUris List<String> - [Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
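The listing above enumerates the ExternalDataConfiguration inputs in each SDK. As a minimal sketch of how they fit together, the following TypeScript program defines a table backed by CSV files in Cloud Storage; the project, dataset, bucket, and table names are placeholders, and the CSV options shown assume a single header row.

import * as google_native from "@pulumi/google-native";

// Placeholder project/dataset/bucket names; substitute your own.
const externalTable = new google_native.bigquery.v2.Table("salesExternalTable", {
    datasetId: "analytics",
    tableReference: {
        project: "my-project",
        datasetId: "analytics",
        tableId: "sales_external",
    },
    externalDataConfiguration: {
        sourceFormat: "CSV",
        // One '*' wildcard is allowed, and it must come after the bucket name.
        sourceUris: ["gs://my-bucket/sales/*.csv"],
        autodetect: true,                      // infer the schema from the files
        csvOptions: { skipLeadingRows: "1" },  // skip the header row (int64 fields are strings)
        ignoreUnknownValues: true,             // tolerate trailing columns
        maxBadRecords: 10,                     // allow a few malformed rows
        decimalTargetTypes: ["NUMERIC", "BIGNUMERIC"],
    },
});

export const tableId = externalTable.id;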
ExternalDataConfigurationResponse, ExternalDataConfigurationResponseArgs
- Autodetect bool
- Try to detect schema and format options automatically. Any option specified explicitly will be honored.
- AvroOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.AvroOptionsResponse - Additional properties to set if sourceFormat is set to Avro.
- BigtableOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.BigtableOptionsResponse - [Optional] Additional options if sourceFormat is set to BIGTABLE.
- Compression string - [Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- ConnectionId string - [Optional, Trusted Tester] Connection for external data source.
- CsvOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.CsvOptionsResponse - Additional properties to set if sourceFormat is set to CSV.
- DecimalTargetTypes List<string> - [Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
- FileSetSpecType string - [Optional] Specifies how source URIs are interpreted for constructing the file set to load. By default source URIs are expanded against the underlying storage. Other options include specifying manifest files. Only applicable to object storage systems.
- GoogleSheetsOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.GoogleSheetsOptionsResponse - [Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.
- HivePartitioningOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.HivePartitioningOptionsResponse - [Optional] Options to configure hive partitioning support.
- IgnoreUnknownValues bool - [Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don't match any column names Google Cloud Bigtable: This setting is ignored. Google Cloud Datastore backups: This setting is ignored. Avro: This setting is ignored.
- JsonOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.JsonOptionsResponse - Additional properties to set if sourceFormat is set to NEWLINE_DELIMITED_JSON.
- MaxBadRecords int - [Optional] The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV, JSON, and Google Sheets. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- MetadataCacheMode string - [Optional] Metadata Cache Mode for the table. Set this to enable caching of metadata from external data source.
- ObjectMetadata string - ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is the only supported Object Metadata type.
- ParquetOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.ParquetOptionsResponse - Additional properties to set if sourceFormat is set to Parquet.
- ReferenceFileSchemaUri string - [Optional] Provide a referencing file with the expected table schema. Enabled for the format: AVRO, PARQUET, ORC.
- Schema Pulumi.GoogleNative.BigQuery.V2.Inputs.TableSchemaResponse - [Optional] The schema for the data. Schema is required for CSV and JSON formats. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats.
- SourceFormat string - [Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
- SourceUris List<string> - [Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
- Autodetect bool
- Try to detect schema and format options automatically. Any option specified explicitly will be honored.
- AvroOptions AvroOptionsResponse - Additional properties to set if sourceFormat is set to Avro.
- BigtableOptions BigtableOptionsResponse - [Optional] Additional options if sourceFormat is set to BIGTABLE.
- Compression string - [Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- ConnectionId string - [Optional, Trusted Tester] Connection for external data source.
- CsvOptions CsvOptionsResponse - Additional properties to set if sourceFormat is set to CSV.
- DecimalTargetTypes []string - [Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
- FileSetSpecType string - [Optional] Specifies how source URIs are interpreted for constructing the file set to load. By default source URIs are expanded against the underlying storage. Other options include specifying manifest files. Only applicable to object storage systems.
- GoogleSheetsOptions GoogleSheetsOptionsResponse - [Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.
- HivePartitioningOptions HivePartitioningOptionsResponse - [Optional] Options to configure hive partitioning support.
- IgnoreUnknownValues bool - [Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don't match any column names Google Cloud Bigtable: This setting is ignored. Google Cloud Datastore backups: This setting is ignored. Avro: This setting is ignored.
- JsonOptions JsonOptionsResponse - Additional properties to set if sourceFormat is set to NEWLINE_DELIMITED_JSON.
- MaxBadRecords int - [Optional] The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV, JSON, and Google Sheets. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- MetadataCacheMode string - [Optional] Metadata Cache Mode for the table. Set this to enable caching of metadata from external data source.
- ObjectMetadata string - ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is the only supported Object Metadata type.
- ParquetOptions ParquetOptionsResponse - Additional properties to set if sourceFormat is set to Parquet.
- ReferenceFileSchemaUri string - [Optional] Provide a referencing file with the expected table schema. Enabled for the format: AVRO, PARQUET, ORC.
- Schema TableSchemaResponse - [Optional] The schema for the data. Schema is required for CSV and JSON formats. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats.
- SourceFormat string - [Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
- SourceUris []string - [Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
- autodetect Boolean
- Try to detect schema and format options automatically. Any option specified explicitly will be honored.
- avroOptions AvroOptionsResponse - Additional properties to set if sourceFormat is set to Avro.
- bigtableOptions BigtableOptionsResponse - [Optional] Additional options if sourceFormat is set to BIGTABLE.
- compression String - [Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- connectionId String - [Optional, Trusted Tester] Connection for external data source.
- csvOptions CsvOptionsResponse - Additional properties to set if sourceFormat is set to CSV.
- decimalTargetTypes List<String> - [Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
- fileSetSpecType String - [Optional] Specifies how source URIs are interpreted for constructing the file set to load. By default source URIs are expanded against the underlying storage. Other options include specifying manifest files. Only applicable to object storage systems.
- googleSheetsOptions GoogleSheetsOptionsResponse - [Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.
- hivePartitioningOptions HivePartitioningOptionsResponse - [Optional] Options to configure hive partitioning support.
- ignoreUnknownValues Boolean - [Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don't match any column names Google Cloud Bigtable: This setting is ignored. Google Cloud Datastore backups: This setting is ignored. Avro: This setting is ignored.
- jsonOptions JsonOptionsResponse - Additional properties to set if sourceFormat is set to NEWLINE_DELIMITED_JSON.
- maxBadRecords Integer - [Optional] The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV, JSON, and Google Sheets. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- metadataCacheMode String - [Optional] Metadata Cache Mode for the table. Set this to enable caching of metadata from external data source.
- objectMetadata String - ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is the only supported Object Metadata type.
- parquetOptions ParquetOptionsResponse - Additional properties to set if sourceFormat is set to Parquet.
- referenceFileSchemaUri String - [Optional] Provide a referencing file with the expected table schema. Enabled for the format: AVRO, PARQUET, ORC.
- schema TableSchemaResponse - [Optional] The schema for the data. Schema is required for CSV and JSON formats. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats.
- sourceFormat String - [Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
- sourceUris List<String> - [Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
- autodetect boolean
- Try to detect schema and format options automatically. Any option specified explicitly will be honored.
- avroOptions AvroOptionsResponse - Additional properties to set if sourceFormat is set to Avro.
- bigtableOptions BigtableOptionsResponse - [Optional] Additional options if sourceFormat is set to BIGTABLE.
- compression string - [Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- connectionId string - [Optional, Trusted Tester] Connection for external data source.
- csvOptions CsvOptionsResponse - Additional properties to set if sourceFormat is set to CSV.
- decimalTargetTypes string[] - [Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
- fileSetSpecType string - [Optional] Specifies how source URIs are interpreted for constructing the file set to load. By default source URIs are expanded against the underlying storage. Other options include specifying manifest files. Only applicable to object storage systems.
- googleSheetsOptions GoogleSheetsOptionsResponse - [Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.
- hivePartitioningOptions HivePartitioningOptionsResponse - [Optional] Options to configure hive partitioning support.
- ignoreUnknownValues boolean - [Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don't match any column names Google Cloud Bigtable: This setting is ignored. Google Cloud Datastore backups: This setting is ignored. Avro: This setting is ignored.
- jsonOptions JsonOptionsResponse - Additional properties to set if sourceFormat is set to NEWLINE_DELIMITED_JSON.
- maxBadRecords number - [Optional] The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV, JSON, and Google Sheets. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- metadataCacheMode string - [Optional] Metadata Cache Mode for the table. Set this to enable caching of metadata from external data source.
- objectMetadata string - ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is the only supported Object Metadata type.
- parquetOptions ParquetOptionsResponse - Additional properties to set if sourceFormat is set to Parquet.
- referenceFileSchemaUri string - [Optional] Provide a referencing file with the expected table schema. Enabled for the format: AVRO, PARQUET, ORC.
- schema TableSchemaResponse - [Optional] The schema for the data. Schema is required for CSV and JSON formats. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats.
- sourceFormat string - [Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
- sourceUris string[] - [Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
- autodetect bool
- Try to detect schema and format options automatically. Any option specified explicitly will be honored.
- avro_options AvroOptionsResponse - Additional properties to set if sourceFormat is set to Avro.
- bigtable_options BigtableOptionsResponse - [Optional] Additional options if sourceFormat is set to BIGTABLE.
- compression str - [Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- connection_id str - [Optional, Trusted Tester] Connection for external data source.
- csv_options CsvOptionsResponse - Additional properties to set if sourceFormat is set to CSV.
- decimal_target_types Sequence[str] - [Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
- file_set_spec_type str - [Optional] Specifies how source URIs are interpreted for constructing the file set to load. By default source URIs are expanded against the underlying storage. Other options include specifying manifest files. Only applicable to object storage systems.
- google_sheets_options GoogleSheetsOptionsResponse - [Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.
- hive_partitioning_options HivePartitioningOptionsResponse - [Optional] Options to configure hive partitioning support.
- ignore_unknown_values bool - [Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don't match any column names Google Cloud Bigtable: This setting is ignored. Google Cloud Datastore backups: This setting is ignored. Avro: This setting is ignored.
- json_options JsonOptionsResponse - Additional properties to set if sourceFormat is set to NEWLINE_DELIMITED_JSON.
- max_bad_records int - [Optional] The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV, JSON, and Google Sheets. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- metadata_cache_mode str - [Optional] Metadata Cache Mode for the table. Set this to enable caching of metadata from external data source.
- object_metadata str - ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is the only supported Object Metadata type.
- parquet_options ParquetOptionsResponse - Additional properties to set if sourceFormat is set to Parquet.
- reference_file_schema_uri str - [Optional] Provide a referencing file with the expected table schema. Enabled for the format: AVRO, PARQUET, ORC.
- schema TableSchemaResponse - [Optional] The schema for the data. Schema is required for CSV and JSON formats. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats.
- source_format str - [Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
- source_uris Sequence[str] - [Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
- autodetect Boolean
- Try to detect schema and format options automatically. Any option specified explicitly will be honored.
- avroOptions Property Map - Additional properties to set if sourceFormat is set to Avro.
- bigtableOptions Property Map - [Optional] Additional options if sourceFormat is set to BIGTABLE.
- compression String - [Optional] The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- connectionId String - [Optional, Trusted Tester] Connection for external data source.
- csvOptions Property Map - Additional properties to set if sourceFormat is set to CSV.
- decimalTargetTypes List<String> - [Optional] Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision,scale) is: (38,9) -> NUMERIC; (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); (76,38) -> BIGNUMERIC; (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats.
- fileSetSpecType String - [Optional] Specifies how source URIs are interpreted for constructing the file set to load. By default source URIs are expanded against the underlying storage. Other options include specifying manifest files. Only applicable to object storage systems.
- googleSheetsOptions Property Map - [Optional] Additional options if sourceFormat is set to GOOGLE_SHEETS.
- hivePartitioningOptions Property Map - [Optional] Options to configure hive partitioning support.
- ignoreUnknownValues Boolean - [Optional] Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: Trailing columns JSON: Named values that don't match any column names Google Cloud Bigtable: This setting is ignored. Google Cloud Datastore backups: This setting is ignored. Avro: This setting is ignored.
- jsonOptions Property Map - Additional properties to set if sourceFormat is set to NEWLINE_DELIMITED_JSON.
- maxBadRecords Number - [Optional] The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. This is only valid for CSV, JSON, and Google Sheets. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups and Avro formats.
- metadataCacheMode String - [Optional] Metadata Cache Mode for the table. Set this to enable caching of metadata from external data source.
- objectMetadata String - ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is the only supported Object Metadata type.
- parquetOptions Property Map - Additional properties to set if sourceFormat is set to Parquet.
- referenceFileSchemaUri String - [Optional] Provide a referencing file with the expected table schema. Enabled for the format: AVRO, PARQUET, ORC.
- schema Property Map - [Optional] The schema for the data. Schema is required for CSV and JSON formats. Schema is disallowed for Google Cloud Bigtable, Cloud Datastore backups, and Avro formats.
- sourceFormat String - [Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". [Beta] For Google Cloud Bigtable, specify "BIGTABLE".
- sourceUris List<String> - [Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed.
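ExternalDataConfigurationResponse is the output shape of the same configuration: after the table is created, the resolved values are surfaced on the Table resource's outputs. A small sketch follows, assuming the externalTable resource from the earlier TypeScript example; the field names mirror the response properties listed above.

// The resolved configuration is exposed as an ExternalDataConfigurationResponse;
// individual fields can be lifted out of the output with .apply().
export const resolvedSourceFormat = externalTable.externalDataConfiguration.apply(
    cfg => cfg?.sourceFormat,
);
export const resolvedSourceUris = externalTable.externalDataConfiguration.apply(
    cfg => cfg?.sourceUris,
);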
GoogleSheetsOptions, GoogleSheetsOptionsArgs
- Range string
- [Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20
- SkipLeadingRows string - [Optional] The number of rows at the top of a sheet that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows that should be skipped. When autodetect is on, behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- Range string
- [Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20
- SkipLeadingRows string - [Optional] The number of rows at the top of a sheet that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows that should be skipped. When autodetect is on, behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- range String
- [Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20
- skipLeadingRows String - [Optional] The number of rows at the top of a sheet that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows that should be skipped. When autodetect is on, behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- range string
- [Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20
- skipLeadingRows string - [Optional] The number of rows at the top of a sheet that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows that should be skipped. When autodetect is on, behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- range str
- [Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20
- skip_leading_rows str - [Optional] The number of rows at the top of a sheet that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows that should be skipped. When autodetect is on, behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- range String
- [Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20
- skipLeadingRows String - [Optional] The number of rows at the top of a sheet that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows that should be skipped. When autodetect is on, behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
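GoogleSheetsOptions only applies when sourceFormat is GOOGLE_SHEETS. The following TypeScript sketch wires the options above into a Sheets-backed external table; the spreadsheet ID, range, and project/dataset names are placeholders, and the sheet is assumed to be readable by the credentials BigQuery uses.

import * as google_native from "@pulumi/google-native";

// Placeholder names and spreadsheet ID; substitute your own.
const sheetTable = new google_native.bigquery.v2.Table("budgetSheetTable", {
    datasetId: "analytics",
    tableReference: {
        project: "my-project",
        datasetId: "analytics",
        tableId: "budget_sheet",
    },
    externalDataConfiguration: {
        sourceFormat: "GOOGLE_SHEETS",
        sourceUris: ["https://docs.google.com/spreadsheets/d/EXAMPLE_SHEET_ID"],
        autodetect: true,
        googleSheetsOptions: {
            range: "budget!A1:F200",  // sheet_name!top_left_cell_id:bottom_right_cell_id
            skipLeadingRows: "1",     // header row; int64 fields are passed as strings
        },
    },
});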
GoogleSheetsOptionsResponse, GoogleSheetsOptionsResponseArgs
- Range string
- [Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20
- SkipLeadingRows string - [Optional] The number of rows at the top of a sheet that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows that should be skipped. When autodetect is on, behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- Range string
- [Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20
- SkipLeadingRows string - [Optional] The number of rows at the top of a sheet that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows that should be skipped. When autodetect is on, behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- range String
- [Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20
- skipLeadingRows String - [Optional] The number of rows at the top of a sheet that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows that should be skipped. When autodetect is on, behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- range string
- [Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20
- skipLeadingRows string - [Optional] The number of rows at the top of a sheet that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows that should be skipped. When autodetect is on, behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- range str
- [Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20
- skip_leading_rows str - [Optional] The number of rows at the top of a sheet that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows that should be skipped. When autodetect is on, behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
- range String
- [Optional] Range of a sheet to query from. Only used when non-empty. Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id For example: sheet1!A1:B20
- skipLeadingRows String - [Optional] The number of rows at the top of a sheet that BigQuery will skip when reading the data. The default value is 0. This property is useful if you have header rows that should be skipped. When autodetect is on, behavior is the following: * skipLeadingRows unspecified - Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise data is read starting from the second row. * skipLeadingRows is 0 - Instructs autodetect that there are no headers and data should be read starting from the first row. * skipLeadingRows = N > 0 - Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise row N is used to extract column names for the detected schema.
HivePartitioningOptions, HivePartitioningOptionsArgs
- Mode string
- [Optional] When set, what mode of hive partitioning to use when reading data. The following modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. (3) CUSTOM: partition key schema is encoded in the source URI prefix. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet.
- RequirePartitionFilter bool
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. Note that this field should only be true when creating a permanent external table or querying a temporary external table. Hive-partitioned loads with requirePartitionFilter explicitly set to true will fail.
- SourceUriPrefix string
- [Optional] When hive partition detection is requested, a common prefix for all source uris should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).
- Mode string
- [Optional] When set, what mode of hive partitioning to use when reading data. The following modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. (3) CUSTOM: partition key schema is encoded in the source URI prefix. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet.
- RequirePartitionFilter bool
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. Note that this field should only be true when creating a permanent external table or querying a temporary external table. Hive-partitioned loads with requirePartitionFilter explicitly set to true will fail.
- SourceUriPrefix string
- [Optional] When hive partition detection is requested, a common prefix for all source uris should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).
- mode String
- [Optional] When set, what mode of hive partitioning to use when reading data. The following modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. (3) CUSTOM: partition key schema is encoded in the source URI prefix. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet.
- requirePartitionFilter Boolean
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. Note that this field should only be true when creating a permanent external table or querying a temporary external table. Hive-partitioned loads with requirePartitionFilter explicitly set to true will fail.
- sourceUriPrefix String
- [Optional] When hive partition detection is requested, a common prefix for all source uris should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).
- mode string
- [Optional] When set, what mode of hive partitioning to use when reading data. The following modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. (3) CUSTOM: partition key schema is encoded in the source URI prefix. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet.
- requirePartitionFilter boolean
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. Note that this field should only be true when creating a permanent external table or querying a temporary external table. Hive-partitioned loads with requirePartitionFilter explicitly set to true will fail.
- sourceUriPrefix string
- [Optional] When hive partition detection is requested, a common prefix for all source uris should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).
- mode str
- [Optional] When set, what mode of hive partitioning to use when reading data. The following modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. (3) CUSTOM: partition key schema is encoded in the source URI prefix. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet.
- require_partition_filter bool
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. Note that this field should only be true when creating a permanent external table or querying a temporary external table. Hive-partitioned loads with requirePartitionFilter explicitly set to true will fail.
- source_uri_prefix str
- [Optional] When hive partition detection is requested, a common prefix for all source uris should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).
- mode String
- [Optional] When set, what mode of hive partitioning to use when reading data. The following modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. (3) CUSTOM: partition key schema is encoded in the source URI prefix. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet.
- requirePartitionFilter Boolean
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. Note that this field should only be true when creating a permanent external table or querying a temporary external table. Hive-partitioned loads with requirePartitionFilter explicitly set to true will fail.
- sourceUriPrefix String
- [Optional] When hive partition detection is requested, a common prefix for all source uris should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).
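A hedged TypeScript sketch of an external table using these HivePartitioningOptions over the gs://bucket/path_to_table layout described above. The @pulumi/google-native package and the sourceFormat/sourceUris fields of ExternalDataConfiguration are assumed; bucket, project, and dataset names are placeholders.
import * as google_native from "@pulumi/google-native";

// External table over hive-partitioned Parquet files laid out as
// gs://my-bucket/path_to_table/dt=2019-01-01/country=BR/....
const hiveTable = new google_native.bigquery.v2.Table("hive-table", {
    datasetId: "my_dataset",
    tableReference: {
        datasetId: "my_dataset",
        project: "my-project",
        tableId: "hive_table",
    },
    externalDataConfiguration: {
        sourceFormat: "PARQUET",
        sourceUris: ["gs://my-bucket/path_to_table/*"],
        hivePartitioningOptions: {
            mode: "AUTO",                                     // infer partition key names and types
            sourceUriPrefix: "gs://my-bucket/path_to_table",  // prefix ending before dt=.../country=...
            requirePartitionFilter: true,                     // queries must filter on a partition key
        },
    },
});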
HivePartitioningOptionsResponse, HivePartitioningOptionsResponseArgs
- Fields List<string>
- For permanent external tables, this field is populated with the hive partition keys in the order they were inferred. The types of the partition keys can be deduced by checking the table schema (which will include the partition keys). Not every API will populate this field in the output. For example, Tables.Get will populate it, but Tables.List will not contain this field.
- Mode string
- [Optional] When set, what mode of hive partitioning to use when reading data. The following modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. (3) CUSTOM: partition key schema is encoded in the source URI prefix. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet.
- RequirePartitionFilter bool
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. Note that this field should only be true when creating a permanent external table or querying a temporary external table. Hive-partitioned loads with requirePartitionFilter explicitly set to true will fail.
- SourceUriPrefix string
- [Optional] When hive partition detection is requested, a common prefix for all source uris should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).
- Fields []string
- For permanent external tables, this field is populated with the hive partition keys in the order they were inferred. The types of the partition keys can be deduced by checking the table schema (which will include the partition keys). Not every API will populate this field in the output. For example, Tables.Get will populate it, but Tables.List will not contain this field.
- Mode string
- [Optional] When set, what mode of hive partitioning to use when reading data. The following modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. (3) CUSTOM: partition key schema is encoded in the source URI prefix. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet.
- RequirePartitionFilter bool
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. Note that this field should only be true when creating a permanent external table or querying a temporary external table. Hive-partitioned loads with requirePartitionFilter explicitly set to true will fail.
- SourceUriPrefix string
- [Optional] When hive partition detection is requested, a common prefix for all source uris should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).
- fields List<String>
- For permanent external tables, this field is populated with the hive partition keys in the order they were inferred. The types of the partition keys can be deduced by checking the table schema (which will include the partition keys). Not every API will populate this field in the output. For example, Tables.Get will populate it, but Tables.List will not contain this field.
- mode String
- [Optional] When set, what mode of hive partitioning to use when reading data. The following modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. (3) CUSTOM: partition key schema is encoded in the source URI prefix. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet.
- requirePartitionFilter Boolean
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. Note that this field should only be true when creating a permanent external table or querying a temporary external table. Hive-partitioned loads with requirePartitionFilter explicitly set to true will fail.
- sourceUriPrefix String
- [Optional] When hive partition detection is requested, a common prefix for all source uris should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).
- fields string[]
- For permanent external tables, this field is populated with the hive partition keys in the order they were inferred. The types of the partition keys can be deduced by checking the table schema (which will include the partition keys). Not every API will populate this field in the output. For example, Tables.Get will populate it, but Tables.List will not contain this field.
- mode string
- [Optional] When set, what mode of hive partitioning to use when reading data. The following modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. (3) CUSTOM: partition key schema is encoded in the source URI prefix. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet.
- requirePartitionFilter boolean
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. Note that this field should only be true when creating a permanent external table or querying a temporary external table. Hive-partitioned loads with requirePartitionFilter explicitly set to true will fail.
- sourceUriPrefix string
- [Optional] When hive partition detection is requested, a common prefix for all source uris should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).
- fields Sequence[str]
- For permanent external tables, this field is populated with the hive partition keys in the order they were inferred. The types of the partition keys can be deduced by checking the table schema (which will include the partition keys). Not every API will populate this field in the output. For example, Tables.Get will populate it, but Tables.List will not contain this field.
- mode str
- [Optional] When set, what mode of hive partitioning to use when reading data. The following modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. (3) CUSTOM: partition key schema is encoded in the source URI prefix. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet.
- require_partition_filter bool
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. Note that this field should only be true when creating a permanent external table or querying a temporary external table. Hive-partitioned loads with requirePartitionFilter explicitly set to true will fail.
- source_uri_prefix str
- [Optional] When hive partition detection is requested, a common prefix for all source uris should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).
- fields List<String>
- For permanent external tables, this field is populated with the hive partition keys in the order they were inferred. The types of the partition keys can be deduced by checking the table schema (which will include the partition keys). Not every API will populate this field in the output. For example, Tables.Get will populate it, but Tables.List will not contain this field.
- mode String
- [Optional] When set, what mode of hive partitioning to use when reading data. The following modes are supported. (1) AUTO: automatically infer partition key name(s) and type(s). (2) STRINGS: automatically infer partition key name(s). All types are interpreted as strings. (3) CUSTOM: partition key schema is encoded in the source URI prefix. Not all storage formats support hive partitioning. Requesting hive partitioning on an unsupported format will lead to an error. Currently supported types include: AVRO, CSV, JSON, ORC and Parquet.
- requirePartitionFilter Boolean
- [Optional] If set to true, queries over this table require a partition filter that can be used for partition elimination to be specified. Note that this field should only be true when creating a permanent external table or querying a temporary external table. Hive-partitioned loads with requirePartitionFilter explicitly set to true will fail.
- sourceUriPrefix String
- [Optional] When hive partition detection is requested, a common prefix for all source uris should be supplied. The prefix must end immediately before the partition key encoding begins. For example, consider files following this data layout. gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro When hive partitioning is requested with either AUTO or STRINGS detection, the common prefix can be either of gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing slash does not matter).
JsonOptions, JsonOptionsArgs
- Encoding string
- [Optional] The character encoding of the data. The supported values are UTF-8, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is UTF-8.
- Encoding string
- [Optional] The character encoding of the data. The supported values are UTF-8, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is UTF-8.
- encoding String
- [Optional] The character encoding of the data. The supported values are UTF-8, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is UTF-8.
- encoding string
- [Optional] The character encoding of the data. The supported values are UTF-8, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is UTF-8.
- encoding str
- [Optional] The character encoding of the data. The supported values are UTF-8, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is UTF-8.
- encoding String
- [Optional] The character encoding of the data. The supported values are UTF-8, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is UTF-8.
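A brief TypeScript sketch showing where the JsonOptions encoding setting sits in an external table definition. The @pulumi/google-native package and the surrounding ExternalDataConfiguration fields (sourceFormat, sourceUris, autodetect) are assumptions; bucket and identifier names are placeholders, and encoding only needs to be overridden when the source files are not UTF-8.
import * as google_native from "@pulumi/google-native";

// External table over newline-delimited JSON files that are not UTF-8 encoded.
const jsonTable = new google_native.bigquery.v2.Table("json-table", {
    datasetId: "my_dataset",
    tableReference: { datasetId: "my_dataset", project: "my-project", tableId: "json_table" },
    externalDataConfiguration: {
        sourceFormat: "NEWLINE_DELIMITED_JSON",
        sourceUris: ["gs://my-bucket/exports/*.json"],
        autodetect: true,
        jsonOptions: {
            encoding: "UTF-16BE", // default is UTF-8; override only for non-UTF-8 sources
        },
    },
});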
JsonOptionsResponse, JsonOptionsResponseArgs
- Encoding string
- [Optional] The character encoding of the data. The supported values are UTF-8, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is UTF-8.
- Encoding string
- [Optional] The character encoding of the data. The supported values are UTF-8, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is UTF-8.
- encoding String
- [Optional] The character encoding of the data. The supported values are UTF-8, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is UTF-8.
- encoding string
- [Optional] The character encoding of the data. The supported values are UTF-8, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is UTF-8.
- encoding str
- [Optional] The character encoding of the data. The supported values are UTF-8, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is UTF-8.
- encoding String
- [Optional] The character encoding of the data. The supported values are UTF-8, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is UTF-8.
MaterializedViewDefinition, MaterializedViewDefinitionArgs
- AllowNonIncrementalDefinition bool
- [Optional] Allow a non-incremental materialized view definition. The default value is "false".
- EnableRefresh bool
- [Optional] [TrustedTester] Enable automatic refresh of the materialized view when the base table is updated. The default value is "true".
- MaxStaleness string
- [Optional] Max staleness of data that could be returned when the materialized view is queried (formatted as a Google SQL Interval type).
- Query string
- [Required] A query whose result is persisted.
- RefreshIntervalMs string
- [Optional] [TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is "1800000" (30 minutes).
- AllowNonIncrementalDefinition bool
- [Optional] Allow a non-incremental materialized view definition. The default value is "false".
- EnableRefresh bool
- [Optional] [TrustedTester] Enable automatic refresh of the materialized view when the base table is updated. The default value is "true".
- MaxStaleness string
- [Optional] Max staleness of data that could be returned when the materialized view is queried (formatted as a Google SQL Interval type).
- Query string
- [Required] A query whose result is persisted.
- RefreshIntervalMs string
- [Optional] [TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is "1800000" (30 minutes).
- allowNonIncrementalDefinition Boolean
- [Optional] Allow a non-incremental materialized view definition. The default value is "false".
- enableRefresh Boolean
- [Optional] [TrustedTester] Enable automatic refresh of the materialized view when the base table is updated. The default value is "true".
- maxStaleness String
- [Optional] Max staleness of data that could be returned when the materialized view is queried (formatted as a Google SQL Interval type).
- query String
- [Required] A query whose result is persisted.
- refreshIntervalMs String
- [Optional] [TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is "1800000" (30 minutes).
- allowNonIncrementalDefinition boolean
- [Optional] Allow a non-incremental materialized view definition. The default value is "false".
- enableRefresh boolean
- [Optional] [TrustedTester] Enable automatic refresh of the materialized view when the base table is updated. The default value is "true".
- maxStaleness string
- [Optional] Max staleness of data that could be returned when the materialized view is queried (formatted as a Google SQL Interval type).
- query string
- [Required] A query whose result is persisted.
- refreshIntervalMs string
- [Optional] [TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is "1800000" (30 minutes).
- allow_non_incremental_definition bool
- [Optional] Allow a non-incremental materialized view definition. The default value is "false".
- enable_refresh bool
- [Optional] [TrustedTester] Enable automatic refresh of the materialized view when the base table is updated. The default value is "true".
- max_staleness str
- [Optional] Max staleness of data that could be returned when the materialized view is queried (formatted as a Google SQL Interval type).
- query str
- [Required] A query whose result is persisted.
- refresh_interval_ms str
- [Optional] [TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is "1800000" (30 minutes).
- allowNonIncrementalDefinition Boolean
- [Optional] Allow a non-incremental materialized view definition. The default value is "false".
- enableRefresh Boolean
- [Optional] [TrustedTester] Enable automatic refresh of the materialized view when the base table is updated. The default value is "true".
- maxStaleness String
- [Optional] Max staleness of data that could be returned when the materialized view is queried (formatted as a Google SQL Interval type).
- query String
- [Required] A query whose result is persisted.
- refreshIntervalMs String
- [Optional] [TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is "1800000" (30 minutes).
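A TypeScript sketch of a table created from these MaterializedViewDefinition inputs, with automatic refresh capped at the default 30-minute interval. The @pulumi/google-native package and the project/dataset identifiers are assumptions, and the query is a placeholder.
import * as google_native from "@pulumi/google-native";

// Materialized view refreshed automatically, at most every 30 minutes.
const mvTable = new google_native.bigquery.v2.Table("mv-table", {
    datasetId: "my_dataset",
    tableReference: { datasetId: "my_dataset", project: "my-project", tableId: "daily_views_mv" },
    materializedView: {
        query: "SELECT country, COUNT(*) AS views FROM `my-project.my_dataset.events` GROUP BY country",
        enableRefresh: true,
        refreshIntervalMs: "1800000",         // string-typed; 30 minutes
        allowNonIncrementalDefinition: false, // keep the view incrementally maintainable
    },
});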
MaterializedViewDefinitionResponse, MaterializedViewDefinitionResponseArgs
- AllowNonIncrementalDefinition bool
- [Optional] Allow a non-incremental materialized view definition. The default value is "false".
- EnableRefresh bool
- [Optional] [TrustedTester] Enable automatic refresh of the materialized view when the base table is updated. The default value is "true".
- LastRefreshTime string
- [TrustedTester] The time when this materialized view was last modified, in milliseconds since the epoch.
- MaxStaleness string
- [Optional] Max staleness of data that could be returned when the materialized view is queried (formatted as a Google SQL Interval type).
- Query string
- [Required] A query whose result is persisted.
- RefreshIntervalMs string
- [Optional] [TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is "1800000" (30 minutes).
- AllowNonIncrementalDefinition bool
- [Optional] Allow a non-incremental materialized view definition. The default value is "false".
- EnableRefresh bool
- [Optional] [TrustedTester] Enable automatic refresh of the materialized view when the base table is updated. The default value is "true".
- LastRefreshTime string
- [TrustedTester] The time when this materialized view was last modified, in milliseconds since the epoch.
- MaxStaleness string
- [Optional] Max staleness of data that could be returned when the materialized view is queried (formatted as a Google SQL Interval type).
- Query string
- [Required] A query whose result is persisted.
- RefreshIntervalMs string
- [Optional] [TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is "1800000" (30 minutes).
- allowNonIncrementalDefinition Boolean
- [Optional] Allow a non-incremental materialized view definition. The default value is "false".
- enableRefresh Boolean
- [Optional] [TrustedTester] Enable automatic refresh of the materialized view when the base table is updated. The default value is "true".
- lastRefreshTime String
- [TrustedTester] The time when this materialized view was last modified, in milliseconds since the epoch.
- maxStaleness String
- [Optional] Max staleness of data that could be returned when the materialized view is queried (formatted as a Google SQL Interval type).
- query String
- [Required] A query whose result is persisted.
- refreshIntervalMs String
- [Optional] [TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is "1800000" (30 minutes).
- allowNonIncrementalDefinition boolean
- [Optional] Allow a non-incremental materialized view definition. The default value is "false".
- enableRefresh boolean
- [Optional] [TrustedTester] Enable automatic refresh of the materialized view when the base table is updated. The default value is "true".
- lastRefreshTime string
- [TrustedTester] The time when this materialized view was last modified, in milliseconds since the epoch.
- maxStaleness string
- [Optional] Max staleness of data that could be returned when the materialized view is queried (formatted as a Google SQL Interval type).
- query string
- [Required] A query whose result is persisted.
- refreshIntervalMs string
- [Optional] [TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is "1800000" (30 minutes).
- allow_non_incremental_definition bool
- [Optional] Allow a non-incremental materialized view definition. The default value is "false".
- enable_refresh bool
- [Optional] [TrustedTester] Enable automatic refresh of the materialized view when the base table is updated. The default value is "true".
- last_refresh_time str
- [TrustedTester] The time when this materialized view was last modified, in milliseconds since the epoch.
- max_staleness str
- [Optional] Max staleness of data that could be returned when the materialized view is queried (formatted as a Google SQL Interval type).
- query str
- [Required] A query whose result is persisted.
- refresh_interval_ms str
- [Optional] [TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is "1800000" (30 minutes).
- allowNonIncrementalDefinition Boolean
- [Optional] Allow a non-incremental materialized view definition. The default value is "false".
- enableRefresh Boolean
- [Optional] [TrustedTester] Enable automatic refresh of the materialized view when the base table is updated. The default value is "true".
- lastRefreshTime String
- [TrustedTester] The time when this materialized view was last modified, in milliseconds since the epoch.
- maxStaleness String
- [Optional] Max staleness of data that could be returned when the materialized view is queried (formatted as a Google SQL Interval type).
- query String
- [Required] A query whose result is persisted.
- refreshIntervalMs String
- [Optional] [TrustedTester] The maximum frequency at which this materialized view will be refreshed. The default value is "1800000" (30 minutes).
ModelDefinition, ModelDefinitionArgs
- ModelOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.ModelDefinitionModelOptions
- [Output-only, Beta] Model options used for the first training run. These options are immutable for subsequent training runs. Default values are used for any options not specified in the input query.
- TrainingRuns List<Pulumi.GoogleNative.BigQuery.V2.Inputs.BqmlTrainingRun>
- [Output-only, Beta] Information about ML training runs; each training run comprises multiple iterations, and there may be multiple training runs for the model if warm start is used or if a user decides to continue a previously cancelled query.
- ModelOptions ModelDefinitionModelOptions
- [Output-only, Beta] Model options used for the first training run. These options are immutable for subsequent training runs. Default values are used for any options not specified in the input query.
- TrainingRuns []BqmlTrainingRun
- [Output-only, Beta] Information about ML training runs; each training run comprises multiple iterations, and there may be multiple training runs for the model if warm start is used or if a user decides to continue a previously cancelled query.
- modelOptions ModelDefinitionModelOptions
- [Output-only, Beta] Model options used for the first training run. These options are immutable for subsequent training runs. Default values are used for any options not specified in the input query.
- trainingRuns List<BqmlTrainingRun>
- [Output-only, Beta] Information about ML training runs; each training run comprises multiple iterations, and there may be multiple training runs for the model if warm start is used or if a user decides to continue a previously cancelled query.
- modelOptions ModelDefinitionModelOptions
- [Output-only, Beta] Model options used for the first training run. These options are immutable for subsequent training runs. Default values are used for any options not specified in the input query.
- trainingRuns BqmlTrainingRun[]
- [Output-only, Beta] Information about ML training runs; each training run comprises multiple iterations, and there may be multiple training runs for the model if warm start is used or if a user decides to continue a previously cancelled query.
- model_options ModelDefinitionModelOptions
- [Output-only, Beta] Model options used for the first training run. These options are immutable for subsequent training runs. Default values are used for any options not specified in the input query.
- training_runs Sequence[BqmlTrainingRun]
- [Output-only, Beta] Information about ML training runs; each training run comprises multiple iterations, and there may be multiple training runs for the model if warm start is used or if a user decides to continue a previously cancelled query.
- modelOptions Property Map
- [Output-only, Beta] Model options used for the first training run. These options are immutable for subsequent training runs. Default values are used for any options not specified in the input query.
- trainingRuns List<Property Map>
- [Output-only, Beta] Information about ML training runs; each training run comprises multiple iterations, and there may be multiple training runs for the model if warm start is used or if a user decides to continue a previously cancelled query.
ModelDefinitionModelOptions, ModelDefinitionModelOptionsArgs
- labels Sequence[str]
- loss_type str
- model_type str
ModelDefinitionModelOptionsResponse, ModelDefinitionModelOptionsResponseArgs
- labels Sequence[str]
- loss_type str
- model_type str
ModelDefinitionResponse, ModelDefinitionResponseArgs
- ModelOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.ModelDefinitionModelOptionsResponse
- [Output-only, Beta] Model options used for the first training run. These options are immutable for subsequent training runs. Default values are used for any options not specified in the input query.
- TrainingRuns List<Pulumi.GoogleNative.BigQuery.V2.Inputs.BqmlTrainingRunResponse>
- [Output-only, Beta] Information about ML training runs; each training run comprises multiple iterations, and there may be multiple training runs for the model if warm start is used or if a user decides to continue a previously cancelled query.
- ModelOptions ModelDefinitionModelOptionsResponse
- [Output-only, Beta] Model options used for the first training run. These options are immutable for subsequent training runs. Default values are used for any options not specified in the input query.
- TrainingRuns []BqmlTrainingRunResponse
- [Output-only, Beta] Information about ML training runs; each training run comprises multiple iterations, and there may be multiple training runs for the model if warm start is used or if a user decides to continue a previously cancelled query.
- modelOptions ModelDefinitionModelOptionsResponse
- [Output-only, Beta] Model options used for the first training run. These options are immutable for subsequent training runs. Default values are used for any options not specified in the input query.
- trainingRuns List<BqmlTrainingRunResponse>
- [Output-only, Beta] Information about ML training runs; each training run comprises multiple iterations, and there may be multiple training runs for the model if warm start is used or if a user decides to continue a previously cancelled query.
- modelOptions ModelDefinitionModelOptionsResponse
- [Output-only, Beta] Model options used for the first training run. These options are immutable for subsequent training runs. Default values are used for any options not specified in the input query.
- trainingRuns BqmlTrainingRunResponse[]
- [Output-only, Beta] Information about ML training runs; each training run comprises multiple iterations, and there may be multiple training runs for the model if warm start is used or if a user decides to continue a previously cancelled query.
- model_options ModelDefinitionModelOptionsResponse
- [Output-only, Beta] Model options used for the first training run. These options are immutable for subsequent training runs. Default values are used for any options not specified in the input query.
- training_runs Sequence[BqmlTrainingRunResponse]
- [Output-only, Beta] Information about ML training runs; each training run comprises multiple iterations, and there may be multiple training runs for the model if warm start is used or if a user decides to continue a previously cancelled query.
- modelOptions Property Map
- [Output-only, Beta] Model options used for the first training run. These options are immutable for subsequent training runs. Default values are used for any options not specified in the input query.
- trainingRuns List<Property Map>
- [Output-only, Beta] Information about ML training runs; each training run comprises multiple iterations, and there may be multiple training runs for the model if warm start is used or if a user decides to continue a previously cancelled query.
ParquetOptions, ParquetOptionsArgs
- EnableListInference bool
- [Optional] Indicates whether to use schema inference specifically for Parquet LIST logical type.
- EnumAsString bool
- [Optional] Indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
- EnableListInference bool
- [Optional] Indicates whether to use schema inference specifically for Parquet LIST logical type.
- EnumAsString bool
- [Optional] Indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
- enableListInference Boolean
- [Optional] Indicates whether to use schema inference specifically for Parquet LIST logical type.
- enumAsString Boolean
- [Optional] Indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
- enableListInference boolean
- [Optional] Indicates whether to use schema inference specifically for Parquet LIST logical type.
- enumAsString boolean
- [Optional] Indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
- enable_list_inference bool
- [Optional] Indicates whether to use schema inference specifically for Parquet LIST logical type.
- enum_as_string bool
- [Optional] Indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
- enableListInference Boolean
- [Optional] Indicates whether to use schema inference specifically for Parquet LIST logical type.
- enumAsString Boolean
- [Optional] Indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
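A TypeScript sketch of an external Parquet table using these ParquetOptions. The @pulumi/google-native package and the sourceFormat/sourceUris fields of ExternalDataConfiguration are assumed; bucket, project, and table names are placeholders.
import * as google_native from "@pulumi/google-native";

// External Parquet table with LIST inference and ENUM-as-STRING enabled.
const parquetTable = new google_native.bigquery.v2.Table("parquet-table", {
    datasetId: "my_dataset",
    tableReference: { datasetId: "my_dataset", project: "my-project", tableId: "parquet_table" },
    externalDataConfiguration: {
        sourceFormat: "PARQUET",
        sourceUris: ["gs://my-bucket/parquet/*"],
        parquetOptions: {
            enableListInference: true, // use schema inference for the Parquet LIST logical type
            enumAsString: true,        // read Parquet ENUM values as STRING rather than BYTES
        },
    },
});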
ParquetOptionsResponse, ParquetOptionsResponseArgs
- EnableListInference bool
- [Optional] Indicates whether to use schema inference specifically for Parquet LIST logical type.
- EnumAsString bool
- [Optional] Indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
- EnableListInference bool
- [Optional] Indicates whether to use schema inference specifically for Parquet LIST logical type.
- EnumAsString bool
- [Optional] Indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
- enableListInference Boolean
- [Optional] Indicates whether to use schema inference specifically for Parquet LIST logical type.
- enumAsString Boolean
- [Optional] Indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
- enableListInference boolean
- [Optional] Indicates whether to use schema inference specifically for Parquet LIST logical type.
- enumAsString boolean
- [Optional] Indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
- enable_list_inference bool
- [Optional] Indicates whether to use schema inference specifically for Parquet LIST logical type.
- enum_as_string bool
- [Optional] Indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
- enableListInference Boolean
- [Optional] Indicates whether to use schema inference specifically for Parquet LIST logical type.
- enumAsString Boolean
- [Optional] Indicates whether to infer Parquet ENUM logical type as STRING instead of BYTES by default.
RangePartitioning, RangePartitioningArgs
- Field string
- [TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.
- Range Pulumi.GoogleNative.BigQuery.V2.Inputs.RangePartitioningRange
- [TrustedTester] [Required] Defines the ranges for range partitioning.
- Field string
- [TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.
- Range RangePartitioningRange
- [TrustedTester] [Required] Defines the ranges for range partitioning.
- field String
- [TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.
- range RangePartitioningRange
- [TrustedTester] [Required] Defines the ranges for range partitioning.
- field string
- [TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.
- range RangePartitioningRange
- [TrustedTester] [Required] Defines the ranges for range partitioning.
- field str
- [TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.
- range RangePartitioningRange
- [TrustedTester] [Required] Defines the ranges for range partitioning.
- field String
- [TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.
- range Property Map
- [TrustedTester] [Required] Defines the ranges for range partitioning.
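A TypeScript sketch of an integer-range partitioned table built from these RangePartitioning inputs. The @pulumi/google-native package and the schema fields are assumptions for illustration, and the start/end/interval keys of the range object follow the BigQuery API shape for RangePartitioningRange (whose inputs are listed separately below) rather than anything stated in this section.
import * as google_native from "@pulumi/google-native";

// Integer-range partitioned table keyed on customer_id.
const rangeTable = new google_native.bigquery.v2.Table("range-table", {
    datasetId: "my_dataset",
    tableReference: { datasetId: "my_dataset", project: "my-project", tableId: "orders_by_customer" },
    schema: {
        fields: [
            { name: "customer_id", type: "INTEGER", mode: "REQUIRED" },
            { name: "amount", type: "NUMERIC" },
        ],
    },
    rangePartitioning: {
        field: "customer_id", // must be a top-level NULLABLE/REQUIRED INTEGER field
        range: {
            // Assumed RangePartitioningRange shape (BigQuery API): string-encoded int64 bounds.
            start: "0",
            end: "100000",
            interval: "1000",
        },
    },
});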
RangePartitioningRange, RangePartitioningRangeArgs
RangePartitioningRangeResponse, RangePartitioningRangeResponseArgs
RangePartitioningResponse, RangePartitioningResponseArgs
- Field string
- [TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.
- Range Pulumi.GoogleNative.BigQuery.V2.Inputs.RangePartitioningRangeResponse
- [TrustedTester] [Required] Defines the ranges for range partitioning.
- Field string
- [TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.
- Range RangePartitioningRangeResponse
- [TrustedTester] [Required] Defines the ranges for range partitioning.
- field String
- [TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.
- range RangePartitioningRangeResponse
- [TrustedTester] [Required] Defines the ranges for range partitioning.
- field string
- [TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.
- range RangePartitioningRangeResponse
- [TrustedTester] [Required] Defines the ranges for range partitioning.
- field str
- [TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.
- range RangePartitioningRangeResponse
- [TrustedTester] [Required] Defines the ranges for range partitioning.
- field String
- [TrustedTester] [Required] The table is partitioned by this field. The field must be a top-level NULLABLE/REQUIRED field. The only supported type is INTEGER/INT64.
- range Property Map
- [TrustedTester] [Required] Defines the ranges for range partitioning.
SnapshotDefinitionResponse, SnapshotDefinitionResponseArgs
- BaseTableReference Pulumi.GoogleNative.BigQuery.V2.Inputs.TableReferenceResponse
- [Required] Reference describing the ID of the table that was snapshot.
- SnapshotTime string
- [Required] The time at which the base table was snapshot. This value is reported in the JSON response using RFC3339 format.
- BaseTableReference TableReferenceResponse
- [Required] Reference describing the ID of the table that was snapshot.
- SnapshotTime string
- [Required] The time at which the base table was snapshot. This value is reported in the JSON response using RFC3339 format.
- baseTableReference TableReferenceResponse
- [Required] Reference describing the ID of the table that was snapshot.
- snapshotTime String
- [Required] The time at which the base table was snapshot. This value is reported in the JSON response using RFC3339 format.
- baseTableReference TableReferenceResponse
- [Required] Reference describing the ID of the table that was snapshot.
- snapshotTime string
- [Required] The time at which the base table was snapshot. This value is reported in the JSON response using RFC3339 format.
- base_table_reference TableReferenceResponse
- [Required] Reference describing the ID of the table that was snapshot.
- snapshot_time str
- [Required] The time at which the base table was snapshot. This value is reported in the JSON response using RFC3339 format.
- baseTableReference Property Map
- [Required] Reference describing the ID of the table that was snapshot.
- snapshotTime String
- [Required] The time at which the base table was snapshot. This value is reported in the JSON response using RFC3339 format.
StreamingbufferResponse, StreamingbufferResponseArgs
- EstimatedBytes string
- A lower-bound estimate of the number of bytes currently in the streaming buffer.
- EstimatedRows string
- A lower-bound estimate of the number of rows currently in the streaming buffer.
- OldestEntryTime string
- Contains the timestamp of the oldest entry in the streaming buffer, in milliseconds since the epoch, if the streaming buffer is available.
- EstimatedBytes string
- A lower-bound estimate of the number of bytes currently in the streaming buffer.
- EstimatedRows string
- A lower-bound estimate of the number of rows currently in the streaming buffer.
- OldestEntryTime string
- Contains the timestamp of the oldest entry in the streaming buffer, in milliseconds since the epoch, if the streaming buffer is available.
- estimatedBytes String
- A lower-bound estimate of the number of bytes currently in the streaming buffer.
- estimatedRows String
- A lower-bound estimate of the number of rows currently in the streaming buffer.
- oldestEntryTime String
- Contains the timestamp of the oldest entry in the streaming buffer, in milliseconds since the epoch, if the streaming buffer is available.
- estimatedBytes string
- A lower-bound estimate of the number of bytes currently in the streaming buffer.
- estimatedRows string
- A lower-bound estimate of the number of rows currently in the streaming buffer.
- oldestEntryTime string
- Contains the timestamp of the oldest entry in the streaming buffer, in milliseconds since the epoch, if the streaming buffer is available.
- estimated_bytes str
- A lower-bound estimate of the number of bytes currently in the streaming buffer.
- estimated_rows str
- A lower-bound estimate of the number of rows currently in the streaming buffer.
- oldest_entry_time str
- Contains the timestamp of the oldest entry in the streaming buffer, in milliseconds since the epoch, if the streaming buffer is available.
- estimatedBytes String
- A lower-bound estimate of the number of bytes currently in the streaming buffer.
- estimatedRows String
- A lower-bound estimate of the number of rows currently in the streaming buffer.
- oldestEntryTime String
- Contains the timestamp of the oldest entry in the streaming buffer, in milliseconds since the epoch, if the streaming buffer is available.
TableConstraints, TableConstraintsArgs
- ForeignKeys List<Pulumi.GoogleNative.BigQuery.V2.Inputs.TableConstraintsForeignKeysItem>
- [Optional] The foreign keys of the tables.
- PrimaryKey Pulumi.GoogleNative.BigQuery.V2.Inputs.TableConstraintsPrimaryKey
- [Optional] The primary key of the table.
- ForeignKeys []TableConstraintsForeignKeysItem
- [Optional] The foreign keys of the tables.
- PrimaryKey TableConstraintsPrimaryKey
- [Optional] The primary key of the table.
- foreignKeys List<TableConstraintsForeignKeysItem>
- [Optional] The foreign keys of the tables.
- primaryKey TableConstraintsPrimaryKey
- [Optional] The primary key of the table.
- foreignKeys TableConstraintsForeignKeysItem[]
- [Optional] The foreign keys of the tables.
- primaryKey TableConstraintsPrimaryKey
- [Optional] The primary key of the table.
- foreign_keys Sequence[TableConstraintsForeignKeysItem]
- [Optional] The foreign keys of the tables.
- primary_key TableConstraintsPrimaryKey
- [Optional] The primary key of the table.
- foreignKeys List<Property Map>
- [Optional] The foreign keys of the tables.
- primaryKey Property Map
- [Optional] The primary key of the table.
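A TypeScript sketch declaring a primary key through these TableConstraints inputs; the columns list corresponds to the TableConstraintsPrimaryKey type shown further below. The @pulumi/google-native package, the schema fields, and all identifiers are assumptions or placeholders.
import * as google_native from "@pulumi/google-native";

// Table with a primary key declared via tableConstraints.
const customers = new google_native.bigquery.v2.Table("customers-table", {
    datasetId: "my_dataset",
    tableReference: { datasetId: "my_dataset", project: "my-project", tableId: "customers" },
    schema: {
        fields: [
            { name: "customer_id", type: "INTEGER", mode: "REQUIRED" },
            { name: "name", type: "STRING" },
        ],
    },
    tableConstraints: {
        primaryKey: {
            columns: ["customer_id"], // TableConstraintsPrimaryKey.columns
        },
    },
});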
TableConstraintsForeignKeysItem, TableConstraintsForeignKeysItemArgs
TableConstraintsForeignKeysItemColumnReferencesItem, TableConstraintsForeignKeysItemColumnReferencesItemArgs
- ReferencedColumn string
- ReferencingColumn string
- ReferencedColumn string
- ReferencingColumn string
- referencedColumn String
- referencingColumn String
- referencedColumn string
- referencingColumn string
- referencedColumn String
- referencingColumn String
TableConstraintsForeignKeysItemColumnReferencesItemResponse, TableConstraintsForeignKeysItemColumnReferencesItemResponseArgs
- ReferencedColumn string
- ReferencingColumn string
- ReferencedColumn string
- ReferencingColumn string
- referencedColumn String
- referencingColumn String
- referencedColumn string
- referencingColumn string
- referencedColumn String
- referencingColumn String
TableConstraintsForeignKeysItemReferencedTable, TableConstraintsForeignKeysItemReferencedTableArgs
- dataset_id str
- project str
- table_id str
TableConstraintsForeignKeysItemReferencedTableResponse, TableConstraintsForeignKeysItemReferencedTableResponseArgs
- dataset_id str
- project str
- table_id str
TableConstraintsForeignKeysItemResponse, TableConstraintsForeignKeysItemResponseArgs
TableConstraintsPrimaryKey, TableConstraintsPrimaryKeyArgs
- Columns List<string>
- Columns []string
- columns List<String>
- columns string[]
- columns Sequence[str]
- columns List<String>
TableConstraintsPrimaryKeyResponse, TableConstraintsPrimaryKeyResponseArgs
- Columns List<string>
- Columns []string
- columns List<String>
- columns string[]
- columns Sequence[str]
- columns List<String>
TableConstraintsResponse, TableConstraintsResponseArgs
- ForeignKeys List<Pulumi.GoogleNative.BigQuery.V2.Inputs.TableConstraintsForeignKeysItemResponse>
- [Optional] The foreign keys of the tables.
- PrimaryKey Pulumi.GoogleNative.BigQuery.V2.Inputs.TableConstraintsPrimaryKeyResponse
- [Optional] The primary key of the table.
- ForeignKeys []TableConstraintsForeignKeysItemResponse
- [Optional] The foreign keys of the tables.
- PrimaryKey TableConstraintsPrimaryKeyResponse
- [Optional] The primary key of the table.
- foreignKeys List<TableConstraintsForeignKeysItemResponse>
- [Optional] The foreign keys of the tables.
- primaryKey TableConstraintsPrimaryKeyResponse
- [Optional] The primary key of the table.
- foreignKeys TableConstraintsForeignKeysItemResponse[]
- [Optional] The foreign keys of the tables.
- primaryKey TableConstraintsPrimaryKeyResponse
- [Optional] The primary key of the table.
- foreign_keys Sequence[TableConstraintsForeignKeysItemResponse]
- [Optional] The foreign keys of the tables.
- primary_key TableConstraintsPrimaryKeyResponse
- [Optional] The primary key of the table.
- foreignKeys List<Property Map>
- [Optional] The foreign keys of the tables.
- primaryKey Property Map
- [Optional] The primary key of the table.
TableFieldSchema, TableFieldSchemaArgs
- Categories Pulumi.GoogleNative.BigQuery.V2.Inputs.TableFieldSchemaCategories
- [Optional] The categories attached to this field, used for field-level access control.
- Collation string
- Optional. Collation specification of the field. It can only be set on a string type field.
- DefaultValueExpression string
- Optional. A SQL expression to specify the default value for this field. It can only be set for top level fields (columns). You can use struct or array expression to specify default value for the entire struct or array. The valid SQL expressions are: - Literals for all data types, including STRUCT and ARRAY. - Following functions: - CURRENT_TIMESTAMP - CURRENT_TIME - CURRENT_DATE - CURRENT_DATETIME - GENERATE_UUID - RAND - SESSION_USER - ST_GEOGPOINT - Struct or array composed with the above allowed functions, for example, [CURRENT_DATE(), DATE '2020-01-01']
- Description string
- [Optional] The field description. The maximum length is 1,024 characters.
- Fields List<Pulumi.GoogleNative.BigQuery.V2.Inputs.TableFieldSchema>
- [Optional] Describes the nested schema fields if the type property is set to RECORD.
- MaxLength string
- [Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = "STRING", then max_length represents the maximum UTF-8 length of strings in this field. If type = "BYTES", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠ "STRING" and ≠ "BYTES".
- Mode string
- [Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.
- Name string
- [Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.
- PolicyTags Pulumi.GoogleNative.BigQuery.V2.Inputs.TableFieldSchemaPolicyTags
- Precision string
- [Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠ "NUMERIC" and ≠ "BIGNUMERIC". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10^(P-S) + 10^(-S), 10^(P-S) - 10^(-S)] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10^P + 1, 10^P - 1]. Acceptable values for precision and scale if both are specified: - If type = "NUMERIC": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = "BIGNUMERIC": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = "NUMERIC": 1 ≤ precision ≤ 29. - If type = "BIGNUMERIC": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.
- Range
Element Pulumi.Type Google Native. Big Query. V2. Inputs. Table Field Schema Range Element Type - Optional. The subtype of the RANGE, if the type of this field is RANGE. If the type is RANGE, this field is required. Possible values for the field element type of a RANGE include: - DATE - DATETIME - TIMESTAMP
- Rounding
Mode string - Optional. Rounding Mode specification of the field. It only can be set on NUMERIC or BIGNUMERIC type fields.
- Scale string
- [Optional] See documentation for precision.
- Type string
- [Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD).
- Categories
Table
Field Schema Categories - [Optional] The categories attached to this field, used for field-level access control.
- Collation string
- Optional. Collation specification of the field. It only can be set on string type field.
- Default
Value stringExpression - Optional. A SQL expression to specify the default value for this field. It can only be set for top level fields (columns). You can use struct or array expression to specify default value for the entire struct or array. The valid SQL expressions are: - Literals for all data types, including STRUCT and ARRAY. - Following functions: - CURRENT_TIMESTAMP - CURRENT_TIME - CURRENT_DATE - CURRENT_DATETIME - GENERATE_UUID - RAND - SESSION_USER - ST_GEOGPOINT - Struct or array composed with the above allowed functions, for example, [CURRENT_DATE(), DATE '2020-01-01']
- Description string
- [Optional] The field description. The maximum length is 1,024 characters.
- Fields
[]Table
Field Schema - [Optional] Describes the nested schema fields if the type property is set to RECORD.
- Max
Length string - [Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = "STRING", then max_length represents the maximum UTF-8 length of strings in this field. If type = "BYTES", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠"STRING" and ≠"BYTES".
- Mode string
- [Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.
- Name string
- [Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.
- Table
Field Schema Policy Tags - Precision string
- [Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠"NUMERIC" and ≠"BIGNUMERIC". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10P-S + 10-S, 10P-S - 10-S] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10P + 1, 10P - 1]. Acceptable values for precision and scale if both are specified: - If type = "NUMERIC": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = "BIGNUMERIC": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = "NUMERIC": 1 ≤ precision ≤ 29. - If type = "BIGNUMERIC": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.
- Range
Element TableType Field Schema Range Element Type - Optional. The subtype of the RANGE, if the type of this field is RANGE. If the type is RANGE, this field is required. Possible values for the field element type of a RANGE include: - DATE - DATETIME - TIMESTAMP
- Rounding
Mode string - Optional. Rounding Mode specification of the field. It only can be set on NUMERIC or BIGNUMERIC type fields.
- Scale string
- [Optional] See documentation for precision.
- Type string
- [Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD).
- categories
Table
Field Schema Categories - [Optional] The categories attached to this field, used for field-level access control.
- collation String
- Optional. Collation specification of the field. It only can be set on string type field.
- default
Value StringExpression - Optional. A SQL expression to specify the default value for this field. It can only be set for top level fields (columns). You can use struct or array expression to specify default value for the entire struct or array. The valid SQL expressions are: - Literals for all data types, including STRUCT and ARRAY. - Following functions: - CURRENT_TIMESTAMP - CURRENT_TIME - CURRENT_DATE - CURRENT_DATETIME - GENERATE_UUID - RAND - SESSION_USER - ST_GEOGPOINT - Struct or array composed with the above allowed functions, for example, [CURRENT_DATE(), DATE '2020-01-01']
- description String
- [Optional] The field description. The maximum length is 1,024 characters.
- fields
List<Table
Field Schema> - [Optional] Describes the nested schema fields if the type property is set to RECORD.
- max
Length String - [Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = "STRING", then max_length represents the maximum UTF-8 length of strings in this field. If type = "BYTES", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠"STRING" and ≠"BYTES".
- mode String
- [Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.
- name String
- [Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.
- Table
Field Schema Policy Tags - precision String
- [Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠"NUMERIC" and ≠"BIGNUMERIC". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10P-S + 10-S, 10P-S - 10-S] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10P + 1, 10P - 1]. Acceptable values for precision and scale if both are specified: - If type = "NUMERIC": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = "BIGNUMERIC": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = "NUMERIC": 1 ≤ precision ≤ 29. - If type = "BIGNUMERIC": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.
- range
Element TableType Field Schema Range Element Type - Optional. The subtype of the RANGE, if the type of this field is RANGE. If the type is RANGE, this field is required. Possible values for the field element type of a RANGE include: - DATE - DATETIME - TIMESTAMP
- rounding
Mode String - Optional. Rounding Mode specification of the field. It only can be set on NUMERIC or BIGNUMERIC type fields.
- scale String
- [Optional] See documentation for precision.
- type String
- [Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD).
- categories
Table
Field Schema Categories - [Optional] The categories attached to this field, used for field-level access control.
- collation string
- Optional. Collation specification of the field. It only can be set on string type field.
- default
Value stringExpression - Optional. A SQL expression to specify the default value for this field. It can only be set for top level fields (columns). You can use struct or array expression to specify default value for the entire struct or array. The valid SQL expressions are: - Literals for all data types, including STRUCT and ARRAY. - Following functions: - CURRENT_TIMESTAMP - CURRENT_TIME - CURRENT_DATE - CURRENT_DATETIME - GENERATE_UUID - RAND - SESSION_USER - ST_GEOGPOINT - Struct or array composed with the above allowed functions, for example, [CURRENT_DATE(), DATE '2020-01-01']
- description string
- [Optional] The field description. The maximum length is 1,024 characters.
- fields
Table
Field Schema[] - [Optional] Describes the nested schema fields if the type property is set to RECORD.
- max
Length string - [Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = "STRING", then max_length represents the maximum UTF-8 length of strings in this field. If type = "BYTES", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠"STRING" and ≠"BYTES".
- mode string
- [Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.
- name string
- [Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.
- Table
Field Schema Policy Tags - precision string
- [Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠"NUMERIC" and ≠"BIGNUMERIC". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10P-S + 10-S, 10P-S - 10-S] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10P + 1, 10P - 1]. Acceptable values for precision and scale if both are specified: - If type = "NUMERIC": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = "BIGNUMERIC": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = "NUMERIC": 1 ≤ precision ≤ 29. - If type = "BIGNUMERIC": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.
- range
Element TableType Field Schema Range Element Type - Optional. The subtype of the RANGE, if the type of this field is RANGE. If the type is RANGE, this field is required. Possible values for the field element type of a RANGE include: - DATE - DATETIME - TIMESTAMP
- rounding
Mode string - Optional. Rounding Mode specification of the field. It only can be set on NUMERIC or BIGNUMERIC type fields.
- scale string
- [Optional] See documentation for precision.
- type string
- [Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD).
- categories
Table
Field Schema Categories - [Optional] The categories attached to this field, used for field-level access control.
- collation str
- Optional. Collation specification of the field. It only can be set on string type field.
- default_
value_ strexpression - Optional. A SQL expression to specify the default value for this field. It can only be set for top level fields (columns). You can use struct or array expression to specify default value for the entire struct or array. The valid SQL expressions are: - Literals for all data types, including STRUCT and ARRAY. - Following functions: - CURRENT_TIMESTAMP - CURRENT_TIME - CURRENT_DATE - CURRENT_DATETIME - GENERATE_UUID - RAND - SESSION_USER - ST_GEOGPOINT - Struct or array composed with the above allowed functions, for example, [CURRENT_DATE(), DATE '2020-01-01']
- description str
- [Optional] The field description. The maximum length is 1,024 characters.
- fields
Sequence[Table
Field Schema] - [Optional] Describes the nested schema fields if the type property is set to RECORD.
- max_
length str - [Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = "STRING", then max_length represents the maximum UTF-8 length of strings in this field. If type = "BYTES", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠"STRING" and ≠"BYTES".
- mode str
- [Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.
- name str
- [Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.
- Table
Field Schema Policy Tags - precision str
- [Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠"NUMERIC" and ≠"BIGNUMERIC". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10P-S + 10-S, 10P-S - 10-S] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10P + 1, 10P - 1]. Acceptable values for precision and scale if both are specified: - If type = "NUMERIC": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = "BIGNUMERIC": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = "NUMERIC": 1 ≤ precision ≤ 29. - If type = "BIGNUMERIC": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.
- range_
element_ Tabletype Field Schema Range Element Type - Optional. The subtype of the RANGE, if the type of this field is RANGE. If the type is RANGE, this field is required. Possible values for the field element type of a RANGE include: - DATE - DATETIME - TIMESTAMP
- rounding_
mode str - Optional. Rounding Mode specification of the field. It only can be set on NUMERIC or BIGNUMERIC type fields.
- scale str
- [Optional] See documentation for precision.
- type str
- [Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD).
- categories Property Map
- [Optional] The categories attached to this field, used for field-level access control.
- collation String
- Optional. Collation specification of the field. It only can be set on string type field.
- default
Value StringExpression - Optional. A SQL expression to specify the default value for this field. It can only be set for top level fields (columns). You can use struct or array expression to specify default value for the entire struct or array. The valid SQL expressions are: - Literals for all data types, including STRUCT and ARRAY. - Following functions: - CURRENT_TIMESTAMP - CURRENT_TIME - CURRENT_DATE - CURRENT_DATETIME - GENERATE_UUID - RAND - SESSION_USER - ST_GEOGPOINT - Struct or array composed with the above allowed functions, for example, [CURRENT_DATE(), DATE '2020-01-01']
- description String
- [Optional] The field description. The maximum length is 1,024 characters.
- fields List<Property Map>
- [Optional] Describes the nested schema fields if the type property is set to RECORD.
- max
Length String - [Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = "STRING", then max_length represents the maximum UTF-8 length of strings in this field. If type = "BYTES", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠"STRING" and ≠"BYTES".
- mode String
- [Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.
- name String
- [Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.
- Property Map
- precision String
- [Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠"NUMERIC" and ≠"BIGNUMERIC". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10P-S + 10-S, 10P-S - 10-S] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10P + 1, 10P - 1]. Acceptable values for precision and scale if both are specified: - If type = "NUMERIC": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = "BIGNUMERIC": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = "NUMERIC": 1 ≤ precision ≤ 29. - If type = "BIGNUMERIC": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.
- range
Element Property MapType - Optional. The subtype of the RANGE, if the type of this field is RANGE. If the type is RANGE, this field is required. Possible values for the field element type of a RANGE include: - DATE - DATETIME - TIMESTAMP
- rounding
Mode String - Optional. Rounding Mode specification of the field. It only can be set on NUMERIC or BIGNUMERIC type fields.
- scale String
- [Optional] See documentation for precision.
- type String
- [Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD).
TableFieldSchemaCategories, TableFieldSchemaCategoriesArgs
- Names List<string>
- A list of category resource names. For example, "projects/1/taxonomies/2/categories/3". At most 5 categories are allowed.
- Names []string
- A list of category resource names. For example, "projects/1/taxonomies/2/categories/3". At most 5 categories are allowed.
- names List<String>
- A list of category resource names. For example, "projects/1/taxonomies/2/categories/3". At most 5 categories are allowed.
- names string[]
- A list of category resource names. For example, "projects/1/taxonomies/2/categories/3". At most 5 categories are allowed.
- names Sequence[str]
- A list of category resource names. For example, "projects/1/taxonomies/2/categories/3". At most 5 categories are allowed.
- names List<String>
- A list of category resource names. For example, "projects/1/taxonomies/2/categories/3". At most 5 categories are allowed.
TableFieldSchemaCategoriesResponse, TableFieldSchemaCategoriesResponseArgs
- Names List<string>
- A list of category resource names. For example, "projects/1/taxonomies/2/categories/3". At most 5 categories are allowed.
- Names []string
- A list of category resource names. For example, "projects/1/taxonomies/2/categories/3". At most 5 categories are allowed.
- names List<String>
- A list of category resource names. For example, "projects/1/taxonomies/2/categories/3". At most 5 categories are allowed.
- names string[]
- A list of category resource names. For example, "projects/1/taxonomies/2/categories/3". At most 5 categories are allowed.
- names Sequence[str]
- A list of category resource names. For example, "projects/1/taxonomies/2/categories/3". At most 5 categories are allowed.
- names List<String>
- A list of category resource names. For example, "projects/1/taxonomies/2/categories/3". At most 5 categories are allowed.
TableFieldSchemaPolicyTags, TableFieldSchemaPolicyTagsArgs
- Names List<string>
- A list of category resource names. For example, "projects/1/location/eu/taxonomies/2/policyTags/3". At most 1 policy tag is allowed.
- Names []string
- A list of category resource names. For example, "projects/1/location/eu/taxonomies/2/policyTags/3". At most 1 policy tag is allowed.
- names List<String>
- A list of category resource names. For example, "projects/1/location/eu/taxonomies/2/policyTags/3". At most 1 policy tag is allowed.
- names string[]
- A list of category resource names. For example, "projects/1/location/eu/taxonomies/2/policyTags/3". At most 1 policy tag is allowed.
- names Sequence[str]
- A list of category resource names. For example, "projects/1/location/eu/taxonomies/2/policyTags/3". At most 1 policy tag is allowed.
- names List<String>
- A list of category resource names. For example, "projects/1/location/eu/taxonomies/2/policyTags/3". At most 1 policy tag is allowed.
TableFieldSchemaPolicyTagsResponse, TableFieldSchemaPolicyTagsResponseArgs
- Names List<string>
- A list of category resource names. For example, "projects/1/location/eu/taxonomies/2/policyTags/3". At most 1 policy tag is allowed.
- Names []string
- A list of category resource names. For example, "projects/1/location/eu/taxonomies/2/policyTags/3". At most 1 policy tag is allowed.
- names List<String>
- A list of category resource names. For example, "projects/1/location/eu/taxonomies/2/policyTags/3". At most 1 policy tag is allowed.
- names string[]
- A list of category resource names. For example, "projects/1/location/eu/taxonomies/2/policyTags/3". At most 1 policy tag is allowed.
- names Sequence[str]
- A list of category resource names. For example, "projects/1/location/eu/taxonomies/2/policyTags/3". At most 1 policy tag is allowed.
- names List<String>
- A list of category resource names. For example, "projects/1/location/eu/taxonomies/2/policyTags/3". At most 1 policy tag is allowed.
TableFieldSchemaRangeElementType, TableFieldSchemaRangeElementTypeArgs
- Type string
- The field element type of a RANGE
- Type string
- The field element type of a RANGE
- type String
- The field element type of a RANGE
- type string
- The field element type of a RANGE
- type str
- The field element type of a RANGE
- type String
- The field element type of a RANGE
TableFieldSchemaRangeElementTypeResponse, TableFieldSchemaRangeElementTypeResponseArgs
- Type string
- The field element type of a RANGE
- Type string
- The field element type of a RANGE
- type String
- The field element type of a RANGE
- type string
- The field element type of a RANGE
- type str
- The field element type of a RANGE
- type String
- The field element type of a RANGE
TableFieldSchemaResponse, TableFieldSchemaResponseArgs
- Categories
Pulumi.
Google Native. Big Query. V2. Inputs. Table Field Schema Categories Response - [Optional] The categories attached to this field, used for field-level access control.
- Collation string
- Optional. Collation specification of the field. It only can be set on string type field.
- Default
Value stringExpression - Optional. A SQL expression to specify the default value for this field. It can only be set for top level fields (columns). You can use struct or array expression to specify default value for the entire struct or array. The valid SQL expressions are: - Literals for all data types, including STRUCT and ARRAY. - Following functions: - CURRENT_TIMESTAMP - CURRENT_TIME - CURRENT_DATE - CURRENT_DATETIME - GENERATE_UUID - RAND - SESSION_USER - ST_GEOGPOINT - Struct or array composed with the above allowed functions, for example, [CURRENT_DATE(), DATE '2020-01-01']
- Description string
- [Optional] The field description. The maximum length is 1,024 characters.
- Fields
List<Pulumi.
Google Native. Big Query. V2. Inputs. Table Field Schema Response> - [Optional] Describes the nested schema fields if the type property is set to RECORD.
- Max
Length string - [Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = "STRING", then max_length represents the maximum UTF-8 length of strings in this field. If type = "BYTES", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠"STRING" and ≠"BYTES".
- Mode string
- [Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.
- Name string
- [Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.
- Pulumi.
Google Native. Big Query. V2. Inputs. Table Field Schema Policy Tags Response - Precision string
- [Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠"NUMERIC" and ≠"BIGNUMERIC". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10P-S + 10-S, 10P-S - 10-S] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10P + 1, 10P - 1]. Acceptable values for precision and scale if both are specified: - If type = "NUMERIC": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = "BIGNUMERIC": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = "NUMERIC": 1 ≤ precision ≤ 29. - If type = "BIGNUMERIC": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.
- Range
Element Pulumi.Type Google Native. Big Query. V2. Inputs. Table Field Schema Range Element Type Response - Optional. The subtype of the RANGE, if the type of this field is RANGE. If the type is RANGE, this field is required. Possible values for the field element type of a RANGE include: - DATE - DATETIME - TIMESTAMP
- Rounding
Mode string - Optional. Rounding Mode specification of the field. It only can be set on NUMERIC or BIGNUMERIC type fields.
- Scale string
- [Optional] See documentation for precision.
- Type string
- [Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD).
- Categories
Table
Field Schema Categories Response - [Optional] The categories attached to this field, used for field-level access control.
- Collation string
- Optional. Collation specification of the field. It only can be set on string type field.
- Default
Value stringExpression - Optional. A SQL expression to specify the default value for this field. It can only be set for top level fields (columns). You can use struct or array expression to specify default value for the entire struct or array. The valid SQL expressions are: - Literals for all data types, including STRUCT and ARRAY. - Following functions: - CURRENT_TIMESTAMP - CURRENT_TIME - CURRENT_DATE - CURRENT_DATETIME - GENERATE_UUID - RAND - SESSION_USER - ST_GEOGPOINT - Struct or array composed with the above allowed functions, for example, [CURRENT_DATE(), DATE '2020-01-01']
- Description string
- [Optional] The field description. The maximum length is 1,024 characters.
- Fields
[]Table
Field Schema Response - [Optional] Describes the nested schema fields if the type property is set to RECORD.
- Max
Length string - [Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = "STRING", then max_length represents the maximum UTF-8 length of strings in this field. If type = "BYTES", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠"STRING" and ≠"BYTES".
- Mode string
- [Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.
- Name string
- [Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.
- Table
Field Schema Policy Tags Response - Precision string
- [Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠"NUMERIC" and ≠"BIGNUMERIC". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10P-S + 10-S, 10P-S - 10-S] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10P + 1, 10P - 1]. Acceptable values for precision and scale if both are specified: - If type = "NUMERIC": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = "BIGNUMERIC": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = "NUMERIC": 1 ≤ precision ≤ 29. - If type = "BIGNUMERIC": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.
- Range
Element TableType Field Schema Range Element Type Response - Optional. The subtype of the RANGE, if the type of this field is RANGE. If the type is RANGE, this field is required. Possible values for the field element type of a RANGE include: - DATE - DATETIME - TIMESTAMP
- Rounding
Mode string - Optional. Rounding Mode specification of the field. It only can be set on NUMERIC or BIGNUMERIC type fields.
- Scale string
- [Optional] See documentation for precision.
- Type string
- [Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD).
- categories
Table
Field Schema Categories Response - [Optional] The categories attached to this field, used for field-level access control.
- collation String
- Optional. Collation specification of the field. It only can be set on string type field.
- default
Value StringExpression - Optional. A SQL expression to specify the default value for this field. It can only be set for top level fields (columns). You can use struct or array expression to specify default value for the entire struct or array. The valid SQL expressions are: - Literals for all data types, including STRUCT and ARRAY. - Following functions: - CURRENT_TIMESTAMP - CURRENT_TIME - CURRENT_DATE - CURRENT_DATETIME - GENERATE_UUID - RAND - SESSION_USER - ST_GEOGPOINT - Struct or array composed with the above allowed functions, for example, [CURRENT_DATE(), DATE '2020-01-01']
- description String
- [Optional] The field description. The maximum length is 1,024 characters.
- fields
List<Table
Field Schema Response> - [Optional] Describes the nested schema fields if the type property is set to RECORD.
- max
Length String - [Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = "STRING", then max_length represents the maximum UTF-8 length of strings in this field. If type = "BYTES", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠"STRING" and ≠"BYTES".
- mode String
- [Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.
- name String
- [Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.
- Table
Field Schema Policy Tags Response - precision String
- [Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠"NUMERIC" and ≠"BIGNUMERIC". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10P-S + 10-S, 10P-S - 10-S] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10P + 1, 10P - 1]. Acceptable values for precision and scale if both are specified: - If type = "NUMERIC": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = "BIGNUMERIC": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = "NUMERIC": 1 ≤ precision ≤ 29. - If type = "BIGNUMERIC": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.
- range
Element TableType Field Schema Range Element Type Response - Optional. The subtype of the RANGE, if the type of this field is RANGE. If the type is RANGE, this field is required. Possible values for the field element type of a RANGE include: - DATE - DATETIME - TIMESTAMP
- rounding
Mode String - Optional. Rounding Mode specification of the field. It only can be set on NUMERIC or BIGNUMERIC type fields.
- scale String
- [Optional] See documentation for precision.
- type String
- [Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD).
- categories
Table
Field Schema Categories Response - [Optional] The categories attached to this field, used for field-level access control.
- collation string
- Optional. Collation specification of the field. It only can be set on string type field.
- default
Value stringExpression - Optional. A SQL expression to specify the default value for this field. It can only be set for top level fields (columns). You can use struct or array expression to specify default value for the entire struct or array. The valid SQL expressions are: - Literals for all data types, including STRUCT and ARRAY. - Following functions: - CURRENT_TIMESTAMP - CURRENT_TIME - CURRENT_DATE - CURRENT_DATETIME - GENERATE_UUID - RAND - SESSION_USER - ST_GEOGPOINT - Struct or array composed with the above allowed functions, for example, [CURRENT_DATE(), DATE '2020-01-01']
- description string
- [Optional] The field description. The maximum length is 1,024 characters.
- fields
Table
Field Schema Response[] - [Optional] Describes the nested schema fields if the type property is set to RECORD.
- max
Length string - [Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = "STRING", then max_length represents the maximum UTF-8 length of strings in this field. If type = "BYTES", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠"STRING" and ≠"BYTES".
- mode string
- [Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.
- name string
- [Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.
- Table
Field Schema Policy Tags Response - precision string
- [Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠"NUMERIC" and ≠"BIGNUMERIC". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10P-S + 10-S, 10P-S - 10-S] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10P + 1, 10P - 1]. Acceptable values for precision and scale if both are specified: - If type = "NUMERIC": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = "BIGNUMERIC": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = "NUMERIC": 1 ≤ precision ≤ 29. - If type = "BIGNUMERIC": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.
- range
Element TableType Field Schema Range Element Type Response - Optional. The subtype of the RANGE, if the type of this field is RANGE. If the type is RANGE, this field is required. Possible values for the field element type of a RANGE include: - DATE - DATETIME - TIMESTAMP
- rounding
Mode string - Optional. Rounding Mode specification of the field. It only can be set on NUMERIC or BIGNUMERIC type fields.
- scale string
- [Optional] See documentation for precision.
- type string
- [Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD).
- categories
Table
Field Schema Categories Response - [Optional] The categories attached to this field, used for field-level access control.
- collation str
- Optional. Collation specification of the field. It only can be set on string type field.
- default_
value_ strexpression - Optional. A SQL expression to specify the default value for this field. It can only be set for top level fields (columns). You can use struct or array expression to specify default value for the entire struct or array. The valid SQL expressions are: - Literals for all data types, including STRUCT and ARRAY. - Following functions: - CURRENT_TIMESTAMP - CURRENT_TIME - CURRENT_DATE - CURRENT_DATETIME - GENERATE_UUID - RAND - SESSION_USER - ST_GEOGPOINT - Struct or array composed with the above allowed functions, for example, [CURRENT_DATE(), DATE '2020-01-01']
- description str
- [Optional] The field description. The maximum length is 1,024 characters.
- fields
Sequence[Table
Field Schema Response] - [Optional] Describes the nested schema fields if the type property is set to RECORD.
- max_
length str - [Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = "STRING", then max_length represents the maximum UTF-8 length of strings in this field. If type = "BYTES", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠"STRING" and ≠"BYTES".
- mode str
- [Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.
- name str
- [Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.
- Table
Field Schema Policy Tags Response - precision str
- [Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠"NUMERIC" and ≠"BIGNUMERIC". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10P-S + 10-S, 10P-S - 10-S] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10P + 1, 10P - 1]. Acceptable values for precision and scale if both are specified: - If type = "NUMERIC": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = "BIGNUMERIC": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = "NUMERIC": 1 ≤ precision ≤ 29. - If type = "BIGNUMERIC": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.
- range_
element_ Tabletype Field Schema Range Element Type Response - Optional. The subtype of the RANGE, if the type of this field is RANGE. If the type is RANGE, this field is required. Possible values for the field element type of a RANGE include: - DATE - DATETIME - TIMESTAMP
- rounding_
mode str - Optional. Rounding Mode specification of the field. It only can be set on NUMERIC or BIGNUMERIC type fields.
- scale str
- [Optional] See documentation for precision.
- type str
- [Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD).
- categories Property Map
- [Optional] The categories attached to this field, used for field-level access control.
- collation String
- Optional. Collation specification of the field. It only can be set on string type field.
- default
Value StringExpression - Optional. A SQL expression to specify the default value for this field. It can only be set for top level fields (columns). You can use struct or array expression to specify default value for the entire struct or array. The valid SQL expressions are: - Literals for all data types, including STRUCT and ARRAY. - Following functions: - CURRENT_TIMESTAMP - CURRENT_TIME - CURRENT_DATE - CURRENT_DATETIME - GENERATE_UUID - RAND - SESSION_USER - ST_GEOGPOINT - Struct or array composed with the above allowed functions, for example, [CURRENT_DATE(), DATE '2020-01-01']
- description String
- [Optional] The field description. The maximum length is 1,024 characters.
- fields List<Property Map>
- [Optional] Describes the nested schema fields if the type property is set to RECORD.
- max
Length String - [Optional] Maximum length of values of this field for STRINGS or BYTES. If max_length is not specified, no maximum length constraint is imposed on this field. If type = "STRING", then max_length represents the maximum UTF-8 length of strings in this field. If type = "BYTES", then max_length represents the maximum number of bytes in this field. It is invalid to set this field if type ≠"STRING" and ≠"BYTES".
- mode String
- [Optional] The field mode. Possible values include NULLABLE, REQUIRED and REPEATED. The default value is NULLABLE.
- name String
- [Required] The field name. The name must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_), and must start with a letter or underscore. The maximum length is 300 characters.
- Property Map
- precision String
- [Optional] Precision (maximum number of total digits in base 10) and scale (maximum number of digits in the fractional part in base 10) constraints for values of this field for NUMERIC or BIGNUMERIC. It is invalid to set precision or scale if type ≠"NUMERIC" and ≠"BIGNUMERIC". If precision and scale are not specified, no value range constraint is imposed on this field insofar as values are permitted by the type. Values of this NUMERIC or BIGNUMERIC field must be in this range when: - Precision (P) and scale (S) are specified: [-10P-S + 10-S, 10P-S - 10-S] - Precision (P) is specified but not scale (and thus scale is interpreted to be equal to zero): [-10P + 1, 10P - 1]. Acceptable values for precision and scale if both are specified: - If type = "NUMERIC": 1 ≤ precision - scale ≤ 29 and 0 ≤ scale ≤ 9. - If type = "BIGNUMERIC": 1 ≤ precision - scale ≤ 38 and 0 ≤ scale ≤ 38. Acceptable values for precision if only precision is specified but not scale (and thus scale is interpreted to be equal to zero): - If type = "NUMERIC": 1 ≤ precision ≤ 29. - If type = "BIGNUMERIC": 1 ≤ precision ≤ 38. If scale is specified but not precision, then it is invalid.
- range
Element Property MapType - Optional. The subtype of the RANGE, if the type of this field is RANGE. If the type is RANGE, this field is required. Possible values for the field element type of a RANGE include: - DATE - DATETIME - TIMESTAMP
- rounding
Mode String - Optional. Rounding Mode specification of the field. It only can be set on NUMERIC or BIGNUMERIC type fields.
- scale String
- [Optional] See documentation for precision.
- type String
- [Required] The field data type. Possible values include STRING, BYTES, INTEGER, INT64 (same as INTEGER), FLOAT, FLOAT64 (same as FLOAT), NUMERIC, BIGNUMERIC, BOOLEAN, BOOL (same as BOOLEAN), TIMESTAMP, DATE, TIME, DATETIME, INTERVAL, RECORD (where RECORD indicates that the field contains a nested schema) or STRUCT (same as RECORD).
TableReference, TableReferenceArgs
- Dataset
Id string - [Required] The ID of the dataset containing this table.
- Project string
- [Required] The ID of the project containing this table.
- Table
Id string - [Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
- Dataset
Id string - [Required] The ID of the dataset containing this table.
- Project string
- [Required] The ID of the project containing this table.
- Table
Id string - [Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
- dataset
Id String - [Required] The ID of the dataset containing this table.
- project String
- [Required] The ID of the project containing this table.
- table
Id String - [Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
- dataset
Id string - [Required] The ID of the dataset containing this table.
- project string
- [Required] The ID of the project containing this table.
- table
Id string - [Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
- dataset_
id str - [Required] The ID of the dataset containing this table.
- project str
- [Required] The ID of the project containing this table.
- table_
id str - [Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
- dataset
Id String - [Required] The ID of the dataset containing this table.
- project String
- [Required] The ID of the project containing this table.
- table
Id String - [Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
TableReferenceResponse, TableReferenceResponseArgs
- Dataset
Id string - [Required] The ID of the dataset containing this table.
- Project string
- [Required] The ID of the project containing this table.
- Table
Id string - [Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
- Dataset
Id string - [Required] The ID of the dataset containing this table.
- Project string
- [Required] The ID of the project containing this table.
- Table
Id string - [Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
- dataset
Id String - [Required] The ID of the dataset containing this table.
- project String
- [Required] The ID of the project containing this table.
- table
Id String - [Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
- dataset
Id string - [Required] The ID of the dataset containing this table.
- project string
- [Required] The ID of the project containing this table.
- table
Id string - [Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
- dataset_
id str - [Required] The ID of the dataset containing this table.
- project str
- [Required] The ID of the project containing this table.
- table_
id str - [Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
- dataset
Id String - [Required] The ID of the dataset containing this table.
- project String
- [Required] The ID of the project containing this table.
- table
Id String - [Required] The ID of the table. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 1,024 characters.
TableSchema, TableSchemaArgs
- Fields
List<Pulumi.
Google Native. Big Query. V2. Inputs. Table Field Schema> - Describes the fields in a table.
- Fields
[]Table
Field Schema - Describes the fields in a table.
- fields
List<Table
Field Schema> - Describes the fields in a table.
- fields
Table
Field Schema[] - Describes the fields in a table.
- fields
Sequence[Table
Field Schema] - Describes the fields in a table.
- fields List<Property Map>
- Describes the fields in a table.
TableSchemaResponse, TableSchemaResponseArgs
- Fields
List<Pulumi.
Google Native. Big Query. V2. Inputs. Table Field Schema Response> - Describes the fields in a table.
- Fields
[]Table
Field Schema Response - Describes the fields in a table.
- fields
List<Table
Field Schema Response> - Describes the fields in a table.
- fields
Table
Field Schema Response[] - Describes the fields in a table.
- fields
Sequence[Table
Field Schema Response] - Describes the fields in a table.
- fields List<Property Map>
- Describes the fields in a table.
TimePartitioning, TimePartitioningArgs
- Expiration
Ms string - [Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.
- Field string
- [Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.
- Require
Partition boolFilter - Type string
- [Required] The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY.
- Expiration
Ms string - [Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.
- Field string
- [Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.
- Require
Partition boolFilter - Type string
- [Required] The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY.
- expiration
Ms String - [Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.
- field String
- [Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.
- require
Partition BooleanFilter - type String
- [Required] The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY.
- expiration
Ms string - [Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.
- field string
- [Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.
- require
Partition booleanFilter - type string
- [Required] The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY.
- expiration_
ms str - [Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.
- field str
- [Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.
- require_partition_filter bool
- type str
- [Required] The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY.
- expirationMs String
- [Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.
- field String
- [Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.
- requirePartitionFilter Boolean
- type String
- [Required] The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY.
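To make the partitioning fields above concrete, here is a minimal Python sketch that wires them together, assuming the pulumi_google_native.bigquery.v2 module layout implied by the constructor shown earlier; the project, dataset, and column names are placeholders, and TableFieldSchemaArgs is assumed to mirror the underlying BigQuery TableFieldSchema type.
import pulumi_google_native.bigquery.v2 as bigquery

# Hypothetical table partitioned by day on a top-level TIMESTAMP column;
# partition storage expires 30 days after the partition time.
partitioned_table = bigquery.Table(
    "partitioned-table",
    dataset_id="my_dataset",                    # placeholder dataset
    project="my-project",                       # placeholder project
    table_reference=bigquery.TableReferenceArgs(
        project="my-project",
        dataset_id="my_dataset",
        table_id="events",
    ),
    schema=bigquery.TableSchemaArgs(
        fields=[
            # assumed field-schema input type; column names are illustrative
            bigquery.TableFieldSchemaArgs(name="event_ts", type="TIMESTAMP", mode="REQUIRED"),
            bigquery.TableFieldSchemaArgs(name="payload", type="STRING"),
        ],
    ),
    time_partitioning=bigquery.TimePartitioningArgs(
        type="DAY",                  # DAY, HOUR, MONTH, or YEAR; defaults to DAY
        field="event_ts",            # top-level TIMESTAMP or DATE column
        expiration_ms="2592000000",  # 30 days expressed in milliseconds, as a string
    ),
    require_partition_filter=True,   # queries must filter on the partition column
)
Omitting field would fall back to partitioning by the _PARTITIONTIME / _PARTITIONDATE pseudo columns described above.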
TimePartitioningResponse, TimePartitioningResponseArgs
- ExpirationMs string
- [Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.
- Field string
- [Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.
- RequirePartitionFilter bool
- Type string
- [Required] The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY.
- ExpirationMs string
- [Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.
- Field string
- [Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.
- RequirePartitionFilter bool
- Type string
- [Required] The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY.
- expirationMs String
- [Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.
- field String
- [Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.
- requirePartitionFilter Boolean
- type String
- [Required] The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY.
- expirationMs string
- [Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.
- field string
- [Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.
- requirePartitionFilter boolean
- type string
- [Required] The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY.
- expiration_ms str
- [Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.
- field str
- [Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.
- require_partition_filter bool
- type str
- [Required] The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY.
- expirationMs String
- [Optional] Number of milliseconds for which to keep the storage for partitions in the table. The storage in a partition will have an expiration time of its partition time plus this value.
- field String
- [Beta] [Optional] If not set, the table is partitioned by pseudo column, referenced via either '_PARTITIONTIME' as TIMESTAMP type, or '_PARTITIONDATE' as DATE type. If field is specified, the table is instead partitioned by this field. The field must be a top-level TIMESTAMP or DATE field. Its mode must be NULLABLE or REQUIRED.
- requirePartitionFilter Boolean
- type String
- [Required] The supported types are DAY, HOUR, MONTH, and YEAR, which will generate one partition per day, hour, month, and year, respectively. When the type is not specified, the default behavior is DAY.
UserDefinedFunctionResource, UserDefinedFunctionResourceArgs
- InlineCode string
- [Pick one] An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.
- ResourceUri string
- [Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
- InlineCode string
- [Pick one] An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.
- ResourceUri string
- [Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
- inlineCode String
- [Pick one] An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.
- resourceUri String
- [Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
- inlineCode string
- [Pick one] An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.
- resourceUri string
- [Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
- inline_code str
- [Pick one] An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.
- resource_uri str
- [Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
- inlineCode String
- [Pick one] An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.
- resourceUri String
- [Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
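Because the two fields above are alternatives (pick one per entry), the hypothetical Python snippet below shows both forms of a UserDefinedFunctionResourceArgs value as they might be passed to a view's user_defined_function_resources list; the JavaScript body and the Cloud Storage path are placeholders.
import pulumi_google_native.bigquery.v2 as bigquery

# Option 1: embed the UDF code directly in the resource definition.
inline_udf = bigquery.UserDefinedFunctionResourceArgs(
    inline_code="function doubleIt(x) { return x * 2; }",
)

# Option 2: reference a code file stored in Cloud Storage instead (pick one per entry).
stored_udf = bigquery.UserDefinedFunctionResourceArgs(
    resource_uri="gs://my-bucket/udfs/double_it.js",  # placeholder bucket/path
)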
UserDefinedFunctionResourceResponse, UserDefinedFunctionResourceResponseArgs
- InlineCode string
- [Pick one] An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.
- ResourceUri string
- [Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
- InlineCode string
- [Pick one] An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.
- ResourceUri string
- [Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
- inlineCode String
- [Pick one] An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.
- resourceUri String
- [Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
- inlineCode string
- [Pick one] An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.
- resourceUri string
- [Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
- inline_code str
- [Pick one] An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.
- resource_uri str
- [Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
- inlineCode String
- [Pick one] An inline resource that contains code for a user-defined function (UDF). Providing an inline code resource is equivalent to providing a URI for a file containing the same code.
- resourceUri String
- [Pick one] A code resource to load from a Google Cloud Storage URI (gs://bucket/path).
ViewDefinition, ViewDefinitionArgs
- Query string
- [Required] A query that BigQuery executes when the view is referenced.
- UseExplicitColumnNames bool
- True if the column names are explicitly specified. For example by using the 'CREATE VIEW v(c1, c2) AS ...' syntax. Can only be set using BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/
- UseLegacySql bool
- Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ Queries and views that reference this view must use the same flag value.
- UserDefinedFunctionResources List<Pulumi.GoogleNative.BigQuery.V2.Inputs.UserDefinedFunctionResource>
- Describes user-defined function resources used in the query.
- Query string
- [Required] A query that BigQuery executes when the view is referenced.
- UseExplicitColumnNames bool
- True if the column names are explicitly specified. For example by using the 'CREATE VIEW v(c1, c2) AS ...' syntax. Can only be set using BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/
- UseLegacySql bool
- Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ Queries and views that reference this view must use the same flag value.
- UserDefinedFunctionResources []UserDefinedFunctionResource
- Describes user-defined function resources used in the query.
- query String
- [Required] A query that BigQuery executes when the view is referenced.
- useExplicitColumnNames Boolean
- True if the column names are explicitly specified. For example by using the 'CREATE VIEW v(c1, c2) AS ...' syntax. Can only be set using BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/
- useLegacySql Boolean
- Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ Queries and views that reference this view must use the same flag value.
- userDefinedFunctionResources List<UserDefinedFunctionResource>
- Describes user-defined function resources used in the query.
- query string
- [Required] A query that BigQuery executes when the view is referenced.
- useExplicitColumnNames boolean
- True if the column names are explicitly specified. For example by using the 'CREATE VIEW v(c1, c2) AS ...' syntax. Can only be set using BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/
- useLegacySql boolean
- Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ Queries and views that reference this view must use the same flag value.
- userDefinedFunctionResources UserDefinedFunctionResource[]
- Describes user-defined function resources used in the query.
- query str
- [Required] A query that BigQuery executes when the view is referenced.
- use_explicit_column_names bool
- True if the column names are explicitly specified. For example by using the 'CREATE VIEW v(c1, c2) AS ...' syntax. Can only be set using BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/
- use_legacy_sql bool
- Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ Queries and views that reference this view must use the same flag value.
- user_defined_function_resources Sequence[UserDefinedFunctionResource]
- Describes user-defined function resources used in the query.
- query String
- [Required] A query that BigQuery executes when the view is referenced.
- useExplicitColumnNames Boolean
- True if the column names are explicitly specified. For example by using the 'CREATE VIEW v(c1, c2) AS ...' syntax. Can only be set using BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/
- useLegacySql Boolean
- Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ Queries and views that reference this view must use the same flag value.
- userDefinedFunctionResources List<Property Map>
- Describes user-defined function resources used in the query.
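As a usage sketch for the view fields above, the hypothetical Python example below defines a standard-SQL logical view; the project, dataset, and table names are placeholders, and use_legacy_sql is set to False so that queries referencing the view must also use standard SQL.
import pulumi_google_native.bigquery.v2 as bigquery

# A logical view over an existing table, defined with standard SQL.
recent_events_view = bigquery.Table(
    "recent-events-view",
    dataset_id="my_dataset",                    # placeholder dataset
    project="my-project",                       # placeholder project
    table_reference=bigquery.TableReferenceArgs(
        project="my-project",
        dataset_id="my_dataset",
        table_id="recent_events",
    ),
    view=bigquery.ViewDefinitionArgs(
        query=(
            "SELECT event_ts, payload "
            "FROM `my-project.my_dataset.events` "
            "WHERE event_ts > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)"
        ),
        use_legacy_sql=False,  # standard SQL; referencing queries must use the same flag value
    ),
)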
ViewDefinitionResponse, ViewDefinitionResponseArgs
- Query string
- [Required] A query that BigQuery executes when the view is referenced.
- UseExplicitColumnNames bool
- True if the column names are explicitly specified. For example by using the 'CREATE VIEW v(c1, c2) AS ...' syntax. Can only be set using BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/
- UseLegacySql bool
- Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ Queries and views that reference this view must use the same flag value.
- UserDefinedFunctionResources List<Pulumi.GoogleNative.BigQuery.V2.Inputs.UserDefinedFunctionResourceResponse>
- Describes user-defined function resources used in the query.
- Query string
- [Required] A query that BigQuery executes when the view is referenced.
- UseExplicitColumnNames bool
- True if the column names are explicitly specified. For example by using the 'CREATE VIEW v(c1, c2) AS ...' syntax. Can only be set using BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/
- UseLegacySql bool
- Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ Queries and views that reference this view must use the same flag value.
- UserDefinedFunctionResources []UserDefinedFunctionResourceResponse
- Describes user-defined function resources used in the query.
- query String
- [Required] A query that BigQuery executes when the view is referenced.
- useExplicitColumnNames Boolean
- True if the column names are explicitly specified. For example by using the 'CREATE VIEW v(c1, c2) AS ...' syntax. Can only be set using BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/
- useLegacySql Boolean
- Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ Queries and views that reference this view must use the same flag value.
- userDefinedFunctionResources List<UserDefinedFunctionResourceResponse>
- Describes user-defined function resources used in the query.
- query string
- [Required] A query that BigQuery executes when the view is referenced.
- useExplicitColumnNames boolean
- True if the column names are explicitly specified. For example by using the 'CREATE VIEW v(c1, c2) AS ...' syntax. Can only be set using BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/
- useLegacySql boolean
- Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ Queries and views that reference this view must use the same flag value.
- userDefinedFunctionResources UserDefinedFunctionResourceResponse[]
- Describes user-defined function resources used in the query.
- query str
- [Required] A query that BigQuery executes when the view is referenced.
- use_explicit_column_names bool
- True if the column names are explicitly specified. For example by using the 'CREATE VIEW v(c1, c2) AS ...' syntax. Can only be set using BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/
- use_legacy_sql bool
- Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ Queries and views that reference this view must use the same flag value.
- user_defined_function_resources Sequence[UserDefinedFunctionResourceResponse]
- Describes user-defined function resources used in the query.
- query String
- [Required] A query that BigQuery executes when the view is referenced.
- useExplicitColumnNames Boolean
- True if the column names are explicitly specified. For example by using the 'CREATE VIEW v(c1, c2) AS ...' syntax. Can only be set using BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/
- useLegacySql Boolean
- Specifies whether to use BigQuery's legacy SQL for this view. The default value is true. If set to false, the view will use BigQuery's standard SQL: https://cloud.google.com/bigquery/sql-reference/ Queries and views that reference this view must use the same flag value.
- userDefinedFunctionResources List<Property Map>
- Describes user-defined function resources used in the query.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0