Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.datalabeling/v1beta1.EvaluationJob
Creates an evaluation job. Auto-naming is currently not supported for this resource.
Create EvaluationJob Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new EvaluationJob(name: string, args: EvaluationJobArgs, opts?: CustomResourceOptions);
@overload
def EvaluationJob(resource_name: str,
                  args: EvaluationJobArgs,
                  opts: Optional[ResourceOptions] = None)
@overload
def EvaluationJob(resource_name: str,
                  opts: Optional[ResourceOptions] = None,
                  annotation_spec_set: Optional[str] = None,
                  description: Optional[str] = None,
                  evaluation_job_config: Optional[GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs] = None,
                  label_missing_ground_truth: Optional[bool] = None,
                  model_version: Optional[str] = None,
                  schedule: Optional[str] = None,
                  project: Optional[str] = None)
func NewEvaluationJob(ctx *Context, name string, args EvaluationJobArgs, opts ...ResourceOption) (*EvaluationJob, error)
public EvaluationJob(string name, EvaluationJobArgs args, CustomResourceOptions? opts = null)
public EvaluationJob(String name, EvaluationJobArgs args)
public EvaluationJob(String name, EvaluationJobArgs args, CustomResourceOptions options)
type: google-native:datalabeling/v1beta1:EvaluationJob
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args EvaluationJobArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args EvaluationJobArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args EvaluationJobArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args EvaluationJobArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args EvaluationJobArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var evaluationJobResource = new GoogleNative.DataLabeling.V1Beta1.EvaluationJob("evaluationJobResource", new()
{
AnnotationSpecSet = "string",
Description = "string",
EvaluationJobConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs
{
BigqueryImportKeys =
{
{ "string", "string" },
},
EvaluationConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationConfigArgs
{
BoundingBoxEvaluationOptions = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsArgs
{
IouThreshold = 0,
},
},
ExampleCount = 0,
ExampleSamplePercentage = 0,
BoundingPolyConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BoundingPolyConfigArgs
{
AnnotationSpecSet = "string",
InstructionMessage = "string",
},
EvaluationJobAlertConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigArgs
{
Email = "string",
MinAcceptableMeanAveragePrecision = 0,
},
HumanAnnotationConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1HumanAnnotationConfigArgs
{
AnnotatedDatasetDisplayName = "string",
Instruction = "string",
AnnotatedDatasetDescription = "string",
ContributorEmails = new[]
{
"string",
},
LabelGroup = "string",
LanguageCode = "string",
QuestionDuration = "string",
ReplicaCount = 0,
UserEmailAddress = "string",
},
ImageClassificationConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ImageClassificationConfigArgs
{
AnnotationSpecSet = "string",
AllowMultiLabel = false,
AnswerAggregationType = GoogleNative.DataLabeling.V1Beta1.GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType.StringAggregationTypeUnspecified,
},
InputConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1InputConfigArgs
{
DataType = GoogleNative.DataLabeling.V1Beta1.GoogleCloudDatalabelingV1beta1InputConfigDataType.DataTypeUnspecified,
AnnotationType = GoogleNative.DataLabeling.V1Beta1.GoogleCloudDatalabelingV1beta1InputConfigAnnotationType.AnnotationTypeUnspecified,
BigquerySource = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BigQuerySourceArgs
{
InputUri = "string",
},
ClassificationMetadata = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ClassificationMetadataArgs
{
IsMultiLabel = false,
},
GcsSource = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1GcsSourceArgs
{
InputUri = "string",
MimeType = "string",
},
TextMetadata = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextMetadataArgs
{
LanguageCode = "string",
},
},
TextClassificationConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextClassificationConfigArgs
{
AnnotationSpecSet = "string",
AllowMultiLabel = false,
SentimentConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1SentimentConfigArgs
{
EnableLabelSentimentSelection = false,
},
},
},
LabelMissingGroundTruth = false,
ModelVersion = "string",
Schedule = "string",
Project = "string",
});
example, err := datalabeling.NewEvaluationJob(ctx, "evaluationJobResource", &datalabeling.EvaluationJobArgs{
AnnotationSpecSet: pulumi.String("string"),
Description: pulumi.String("string"),
EvaluationJobConfig: &datalabeling.GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs{
BigqueryImportKeys: pulumi.StringMap{
"string": pulumi.String("string"),
},
EvaluationConfig: &datalabeling.GoogleCloudDatalabelingV1beta1EvaluationConfigArgs{
BoundingBoxEvaluationOptions: &datalabeling.GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsArgs{
IouThreshold: pulumi.Float64(0),
},
},
ExampleCount: pulumi.Int(0),
ExampleSamplePercentage: pulumi.Float64(0),
BoundingPolyConfig: &datalabeling.GoogleCloudDatalabelingV1beta1BoundingPolyConfigArgs{
AnnotationSpecSet: pulumi.String("string"),
InstructionMessage: pulumi.String("string"),
},
EvaluationJobAlertConfig: &datalabeling.GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigArgs{
Email: pulumi.String("string"),
MinAcceptableMeanAveragePrecision: pulumi.Float64(0),
},
HumanAnnotationConfig: &datalabeling.GoogleCloudDatalabelingV1beta1HumanAnnotationConfigArgs{
AnnotatedDatasetDisplayName: pulumi.String("string"),
Instruction: pulumi.String("string"),
AnnotatedDatasetDescription: pulumi.String("string"),
ContributorEmails: pulumi.StringArray{
pulumi.String("string"),
},
LabelGroup: pulumi.String("string"),
LanguageCode: pulumi.String("string"),
QuestionDuration: pulumi.String("string"),
ReplicaCount: pulumi.Int(0),
UserEmailAddress: pulumi.String("string"),
},
ImageClassificationConfig: &datalabeling.GoogleCloudDatalabelingV1beta1ImageClassificationConfigArgs{
AnnotationSpecSet: pulumi.String("string"),
AllowMultiLabel: pulumi.Bool(false),
AnswerAggregationType: datalabeling.GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationTypeStringAggregationTypeUnspecified,
},
InputConfig: &datalabeling.GoogleCloudDatalabelingV1beta1InputConfigArgs{
DataType: datalabeling.GoogleCloudDatalabelingV1beta1InputConfigDataTypeDataTypeUnspecified,
AnnotationType: datalabeling.GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeAnnotationTypeUnspecified,
BigquerySource: &datalabeling.GoogleCloudDatalabelingV1beta1BigQuerySourceArgs{
InputUri: pulumi.String("string"),
},
ClassificationMetadata: &datalabeling.GoogleCloudDatalabelingV1beta1ClassificationMetadataArgs{
IsMultiLabel: pulumi.Bool(false),
},
GcsSource: &datalabeling.GoogleCloudDatalabelingV1beta1GcsSourceArgs{
InputUri: pulumi.String("string"),
MimeType: pulumi.String("string"),
},
TextMetadata: &datalabeling.GoogleCloudDatalabelingV1beta1TextMetadataArgs{
LanguageCode: pulumi.String("string"),
},
},
TextClassificationConfig: &datalabeling.GoogleCloudDatalabelingV1beta1TextClassificationConfigArgs{
AnnotationSpecSet: pulumi.String("string"),
AllowMultiLabel: pulumi.Bool(false),
SentimentConfig: &datalabeling.GoogleCloudDatalabelingV1beta1SentimentConfigArgs{
EnableLabelSentimentSelection: pulumi.Bool(false),
},
},
},
LabelMissingGroundTruth: pulumi.Bool(false),
ModelVersion: pulumi.String("string"),
Schedule: pulumi.String("string"),
Project: pulumi.String("string"),
})
var evaluationJobResource = new EvaluationJob("evaluationJobResource", EvaluationJobArgs.builder()
.annotationSpecSet("string")
.description("string")
.evaluationJobConfig(GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs.builder()
.bigqueryImportKeys(Map.of("string", "string"))
.evaluationConfig(GoogleCloudDatalabelingV1beta1EvaluationConfigArgs.builder()
.boundingBoxEvaluationOptions(GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsArgs.builder()
.iouThreshold(0)
.build())
.build())
.exampleCount(0)
.exampleSamplePercentage(0)
.boundingPolyConfig(GoogleCloudDatalabelingV1beta1BoundingPolyConfigArgs.builder()
.annotationSpecSet("string")
.instructionMessage("string")
.build())
.evaluationJobAlertConfig(GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigArgs.builder()
.email("string")
.minAcceptableMeanAveragePrecision(0)
.build())
.humanAnnotationConfig(GoogleCloudDatalabelingV1beta1HumanAnnotationConfigArgs.builder()
.annotatedDatasetDisplayName("string")
.instruction("string")
.annotatedDatasetDescription("string")
.contributorEmails("string")
.labelGroup("string")
.languageCode("string")
.questionDuration("string")
.replicaCount(0)
.userEmailAddress("string")
.build())
.imageClassificationConfig(GoogleCloudDatalabelingV1beta1ImageClassificationConfigArgs.builder()
.annotationSpecSet("string")
.allowMultiLabel(false)
.answerAggregationType("STRING_AGGREGATION_TYPE_UNSPECIFIED")
.build())
.inputConfig(GoogleCloudDatalabelingV1beta1InputConfigArgs.builder()
.dataType("DATA_TYPE_UNSPECIFIED")
.annotationType("ANNOTATION_TYPE_UNSPECIFIED")
.bigquerySource(GoogleCloudDatalabelingV1beta1BigQuerySourceArgs.builder()
.inputUri("string")
.build())
.classificationMetadata(GoogleCloudDatalabelingV1beta1ClassificationMetadataArgs.builder()
.isMultiLabel(false)
.build())
.gcsSource(GoogleCloudDatalabelingV1beta1GcsSourceArgs.builder()
.inputUri("string")
.mimeType("string")
.build())
.textMetadata(GoogleCloudDatalabelingV1beta1TextMetadataArgs.builder()
.languageCode("string")
.build())
.build())
.textClassificationConfig(GoogleCloudDatalabelingV1beta1TextClassificationConfigArgs.builder()
.annotationSpecSet("string")
.allowMultiLabel(false)
.sentimentConfig(GoogleCloudDatalabelingV1beta1SentimentConfigArgs.builder()
.enableLabelSentimentSelection(false)
.build())
.build())
.build())
.labelMissingGroundTruth(false)
.modelVersion("string")
.schedule("string")
.project("string")
.build());
evaluation_job_resource = google_native.datalabeling.v1beta1.EvaluationJob("evaluationJobResource",
annotation_spec_set="string",
description="string",
evaluation_job_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs(
bigquery_import_keys={
"string": "string",
},
evaluation_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1EvaluationConfigArgs(
bounding_box_evaluation_options=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsArgs(
iou_threshold=0,
),
),
example_count=0,
example_sample_percentage=0,
bounding_poly_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1BoundingPolyConfigArgs(
annotation_spec_set="string",
instruction_message="string",
),
evaluation_job_alert_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigArgs(
email="string",
min_acceptable_mean_average_precision=0,
),
human_annotation_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1HumanAnnotationConfigArgs(
annotated_dataset_display_name="string",
instruction="string",
annotated_dataset_description="string",
contributor_emails=["string"],
label_group="string",
language_code="string",
question_duration="string",
replica_count=0,
user_email_address="string",
),
image_classification_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1ImageClassificationConfigArgs(
annotation_spec_set="string",
allow_multi_label=False,
answer_aggregation_type=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType.STRING_AGGREGATION_TYPE_UNSPECIFIED,
),
input_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1InputConfigArgs(
data_type=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1InputConfigDataType.DATA_TYPE_UNSPECIFIED,
annotation_type=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1InputConfigAnnotationType.ANNOTATION_TYPE_UNSPECIFIED,
bigquery_source=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1BigQuerySourceArgs(
input_uri="string",
),
classification_metadata=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1ClassificationMetadataArgs(
is_multi_label=False,
),
gcs_source=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1GcsSourceArgs(
input_uri="string",
mime_type="string",
),
text_metadata=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1TextMetadataArgs(
language_code="string",
),
),
text_classification_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1TextClassificationConfigArgs(
annotation_spec_set="string",
allow_multi_label=False,
sentiment_config=google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1SentimentConfigArgs(
enable_label_sentiment_selection=False,
),
),
),
label_missing_ground_truth=False,
model_version="string",
schedule="string",
project="string")
const evaluationJobResource = new google_native.datalabeling.v1beta1.EvaluationJob("evaluationJobResource", {
annotationSpecSet: "string",
description: "string",
evaluationJobConfig: {
bigqueryImportKeys: {
string: "string",
},
evaluationConfig: {
boundingBoxEvaluationOptions: {
iouThreshold: 0,
},
},
exampleCount: 0,
exampleSamplePercentage: 0,
boundingPolyConfig: {
annotationSpecSet: "string",
instructionMessage: "string",
},
evaluationJobAlertConfig: {
email: "string",
minAcceptableMeanAveragePrecision: 0,
},
humanAnnotationConfig: {
annotatedDatasetDisplayName: "string",
instruction: "string",
annotatedDatasetDescription: "string",
contributorEmails: ["string"],
labelGroup: "string",
languageCode: "string",
questionDuration: "string",
replicaCount: 0,
userEmailAddress: "string",
},
imageClassificationConfig: {
annotationSpecSet: "string",
allowMultiLabel: false,
answerAggregationType: google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType.StringAggregationTypeUnspecified,
},
inputConfig: {
dataType: google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1InputConfigDataType.DataTypeUnspecified,
annotationType: google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1InputConfigAnnotationType.AnnotationTypeUnspecified,
bigquerySource: {
inputUri: "string",
},
classificationMetadata: {
isMultiLabel: false,
},
gcsSource: {
inputUri: "string",
mimeType: "string",
},
textMetadata: {
languageCode: "string",
},
},
textClassificationConfig: {
annotationSpecSet: "string",
allowMultiLabel: false,
sentimentConfig: {
enableLabelSentimentSelection: false,
},
},
},
labelMissingGroundTruth: false,
modelVersion: "string",
schedule: "string",
project: "string",
});
type: google-native:datalabeling/v1beta1:EvaluationJob
properties:
  annotationSpecSet: string
  description: string
  evaluationJobConfig:
    bigqueryImportKeys:
      string: string
    boundingPolyConfig:
      annotationSpecSet: string
      instructionMessage: string
    evaluationConfig:
      boundingBoxEvaluationOptions:
        iouThreshold: 0
    evaluationJobAlertConfig:
      email: string
      minAcceptableMeanAveragePrecision: 0
    exampleCount: 0
    exampleSamplePercentage: 0
    humanAnnotationConfig:
      annotatedDatasetDescription: string
      annotatedDatasetDisplayName: string
      contributorEmails:
        - string
      instruction: string
      labelGroup: string
      languageCode: string
      questionDuration: string
      replicaCount: 0
      userEmailAddress: string
    imageClassificationConfig:
      allowMultiLabel: false
      annotationSpecSet: string
      answerAggregationType: STRING_AGGREGATION_TYPE_UNSPECIFIED
    inputConfig:
      annotationType: ANNOTATION_TYPE_UNSPECIFIED
      bigquerySource:
        inputUri: string
      classificationMetadata:
        isMultiLabel: false
      dataType: DATA_TYPE_UNSPECIFIED
      gcsSource:
        inputUri: string
        mimeType: string
      textMetadata:
        languageCode: string
    textClassificationConfig:
      allowMultiLabel: false
      annotationSpecSet: string
      sentimentConfig:
        enableLabelSentimentSelection: false
  labelMissingGroundTruth: false
  modelVersion: string
  project: string
  schedule: string
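The reference example above lists every field with placeholder values; in practice you only set the fields that match your model type. The following TypeScript sketch, using hypothetical project, model, annotation spec set, and BigQuery table names, shows roughly how the pieces fit together for a single-label text-classification model. The Text and TextClassificationAnnotation enum members and the "every 24 hours" schedule string are assumptions inferred from the value lists documented below, not values taken from this reference.
import * as google_native from "@pulumi/google-native";

// All resource names below are hypothetical placeholders.
const sentimentEval = new google_native.datalabeling.v1beta1.EvaluationJob("sentimentEval", {
    description: "Nightly evaluation of the sentiment model",
    annotationSpecSet: "projects/my-project/annotationSpecSets/sentiment-labels",
    modelVersion: "projects/my-project/models/sentiment/versions/v3",
    schedule: "every 24 hours", // only the interval is used; the job always runs at 10:00 AM UTC
    labelMissingGroundTruth: false, // ground truth is supplied in the BigQuery table
    evaluationJobConfig: {
        inputConfig: {
            dataType: google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1InputConfigDataType.Text,
            annotationType: google_native.datalabeling.v1beta1.GoogleCloudDatalabelingV1beta1InputConfigAnnotationType.TextClassificationAnnotation,
            classificationMetadata: { isMultiLabel: false },
            bigquerySource: { inputUri: "bq://my-project/eval_dataset/predictions" },
        },
        textClassificationConfig: {
            annotationSpecSet: "projects/my-project/annotationSpecSets/sentiment-labels",
            allowMultiLabel: false, // must match classificationMetadata.isMultiLabel above
        },
        evaluationConfig: {}, // empty unless the model performs image object detection
        bigqueryImportKeys: {
            data_json_key: "text",
            label_json_key: "predicted_label",
            label_score_json_key: "predicted_label_score",
        },
        exampleSamplePercentage: 0.1, // sample 10% of predictions each interval
        exampleCount: 1000, // but never more than 1,000 per interval
    },
});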
EvaluationJob Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The EvaluationJob resource accepts the following input properties:
- AnnotationSpecSet string
- Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
- Description string
- Description of the job. The description can be up to 25,000 characters long.
- EvaluationJobConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationJobConfig
- Configuration details for the evaluation job.
- LabelMissingGroundTruth bool
- Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
- ModelVersion string
- The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
- Schedule string
- Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
- Project string
- AnnotationSpecSet string
- Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
- Description string
- Description of the job. The description can be up to 25,000 characters long.
- EvaluationJobConfig GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs
- Configuration details for the evaluation job.
- LabelMissingGroundTruth bool
- Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
- ModelVersion string
- The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
- Schedule string
- Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
- Project string
- annotationSpecSet String
- Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
- description String
- Description of the job. The description can be up to 25,000 characters long.
- evaluationJobConfig GoogleCloudDatalabelingV1beta1EvaluationJobConfig
- Configuration details for the evaluation job.
- labelMissingGroundTruth Boolean
- Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
- modelVersion String
- The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
- schedule String
- Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
- project String
- annotationSpecSet string
- Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
- description string
- Description of the job. The description can be up to 25,000 characters long.
- evaluationJobConfig GoogleCloudDatalabelingV1beta1EvaluationJobConfig
- Configuration details for the evaluation job.
- labelMissingGroundTruth boolean
- Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
- modelVersion string
- The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
- schedule string
- Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
- project string
- annotation_spec_set str
- Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
- description str
- Description of the job. The description can be up to 25,000 characters long.
- evaluation_job_config GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs
- Configuration details for the evaluation job.
- label_missing_ground_truth bool
- Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
- model_version str
- The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
- schedule str
- Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
- project str
- annotationSpecSet String
- Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
- description String
- Description of the job. The description can be up to 25,000 characters long.
- evaluationJobConfig Property Map
- Configuration details for the evaluation job.
- labelMissingGroundTruth Boolean
- Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
- modelVersion String
- The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
- schedule String
- Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
- project String
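Taken together, the three formatted string inputs look something like this in TypeScript. The project and resource identifiers are hypothetical; only the string formats come from the descriptions above, and the "every 24 hours" schedule value is an assumed example of the English-like format.
// Hypothetical identifiers, shown only to illustrate the documented formats.
const projectId = "my-project";
const evaluationJobInputs = {
    annotationSpecSet: `projects/${projectId}/annotationSpecSets/my-label-set`,
    modelVersion: `projects/${projectId}/models/my-model/versions/v1`,
    schedule: "every 24 hours", // at least 1 day; rounded to the nearest day, runs at 10:00 AM UTC
    labelMissingGroundTruth: false,
};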
Outputs
All input properties are implicitly available as output properties. Additionally, the EvaluationJob resource produces the following output properties:
- Attempts List<Pulumi.GoogleNative.DataLabeling.V1Beta1.Outputs.GoogleCloudDatalabelingV1beta1AttemptResponse>
- Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
- CreateTime string
- Timestamp of when this evaluation job was created.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/ {evaluation_job_id}"
- State string
- Describes the current state of the job.
- Attempts []GoogleCloudDatalabelingV1beta1AttemptResponse
- Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
- CreateTime string
- Timestamp of when this evaluation job was created.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/ {evaluation_job_id}"
- State string
- Describes the current state of the job.
- attempts List<GoogleCloudDatalabelingV1beta1AttemptResponse>
- Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
- createTime String
- Timestamp of when this evaluation job was created.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/ {evaluation_job_id}"
- state String
- Describes the current state of the job.
- attempts GoogleCloudDatalabelingV1beta1AttemptResponse[]
- Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
- createTime string
- Timestamp of when this evaluation job was created.
- id string
- The provider-assigned unique ID for this managed resource.
- name string
- After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/ {evaluation_job_id}"
- state string
- Describes the current state of the job.
- attempts Sequence[GoogleCloudDatalabelingV1beta1AttemptResponse]
- Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
- create_time str
- Timestamp of when this evaluation job was created.
- id str
- The provider-assigned unique ID for this managed resource.
- name str
- After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/ {evaluation_job_id}"
- state str
- Describes the current state of the job.
- attempts List<Property Map>
- Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
- createTime String
- Timestamp of when this evaluation job was created.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/ {evaluation_job_id}"
- state String
- Describes the current state of the job.
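Once the resource has been created, the service-assigned name and the job state can be read back from these outputs. A minimal TypeScript sketch, reusing the evaluationJobResource variable from the Node.js example above:
// "name" follows the "projects/{project_id}/evaluationJobs/{evaluation_job_id}" format.
export const evaluationJobName = evaluationJobResource.name;
export const evaluationJobState = evaluationJobResource.state;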
Supporting Types
GoogleCloudDatalabelingV1beta1AttemptResponse, GoogleCloudDatalabelingV1beta1AttemptResponseArgs
- AttemptTime string
- PartialFailures List<Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleRpcStatusResponse>
- Details of errors that occurred.
- AttemptTime string
- PartialFailures []GoogleRpcStatusResponse
- Details of errors that occurred.
- attemptTime String
- partialFailures List<GoogleRpcStatusResponse>
- Details of errors that occurred.
- attemptTime string
- partialFailures GoogleRpcStatusResponse[]
- Details of errors that occurred.
- attempt_time str
- partial_failures Sequence[GoogleRpcStatusResponse]
- Details of errors that occurred.
- attemptTime String
- partialFailures List<Property Map>
- Details of errors that occurred.
GoogleCloudDatalabelingV1beta1BigQuerySource, GoogleCloudDatalabelingV1beta1BigQuerySourceArgs
- InputUri string
- BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
- InputUri string
- BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
- inputUri String
- BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
- inputUri string
- BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
- input_uri str
- BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
- inputUri String
- BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
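For illustration, a BigQuery source argument might look like the following TypeScript snippet; the project, dataset, and table names are hypothetical.
// If the table does not exist, Data Labeling Service creates it with the correct schema.
const bigquerySource = {
    inputUri: "bq://my-project/my_dataset/model_predictions",
};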
GoogleCloudDatalabelingV1beta1BigQuerySourceResponse, GoogleCloudDatalabelingV1beta1BigQuerySourceResponseArgs
- InputUri string
- BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
- InputUri string
- BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
- inputUri String
- BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
- inputUri string
- BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
- input_uri str
- BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
- inputUri String
- BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}" Learn more.
GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptions, GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsArgs
- IouThreshold double
- Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
- IouThreshold float64
- Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
- iouThreshold Double
- Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
- iouThreshold number
- Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
- iou_threshold float
- Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
- iouThreshold Number
- Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
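As a sketch, requiring at least 50% overlap between a predicted and a ground-truth box would look like this in TypeScript; 0.5 is an arbitrary example value, not a recommended default.
const boundingBoxEvaluationOptions = {
    iouThreshold: 0.5, // boxes with IOU below 0.5 are not counted as a match
};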
GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse, GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponseArgs
- IouThreshold double
- Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
- IouThreshold float64
- Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
- iouThreshold Double
- Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
- iouThreshold number
- Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
- iou_threshold float
- Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
- iouThreshold Number
- Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
GoogleCloudDatalabelingV1beta1BoundingPolyConfig, GoogleCloudDatalabelingV1beta1BoundingPolyConfigArgs
- AnnotationSpecSet string
- Annotation spec set resource name.
- InstructionMessage string
- Optional. Instruction message shown on the contributors UI.
- AnnotationSpecSet string
- Annotation spec set resource name.
- InstructionMessage string
- Optional. Instruction message shown on the contributors UI.
- annotationSpecSet String
- Annotation spec set resource name.
- instructionMessage String
- Optional. Instruction message shown on the contributors UI.
- annotationSpecSet string
- Annotation spec set resource name.
- instructionMessage string
- Optional. Instruction message shown on the contributors UI.
- annotation_spec_set str
- Annotation spec set resource name.
- instruction_message str
- Optional. Instruction message shown on the contributors UI.
- annotationSpecSet String
- Annotation spec set resource name.
- instructionMessage String
- Optional. Instruction message shown on the contributors UI.
GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse, GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponseArgs
- AnnotationSpecSet string
- Annotation spec set resource name.
- InstructionMessage string
- Optional. Instruction message shown on the contributors UI.
- AnnotationSpecSet string
- Annotation spec set resource name.
- InstructionMessage string
- Optional. Instruction message shown on the contributors UI.
- annotationSpecSet String
- Annotation spec set resource name.
- instructionMessage String
- Optional. Instruction message shown on the contributors UI.
- annotationSpecSet string
- Annotation spec set resource name.
- instructionMessage string
- Optional. Instruction message shown on the contributors UI.
- annotation_spec_set str
- Annotation spec set resource name.
- instruction_message str
- Optional. Instruction message shown on the contributors UI.
- annotationSpecSet String
- Annotation spec set resource name.
- instructionMessage String
- Optional. Instruction message shown on the contributors UI.
GoogleCloudDatalabelingV1beta1ClassificationMetadata, GoogleCloudDatalabelingV1beta1ClassificationMetadataArgs
- IsMultiLabel bool
- Whether the classification task is multi-label or not.
- IsMultiLabel bool
- Whether the classification task is multi-label or not.
- isMultiLabel Boolean
- Whether the classification task is multi-label or not.
- isMultiLabel boolean
- Whether the classification task is multi-label or not.
- is_multi_label bool
- Whether the classification task is multi-label or not.
- isMultiLabel Boolean
- Whether the classification task is multi-label or not.
GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse, GoogleCloudDatalabelingV1beta1ClassificationMetadataResponseArgs
- IsMultiLabel bool
- Whether the classification task is multi-label or not.
- IsMultiLabel bool
- Whether the classification task is multi-label or not.
- isMultiLabel Boolean
- Whether the classification task is multi-label or not.
- isMultiLabel boolean
- Whether the classification task is multi-label or not.
- is_multi_label bool
- Whether the classification task is multi-label or not.
- isMultiLabel Boolean
- Whether the classification task is multi-label or not.
GoogleCloudDatalabelingV1beta1EvaluationConfig, GoogleCloudDatalabelingV1beta1EvaluationConfigArgs
- BoundingBoxEvaluationOptions Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptions
- Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
- BoundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptions
- Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
- boundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptions
- Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
- boundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptions
- Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
- bounding_box_evaluation_options GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptions
- Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
- boundingBoxEvaluationOptions Property Map
- Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
GoogleCloudDatalabelingV1beta1EvaluationConfigResponse, GoogleCloudDatalabelingV1beta1EvaluationConfigResponseArgs
- BoundingBoxEvaluationOptions Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse
- Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
- BoundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse
- Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
- boundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse
- Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
- boundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse
- Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
- bounding_box_evaluation_options GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse
- Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
- boundingBoxEvaluationOptions Property Map
- Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfig, GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigArgs
- Email string
- An email address to send alerts to.
- MinAcceptableMeanAveragePrecision double
- A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
- Email string
- An email address to send alerts to.
- MinAcceptableMeanAveragePrecision float64
- A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
- email String
- An email address to send alerts to.
- minAcceptableMeanAveragePrecision Double
- A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
- email string
- An email address to send alerts to.
- minAcceptableMeanAveragePrecision number
- A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
- email str
- An email address to send alerts to.
- min_acceptable_mean_average_precision float
- A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
- email String
- An email address to send alerts to.
- minAcceptableMeanAveragePrecision Number
- A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
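For example, an alert configuration that emails a hypothetical address whenever meanAveragePrecision for an interval drops below 0.8 could be written in TypeScript as:
const evaluationJobAlertConfig = {
    email: "ml-alerts@example.com", // hypothetical recipient
    minAcceptableMeanAveragePrecision: 0.8,
};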
GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse, GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponseArgs
- Email string
- An email address to send alerts to.
- MinAcceptableMeanAveragePrecision double
- A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
- Email string
- An email address to send alerts to.
- MinAcceptableMeanAveragePrecision float64
- A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
- email String
- An email address to send alerts to.
- minAcceptableMeanAveragePrecision Double
- A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
- email string
- An email address to send alerts to.
- minAcceptableMeanAveragePrecision number
- A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
- email str
- An email address to send alerts to.
- min_acceptable_mean_average_precision float
- A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
- email String
- An email address to send alerts to.
- minAcceptableMeanAveragePrecision Number
- A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
GoogleCloudDatalabelingV1beta1EvaluationJobConfig, GoogleCloudDatalabelingV1beta1EvaluationJobConfigArgs
- BigqueryImportKeys Dictionary<string, string>
- Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- EvaluationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationConfig
- Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- ExampleCount int
- The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- ExampleSamplePercentage double
- Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- BoundingPolyConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BoundingPolyConfig
- Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- EvaluationJobAlertConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfig
- Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- HumanAnnotationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1HumanAnnotationConfig
- Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- ImageClassificationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ImageClassificationConfig
- Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- InputConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1InputConfig
- Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- TextClassificationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextClassificationConfig
- Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- BigqueryImportKeys map[string]string
- Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- EvaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfig
- Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- ExampleCount int
- The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- ExampleSamplePercentage float64
- Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- BoundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfig
- Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- EvaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfig
- Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- HumanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfig
- Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- ImageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfig
- Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- InputConfig GoogleCloudDatalabelingV1beta1InputConfig
- Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- TextClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfig
- Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- bigqueryImportKeys Map<String,String>
- Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- evaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfig
- Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- exampleCount Integer
- The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- exampleSamplePercentage Double
- Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- boundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfig
- Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- evaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfig
- Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- humanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfig
- Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- imageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfig
- Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- inputConfig GoogleCloudDatalabelingV1beta1InputConfig
- Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- textClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfig
- Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- bigqueryImportKeys {[key: string]: string}
- Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- evaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfig
- Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- exampleCount number
- The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- exampleSamplePercentage number
- Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- boundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfig
- Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- evaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfig
- Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- humanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfig
- Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- imageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfig
- Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- inputConfig GoogleCloudDatalabelingV1beta1InputConfig
- Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- textClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfig
- Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- bigquery_import_keys Mapping[str, str]
- Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- evaluation_config GoogleCloudDatalabelingV1beta1EvaluationConfig
- Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- example_count int
- The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- example_sample_percentage float
- Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- bounding_poly_config GoogleCloudDatalabelingV1beta1BoundingPolyConfig
- Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- evaluation_job_alert_config GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfig
- Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- human_annotation_config GoogleCloudDatalabelingV1beta1HumanAnnotationConfig
- Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- image_classification_config GoogleCloudDatalabelingV1beta1ImageClassificationConfig
- Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- input_config GoogleCloudDatalabelingV1beta1InputConfig
- Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- text_classification_config GoogleCloudDatalabelingV1beta1TextClassificationConfig
- Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- bigqueryImportKeys Map<String>
- Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- evaluationConfig Property Map
- Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- exampleCount Number
- The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- exampleSamplePercentage Number
- Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- boundingPolyConfig Property Map
- Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- evaluationJobAlertConfig Property Map
- Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- humanAnnotationConfig Property Map
- Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- imageClassificationConfig Property Map
- Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- inputConfig Property Map
- Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- textClassificationConfig Property Map
- Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config. (A TypeScript configuration sketch follows this property list.)
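To make the property lists above concrete, here is a minimal, untested TypeScript sketch of an evaluation job for a single-label image classification model. The project, model version, annotation spec set, and BigQuery table names are placeholders, and enum values are written as the plain strings documented above; adjust everything to your own resources.
import * as google_native from "@pulumi/google-native";
// Minimal sketch (placeholder names throughout): nightly evaluation of a
// single-label image classification model version.
const evalJob = new google_native.datalabeling.v1beta1.EvaluationJob("flowersEvalJob", {
    description: "Nightly evaluation of the flowers classifier",
    schedule: "0 2 * * *", // run once a day at 02:00
    modelVersion: "projects/my-project/models/flowers/versions/v3",
    annotationSpecSet: "projects/my-project/annotationSpecSets/flower_types",
    labelMissingGroundTruth: false, // ground truth labels already exist in the table
    evaluationJobConfig: {
        // Tell the service how to parse the sampled prediction rows stored in BigQuery.
        bigqueryImportKeys: {
            data_json_key: "image_url",
            label_json_key: "predicted_label",
            label_score_json_key: "confidence",
        },
        inputConfig: {
            dataType: "IMAGE",
            annotationType: "IMAGE_CLASSIFICATION_ANNOTATION",
            classificationMetadata: { isMultiLabel: false },
            bigquerySource: { inputUri: "bq://my-project.eval_dataset.predictions" },
        },
        // Not object detection, so an empty evaluation config is sufficient.
        evaluationConfig: {},
        imageClassificationConfig: {
            annotationSpecSet: "projects/my-project/annotationSpecSets/flower_types",
            allowMultiLabel: false, // must match classificationMetadata.isMultiLabel
        },
        exampleSamplePercentage: 0.1, // sample 10% of served predictions...
        exampleCount: 1000,           // ...but no more than 1000 per interval
    },
});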
GoogleCloudDatalabelingV1beta1EvaluationJobConfigResponse, GoogleCloudDatalabelingV1beta1EvaluationJobConfigResponseArgs
- BigqueryImportKeys Dictionary<string, string>
- Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- BoundingPolyConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse
- Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- EvaluationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationConfigResponse
- Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- EvaluationJobAlertConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse
- Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- ExampleCount int
- The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- ExampleSamplePercentage double
- Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- HumanAnnotationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse
- Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- ImageClassificationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse
- Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- InputConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1InputConfigResponse
- Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- TextClassificationConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse
- Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- BigqueryImportKeys map[string]string
- Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- BoundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse
- Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- EvaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfigResponse
- Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- EvaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse
- Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- ExampleCount int
- The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- ExampleSamplePercentage float64
- Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- HumanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse
- Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- ImageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse
- Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- InputConfig GoogleCloudDatalabelingV1beta1InputConfigResponse
- Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- TextClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse
- Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- bigqueryImportKeys Map<String,String>
- Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- boundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse
- Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- evaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfigResponse
- Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- evaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse
- Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- exampleCount Integer
- The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- exampleSamplePercentage Double
- Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- humanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse
- Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- imageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse
- Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- inputConfig GoogleCloudDatalabelingV1beta1InputConfigResponse
- Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- textClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse
- Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- bigqueryImportKeys {[key: string]: string}
- Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- boundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse
- Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- evaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfigResponse
- Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- evaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse
- Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- exampleCount number
- The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- exampleSamplePercentage number
- Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- humanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse
- Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- imageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse
- Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- inputConfig GoogleCloudDatalabelingV1beta1InputConfigResponse
- Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- textClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse
- Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- bigquery_import_keys Mapping[str, str]
- Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- bounding_poly_config GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse
- Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- evaluation_config GoogleCloudDatalabelingV1beta1EvaluationConfigResponse
- Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- evaluation_job_alert_config GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse
- Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- example_count int
- The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- example_sample_percentage float
- Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- human_annotation_config GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse
- Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- image_classification_config GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse
- Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- input_config GoogleCloudDatalabelingV1beta1InputConfigResponse
- Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- text_classification_config GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse
- Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- bigqueryImportKeys Map<String>
- Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- boundingPolyConfig Property Map
- Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- evaluationConfig Property Map
- Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- evaluationJobAlertConfig Property Map
- Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- exampleCount Number
- The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- exampleSamplePercentage Number
- Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- humanAnnotationConfig Property Map
- Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- imageClassificationConfig Property Map
- Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- inputConfig Property Map
- Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- textClassificationConfig Property Map
- Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config. (A short example of reading these values back as stack outputs follows this list.)
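The Response variants above describe what the provider returns once the job exists; they mirror the input shape. As a small, hedged illustration (reusing the hypothetical evalJob resource from the earlier sketch), the resolved configuration can be read back from the resource's outputs:
// Assumes the `evalJob` resource from the earlier sketch exists in the same program.
export const evaluationJobName = evalJob.name;
export const sampledPercentage = evalJob.evaluationJobConfig.apply(cfg => cfg.exampleSamplePercentage);
export const bigqueryKeys = evalJob.evaluationJobConfig.apply(cfg => cfg.bigqueryImportKeys);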
GoogleCloudDatalabelingV1beta1GcsSource, GoogleCloudDatalabelingV1beta1GcsSourceArgs
GoogleCloudDatalabelingV1beta1GcsSourceResponse, GoogleCloudDatalabelingV1beta1GcsSourceResponseArgs
GoogleCloudDatalabelingV1beta1HumanAnnotationConfig, GoogleCloudDatalabelingV1beta1HumanAnnotationConfigArgs
- AnnotatedDatasetDisplayName string
- A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
- Instruction string
- Instruction resource name.
- AnnotatedDatasetDescription string
- Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- ContributorEmails List<string>
- Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
- LabelGroup string
- Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
- LanguageCode string
- Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related. For example, French text classification.
- QuestionDuration string
- Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- ReplicaCount int
- Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, and 5.
- UserEmailAddress string
- Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
- AnnotatedDatasetDisplayName string
- A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
- Instruction string
- Instruction resource name.
- AnnotatedDatasetDescription string
- Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- ContributorEmails []string
- Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
- LabelGroup string
- Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
- LanguageCode string
- Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related. For example, French text classification.
- QuestionDuration string
- Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- ReplicaCount int
- Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, and 5.
- UserEmailAddress string
- Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
- annotatedDatasetDisplayName String
- A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
- instruction String
- Instruction resource name.
- annotatedDatasetDescription String
- Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- contributorEmails List<String>
- Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
- labelGroup String
- Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
- languageCode String
- Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related. For example, French text classification.
- questionDuration String
- Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replicaCount Integer
- Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, and 5.
- userEmailAddress String
- Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
- annotatedDatasetDisplayName string
- A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
- instruction string
- Instruction resource name.
- annotatedDatasetDescription string
- Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- contributorEmails string[]
- Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
- labelGroup string
- Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
- languageCode string
- Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related. For example, French text classification.
- questionDuration string
- Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replicaCount number
- Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, and 5.
- userEmailAddress string
- Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
- annotated_dataset_display_name str
- A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
- instruction str
- Instruction resource name.
- annotated_dataset_description str
- Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- contributor_emails Sequence[str]
- Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
- label_group str
- Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
- language_code str
- Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related. For example, French text classification.
- question_duration str
- Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replica_count int
- Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, and 5.
- user_email_address str
- Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
- annotatedDatasetDisplayName String
- A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
- instruction String
- Instruction resource name.
- annotatedDatasetDescription String
- Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- contributorEmails List<String>
- Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
- labelGroup String
- Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
- languageCode String
- Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related. For example, French text classification.
- questionDuration String
- Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replicaCount Number
- Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, and 5.
- userEmailAddress String
- Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent. (A sketch of this configuration follows this property list.)
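For completeness, here is a minimal TypeScript sketch of a human annotation configuration matching the shape above, for the case where labelMissingGroundTruth is true and contributors supply the ground truth. The instruction resource name and email address are placeholders; the object would be passed as evaluationJobConfig.humanAnnotationConfig.
// Hypothetical values; plug this object into evaluationJobConfig.humanAnnotationConfig.
const humanAnnotationConfig = {
    instruction: "projects/my-project/instructions/how-to-label-flowers",
    annotatedDatasetDisplayName: "flowers-eval-ground-truth", // at most 64 characters
    annotatedDatasetDescription: "Labels collected for nightly evaluation runs",
    contributorEmails: ["labeler@example.com"], // must be registered crowdcompute workers
    languageCode: "en-US",
    questionDuration: "3600s", // contributors get up to an hour per question
    replicaCount: 1,           // each question is answered by one contributor
};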
GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse, GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponseArgs
- AnnotatedDatasetDescription string
- Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- AnnotatedDatasetDisplayName string
- A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
- ContributorEmails List<string>
- Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
- Instruction string
- Instruction resource name.
- LabelGroup string
- Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
- LanguageCode string
- Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only needs to be set when the task is language related. For example, French text classification.
- QuestionDuration string
- Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- ReplicaCount int
- Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, and 5.
- UserEmailAddress string
- Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
- Annotated
Dataset stringDescription - Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- Annotated
Dataset stringDisplay Name - A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- Contributor
Emails []string - Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- Instruction string
- Instruction resource name.
- Label
Group string - Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
. - Language
Code string - Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- Question
Duration string - Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- Replica
Count int - Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- User
Email stringAddress - Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
- annotated
Dataset StringDescription - Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- annotated
Dataset StringDisplay Name - A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- contributor
Emails List<String> - Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- instruction String
- Instruction resource name.
- label
Group String - Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
. - language
Code String - Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- question
Duration String - Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replica
Count Integer - Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- user
Email StringAddress - Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
- annotated
Dataset stringDescription - Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- annotated
Dataset stringDisplay Name - A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- contributor
Emails string[] - Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- instruction string
- Instruction resource name.
- label
Group string - Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
. - language
Code string - Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- question
Duration string - Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replica
Count number - Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- user
Email stringAddress - Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
- annotated_
dataset_ strdescription - Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- annotated_
dataset_ strdisplay_ name - A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- contributor_
emails Sequence[str] - Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- instruction str
- Instruction resource name.
- label_
group str - Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
. - language_
code str - Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- question_
duration str - Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replica_
count int - Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- user_
email_ straddress - Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
- annotated
Dataset StringDescription - Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- annotated
Dataset StringDisplay Name - A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters .
- contributor
Emails List<String> - Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in crowdcompute worker UI: https://crowd-compute.appspot.com/
- instruction String
- Instruction resource name.
- label
Group String - Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression
[a-zA-Z\\d_-]{0,128}
. - language
Code String - Optional. The Language of this question, as a BCP-47. Default value is en-US. Only need to set this when task is language related. For example, French text classification.
- question
Duration String - Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replica
Count Number - Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, 5.
- user
Email StringAddress - Email of the user who started the labeling task and should be notified by email. If empty no notification will be sent.
GoogleCloudDatalabelingV1beta1ImageClassificationConfig, GoogleCloudDatalabelingV1beta1ImageClassificationConfigArgs
- AnnotationSpecSet string
- Annotation spec set resource name.
- AllowMultiLabel bool
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- AnswerAggregationType Pulumi.GoogleNative.DataLabeling.V1Beta1.GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType
- Optional. How contributor answers should be aggregated.
- AnnotationSpecSet string
- Annotation spec set resource name.
- AllowMultiLabel bool
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- AnswerAggregationType GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType
- Optional. How contributor answers should be aggregated.
- annotationSpecSet String
- Annotation spec set resource name.
- allowMultiLabel Boolean
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- answerAggregationType GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType
- Optional. How contributor answers should be aggregated.
- annotationSpecSet string
- Annotation spec set resource name.
- allowMultiLabel boolean
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- answerAggregationType GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType
- Optional. How contributor answers should be aggregated.
- annotation_spec_set str
- Annotation spec set resource name.
- allow_multi_label bool
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- answer_aggregation_type GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType
- Optional. How contributor answers should be aggregated.
- annotationSpecSet String
- Annotation spec set resource name.
- allowMultiLabel Boolean
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- answerAggregationType "STRING_AGGREGATION_TYPE_UNSPECIFIED" | "MAJORITY_VOTE" | "UNANIMOUS_VOTE" | "NO_AGGREGATION"
- Optional. How contributor answers should be aggregated.
GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationType, GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationTypeArgs
- StringAggregationTypeUnspecified
- STRING_AGGREGATION_TYPE_UNSPECIFIED
- MajorityVote
- MAJORITY_VOTE: Majority vote to aggregate answers.
- UnanimousVote
- UNANIMOUS_VOTE: Unanimous answers will be adopted.
- NoAggregation
- NO_AGGREGATION: Preserve all answers by crowd compute.
- GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationTypeStringAggregationTypeUnspecified
- STRING_AGGREGATION_TYPE_UNSPECIFIED
- GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationTypeMajorityVote
- MAJORITY_VOTE: Majority vote to aggregate answers.
- GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationTypeUnanimousVote
- UNANIMOUS_VOTE: Unanimous answers will be adopted.
- GoogleCloudDatalabelingV1beta1ImageClassificationConfigAnswerAggregationTypeNoAggregation
- NO_AGGREGATION: Preserve all answers by crowd compute.
- StringAggregationTypeUnspecified
- STRING_AGGREGATION_TYPE_UNSPECIFIED
- MajorityVote
- MAJORITY_VOTE: Majority vote to aggregate answers.
- UnanimousVote
- UNANIMOUS_VOTE: Unanimous answers will be adopted.
- NoAggregation
- NO_AGGREGATION: Preserve all answers by crowd compute.
- StringAggregationTypeUnspecified
- STRING_AGGREGATION_TYPE_UNSPECIFIED
- MajorityVote
- MAJORITY_VOTE: Majority vote to aggregate answers.
- UnanimousVote
- UNANIMOUS_VOTE: Unanimous answers will be adopted.
- NoAggregation
- NO_AGGREGATION: Preserve all answers by crowd compute.
- STRING_AGGREGATION_TYPE_UNSPECIFIED
- STRING_AGGREGATION_TYPE_UNSPECIFIED
- MAJORITY_VOTE
- MAJORITY_VOTE: Majority vote to aggregate answers.
- UNANIMOUS_VOTE
- UNANIMOUS_VOTE: Unanimous answers will be adopted.
- NO_AGGREGATION
- NO_AGGREGATION: Preserve all answers by crowd compute.
- "STRING_AGGREGATION_TYPE_UNSPECIFIED"
- STRING_AGGREGATION_TYPE_UNSPECIFIED
- "MAJORITY_VOTE"
- MAJORITY_VOTE: Majority vote to aggregate answers.
- "UNANIMOUS_VOTE"
- UNANIMOUS_VOTE: Unanimous answers will be adopted.
- "NO_AGGREGATION"
- NO_AGGREGATION: Preserve all answers by crowd compute.
GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse, GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponseArgs
- AllowMultiLabel bool
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- AnnotationSpecSet string
- Annotation spec set resource name.
- AnswerAggregationType string
- Optional. How contributor answers should be aggregated.
- AllowMultiLabel bool
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- AnnotationSpecSet string
- Annotation spec set resource name.
- AnswerAggregationType string
- Optional. How contributor answers should be aggregated.
- allowMultiLabel Boolean
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- annotationSpecSet String
- Annotation spec set resource name.
- answerAggregationType String
- Optional. How contributor answers should be aggregated.
- allowMultiLabel boolean
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- annotationSpecSet string
- Annotation spec set resource name.
- answerAggregationType string
- Optional. How contributor answers should be aggregated.
- allow_multi_label bool
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- annotation_spec_set str
- Annotation spec set resource name.
- answer_aggregation_type str
- Optional. How contributor answers should be aggregated.
- allowMultiLabel Boolean
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- annotationSpecSet String
- Annotation spec set resource name.
- answerAggregationType String
- Optional. How contributor answers should be aggregated.
GoogleCloudDatalabelingV1beta1InputConfig, GoogleCloudDatalabelingV1beta1InputConfigArgs
- DataType Pulumi.GoogleNative.DataLabeling.V1Beta1.GoogleCloudDatalabelingV1beta1InputConfigDataType
- Data type must be specified when the user tries to import data.
- AnnotationType Pulumi.GoogleNative.DataLabeling.V1Beta1.GoogleCloudDatalabelingV1beta1InputConfigAnnotationType
- Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- BigquerySource Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BigQuerySource
- Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- ClassificationMetadata Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ClassificationMetadata
- Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- GcsSource Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1GcsSource
- Source located in Cloud Storage.
- TextMetadata Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextMetadata
- Required for text import, as the language code must be specified.
- DataType GoogleCloudDatalabelingV1beta1InputConfigDataType
- Data type must be specified when the user tries to import data.
- AnnotationType GoogleCloudDatalabelingV1beta1InputConfigAnnotationType
- Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- BigquerySource GoogleCloudDatalabelingV1beta1BigQuerySource
- Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- ClassificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadata
- Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- GcsSource GoogleCloudDatalabelingV1beta1GcsSource
- Source located in Cloud Storage.
- TextMetadata GoogleCloudDatalabelingV1beta1TextMetadata
- Required for text import, as the language code must be specified.
- dataType GoogleCloudDatalabelingV1beta1InputConfigDataType
- Data type must be specified when the user tries to import data.
- annotationType GoogleCloudDatalabelingV1beta1InputConfigAnnotationType
- Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquerySource GoogleCloudDatalabelingV1beta1BigQuerySource
- Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadata
- Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- gcsSource GoogleCloudDatalabelingV1beta1GcsSource
- Source located in Cloud Storage.
- textMetadata GoogleCloudDatalabelingV1beta1TextMetadata
- Required for text import, as the language code must be specified.
- dataType GoogleCloudDatalabelingV1beta1InputConfigDataType
- Data type must be specified when the user tries to import data.
- annotationType GoogleCloudDatalabelingV1beta1InputConfigAnnotationType
- Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquerySource GoogleCloudDatalabelingV1beta1BigQuerySource
- Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadata
- Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- gcsSource GoogleCloudDatalabelingV1beta1GcsSource
- Source located in Cloud Storage.
- textMetadata GoogleCloudDatalabelingV1beta1TextMetadata
- Required for text import, as the language code must be specified.
- data_type GoogleCloudDatalabelingV1beta1InputConfigDataType
- Data type must be specified when the user tries to import data.
- annotation_type GoogleCloudDatalabelingV1beta1InputConfigAnnotationType
- Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquery_source GoogleCloudDatalabelingV1beta1BigQuerySource
- Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classification_metadata GoogleCloudDatalabelingV1beta1ClassificationMetadata
- Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- gcs_source GoogleCloudDatalabelingV1beta1GcsSource
- Source located in Cloud Storage.
- text_metadata GoogleCloudDatalabelingV1beta1TextMetadata
- Required for text import, as the language code must be specified.
- dataType "DATA_TYPE_UNSPECIFIED" | "IMAGE" | "VIDEO" | "TEXT" | "GENERAL_DATA"
- Data type must be specified when the user tries to import data.
- annotationType "ANNOTATION_TYPE_UNSPECIFIED" | "IMAGE_CLASSIFICATION_ANNOTATION" | "IMAGE_BOUNDING_BOX_ANNOTATION" | "IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION" | "IMAGE_BOUNDING_POLY_ANNOTATION" | "IMAGE_POLYLINE_ANNOTATION" | "IMAGE_SEGMENTATION_ANNOTATION" | "VIDEO_SHOTS_CLASSIFICATION_ANNOTATION" | "VIDEO_OBJECT_TRACKING_ANNOTATION" | "VIDEO_OBJECT_DETECTION_ANNOTATION" | "VIDEO_EVENT_ANNOTATION" | "TEXT_CLASSIFICATION_ANNOTATION" | "TEXT_ENTITY_EXTRACTION_ANNOTATION" | "GENERAL_CLASSIFICATION_ANNOTATION"
- Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquerySource Property Map
- Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classificationMetadata Property Map
- Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- gcsSource Property Map
- Source located in Cloud Storage.
- textMetadata Property Map
- Required for text import, as the language code must be specified.
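To tie the enums and nested messages together, a hedged C# sketch of an input config for text data follows; only fields documented on this page are set, and the BigQuery or Cloud Storage source is left out because its fields are documented in their own sections.
// Hypothetical sketch: text input evaluated with a text classification model.
var inputConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1InputConfigArgs
{
    DataType = GoogleNative.DataLabeling.V1Beta1.GoogleCloudDatalabelingV1beta1InputConfigDataType.Text,
    AnnotationType = GoogleNative.DataLabeling.V1Beta1.GoogleCloudDatalabelingV1beta1InputConfigAnnotationType.TextClassificationAnnotation,
    TextMetadata = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextMetadataArgs
    {
        LanguageCode = "en-US", // required for text import
    },
    // For an EvaluationJob, BigquerySource would also be set; its fields are
    // covered in the BigQuerySource section of this page.
};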
GoogleCloudDatalabelingV1beta1InputConfigAnnotationType, GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeArgs
- AnnotationTypeUnspecified
- ANNOTATION_TYPE_UNSPECIFIED
- ImageClassificationAnnotation
- IMAGE_CLASSIFICATION_ANNOTATION: Classification annotations in an image. Allowed for continuous evaluation.
- ImageBoundingBoxAnnotation
- IMAGE_BOUNDING_BOX_ANNOTATION: Bounding box annotations in an image. A form of image object detection. Allowed for continuous evaluation.
- ImageOrientedBoundingBoxAnnotation
- IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION: Oriented bounding box. The box does not have to be parallel to a horizontal line.
- ImageBoundingPolyAnnotation
- IMAGE_BOUNDING_POLY_ANNOTATION: Bounding poly annotations in an image.
- ImagePolylineAnnotation
- IMAGE_POLYLINE_ANNOTATION: Polyline annotations in an image.
- ImageSegmentationAnnotation
- IMAGE_SEGMENTATION_ANNOTATION: Segmentation annotations in an image.
- VideoShotsClassificationAnnotation
- VIDEO_SHOTS_CLASSIFICATION_ANNOTATION: Classification annotations in video shots.
- VideoObjectTrackingAnnotation
- VIDEO_OBJECT_TRACKING_ANNOTATION: Video object tracking annotation.
- VideoObjectDetectionAnnotation
- VIDEO_OBJECT_DETECTION_ANNOTATION: Video object detection annotation.
- VideoEventAnnotation
- VIDEO_EVENT_ANNOTATION: Video event annotation.
- TextClassificationAnnotation
- TEXT_CLASSIFICATION_ANNOTATION: Classification for text. Allowed for continuous evaluation.
- TextEntityExtractionAnnotation
- TEXT_ENTITY_EXTRACTION_ANNOTATION: Entity extraction for text.
- GeneralClassificationAnnotation
- GENERAL_CLASSIFICATION_ANNOTATION: General classification. Allowed for continuous evaluation.
- GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeAnnotationTypeUnspecified
- ANNOTATION_TYPE_UNSPECIFIED
- GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeImageClassificationAnnotation
- IMAGE_CLASSIFICATION_ANNOTATION: Classification annotations in an image. Allowed for continuous evaluation.
- GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeImageBoundingBoxAnnotation
- IMAGE_BOUNDING_BOX_ANNOTATION: Bounding box annotations in an image. A form of image object detection. Allowed for continuous evaluation.
- GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeImageOrientedBoundingBoxAnnotation
- IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION: Oriented bounding box. The box does not have to be parallel to a horizontal line.
- GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeImageBoundingPolyAnnotation
- IMAGE_BOUNDING_POLY_ANNOTATION: Bounding poly annotations in an image.
- GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeImagePolylineAnnotation
- IMAGE_POLYLINE_ANNOTATION: Polyline annotations in an image.
- GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeImageSegmentationAnnotation
- IMAGE_SEGMENTATION_ANNOTATION: Segmentation annotations in an image.
- GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeVideoShotsClassificationAnnotation
- VIDEO_SHOTS_CLASSIFICATION_ANNOTATION: Classification annotations in video shots.
- GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeVideoObjectTrackingAnnotation
- VIDEO_OBJECT_TRACKING_ANNOTATION: Video object tracking annotation.
- GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeVideoObjectDetectionAnnotation
- VIDEO_OBJECT_DETECTION_ANNOTATION: Video object detection annotation.
- GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeVideoEventAnnotation
- VIDEO_EVENT_ANNOTATION: Video event annotation.
- GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeTextClassificationAnnotation
- TEXT_CLASSIFICATION_ANNOTATION: Classification for text. Allowed for continuous evaluation.
- GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeTextEntityExtractionAnnotation
- TEXT_ENTITY_EXTRACTION_ANNOTATION: Entity extraction for text.
- GoogleCloudDatalabelingV1beta1InputConfigAnnotationTypeGeneralClassificationAnnotation
- GENERAL_CLASSIFICATION_ANNOTATION: General classification. Allowed for continuous evaluation.
- AnnotationTypeUnspecified
- ANNOTATION_TYPE_UNSPECIFIED
- ImageClassificationAnnotation
- IMAGE_CLASSIFICATION_ANNOTATION: Classification annotations in an image. Allowed for continuous evaluation.
- ImageBoundingBoxAnnotation
- IMAGE_BOUNDING_BOX_ANNOTATION: Bounding box annotations in an image. A form of image object detection. Allowed for continuous evaluation.
- ImageOrientedBoundingBoxAnnotation
- IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION: Oriented bounding box. The box does not have to be parallel to a horizontal line.
- ImageBoundingPolyAnnotation
- IMAGE_BOUNDING_POLY_ANNOTATION: Bounding poly annotations in an image.
- ImagePolylineAnnotation
- IMAGE_POLYLINE_ANNOTATION: Polyline annotations in an image.
- ImageSegmentationAnnotation
- IMAGE_SEGMENTATION_ANNOTATION: Segmentation annotations in an image.
- VideoShotsClassificationAnnotation
- VIDEO_SHOTS_CLASSIFICATION_ANNOTATION: Classification annotations in video shots.
- VideoObjectTrackingAnnotation
- VIDEO_OBJECT_TRACKING_ANNOTATION: Video object tracking annotation.
- VideoObjectDetectionAnnotation
- VIDEO_OBJECT_DETECTION_ANNOTATION: Video object detection annotation.
- VideoEventAnnotation
- VIDEO_EVENT_ANNOTATION: Video event annotation.
- TextClassificationAnnotation
- TEXT_CLASSIFICATION_ANNOTATION: Classification for text. Allowed for continuous evaluation.
- TextEntityExtractionAnnotation
- TEXT_ENTITY_EXTRACTION_ANNOTATION: Entity extraction for text.
- GeneralClassificationAnnotation
- GENERAL_CLASSIFICATION_ANNOTATION: General classification. Allowed for continuous evaluation.
- AnnotationTypeUnspecified
- ANNOTATION_TYPE_UNSPECIFIED
- ImageClassificationAnnotation
- IMAGE_CLASSIFICATION_ANNOTATION: Classification annotations in an image. Allowed for continuous evaluation.
- ImageBoundingBoxAnnotation
- IMAGE_BOUNDING_BOX_ANNOTATION: Bounding box annotations in an image. A form of image object detection. Allowed for continuous evaluation.
- ImageOrientedBoundingBoxAnnotation
- IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION: Oriented bounding box. The box does not have to be parallel to a horizontal line.
- ImageBoundingPolyAnnotation
- IMAGE_BOUNDING_POLY_ANNOTATION: Bounding poly annotations in an image.
- ImagePolylineAnnotation
- IMAGE_POLYLINE_ANNOTATION: Polyline annotations in an image.
- ImageSegmentationAnnotation
- IMAGE_SEGMENTATION_ANNOTATION: Segmentation annotations in an image.
- VideoShotsClassificationAnnotation
- VIDEO_SHOTS_CLASSIFICATION_ANNOTATION: Classification annotations in video shots.
- VideoObjectTrackingAnnotation
- VIDEO_OBJECT_TRACKING_ANNOTATION: Video object tracking annotation.
- VideoObjectDetectionAnnotation
- VIDEO_OBJECT_DETECTION_ANNOTATION: Video object detection annotation.
- VideoEventAnnotation
- VIDEO_EVENT_ANNOTATION: Video event annotation.
- TextClassificationAnnotation
- TEXT_CLASSIFICATION_ANNOTATION: Classification for text. Allowed for continuous evaluation.
- TextEntityExtractionAnnotation
- TEXT_ENTITY_EXTRACTION_ANNOTATION: Entity extraction for text.
- GeneralClassificationAnnotation
- GENERAL_CLASSIFICATION_ANNOTATION: General classification. Allowed for continuous evaluation.
- ANNOTATION_TYPE_UNSPECIFIED
- ANNOTATION_TYPE_UNSPECIFIED
- IMAGE_CLASSIFICATION_ANNOTATION
- IMAGE_CLASSIFICATION_ANNOTATION: Classification annotations in an image. Allowed for continuous evaluation.
- IMAGE_BOUNDING_BOX_ANNOTATION
- IMAGE_BOUNDING_BOX_ANNOTATION: Bounding box annotations in an image. A form of image object detection. Allowed for continuous evaluation.
- IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION
- IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION: Oriented bounding box. The box does not have to be parallel to a horizontal line.
- IMAGE_BOUNDING_POLY_ANNOTATION
- IMAGE_BOUNDING_POLY_ANNOTATION: Bounding poly annotations in an image.
- IMAGE_POLYLINE_ANNOTATION
- IMAGE_POLYLINE_ANNOTATION: Polyline annotations in an image.
- IMAGE_SEGMENTATION_ANNOTATION
- IMAGE_SEGMENTATION_ANNOTATION: Segmentation annotations in an image.
- VIDEO_SHOTS_CLASSIFICATION_ANNOTATION
- VIDEO_SHOTS_CLASSIFICATION_ANNOTATION: Classification annotations in video shots.
- VIDEO_OBJECT_TRACKING_ANNOTATION
- VIDEO_OBJECT_TRACKING_ANNOTATION: Video object tracking annotation.
- VIDEO_OBJECT_DETECTION_ANNOTATION
- VIDEO_OBJECT_DETECTION_ANNOTATION: Video object detection annotation.
- VIDEO_EVENT_ANNOTATION
- VIDEO_EVENT_ANNOTATION: Video event annotation.
- TEXT_CLASSIFICATION_ANNOTATION
- TEXT_CLASSIFICATION_ANNOTATION: Classification for text. Allowed for continuous evaluation.
- TEXT_ENTITY_EXTRACTION_ANNOTATION
- TEXT_ENTITY_EXTRACTION_ANNOTATION: Entity extraction for text.
- GENERAL_CLASSIFICATION_ANNOTATION
- GENERAL_CLASSIFICATION_ANNOTATION: General classification. Allowed for continuous evaluation.
- "ANNOTATION_TYPE_UNSPECIFIED"
- ANNOTATION_TYPE_UNSPECIFIED
- "IMAGE_CLASSIFICATION_ANNOTATION"
- IMAGE_CLASSIFICATION_ANNOTATION: Classification annotations in an image. Allowed for continuous evaluation.
- "IMAGE_BOUNDING_BOX_ANNOTATION"
- IMAGE_BOUNDING_BOX_ANNOTATION: Bounding box annotations in an image. A form of image object detection. Allowed for continuous evaluation.
- "IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION"
- IMAGE_ORIENTED_BOUNDING_BOX_ANNOTATION: Oriented bounding box. The box does not have to be parallel to a horizontal line.
- "IMAGE_BOUNDING_POLY_ANNOTATION"
- IMAGE_BOUNDING_POLY_ANNOTATION: Bounding poly annotations in an image.
- "IMAGE_POLYLINE_ANNOTATION"
- IMAGE_POLYLINE_ANNOTATION: Polyline annotations in an image.
- "IMAGE_SEGMENTATION_ANNOTATION"
- IMAGE_SEGMENTATION_ANNOTATION: Segmentation annotations in an image.
- "VIDEO_SHOTS_CLASSIFICATION_ANNOTATION"
- VIDEO_SHOTS_CLASSIFICATION_ANNOTATION: Classification annotations in video shots.
- "VIDEO_OBJECT_TRACKING_ANNOTATION"
- VIDEO_OBJECT_TRACKING_ANNOTATION: Video object tracking annotation.
- "VIDEO_OBJECT_DETECTION_ANNOTATION"
- VIDEO_OBJECT_DETECTION_ANNOTATION: Video object detection annotation.
- "VIDEO_EVENT_ANNOTATION"
- VIDEO_EVENT_ANNOTATION: Video event annotation.
- "TEXT_CLASSIFICATION_ANNOTATION"
- TEXT_CLASSIFICATION_ANNOTATION: Classification for text. Allowed for continuous evaluation.
- "TEXT_ENTITY_EXTRACTION_ANNOTATION"
- TEXT_ENTITY_EXTRACTION_ANNOTATION: Entity extraction for text.
- "GENERAL_CLASSIFICATION_ANNOTATION"
- GENERAL_CLASSIFICATION_ANNOTATION: General classification. Allowed for continuous evaluation.
GoogleCloudDatalabelingV1beta1InputConfigDataType, GoogleCloudDatalabelingV1beta1InputConfigDataTypeArgs
- DataTypeUnspecified
- DATA_TYPE_UNSPECIFIED: Data type is unspecified.
- Image
- IMAGE: Allowed for continuous evaluation.
- Video
- VIDEO: Video data type.
- Text
- TEXT: Allowed for continuous evaluation.
- GeneralData
- GENERAL_DATA: Allowed for continuous evaluation.
- GoogleCloudDatalabelingV1beta1InputConfigDataTypeDataTypeUnspecified
- DATA_TYPE_UNSPECIFIED: Data type is unspecified.
- GoogleCloudDatalabelingV1beta1InputConfigDataTypeImage
- IMAGE: Allowed for continuous evaluation.
- GoogleCloudDatalabelingV1beta1InputConfigDataTypeVideo
- VIDEO: Video data type.
- GoogleCloudDatalabelingV1beta1InputConfigDataTypeText
- TEXT: Allowed for continuous evaluation.
- GoogleCloudDatalabelingV1beta1InputConfigDataTypeGeneralData
- GENERAL_DATA: Allowed for continuous evaluation.
- DataTypeUnspecified
- DATA_TYPE_UNSPECIFIED: Data type is unspecified.
- Image
- IMAGE: Allowed for continuous evaluation.
- Video
- VIDEO: Video data type.
- Text
- TEXT: Allowed for continuous evaluation.
- GeneralData
- GENERAL_DATA: Allowed for continuous evaluation.
- DataTypeUnspecified
- DATA_TYPE_UNSPECIFIED: Data type is unspecified.
- Image
- IMAGE: Allowed for continuous evaluation.
- Video
- VIDEO: Video data type.
- Text
- TEXT: Allowed for continuous evaluation.
- GeneralData
- GENERAL_DATA: Allowed for continuous evaluation.
- DATA_TYPE_UNSPECIFIED
- DATA_TYPE_UNSPECIFIED: Data type is unspecified.
- IMAGE
- IMAGE: Allowed for continuous evaluation.
- VIDEO
- VIDEO: Video data type.
- TEXT
- TEXT: Allowed for continuous evaluation.
- GENERAL_DATA
- GENERAL_DATA: Allowed for continuous evaluation.
- "DATA_TYPE_UNSPECIFIED"
- DATA_TYPE_UNSPECIFIED: Data type is unspecified.
- "IMAGE"
- IMAGE: Allowed for continuous evaluation.
- "VIDEO"
- VIDEO: Video data type.
- "TEXT"
- TEXT: Allowed for continuous evaluation.
- "GENERAL_DATA"
- GENERAL_DATA: Allowed for continuous evaluation.
GoogleCloudDatalabelingV1beta1InputConfigResponse, GoogleCloudDatalabelingV1beta1InputConfigResponseArgs
- AnnotationType string
- Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- BigquerySource Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BigQuerySourceResponse
- Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- ClassificationMetadata Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse
- Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- DataType string
- Data type must be specified when the user tries to import data.
- GcsSource Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1GcsSourceResponse
- Source located in Cloud Storage.
- TextMetadata Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextMetadataResponse
- Required for text import, as the language code must be specified.
- AnnotationType string
- Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- BigquerySource GoogleCloudDatalabelingV1beta1BigQuerySourceResponse
- Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- ClassificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse
- Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- DataType string
- Data type must be specified when the user tries to import data.
- GcsSource GoogleCloudDatalabelingV1beta1GcsSourceResponse
- Source located in Cloud Storage.
- TextMetadata GoogleCloudDatalabelingV1beta1TextMetadataResponse
- Required for text import, as the language code must be specified.
- annotationType String
- Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquerySource GoogleCloudDatalabelingV1beta1BigQuerySourceResponse
- Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse
- Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- dataType String
- Data type must be specified when the user tries to import data.
- gcsSource GoogleCloudDatalabelingV1beta1GcsSourceResponse
- Source located in Cloud Storage.
- textMetadata GoogleCloudDatalabelingV1beta1TextMetadataResponse
- Required for text import, as the language code must be specified.
- annotationType string
- Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquerySource GoogleCloudDatalabelingV1beta1BigQuerySourceResponse
- Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse
- Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- dataType string
- Data type must be specified when the user tries to import data.
- gcsSource GoogleCloudDatalabelingV1beta1GcsSourceResponse
- Source located in Cloud Storage.
- textMetadata GoogleCloudDatalabelingV1beta1TextMetadataResponse
- Required for text import, as the language code must be specified.
- annotation_type str
- Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquery_source GoogleCloudDatalabelingV1beta1BigQuerySourceResponse
- Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classification_metadata GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse
- Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- data_type str
- Data type must be specified when the user tries to import data.
- gcs_source GoogleCloudDatalabelingV1beta1GcsSourceResponse
- Source located in Cloud Storage.
- text_metadata GoogleCloudDatalabelingV1beta1TextMetadataResponse
- Required for text import, as the language code must be specified.
- annotationType String
- Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquerySource Property Map
- Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classificationMetadata Property Map
- Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- dataType String
- Data type must be specified when the user tries to import data.
- gcsSource Property Map
- Source located in Cloud Storage.
- textMetadata Property Map
- Required for text import, as the language code must be specified.
GoogleCloudDatalabelingV1beta1SentimentConfig, GoogleCloudDatalabelingV1beta1SentimentConfigArgs
- EnableLabelSentimentSelection bool
- If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- EnableLabelSentimentSelection bool
- If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- enableLabelSentimentSelection Boolean
- If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- enableLabelSentimentSelection boolean
- If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- enable_label_sentiment_selection bool
- If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- enableLabelSentimentSelection Boolean
- If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
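A minimal, hypothetical C# sketch of this single flag follows, assuming the Args type mirrors the naming used elsewhere on this page; the config is typically nested inside a text classification config, as in the sketch after the TextClassificationConfig section below.
// Hypothetical sketch: let contributors mark a chosen label as positive or negative.
var sentimentConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1SentimentConfigArgs
{
    EnableLabelSentimentSelection = true, // default is false
};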
GoogleCloudDatalabelingV1beta1SentimentConfigResponse, GoogleCloudDatalabelingV1beta1SentimentConfigResponseArgs
- EnableLabelSentimentSelection bool
- If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- EnableLabelSentimentSelection bool
- If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- enableLabelSentimentSelection Boolean
- If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- enableLabelSentimentSelection boolean
- If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- enable_label_sentiment_selection bool
- If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- enableLabelSentimentSelection Boolean
- If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
GoogleCloudDatalabelingV1beta1TextClassificationConfig, GoogleCloudDatalabelingV1beta1TextClassificationConfigArgs
- AnnotationSpecSet string
- Annotation spec set resource name.
- AllowMultiLabel bool
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- SentimentConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1SentimentConfig
- Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
- AnnotationSpecSet string
- Annotation spec set resource name.
- AllowMultiLabel bool
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- SentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfig
- Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
- annotationSpecSet String
- Annotation spec set resource name.
- allowMultiLabel Boolean
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- sentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfig
- Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
- annotationSpecSet string
- Annotation spec set resource name.
- allowMultiLabel boolean
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- sentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfig
- Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
- annotation_spec_set str
- Annotation spec set resource name.
- allow_multi_label bool
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- sentiment_config GoogleCloudDatalabelingV1beta1SentimentConfig
- Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
- annotationSpecSet String
- Annotation spec set resource name.
- allowMultiLabel Boolean
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- sentimentConfig Property Map
- Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
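Putting the pieces together, a hedged C# sketch of a text classification config follows; it reuses the sentimentConfig variable from the earlier sketch, and the annotation spec set name is a placeholder.
// Hypothetical sketch: single-label text classification with optional sentiment selection.
var textClassificationConfig = new GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextClassificationConfigArgs
{
    AnnotationSpecSet = "projects/my-project/annotationSpecSets/my-spec-set", // placeholder resource name
    AllowMultiLabel = false,
    SentimentConfig = sentimentConfig, // optional; sentiment selection is deprecated in favor of uCAIP
};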
GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse, GoogleCloudDatalabelingV1beta1TextClassificationConfigResponseArgs
- AllowMultiLabel bool
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- AnnotationSpecSet string
- Annotation spec set resource name.
- SentimentConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1SentimentConfigResponse
- Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
- AllowMultiLabel bool
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- AnnotationSpecSet string
- Annotation spec set resource name.
- SentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfigResponse
- Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
- allowMultiLabel Boolean
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- annotationSpecSet String
- Annotation spec set resource name.
- sentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfigResponse
- Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
- allowMultiLabel boolean
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- annotationSpecSet string
- Annotation spec set resource name.
- sentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfigResponse
- Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
- allow_multi_label bool
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- annotation_spec_set str
- Annotation spec set resource name.
- sentiment_config GoogleCloudDatalabelingV1beta1SentimentConfigResponse
- Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
- allowMultiLabel Boolean
- Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- annotationSpecSet String
- Annotation spec set resource name.
- sentimentConfig Property Map
- Optional. Configs for sentiment selection. Sentiment analysis is deprecated on the Data Labeling side because it is incompatible with uCAIP.
GoogleCloudDatalabelingV1beta1TextMetadata, GoogleCloudDatalabelingV1beta1TextMetadataArgs
- LanguageCode string
- The language of this text, as a BCP-47 language code. Default value is en-US.
- LanguageCode string
- The language of this text, as a BCP-47 language code. Default value is en-US.
- languageCode String
- The language of this text, as a BCP-47 language code. Default value is en-US.
- languageCode string
- The language of this text, as a BCP-47 language code. Default value is en-US.
- language_code str
- The language of this text, as a BCP-47 language code. Default value is en-US.
- languageCode String
- The language of this text, as a BCP-47 language code. Default value is en-US.
GoogleCloudDatalabelingV1beta1TextMetadataResponse, GoogleCloudDatalabelingV1beta1TextMetadataResponseArgs
- LanguageCode string
- The language of this text, as a BCP-47 language code. Default value is en-US.
- LanguageCode string
- The language of this text, as a BCP-47 language code. Default value is en-US.
- languageCode String
- The language of this text, as a BCP-47 language code. Default value is en-US.
- languageCode string
- The language of this text, as a BCP-47 language code. Default value is en-US.
- language_code str
- The language of this text, as a BCP-47 language code. Default value is en-US.
- languageCode String
- The language of this text, as a BCP-47 language code. Default value is en-US.
GoogleRpcStatusResponse, GoogleRpcStatusResponseArgs
- Code int
- The status code, which should be an enum value of google.rpc.Code.
- Details List<ImmutableDictionary<string, string>>
- Message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- Code int
- The status code, which should be an enum value of google.rpc.Code.
- Details []map[string]string
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Integer
- The status code, which should be an enum value of google.rpc.Code.
- details List<Map<String,String>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code number
- The status code, which should be an enum value of google.rpc.Code.
- details {[key: string]: string}[]
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message string
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code int
- The status code, which should be an enum value of google.rpc.Code.
- details Sequence[Mapping[str, str]]
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message str
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Number
- The status code, which should be an enum value of google.rpc.Code.
- details List<Map<String>>
- A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String
- A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0