Search Syntax and Properties

Cloudera Navigator search uses an embedded Solr engine that follows the syntax specified for the LuceneQParserPlugin.

Search Syntax

Search strings are constructed by specifying a default property value or one of the following types of key-value pairs (an example of each form follows the list):

  • Technical metadata key-value pairs - key:value
    • key is one of the properties listed in Searchable Properties Reference.
    • value is a single value or range of values specified as [value1 TO value2]. In a value, * is a wildcard. In property values, you must escape special characters :, -, /, and * with the backslash character (\), or enclose the property value in quotes.
    Technical metadata key-value pairs are read-only and cannot be modified.
  • Custom metadata key-value pairs - up_key:value
    • key is a user-defined property.
    • value is a single value or range of values specified as [value1 TO value2]. In a value, * is a wildcard. In property values, you must escape special characters :, -, /, and * with the backslash character (\), or enclose the property value in quotes.
    Custom metadata key-value pairs can be modified.
  • Hive extended attribute key-value pairs - tp_key:value
    • key is an extended attribute set on a Hive entity. The syntax of the attribute is specific to Hive.
    • value is a single value supported by the entity type.
    Hive extended attribute key-value pairs are read-only and cannot be modified.
  • Managed metadata key-value pairs - namespace.key:value
    • namespace is the namespace containing the property. See Defining Properties for Managed Metadata.
    • key is the name of a managed metadata property.
    • value is a single value, a range of values specified as [value1 TO value2], or a set of values separated by spaces. In a value, * is a wildcard. In property values, you must escape special characters :, -, /, and * with the backslash character (\), or enclose the property value in quotes.
    Only the values of managed metadata key-value pairs can be modified.
  • S3 key-value pairs - tp_key:value
    • key is the name of user-defined metadata.
    • value is a single value.
    • Only file metadata is extracted; bucket and folder metadata is not extracted.
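
For example, here is one search string of each form. All but the last reappear in the example strings later in this section; the S3 metadata key department is a hypothetical placeholder:

  • Technical metadata - owner:hdfs
  • Custom metadata - up_project:customer1
  • Hive extended attribute - tp_key1:value1
  • Managed metadata - MailAnnotation.emailTo:"dana@example.com"
  • S3 user metadata - tp_department:sales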

Constructing Compound Search Strings

To construct compound search strings, you can join multiple property-value pairs using the Lucene Query Parser Boolean operators:
  • +, -
  • OR, AND, NOT
In both syntaxes, use () to group multiple clauses into a single field and to form sub-queries.

When you filter Search results in the Cloudera Navigator console, the constructed search strings use the +, - syntax.
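
For example, the following two search strings are equivalent ways to find entities owned by hdfs that have not been deleted:

    +owner:hdfs -deleted:true
    owner:hdfs AND NOT deleted:true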

Example Search Strings

  • Entities in the path /user/hive that have not been deleted - +("/user/hive") +(-deleted:true)
  • Descriptions that start with the string "Banking" - description:Banking*
  • Entities of type MapReduce or entities of type Hive - sourceType:mapreduce sourceType:hive or, equivalently, sourceType:mapreduce OR sourceType:hive
  • Entities of type HDFS with size equal to or greater than 1024 MiB or entities of type Impala - (+sourceType:hdfs +size:[1073741824 TO *]) sourceType:impala
  • Directories owned by hdfs in the path /user/hdfs/input - +owner:hdfs +type:directory +fileSystemPath:"/user/hdfs/input" or owner:hdfs AND type:directory AND fileSystemPath:"/user/hdfs/input"
  • Jobs started between 20:00 and 21:00 UTC - started:[2013-10-21T20:00:00.000Z TO 2013-10-21T21:00:00.000Z]
  • Custom key-value pair, where the custom property project has the value customer1 - up_project:customer1
  • Technical key-value - In Hive, specify table properties like this:
    ALTER TABLE table_name SET TBLPROPERTIES ('key1'='value1');
    To search for this property, specify tp_key1:value1.
  • Managed key-value with multivalued property - MailAnnotation.emailTo:"dana@example.com" MailAnnotation.emailTo:"lee@example.com"
  Note: When viewing MapReduce jobs in the Cloudera Manager Activities page, the string that appears in a job's Name column equates to the originalName property. To specify a MapReduce job name in a search, use the string (sourceType:mapreduce) AND (originalName:jobName), where jobName is the value in the job's Name column.
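
For example, to find the operation for a MapReduce job whose Name column shows wordcount (a hypothetical job name), search for:

    (sourceType:mapreduce) AND (originalName:wordcount)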

Searchable Properties Reference

Default Properties

The following properties can be searched by specifying a property value: type, fileSystemPath, inputs, jobId, mapper, mimeType, name, originalName, outputs, owner, principal, reducer, and tags.
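
For example, entering the bare value sample_07 matches entities whose name, originalName, or another default property matches that value. Following the pattern of the filter examples above, a bare value can also be combined with key-value pairs:

    +("sample_07") +(type:table)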

Common Properties

Name Type Description
description text Description of the entity.
group caseInsensitiveText The group to which the owner of the entity belongs.
name ngramedText The overridden name of the entity. If the name has not been overridden, this value is empty. Names cannot contain spaces.
operationType ngramedText The type of an operation:
  • Pig - SCRIPT
  • Sqoop - Table Export, Query Import
originalName ngramedText Entity name at extraction time.
originalDescription text Entity description at extraction.
owner caseInsensitiveText Entity owner.
principal caseInsensitiveText For entities with type OPERATION_EXECUTION, the initiator of the entity.
properties string Set of key-value pairs that describe the entity.
tags ngramedText Set of tags that describe the entity.
type tokenizedCaseInsensitiveText Entity type. The source type of the entity determines the set of available types:
  • hdfs - directory, file
  • hive - database, table, field, operation, operation_execution, sub_operation, partition, view
  • impala - operation, operation_execution, sub_operation
  • mapreduce - operation, operation_execution
  • oozie - operation, operation_execution
  • pig - operation, operation_execution
  • spark - operation, operation_execution
  • sqoop - operation, operation_execution, sub_operation
  • yarn - operation, operation_execution
  • cluster - cluster_template, cluster_instance
  • s3 - file, s3bucket
userEntity Boolean Indicates whether an entity was added using the Cloudera Navigator SDK.
Query
queryText string The text of a Hive, Impala, or Sqoop query.
Source
clusterName string The name of the cluster in which the source is managed.
sourceId string The ID of the source type.
sourceType caseInsensitiveText The source type of the entity: hdfs, hive, impala, mapreduce, oozie, pig, s3, spark, sqoop, or yarn.
sourceUrl string The URL of the web application for a resource.
Timestamps
Fields vary by source type:
  • hdfs - created, lastAccessed, lastModified
  • hive - created, lastModified
  • impala, mapreduce, pig, spark, sqoop, and yarn - started, ended
date Timestamps in the Solr Date Format. For example:
  • lastAccessed:[* TO NOW]
  • created:[1976-03-06T23:59:59.999Z TO *]
  • started:[1995-12-31T23:59:59.999Z TO 2007-03-06T00:00:00Z]
  • ended:[NOW-1YEAR/DAY TO NOW/DAY+1DAY]
  • created:[1976-03-06T23:59:59.999Z TO 1976-03-06T23:59:59.999Z+1YEAR]
  • lastAccessed:[1976-03-06T23:59:59.999Z/YEAR TO 1976-03-06T23:59:59.999Z]
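
In these examples, Solr date math applies left to right: NOW-1YEAR/DAY subtracts one year from the current time and then rounds down to the start of that day, and NOW/DAY+1DAY rounds the current time down to the start of today and then adds one day, so the ended range above covers roughly the past year of whole days.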

Entity Types

Source Entity Types
hdfs DIRECTORY, FILE, DATASET, FIELD
hive DATABASE, TABLE, FIELD, OPERATION, OPERATION_EXECUTION, SUB_OPERATION, PARTITION, RESOURCE, VIEW
impala OPERATION, OPERATION_EXECUTION, SUB_OPERATION
mapreduce OPERATION, OPERATION_EXECUTION
oozie OPERATION, OPERATION_EXECUTION
pig OPERATION, OPERATION_EXECUTION
spark OPERATION, OPERATION_EXECUTION
sqoop OPERATION, OPERATION_EXECUTION, SUB_OPERATION
yarn OPERATION, OPERATION_EXECUTION, SUB_OPERATION

Dataset Properties

Name Type Description
compressionType tokenizedCaseInsensitiveText The type of compression of a dataset file.
dataType string The data type: record.
datasetType tokenizedCaseInsensitiveText The type of the dataset: Kite.
fileFormat tokenizedCaseInsensitiveText The format of a dataset file: Avro or Parquet.
fullDataType string The full data type: record.
partitionType string The type of the partition.
schemaName string The name of the dataset schema.
schemaNameSpace string The namespace of the dataset schema.

HDFS Properties

Name Type Description
blockSize long The block size of an HDFS file.
deleted Boolean Indicates whether the entity has been moved to the Trash folder.
deleteTime date The time the entity was moved to the Trash folder.
fileSystemPath path The path to the entity.
mimeType ngramedText The MIME type of an HDFS file.
parentPath string The path to the parent entity of a child entity. For example: parentPath:/default/sample_07 for the table sample_07 in the Hive database default.
permissions string The UNIX access permissions of the entity.
replication int The number of copies of HDFS file blocks.
size long The exact size of the entity in bytes, or a range of sizes. Range examples: size:[1000 TO *], size:[* TO 2000], and size:[* TO *] to find all entities that have a size value.

Hive Properties

Name Type Description
Field
dataType ngramedText The type of data stored in a field (column).
Table
compressed Boolean Indicates whether a table is compressed.
serDeLibName string The name of the library containing the SerDe class.
serDeName string The fully qualified name of the SerDe class.
Partition
partitionColNames string The table columns that define the partition.
partitionColValues string The table column values that define the partition.
technical_properties string Hive extended attributes.
clusteredByColNames string The column names that identify how table content is divided into buckets.
sortByColNames string The column names that identify how table content is sorted within a bucket.

MapReduce and YARN Properties

Name Type Description
inputRecursive Boolean Indicates whether files are searched recursively under the input directories, or only files directly under the input directories are considered.
jobId ngramedText The ID of the job. For a job spawned by Oozie, the workflow ID.
mapper string The fully qualified name of the mapper class.
outputKey string The fully qualified name of the class of the output key.
outputValue string The fully qualified name of the class of the output value.
reducer string The fully qualified name of the reducer class.

Operation Properties

Name Type Description
Operation
inputFormat string The fully qualified name of the class of the input format.
outputFormat string The fully qualified name of the class of the output format.
Operation Execution
inputs string The name of the entity input to an operation execution. For entities of resource type mapreduce, yarn, and spark, it is usually a directory. For entities of resource type hive, it is usually a table.
outputs string The name of the entity output from an operation execution. For entities of resource type mapreduce, yarn, and spark, it is usually a directory. For entities of resource type hive, it is usually a table.
engineType string The type of the engine used for an operation: MR or Spark.

Oozie Properties

Name Type Description
status string The status of the Oozie workflow: RUNNING, SUCCEEDED, or FAILED.
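
For example, to find failed Oozie workflows, combine this property with the source type:

    +sourceType:oozie +status:FAILED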

Pig Properties

Name Type Description
scriptId string The ID of the Pig script.

S3 Properties

Name Type Description
Object Properties
region string The geographic region in which the bucket is stored.
bucketName string The name of the bucket in which the object is stored.
fileSystemPath path The key of the S3 object.
size long Object size in bytes.
lastModified date The object creation date or last modified date, whichever is later.
etag string A hash of the object. The ETag reflects changes only to the contents of an object, not its metadata. The ETag may or may not be an MD5 digest of the object data.
storageClass string Storage class used for storing the object.
owner string Owner of the object.
sequencer string Latest S3 event notification sequencer. Used to order events.
parentPath string Parent of the S3 object.
technicalProperties key-value pairs Custom metadata for each S3 object.
Bucket Properties
region string Region for the bucket.
created date Date the bucket was created.
owner string Owner of the bucket.

Sqoop Properties

Name Type Description
dbURL string The URL of the database from or to which the data was imported or exported.
dbTable string The table from or to which the data was imported or exported.
dbUser string The database user.
dbWhere string A where clause that identifies which rows were imported.
dbColumnExpression string An expression that identifies which columns were imported.