Presto

Certified

Important Capabilities

| Capability | Notes |
|------------|-------|
| Asset Containers | Enabled by default. Supported for types: Database, Schema. |
| Classification | Optionally enabled via classification.enabled. |
| Column-level Lineage | Enabled by default; extracts lineage for views via include_view_column_lineage. Supported for types: View. |
| Data Profiling | Optionally enabled via configuration. |
| Descriptions | Enabled by default. |
| Detect Deleted Entities | Enabled by default via stateful ingestion. |
| Domains | Supported via the domain config field. |
| Schema Metadata | Enabled by default. |
| Table-Level Lineage | Extracts table-level lineage. Supported for types: Table, View. |
| Test Connection | Enabled by default. |

This plugin extracts the following:

  • Metadata for databases, schemas, and tables
  • Column types and schema associated with each table
  • Table, row, and column statistics via optional SQL profiling

Prerequisites

The Presto source connects directly to your Presto cluster via SQL to extract metadata about tables, views, schemas, and catalogs.

Before configuring the DataHub connector, ensure you have:

  1. Network Access: The machine running DataHub ingestion must be able to reach your Presto coordinator on the configured port (typically 8080 or 443 for HTTPS).

  2. Presto User Account: A Presto user with appropriate permissions to query metadata.

  3. PyHive Dependencies: The connector uses PyHive for connectivity. Install the appropriate dependencies:

    pip install 'acryl-datahub[presto]'

Important: Presto vs. Presto-on-Hive

There are two different ways to ingest Presto metadata into DataHub, depending on your use case:

Option 1: Presto Connector (This Source)

Use when: You want to connect directly to Presto to extract metadata from all catalogs (not just Hive).

Capabilities:

  • Extracts tables and views from all Presto catalogs (Hive, PostgreSQL, MySQL, Cassandra, etc.)
  • Supports table and view metadata
  • Supports data profiling
  • Extracts view SQL definitions
  • Does NOT support storage lineage (no access to underlying storage locations)
  • Limited view lineage for complex Presto-specific SQL

Configuration:

source:
  type: presto # ← This connector
  config:
    host_port: presto-coordinator.company.com:8080
    username: datahub_user
    password: ${PRESTO_PASSWORD}

Option 2: Hive Metastore Connector with Presto Mode

Use when: You want to ingest Presto views that use the Hive metastore and need storage lineage.

Capabilities:

  • Extracts Presto views stored in Hive metastore
  • Supports storage lineage from S3/HDFS/Azure to Hive tables to Presto views
  • Better Presto view definition parsing
  • Column-level lineage support
  • Faster metadata extraction (direct database access)
  • Only works with Hive-backed catalogs

Configuration:

source:
  type: hive-metastore # ← Use this for storage lineage
  config:
    host_port: metastore-db.company.com:5432
    database: metastore
    scheme: "postgresql+psycopg2"
    mode: presto # ← Set mode to 'presto'

    # Enable storage lineage
    emit_storage_lineage: true
    hive_storage_lineage_direction: upstream

For complete details, see the Hive Metastore connector documentation.

Required Permissions

The Presto user account used by DataHub needs minimal permissions:

-- Presto uses catalog-level permissions
-- The user needs SELECT access to system information tables
-- This is typically granted by default to all users

Recommendation: Use a read-only service account with access to all catalogs you want to ingest.

Authentication

Basic Authentication (Username/Password)

The most common authentication method:

source:
  type: presto
  config:
    host_port: presto.company.com:8080
    username: datahub_user
    password: ${PRESTO_PASSWORD}
    database: hive # Optional: default catalog
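
Alternatively, the entire connection can be supplied as a single SQLAlchemy URI via sqlalchemy_uri, which takes precedence over the individual connection parameters (see Config Details below). A minimal sketch, assuming PyHive's presto:// dialect and an illustrative hostname:

source:
  type: presto
  config:
    # Takes precedence over host_port / username / password when set
    sqlalchemy_uri: "presto://datahub_user:${PRESTO_PASSWORD}@presto.company.com:8080/hive"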

LDAP Authentication

For LDAP-based authentication:

source:
  type: presto
  config:
    host_port: presto.company.com:8080
    username: datahub_user
    password: ${LDAP_PASSWORD}
    database: hive

HTTPS/TLS Connection

For secure connections:

source:
  type: presto
  config:
    host_port: presto.company.com:443
    username: datahub_user
    password: ${PRESTO_PASSWORD}
    database: hive
    options:
      connect_args:
        protocol: https

Kerberos Authentication

For Kerberos-secured Presto clusters:

source:
  type: presto
  config:
    host_port: presto.company.com:8080
    database: hive
    options:
      connect_args:
        auth: KERBEROS
        kerberos_service_name: presto

Requirements:

  • Valid Kerberos ticket (use kinit before running ingestion)
  • PyKerberos package installed
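
Kerberos-secured clusters are often also fronted by TLS. The two connect_args shown above can be combined in a single recipe; this is a hedged sketch, so verify the exact argument names against your PyHive version:

source:
  type: presto
  config:
    host_port: presto.company.com:443
    database: hive
    options:
      connect_args:
        protocol: https               # TLS, as in the HTTPS example above
        auth: KERBEROS                # Kerberos auth handled by PyHive
        kerberos_service_name: presto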

Catalog and Schema Filtering

Presto can connect to many different catalogs (Hive, PostgreSQL, MySQL, etc.). Use filtering to control what gets ingested:

Catalog Filtering

source:
  type: presto
  config:
    host_port: presto.company.com:8080
    username: datahub_user

    # Only ingest specific catalogs
    database_pattern:
      allow:
        - "^hive$"
        - "^postgresql$"
      deny:
        - "system"
        - "information_schema"

Schema Filtering

source:
  type: presto
  config:
    host_port: presto.company.com:8080
    username: datahub_user
    database: hive # Default catalog

    # Filter schemas within catalogs
    schema_pattern:
      allow:
        - "^production_.*"
        - "analytics"
      deny:
        - ".*_test$"

Table Filtering

source:
  type: presto
  config:
    host_port: presto.company.com:8080
    username: datahub_user

    # Filter specific tables
    table_pattern:
      allow:
        - "^fact_.*"
        - "^dim_.*"
      deny:
        - ".*_tmp$"
        - ".*_staging$"
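
The three pattern types can be combined in a single recipe; a table is ingested only if its catalog, schema, and table name all pass their respective filters. An illustrative sketch:

source:
  type: presto
  config:
    host_port: presto.company.com:8080
    username: datahub_user

    # Catalog, schema, and table filters are applied together
    database_pattern:
      allow:
        - "^hive$"
      deny:
        - "system"
        - "information_schema"
    schema_pattern:
      allow:
        - "^production_.*"
    table_pattern:
      deny:
        - ".*_tmp$"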

Platform Instances

When ingesting from multiple Presto clusters, use platform_instance:

source:
  type: presto
  config:
    host_port: prod-presto.company.com:8080
    platform_instance: "prod-presto"

This creates URNs like:

urn:li:dataset:(urn:li:dataPlatform:presto,catalog.schema.table,prod-presto)
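
Each cluster gets its own recipe (and, if stateful ingestion is enabled, its own pipeline_name), differing mainly in host_port and platform_instance. For example, a second recipe for a development cluster might look like this sketch (hostname and instance name are illustrative):

source:
  type: presto
  config:
    host_port: dev-presto.company.com:8080
    platform_instance: "dev-presto"   # Keeps dev dataset URNs separate from prod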

Data Profiling

The Presto connector supports optional data profiling:

source:
  type: presto
  config:
    host_port: presto.company.com:8080
    username: datahub_user

    # Enable profiling
    profiling:
      enabled: true
      profile_table_level_only: false # Include column-level stats

    # Limit profiling scope
    profile_pattern:
      allow:
        - "^production_.*"

Warning: Profiling can be expensive on large tables. Start with profile_table_level_only: true and expand as needed.
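
A conservative starting point, per the warning above, is to collect only table-level statistics and widen the scope once run times are acceptable:

profiling:
  enabled: true
  profile_table_level_only: true   # Table-level stats only; no per-column queries yet
  max_workers: 4                   # Optional: limit concurrent profiling queries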

Performance Considerations

Large Presto Deployments

For Presto clusters with many catalogs and tables:

  1. Catalog Filtering: Limit ingestion to specific catalogs:

    database_pattern:
      allow:
        - "hive"
        - "postgresql"

  2. Disable Profiling: Or limit it to table-level statistics:

    profiling:
      enabled: true
      profile_table_level_only: true

  3. Stateful Ingestion: Only process changes:

    stateful_ingestion:
      enabled: true
      remove_stale_metadata: true

Query Performance

  • The connector queries Presto's information_schema tables
  • Ensure your Presto cluster has sufficient resources
  • Consider running ingestion during off-peak hours for large deployments

Caveats and Limitations

Storage Lineage

Not Supported: The Presto connector cannot extract storage lineage because it doesn't have access to underlying storage locations.

Solution: Use the Hive Metastore connector with mode: presto to get storage lineage for Presto views backed by Hive.

View Definitions

  • Simple Views: Fully supported with SQL extraction
  • Complex Presto Views: Views with Presto-specific SQL functions may have limited lineage
  • Cross-Catalog Views: Views referencing multiple catalogs are supported

Connector-Specific Tables

Presto's catalog connectors (Hive, PostgreSQL, etc.) may have different metadata available. The connector extracts common metadata that works across all connectors.
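
If the underlying platform of a catalog cannot be inferred from Presto's metadata tables, the catalog_to_connector_details option (documented under Config Details below) lets you declare it explicitly. A hedged sketch with illustrative catalog, database, and instance names:

source:
  type: presto
  config:
    host_port: presto.company.com:8080
    username: datahub_user

    # Map a Presto catalog name to details about its underlying connector
    catalog_to_connector_details:
      postgresql:                         # Catalog name as exposed by Presto
        connector_platform: postgres      # Actual platform behind the catalog
        connector_database: appdb         # Database name on that platform
        platform_instance: core-postgres  # Platform instance for the connector's datasets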

Known Issues

  1. Information Schema Latency: Presto's information_schema may have delays in reflecting recent DDL changes.

  2. Large Result Sets: Catalogs with 10,000+ tables may be slow to ingest.

  3. View Lineage Parsing: Complex Presto SQL with window functions, CTEs, or Presto-specific syntax may have incomplete lineage.

  4. Connector-Specific Metadata: Some Presto connectors (e.g., Cassandra) have limited metadata available through information_schema.

Troubleshooting

Connection Issues

Problem: Could not connect to Presto

Solutions:

  • Verify host_port is correct and points to the Presto coordinator
  • Check firewall rules allow traffic on the Presto port
  • Confirm Presto service is running: curl http://<host>:<port>/v1/info
  • Check Presto logs for connection errors

Authentication Failures

Problem: Authentication failed

Solutions:

  • Verify username and password are correct
  • Check authentication method matches Presto configuration
  • For Kerberos: Ensure valid ticket exists (klist)
  • Review Presto coordinator logs: /var/log/presto/

Missing Catalogs or Tables

Problem: Not all catalogs/tables appear in DataHub

Solutions:

  • Verify user has access to catalogs: SHOW CATALOGS; in Presto
  • Check if catalogs are filtered by database_pattern
  • Ensure catalog connectors are properly configured in Presto
  • Review warnings in DataHub ingestion logs

Slow Ingestion

Problem: Metadata extraction takes too long

Solutions:

  • Use catalog/schema filtering to reduce scope
  • Disable profiling or limit to specific tables
  • Enable stateful ingestion
  • Ensure Presto cluster has adequate resources
  • Check Presto query queue and resource groups

View Lineage Not Appearing

Problem: No lineage for Presto views

Solutions:

  • Complex Presto SQL may have limited lineage extraction
  • For Hive-backed views, consider using the Hive Metastore connector with mode: presto
  • Review logs for SQL parsing warnings
  • Simplify view definitions if possible

Migration from presto-on-hive

If you're currently using the deprecated presto-on-hive source:

Old Configuration:

source:
  type: presto-on-hive # ← Deprecated
  config:
    host_port: metastore-db:3306
    # ...

New Configuration (Recommended):

source:
  type: hive-metastore # ← Use this instead
  config:
    host_port: metastore-db:3306
    mode: presto # ← Set mode to 'presto'
    emit_storage_lineage: true # ← Now available!
    # ...

Benefits of Migration:

  • Access to storage lineage features
  • Better Presto view parsing
  • Improved performance
  • Active maintenance and new features

Comparison: Presto vs. Hive Metastore Connector

| Feature | presto connector | hive-metastore (mode: presto) |
|---------|------------------|-------------------------------|
| Connection | Direct to Presto | Direct to metastore database |
| Catalogs | All Presto catalogs | Only Hive-backed catalogs |
| Storage Lineage | Not supported | Supported |
| Column Lineage | Limited | Full support |
| View Parsing | Basic | Enhanced Presto view parsing |
| Performance | Good | Better (direct DB access) |
| Data Profiling | Supported | Not supported |
| Use Case | Multi-catalog Presto | Presto-on-Hive with lineage |

Best Practices

  1. Choose the Right Connector:

    • Use presto for multi-catalog Presto deployments
    • Use hive-metastore (mode: presto) for Hive-backed tables with storage lineage
  2. Filter Appropriately:

    • Exclude system catalogs: system, information_schema
    • Use patterns to include only relevant data
  3. Enable Stateful Ingestion:

    • Only process changes on subsequent runs
    • Reduces ingestion time and resource usage
  4. Test First:

    • Start with a small subset of catalogs/schemas (a combined example follows this list)
    • Verify metadata quality before expanding scope
  5. Monitor Presto Load:

    • Ingestion queries can impact Presto performance
    • Run during off-peak hours for large deployments
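
Putting these practices together, a production recipe might look like the following sketch (hostnames, patterns, and the pipeline name are illustrative):

pipeline_name: presto_prod_ingestion   # Stateful ingestion needs a pipeline_name
source:
  type: presto
  config:
    host_port: prod-presto.company.com:8080
    username: datahub_user
    password: ${PRESTO_PASSWORD}
    platform_instance: "prod-presto"

    # Exclude system catalogs and scope to relevant data
    database_pattern:
      allow:
        - "^hive$"
      deny:
        - "system"
        - "information_schema"

    # Only process changes on subsequent runs
    stateful_ingestion:
      enabled: true
      remove_stale_metadata: true

    # Keep profiling conservative while validating metadata quality
    profiling:
      enabled: true
      profile_table_level_only: true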

CLI based Ingestion

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

source:
  type: presto
  config:
    # Coordinates
    host_port: localhost:5300
    database: dbname

    # Credentials
    username: foo
    password: password

sink:
  # sink configs

Config Details

Note that a . is used to denote nested fields in the YAML recipe.

Field | Description
database 
string
database (catalog)
host_port 
string
host URL
convert_urns_to_lowercase
boolean
Whether to convert dataset urns to lowercase.
Default: False
include_table_location_lineage
boolean
If the source supports it, include table lineage to the underlying storage location.
Default: True
include_tables
boolean
Whether tables should be ingested.
Default: True
include_view_column_lineage
boolean
Populates column-level lineage for view->view and table->view lineage using DataHub's sql parser. Requires include_view_lineage to be enabled.
Default: True
include_view_lineage
boolean
Populates view->view and table->view lineage using DataHub's sql parser.
Default: True
include_views
boolean
Whether views should be ingested.
Default: True
incremental_lineage
boolean
When enabled, emits lineage as incremental to existing lineage already in DataHub. When disabled, re-states lineage on each run.
Default: False
ingest_lineage_to_connectors
boolean
Whether lineage of datasets to connectors should be ingested
Default: True
options
object
Any options specified here will be passed to SQLAlchemy.create_engine as kwargs. To set connection arguments in the URL, specify them under connect_args.
password
One of string(password), null
password
Default: None
platform_instance
One of string, null
The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details.
Default: None
sqlalchemy_uri
One of string, null
URI of database to connect to. See https://docs.sqlalchemy.org/en/14/core/engines.html#database-urls. Takes precedence over other connection parameters.
Default: None
trino_as_primary
boolean
Experimental feature. Whether the Trino dataset should be the primary entity of the set of siblings.
Default: True
use_file_backed_cache
boolean
Whether to use a file backed cache for the view definitions.
Default: True
username
One of string, null
username
Default: None
env
string
The environment that all assets produced by this connector belong to
Default: PROD
catalog_to_connector_details
map(str,ConnectorDetail)
catalog_to_connector_details.key.env
string
The environment that all assets produced by this connector belong to
Default: PROD
catalog_to_connector_details.key.connector_database
One of string, null
Default: None
catalog_to_connector_details.key.connector_platform
One of string, null
A connector's actual platform name. If not provided, it will be taken from metadata tables. E.g., a hive catalog can have a connector platform of 'hive', 'glue', or some other metastore.
Default: None
catalog_to_connector_details.key.platform_instance
One of string, null
The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details.
Default: None
domain
map(str,AllowDenyPattern)
A class to store allow deny regexes
domain.key.allow
array
List of regex patterns to include in ingestion
Default: ['.*']
domain.key.allow.string
string
domain.key.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
domain.key.deny
array
List of regex patterns to exclude from ingestion.
Default: []
domain.key.deny.string
string
profile_pattern
AllowDenyPattern
A class to store allow deny regexes
profile_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
schema_pattern
AllowDenyPattern
A class to store allow deny regexes
schema_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
table_pattern
AllowDenyPattern
A class to store allow deny regexes
table_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
view_pattern
AllowDenyPattern
A class to store allow deny regexes
view_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
classification
ClassificationConfig
classification.enabled
boolean
Whether classification should be used to auto-detect glossary terms
Default: False
classification.info_type_to_term
map(str,string)
classification.max_workers
integer
Number of worker processes to use for classification. Set to 1 to disable.
Default: 4
classification.sample_size
integer
Number of sample values used for classification.
Default: 100
classification.classifiers
array
Classifiers to use to auto-detect glossary terms. If more than one classifier is configured, infotype predictions from the classifier defined later in the sequence take precedence.
Default: [{'type': 'datahub', 'config': None}]
classification.classifiers.DynamicTypedClassifierConfig
DynamicTypedClassifierConfig
classification.classifiers.DynamicTypedClassifierConfig.type 
string
The type of the classifier to use. For DataHub, use datahub
classification.classifiers.DynamicTypedClassifierConfig.config
One of object, null
The configuration required for initializing the classifier. If not specified, uses defaults for the classifier type.
Default: None
classification.column_pattern
AllowDenyPattern
A class to store allow deny regexes
classification.column_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
classification.table_pattern
AllowDenyPattern
A class to store allow deny regexes
classification.table_pattern.ignoreCase
One of boolean, null
Whether to ignore case sensitivity during pattern matching.
Default: True
profiling
GEProfilingConfig
profiling.catch_exceptions
boolean
Default: True
profiling.enabled
boolean
Whether profiling should be done.
Default: False
profiling.field_sample_values_limit
integer
Upper limit for number of sample values to collect for all columns.
Default: 20
profiling.include_field_distinct_count
boolean
Whether to profile for the number of distinct values for each column.
Default: True
profiling.include_field_distinct_value_frequencies
boolean
Whether to profile for distinct value frequencies.
Default: False
profiling.include_field_histogram
boolean
Whether to profile for the histogram for numeric fields.
Default: False
profiling.include_field_max_value
boolean
Whether to profile for the max value of numeric columns.
Default: True
profiling.include_field_mean_value
boolean
Whether to profile for the mean value of numeric columns.
Default: True
profiling.include_field_median_value
boolean
Whether to profile for the median value of numeric columns.
Default: True
profiling.include_field_min_value
boolean
Whether to profile for the min value of numeric columns.
Default: True
profiling.include_field_null_count
boolean
Whether to profile for the number of nulls for each column.
Default: True
profiling.include_field_quantiles
boolean
Whether to profile for the quantiles of numeric columns.
Default: False
profiling.include_field_sample_values
boolean
Whether to profile for the sample values for all columns.
Default: True
profiling.include_field_stddev_value
boolean
Whether to profile for the standard deviation of numeric columns.
Default: True
profiling.limit
One of integer, null
Max number of documents to profile. By default, profiles all documents.
Default: None
profiling.max_number_of_fields_to_profile
One of integer, null
A positive integer that specifies the maximum number of columns to profile for any table. None implies all columns. The cost of profiling goes up significantly as the number of columns to profile goes up.
Default: None
profiling.max_workers
integer
Number of worker threads to use for profiling. Set to 1 to disable.
Default: 20
profiling.offset
One of integer, null
Offset in documents to profile. By default, uses no offset.
Default: None
profiling.partition_datetime
One of string(date-time), null
If specified, profile only the partition which matches this datetime. If not specified, profile the latest partition. Only BigQuery supports this.
Default: None
profiling.partition_profiling_enabled
boolean
Whether to profile partitioned tables. Only BigQuery and AWS Athena support this. If enabled, the latest partition's data is used for profiling.
Default: True
profiling.profile_external_tables
boolean
Whether to profile external tables. Only Snowflake and Redshift support this.
Default: False
profiling.profile_if_updated_since_days
One of number, null
Profile a table only if it has been updated within this many days. If set to null, there is no last-modified-time constraint on which tables to profile. Supported only in Snowflake and BigQuery.
Default: None
profiling.profile_nested_fields
boolean
Whether to profile complex types like structs, arrays and maps.
Default: False
profiling.profile_table_level_only
boolean
Whether to perform profiling at table-level only, or include column-level profiling as well.
Default: False
profiling.profile_table_row_count_estimate_only
boolean
Use an approximate query for row count. This will be much faster but slightly less accurate. Only supported for Postgres and MySQL.
Default: False
profiling.profile_table_row_limit
One of integer, null
Profile tables only if their row count is less than specified count. If set to null, no limit on the row count of tables to profile. Supported only in Snowflake, BigQuery. Supported for Oracle based on gathered stats.
Default: 5000000
profiling.profile_table_size_limit
One of integer, null
Profile tables only if their size is less than specified GBs. If set to null, no limit on the size of tables to profile. Supported only in Snowflake, BigQuery and Databricks. Supported for Oracle based on calculated size from gathered stats.
Default: 5
profiling.query_combiner_enabled
boolean
This feature is still experimental and can be disabled if it causes issues. Reduces the total number of queries issued and speeds up profiling by dynamically combining SQL queries where possible.
Default: True
profiling.report_dropped_profiles
boolean
Whether to report datasets or dataset columns which were not profiled. Set to True for debugging purposes.
Default: False
profiling.sample_size
integer
Number of rows to be sampled from the table for column-level profiling. Applicable only if use_sampling is set to True.
Default: 10000
profiling.turn_off_expensive_profiling_metrics
boolean
Whether to turn off expensive profiling or not. This turns off profiling for quantiles, distinct_value_frequencies, histogram & sample_values. This also limits maximum number of fields being profiled to 10.
Default: False
profiling.use_sampling
boolean
Whether to profile column level stats on sample of table. Only BigQuery and Snowflake support this. If enabled, profiling is done on rows sampled from table. Sampling is not done for smaller tables.
Default: True
profiling.operation_config
OperationConfig
profiling.operation_config.lower_freq_profile_enabled
boolean
Whether to profile at a lower frequency. This does not do any scheduling; it only adds additional checks for when not to run profiling.
Default: False
profiling.operation_config.profile_date_of_month
One of integer, null
Number between 1 and 31 for the date of the month (both inclusive). If not specified, defaults to Nothing and this field does not take effect.
Default: None
profiling.operation_config.profile_day_of_week
One of integer, null
Number between 0 and 6 for the day of the week (both inclusive). 0 is Monday and 6 is Sunday. If not specified, defaults to Nothing and this field does not take effect.
Default: None
profiling.tags_to_ignore_sampling
One of array, null
Fixed list of tags to ignore sampling. If not specified, tables will be sampled based on use_sampling.
Default: None
profiling.tags_to_ignore_sampling.string
string
stateful_ingestion
One of StatefulStaleMetadataRemovalConfig, null
Default: None
stateful_ingestion.enabled
boolean
Whether or not to enable stateful ingestion. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False.
Default: False
stateful_ingestion.fail_safe_threshold
number
Prevents a large number of soft deletes, and prevents the state from committing, when accidental changes to the source configuration cause the relative change in entities (compared to the previous state) to exceed the fail_safe_threshold.
Default: 75.0
stateful_ingestion.remove_stale_metadata
boolean
Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.
Default: True

Code Coordinates

  • Class Name: datahub.ingestion.source.sql.presto.PrestoSource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for Presto, feel free to ping us on our Slack.