Postgres
Important Capabilities
| Capability | Status | Notes |
|---|---|---|
| Asset Containers | ✅ | Enabled by default. Supported for types - Database, Schema. |
| Classification | ✅ | Optionally enabled via classification.enabled. |
| Column-level Lineage | ✅ | Enabled by default to get lineage for views via include_view_column_lineage. Supported for types - View. |
| Data Profiling | ✅ | Optionally enabled via configuration. |
| Descriptions | ✅ | Enabled by default. |
| Detect Deleted Entities | ✅ | Enabled by default via stateful ingestion. |
| Domains | ✅ | Enabled by default. |
| Platform Instance | ✅ | Enabled by default. |
| Schema Metadata | ✅ | Enabled by default. |
| Table-Level Lineage | ✅ | Enabled by default to get lineage for views via include_view_lineage. Supported for types - View. |
| Test Connection | ✅ | Enabled by default. |
This plugin extracts the following:
- Metadata for databases, schemas, views, tables, and stored procedures
- Column types associated with each table
- Also supports PostGIS extensions
- Table, row, and column statistics via optional SQL profiling
Query-Based Lineage for PostgreSQL
DataHub can extract table-level lineage from your PostgreSQL query history using the pg_stat_statements extension. This feature analyzes executed SQL queries to automatically discover upstream and downstream dataset dependencies.
Overview
The query-based lineage feature:
- Extracts lineage from INSERT...SELECT, CREATE TABLE AS SELECT, CREATE VIEW, and other DML/DDL statements
- Prioritizes important queries by execution frequency and time
- Respects your filters using configurable exclude patterns
- Generates usage statistics showing which tables are queried and by whom
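Under the hood, both lineage and usage are derived from the pg_stat_statements view. A quick look at some of the raw data the connector reads (using the PostgreSQL 13+ column names):
-- Peek at the tracked statements, most expensive first
SELECT userid, dbid, calls, total_exec_time, left(query, 60) AS query_snippet
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;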
Prerequisites
0. PostgreSQL Version Requirement
PostgreSQL 13 or later is required for query-based lineage extraction.
Why: PostgreSQL 13 changed column names in the pg_stat_statements view:
- PostgreSQL 12 and earlier: total_time, min_time, max_time, mean_time
- PostgreSQL 13+: total_exec_time, min_exec_time, max_exec_time, mean_exec_time
The DataHub connector uses the PostgreSQL 13+ column names. If you attempt to use this feature with PostgreSQL 12 or earlier, you'll receive a clear error message:
PostgreSQL version 12.0 detected. Query-based lineage requires PostgreSQL 13+
due to column name changes in pg_stat_statements (total_time -> total_exec_time).
Please upgrade to PostgreSQL 13 or later.
Solution: Upgrade to PostgreSQL 13 or later to use query-based lineage extraction.
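You can confirm that the running server meets this requirement with a quick numeric check (the Troubleshooting section below shows additional version queries):
-- 130000 or higher means PostgreSQL 13+
SHOW server_version_num;
SELECT current_setting('server_version_num')::int >= 130000 AS meets_requirement;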
1. Enable pg_stat_statements Extension
The pg_stat_statements extension must be installed and loaded. This extension tracks query execution statistics.
Step 1: Load the extension in postgresql.conf
Add or update the following line in your postgresql.conf:
shared_preload_libraries = 'pg_stat_statements'
If you already have other extensions loaded, append pg_stat_statements to the comma-separated list (e.g., shared_preload_libraries = 'pg_cron,pg_stat_statements').
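To see what is currently preloaded before editing the file:
-- Check the current value of shared_preload_libraries
SHOW shared_preload_libraries;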
Step 2: Restart PostgreSQL
The extension requires a database restart to load:
# On Linux (systemd)
sudo systemctl restart postgresql
# On macOS (Homebrew)
brew services restart postgresql
# Or using pg_ctl
pg_ctl restart -D /path/to/data/directory
Step 3: Create the extension in your database
Connect to each database you want to monitor and create the extension:
-- Connect to your database
\c your_database
-- Create the extension (requires superuser or CREATE privileges)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
-- Verify installation
SELECT * FROM pg_extension WHERE extname = 'pg_stat_statements';
2. Grant Required Permissions
The DataHub user needs permission to read from pg_stat_statements.
Option 1: Grant pg_read_all_stats role (PostgreSQL 10+, recommended)
-- Grant the pg_read_all_stats role to your DataHub user
GRANT pg_read_all_stats TO datahub_user;
Option 2: Use a superuser account
If your PostgreSQL version doesn't have pg_read_all_stats, you can use a superuser account. However, this is not recommended for production due to security implications.
Verify permissions
-- Check if user has the required role
SELECT
pg_has_role(current_user, 'pg_read_all_stats', 'MEMBER') as has_stats_role,
usesuper as is_superuser
FROM pg_user
WHERE usename = current_user;
The query should return true for at least one column.
3. Configure Query Retention (Optional)
By default, pg_stat_statements stores the last 5000 queries. You can adjust this in postgresql.conf:
# Maximum number of queries tracked
pg_stat_statements.max = 10000
# Track nested statements (functions, procedures)
pg_stat_statements.track = all
After changing these settings, restart PostgreSQL.
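After the restart, you can verify that the new values are in effect:
-- Confirm the retention settings took effect
SHOW pg_stat_statements.max;
SHOW pg_stat_statements.track;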
Configuration
Enable query-based lineage in your DataHub recipe:
source:
type: postgres
config:
host_port: "localhost:5432"
database: "your_database"
username: "datahub_user"
password: "your_password"
# Enable query-based lineage extraction
include_query_lineage: true
# Optional: Configure lineage extraction
max_queries_to_extract: 1000 # Default: 1000
min_query_calls: 10 # Only extract queries executed ≥10 times
# Optional: Exclude specific query patterns
query_exclude_patterns:
- "%pg_catalog%" # Exclude system catalog queries
- "%temp_%" # Exclude temporary table queries
- "%staging%" # Exclude staging queries
# Optional: Enable usage statistics
include_usage_statistics: true
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
include_query_lineage | boolean | false | Enable query-based lineage extraction from pg_stat_statements |
max_queries_to_extract | integer | 1000 | Maximum number of queries to extract. Queries are prioritized by execution time and frequency. |
min_query_calls | integer | 1 | Minimum number of times a query must be executed to be included in lineage analysis. Higher values focus on frequently-used queries. |
query_exclude_patterns | list[string] | [] | SQL LIKE patterns to exclude queries. Patterns are case-insensitive. Example: "%pg_catalog%" excludes all queries containing pg_catalog. |
include_usage_statistics | boolean | false | Generate dataset usage metrics from query history. Requires include_query_lineage: true. Shows unique user counts, query frequencies, and column access patterns in the DataHub UI. |
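As an illustration of how exclude patterns behave, the patterns above act roughly like case-insensitive SQL LIKE filters on the query text. A sketch of the equivalent filter, run directly against pg_stat_statements (the connector applies these patterns internally, so this is only a preview):
-- Approximate the effect of query_exclude_patterns: ['%pg_catalog%', '%temp_%']
SELECT left(query, 80) AS query_snippet
FROM pg_stat_statements
WHERE query NOT ILIKE '%pg_catalog%'
  AND query NOT ILIKE '%temp_%'
LIMIT 10;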
Supported Lineage Patterns
The lineage extractor recognizes common SQL patterns:
INSERT...SELECT
INSERT INTO target_table (col1, col2)
SELECT col1, col2 FROM source_table;
Lineage: source_table → target_table
CREATE TABLE AS SELECT (CTAS)
CREATE TABLE new_table AS
SELECT a.col1, b.col2
FROM table_a a
JOIN table_b b ON a.id = b.id;
Lineage: table_a, table_b → new_table
CREATE VIEW
CREATE VIEW customer_summary AS
SELECT c.customer_id, COUNT(o.order_id) as order_count
FROM customers c
LEFT JOIN orders o ON c.customer_id = o.customer_id
GROUP BY c.customer_id;
Lineage: customers, orders → customer_summary
Complex JOINs and CTEs
WITH monthly_revenue AS (
SELECT customer_id, SUM(amount) as revenue
FROM transactions
WHERE date >= '2024-01-01'
GROUP BY customer_id
)
INSERT INTO customer_metrics (customer_id, total_revenue)
SELECT mr.customer_id, mr.revenue
FROM monthly_revenue mr
JOIN customers c ON mr.customer_id = c.id
WHERE c.active = true;
Lineage: transactions, customers → customer_metrics
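To preview which tracked statements are likely to yield lineage of these kinds, you can filter pg_stat_statements directly. This is only a rough approximation; the connector's SQL parser makes the final determination:
-- Rough preview of lineage-producing statements in query history
SELECT calls, left(query, 100) AS query_snippet
FROM pg_stat_statements
WHERE query ILIKE 'insert into%select%'
   OR query ILIKE 'create table%as%'
   OR query ILIKE 'create%view%as%'
LIMIT 20;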
Verification
After running ingestion, verify that lineage was extracted:
1. Check ingestion logs
Look for messages like:
INFO - Prerequisites check: Prerequisites met
INFO - Extracted 850 queries from pg_stat_statements in 1.23 seconds
INFO - Processed 850 queries for lineage extraction (12 failed) in 4.56 seconds
2. Query pg_stat_statements directly
-- Check if queries are being tracked
SELECT COUNT(*) FROM pg_stat_statements;
-- View most frequently executed queries
SELECT
calls,
total_exec_time / 1000 as total_seconds,
query
FROM pg_stat_statements
ORDER BY calls DESC
LIMIT 10;
3. Verify in DataHub UI
Navigate to a dataset in DataHub and check the "Lineage" tab. You should see upstream and downstream dependencies derived from query history.
Troubleshooting
PostgreSQL Version Too Old
Error message:
ERROR - PostgreSQL version 12.0 detected. Query-based lineage requires PostgreSQL 13+
due to column name changes in pg_stat_statements (total_time -> total_exec_time).
Please upgrade to PostgreSQL 13 or later.
Solution:
Upgrade your PostgreSQL installation to version 13 or later. PostgreSQL 13 was released in September 2020 and introduced breaking changes to the pg_stat_statements view column names.
Check your current version:
SELECT version();
-- Or
SHOW server_version;
Upgrade path:
- PostgreSQL Upgrade Documentation
- Use pg_upgrade for in-place upgrades
- Consider managed services (AWS RDS, Google Cloud SQL, Azure Database), which support easy version upgrades
Extension Not Installed
Error message:
ERROR - pg_stat_statements extension not installed. Install with: CREATE EXTENSION pg_stat_statements;
Solution:
- Verify shared_preload_libraries includes pg_stat_statements in postgresql.conf
- Restart PostgreSQL
- Connect to your database and run CREATE EXTENSION pg_stat_statements;
Permission Denied
Error message:
ERROR - Insufficient permissions. Grant pg_read_all_stats role: GRANT pg_read_all_stats TO <user>;
Solution:
-- Grant the required role
GRANT pg_read_all_stats TO datahub_user;
-- Or verify current permissions
SELECT
pg_has_role(current_user, 'pg_read_all_stats', 'MEMBER') as has_stats_role,
usesuper as is_superuser
FROM pg_user
WHERE usename = current_user;
No Queries Extracted
Possible causes:
No queries in pg_stat_statements
- The extension tracks queries since the last reset or restart
- Run some queries against your database, then re-run ingestion
Queries excluded by filters
- Check min_query_calls - lower this value to include less-frequent queries
- Review query_exclude_patterns - ensure you're not excluding too broadly
Empty pg_stat_statements
-- Check if queries are being tracked
SELECT COUNT(*) FROM pg_stat_statements;
-- If 0, reset and run some test queries
SELECT pg_stat_statements_reset();
-- Run sample queries
SELECT * FROM your_table LIMIT 10;
-- Verify queries were tracked
SELECT COUNT(*) FROM pg_stat_statements;
Query Text Truncated
By default, PostgreSQL truncates query text to 1024 characters. Increase this limit in postgresql.conf:
# Increase max query length tracked (requires restart)
track_activity_query_size = 4096
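You can check the value currently in effect (changes require a restart):
-- Queries longer than this are stored truncated
SHOW track_activity_query_size;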
Performance Considerations
Memory usage:
- Each tracked query consumes ~1KB in shared memory
- Default 5000 queries = ~5MB memory
- Adjust pg_stat_statements.max based on your available memory
Query overhead:
- The extension adds minimal overhead (<1%) to query execution
- Statistics are updated asynchronously
Ingestion performance:
- Extracting 1000 queries takes 1-5 seconds depending on database load
- Use max_queries_to_extract to limit extraction time (see the preview query below)
- Schedule ingestion during off-peak hours for large query volumes
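To preview which queries would be prioritized, order the tracked statements by execution time. This is a sketch; the connector's exact prioritization may differ slightly:
-- Preview the most expensive queries, which are extracted first
SELECT calls,
       round((total_exec_time / 1000)::numeric, 2) AS total_seconds,
       left(query, 80) AS query_snippet
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;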
Limitations
Historical data only
- Lineage is extracted from executed queries, not from schema definitions
- Queries must have been executed since the last pg_stat_statements_reset()
Dynamic SQL
- Parameterized queries show parameter placeholders, not actual values
- Example: SELECT * FROM users WHERE id = $1 (value not captured)
Complex transformations
- The extractor may not parse extremely complex queries with nested CTEs or exotic syntax
- Failed queries are logged but don't block ingestion
No column-level lineage
- Currently supports table-level lineage only
- Column-level lineage may be added in future releases
Best Practices
Reset pg_stat_statements periodically
-- Reset statistics (caution: clears all tracked queries)
SELECT pg_stat_statements_reset();
This prevents unbounded memory growth and focuses on recent query patterns.
Use meaningful filters
- Exclude test, temporary, and system queries with query_exclude_patterns
- Set min_query_calls to focus on production workloads
Monitor memory usage
-- Check current query count
SELECT COUNT(*) as query_count FROM pg_stat_statements;
-- Check memory usage
SELECT pg_size_pretty(
pg_database_size('your_database')
);
Schedule regular ingestion
- Run ingestion daily or weekly to capture lineage from new queries
- More frequent ingestion provides more up-to-date lineage graphs
See Also
CLI based Ingestion
Starter Recipe
Check out the following recipe to get started with ingestion! See below for full configuration options.
For general pointers on writing and running a recipe, see our main recipe guide.
source:
type: postgres
config:
# Coordinates
host_port: localhost:5432
database: DemoDatabase
# Credentials
username: user
password: pass
# Optional: SSL configuration.
# options:
# connect_args:
# sslcert: "<<path to sslcert>>"
# sslkey: "<<path to sslkey>>"
# sslrootcert: "<<path to verification ca chain>>"
# sslmode: "verify-full"
# AWS RDS IAM Authentication (alternative to password)
# auth_mode: "AWS_IAM"
# aws_config:
# aws_region: us-west-2
# Advanced AWS configuration with profiles, role assumption, and retry settings:
# auth_mode: "AWS_IAM"
# aws_config:
# aws_region: us-west-2
# aws_profile: production
# aws_role: "arn:aws:iam::123456789:role/DataHubRole"
# aws_retry_num: 10
# aws_retry_mode: adaptive
# When auth_mode is "AWS_IAM", the password field is ignored
# AWS credentials can be configured via AWS CLI, environment variables, or IAM role
# Query-based Lineage (optional)
# Extract lineage from executed queries using pg_stat_statements extension
# Prerequisites:
# 1. PostgreSQL 13 or later (required for pg_stat_statements column names)
# Note: PostgreSQL 12 and earlier use different column names (total_time
# instead of total_exec_time). The connector will detect older versions
# and fail with a clear error message. Upgrade to PostgreSQL 13+ to use
# this feature.
# 2. CREATE EXTENSION pg_stat_statements;
# 3. GRANT pg_read_all_stats TO datahub_user;
# include_query_lineage: true
# max_queries_to_extract: 1000 # Default: 1000, valid range: 1-10000
# min_query_calls: 10 # Default: 1, minimum: 0
# query_exclude_patterns:
# - "%pg_catalog%"
# - "%temp_%"
# Usage Statistics (optional, requires include_query_lineage: true)
# Generates dataset usage metrics from query history:
# - Unique user counts per table
# - Query frequencies and top N most common queries
# - Column-level access patterns
# Results appear in DataHub UI: Dataset Profile > Usage tab
# include_usage_statistics: true
# top_n_queries: 20 # Number of top queries to save per table
# bucket_duration: DAY # Time window for aggregating usage (HOUR, DAY, WEEK)
# format_sql_queries: false # Whether to format SQL queries for readability
sink:
# sink configs
Config Details
- Options
- Schema
Note that a . is used to denote nested fields in the YAML recipe.
| Field | Description |
|---|---|
host_port ✅ string | host URL |
auth_mode Enum | One of: "PASSWORD", "AWS_IAM" |
bucket_duration Enum | One of: "DAY", "HOUR" |
convert_urns_to_lowercase boolean | Whether to convert dataset urns to lowercase. Default: False |
database One of string, null | database (catalog). If set to Null, all databases will be considered for ingestion. Default: None |
end_time string(date-time) | Latest date of lineage/usage to consider. Default: Current time in UTC |
format_sql_queries boolean | Whether to format sql queries Default: False |
include_operational_stats boolean | Whether to display operational stats. Default: True |
include_query_lineage boolean | Enable query-based lineage extraction from pg_stat_statements. Requires the pg_stat_statements extension to be installed and enabled. See documentation for setup instructions. Default: False |
include_read_operational_stats boolean | Whether to report read operational stats. Experimental. Default: False |
include_stored_procedures boolean | Include ingest of stored procedures. Default: True |
include_table_location_lineage boolean | If the source supports it, include table lineage to the underlying storage location. Default: True |
include_tables boolean | Whether tables should be ingested. Default: True |
include_top_n_queries boolean | Whether to ingest the top_n_queries. Default: True |
include_usage_statistics boolean | Generate usage statistics from query history. Requires include_query_lineage to be enabled. Collects metrics like unique user counts, query frequencies, and column access patterns. Statistics appear in DataHub UI under the Dataset Profile > Usage tab. Default: False |
include_view_column_lineage boolean | Populates column-level lineage for view->view and table->view lineage using DataHub's sql parser. Requires include_view_lineage to be enabled. Default: True |
include_view_lineage boolean | Populates view->view and table->view lineage using DataHub's sql parser. Default: True |
include_views boolean | Whether views should be ingested. Default: True |
incremental_lineage boolean | When enabled, emits lineage as incremental to existing lineage already in DataHub. When disabled, re-states lineage on each run. Default: False |
initial_database string | Initial database used to query for the list of databases, when ingesting multiple databases. Note: this is not used if database or sqlalchemy_uri are provided. Default: postgres |
max_queries_to_extract integer | Maximum number of queries to extract from pg_stat_statements for lineage analysis. Queries are prioritized by execution time and frequency. Default: 1000 |
min_query_calls integer | Minimum number of executions required for a query to be included. Set higher to focus on frequently-used queries. Default: 1 |
options object | Any options specified here will be passed to SQLAlchemy.create_engine as kwargs. To set connection arguments in the URL, specify them under connect_args. |
password One of string(password), null | password Default: None |
platform_instance One of string, null | The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details. Default: None |
scheme string | database scheme Default: postgresql+psycopg2 |
sqlalchemy_uri One of string, null | URI of database to connect to. See https://docs.sqlalchemy.org/en/14/core/engines.html#database-urls. Takes precedence over other connection parameters. Default: None |
start_time string(date-time) | Earliest date of lineage/usage to consider. Default: Last full day in UTC (or hour, depending on bucket_duration). You can also specify relative time with respect to end_time such as '-7 days' Or '-7d'. Default: None |
top_n_queries integer | Number of top queries to save to each table. Default: 10 |
use_file_backed_cache boolean | Whether to use a file backed cache for the view definitions. Default: True |
username One of string, null | username Default: None |
env string | The environment that all assets produced by this connector belong to Default: PROD |
aws_config AwsConnectionConfig | Common AWS credentials config. Currently used by: - Glue source - SageMaker source - dbt source |
aws_config.aws_access_key_id One of string, null | AWS access key ID. Can be auto-detected, see the AWS boto3 docs for details. Default: None |
aws_config.aws_advanced_config object | Advanced AWS configuration options. These are passed directly to botocore.config.Config. |
aws_config.aws_endpoint_url One of string, null | The AWS service endpoint. This is normally constructed automatically, but can be overridden here. Default: None |
aws_config.aws_profile One of string, null | The named profile to use from AWS credentials. Falls back to default profile if not specified and no access keys provided. Profiles are configured in ~/.aws/credentials or ~/.aws/config. Default: None |
aws_config.aws_proxy One of string, null | A set of proxy configs to use with AWS. See the botocore.config docs for details. Default: None |
aws_config.aws_region One of string, null | AWS region code. Default: None |
aws_config.aws_retry_mode Enum | One of: "legacy", "standard", "adaptive" Default: standard |
aws_config.aws_retry_num integer | Number of times to retry failed AWS requests. See the botocore.retry docs for details. Default: 5 |
aws_config.aws_secret_access_key One of string, null | AWS secret access key. Can be auto-detected, see the AWS boto3 docs for details. Default: None |
aws_config.aws_session_token One of string, null | AWS session token. Can be auto-detected, see the AWS boto3 docs for details. Default: None |
aws_config.read_timeout number | The timeout for reading from the connection (in seconds). Default: 60 |
aws_config.aws_role One of string, array, null | AWS roles to assume. If using the string format, the role ARN can be specified directly. If using the object format, the role can be specified in the RoleArn field and additional available arguments are the same as boto3's STS.Client.assume_role. Default: None |
aws_config.aws_role.union One of string, AwsAssumeRoleConfig | |
aws_config.aws_role.union.RoleArn ❓ string | ARN of the role to assume. |
aws_config.aws_role.union.ExternalId One of string, null | External ID to use when assuming the role. Default: None |
database_pattern AllowDenyPattern | A class to store allow deny regexes |
database_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
domain map(str,AllowDenyPattern) | A class to store allow deny regexes |
domain.key.allow array | List of regex patterns to include in ingestion Default: ['.*'] |
domain.key.allow.string string | |
domain.key.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
domain.key.deny array | List of regex patterns to exclude from ingestion. Default: [] |
domain.key.deny.string string | |
procedure_pattern AllowDenyPattern | A class to store allow deny regexes |
procedure_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
profile_pattern AllowDenyPattern | A class to store allow deny regexes |
profile_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
query_exclude_patterns One of array, null | SQL LIKE patterns to exclude from query extraction. Example: ['%pg_catalog%', '%temp_%'] to exclude catalog and temp tables. Default: None |
query_exclude_patterns.string string | |
schema_pattern AllowDenyPattern | A class to store allow deny regexes |
schema_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
table_pattern AllowDenyPattern | A class to store allow deny regexes |
table_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
user_email_pattern AllowDenyPattern | A class to store allow deny regexes |
user_email_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
view_pattern AllowDenyPattern | A class to store allow deny regexes |
view_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
classification ClassificationConfig | |
classification.enabled boolean | Whether classification should be used to auto-detect glossary terms Default: False |
classification.info_type_to_term map(str,string) | |
classification.max_workers integer | Number of worker processes to use for classification. Set to 1 to disable. Default: 4 |
classification.sample_size integer | Number of sample values used for classification. Default: 100 |
classification.classifiers array | Classifiers to use to auto-detect glossary terms. If more than one classifier, infotype predictions from the classifier defined later in sequence take precedence. Default: [{'type': 'datahub', 'config': None}] |
classification.classifiers.DynamicTypedClassifierConfig DynamicTypedClassifierConfig | |
classification.classifiers.DynamicTypedClassifierConfig.type ❓ string | The type of the classifier to use. For DataHub, use datahub |
classification.classifiers.DynamicTypedClassifierConfig.config One of object, null | The configuration required for initializing the classifier. If not specified, uses defaults for classifier type. Default: None |
classification.column_pattern AllowDenyPattern | A class to store allow deny regexes |
classification.column_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
classification.table_pattern AllowDenyPattern | A class to store allow deny regexes |
classification.table_pattern.ignoreCase One of boolean, null | Whether to ignore case sensitivity during pattern matching. Default: True |
profiling GEProfilingConfig | |
profiling.catch_exceptions boolean | Default: True |
profiling.enabled boolean | Whether profiling should be done. Default: False |
profiling.field_sample_values_limit integer | Upper limit for number of sample values to collect for all columns. Default: 20 |
profiling.include_field_distinct_count boolean | Whether to profile for the number of distinct values for each column. Default: True |
profiling.include_field_distinct_value_frequencies boolean | Whether to profile for distinct value frequencies. Default: False |
profiling.include_field_histogram boolean | Whether to profile for the histogram for numeric fields. Default: False |
profiling.include_field_max_value boolean | Whether to profile for the max value of numeric columns. Default: True |
profiling.include_field_mean_value boolean | Whether to profile for the mean value of numeric columns. Default: True |
profiling.include_field_median_value boolean | Whether to profile for the median value of numeric columns. Default: True |
profiling.include_field_min_value boolean | Whether to profile for the min value of numeric columns. Default: True |
profiling.include_field_null_count boolean | Whether to profile for the number of nulls for each column. Default: True |
profiling.include_field_quantiles boolean | Whether to profile for the quantiles of numeric columns. Default: False |
profiling.include_field_sample_values boolean | Whether to profile for the sample values for all columns. Default: True |
profiling.include_field_stddev_value boolean | Whether to profile for the standard deviation of numeric columns. Default: True |
profiling.limit One of integer, null | Max number of documents to profile. By default, profiles all documents. Default: None |
profiling.max_number_of_fields_to_profile One of integer, null | A positive integer that specifies the maximum number of columns to profile for any table. None implies all columns. The cost of profiling goes up significantly as the number of columns to profile goes up. Default: None |
profiling.max_workers integer | Number of worker threads to use for profiling. Set to 1 to disable. Default: 20 |
profiling.offset One of integer, null | Offset in documents to profile. By default, uses no offset. Default: None |
profiling.partition_datetime One of string(date-time), null | If specified, profile only the partition which matches this datetime. If not specified, profile the latest partition. Only Bigquery supports this. Default: None |
profiling.partition_profiling_enabled boolean | Whether to profile partitioned tables. Only BigQuery and Aws Athena supports this. If enabled, latest partition data is used for profiling. Default: True |
profiling.profile_external_tables boolean | Whether to profile external tables. Only Snowflake and Redshift supports this. Default: False |
profiling.profile_if_updated_since_days One of number, null | Profile table only if it has been updated since these many number of days. If set to null, no constraint of last modified time for tables to profile. Supported only in snowflake and BigQuery. Default: None |
profiling.profile_nested_fields boolean | Whether to profile complex types like structs, arrays and maps. Default: False |
profiling.profile_table_level_only boolean | Whether to perform profiling at table-level only, or include column-level profiling as well. Default: False |
profiling.profile_table_row_count_estimate_only boolean | Use an approximate query for row count. This will be much faster but slightly less accurate. Only supported for Postgres and MySQL. Default: False |
profiling.profile_table_row_limit One of integer, null | Profile tables only if their row count is less than specified count. If set to null, no limit on the row count of tables to profile. Supported only in Snowflake, BigQuery. Supported for Oracle based on gathered stats. Default: 5000000 |
profiling.profile_table_size_limit One of integer, null | Profile tables only if their size is less than specified GBs. If set to null, no limit on the size of tables to profile. Supported only in Snowflake, BigQuery and Databricks. Supported for Oracle based on calculated size from gathered stats. Default: 5 |
profiling.query_combiner_enabled boolean | This feature is still experimental and can be disabled if it causes issues. Reduces the total number of queries issued and speeds up profiling by dynamically combining SQL queries where possible. Default: True |
profiling.report_dropped_profiles boolean | Whether to report datasets or dataset columns which were not profiled. Set to True for debugging purposes. Default: False |
profiling.sample_size integer | Number of rows to be sampled from table for column level profiling.Applicable only if use_sampling is set to True. Default: 10000 |
profiling.turn_off_expensive_profiling_metrics boolean | Whether to turn off expensive profiling or not. This turns off profiling for quantiles, distinct_value_frequencies, histogram & sample_values. This also limits maximum number of fields being profiled to 10. Default: False |
profiling.use_sampling boolean | Whether to profile column level stats on sample of table. Only BigQuery and Snowflake support this. If enabled, profiling is done on rows sampled from table. Sampling is not done for smaller tables. Default: True |
profiling.operation_config OperationConfig | |
profiling.operation_config.lower_freq_profile_enabled boolean | Whether to do profiling at lower freq or not. This does not do any scheduling just adds additional checks to when not to run profiling. Default: False |
profiling.operation_config.profile_date_of_month One of integer, null | Number between 1 to 31 for date of month (both inclusive). If not specified, defaults to Nothing and this field does not take affect. Default: None |
profiling.operation_config.profile_day_of_week One of integer, null | Number between 0 to 6 for day of week (both inclusive). 0 is Monday and 6 is Sunday. If not specified, defaults to Nothing and this field does not take affect. Default: None |
profiling.tags_to_ignore_sampling One of array, null | Fixed list of tags to ignore sampling. If not specified, tables will be sampled based on use_sampling. Default: None |
profiling.tags_to_ignore_sampling.string string | |
stateful_ingestion One of StatefulStaleMetadataRemovalConfig, null | Default: None |
stateful_ingestion.enabled boolean | Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is specified, otherwise False Default: False |
stateful_ingestion.fail_safe_threshold number | Prevents large amount of soft deletes & the state from committing from accidental changes to the source configuration if the relative change percent in entities compared to the previous state is above the 'fail_safe_threshold'. Default: 75.0 |
stateful_ingestion.remove_stale_metadata boolean | Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled. Default: True |
The JSONSchema for this configuration is inlined below.
{
"$defs": {
"AllowDenyPattern": {
"additionalProperties": false,
"description": "A class to store allow deny regexes",
"properties": {
"allow": {
"default": [
".*"
],
"description": "List of regex patterns to include in ingestion",
"items": {
"type": "string"
},
"title": "Allow",
"type": "array"
},
"deny": {
"default": [],
"description": "List of regex patterns to exclude from ingestion.",
"items": {
"type": "string"
},
"title": "Deny",
"type": "array"
},
"ignoreCase": {
"anyOf": [
{
"type": "boolean"
},
{
"type": "null"
}
],
"default": true,
"description": "Whether to ignore case sensitivity during pattern matching.",
"title": "Ignorecase"
}
},
"title": "AllowDenyPattern",
"type": "object"
},
"AwsAssumeRoleConfig": {
"additionalProperties": true,
"properties": {
"RoleArn": {
"description": "ARN of the role to assume.",
"title": "Rolearn",
"type": "string"
},
"ExternalId": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "External ID to use when assuming the role.",
"title": "Externalid"
}
},
"required": [
"RoleArn"
],
"title": "AwsAssumeRoleConfig",
"type": "object"
},
"AwsConnectionConfig": {
"additionalProperties": false,
"description": "Common AWS credentials config.\n\nCurrently used by:\n - Glue source\n - SageMaker source\n - dbt source",
"properties": {
"aws_access_key_id": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "AWS access key ID. Can be auto-detected, see [the AWS boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) for details.",
"title": "Aws Access Key Id"
},
"aws_secret_access_key": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "AWS secret access key. Can be auto-detected, see [the AWS boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) for details.",
"title": "Aws Secret Access Key"
},
"aws_session_token": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "AWS session token. Can be auto-detected, see [the AWS boto3 docs](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html) for details.",
"title": "Aws Session Token"
},
"aws_role": {
"anyOf": [
{
"type": "string"
},
{
"items": {
"anyOf": [
{
"type": "string"
},
{
"$ref": "#/$defs/AwsAssumeRoleConfig"
}
]
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "AWS roles to assume. If using the string format, the role ARN can be specified directly. If using the object format, the role can be specified in the RoleArn field and additional available arguments are the same as [boto3's STS.Client.assume_role](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html?highlight=assume_role#STS.Client.assume_role).",
"title": "Aws Role"
},
"aws_profile": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html) to use from AWS credentials. Falls back to default profile if not specified and no access keys provided. Profiles are configured in ~/.aws/credentials or ~/.aws/config.",
"title": "Aws Profile"
},
"aws_region": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "AWS region code.",
"title": "Aws Region"
},
"aws_endpoint_url": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The AWS service endpoint. This is normally [constructed automatically](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html), but can be overridden here.",
"title": "Aws Endpoint Url"
},
"aws_proxy": {
"anyOf": [
{
"additionalProperties": {
"type": "string"
},
"type": "object"
},
{
"type": "null"
}
],
"default": null,
"description": "A set of proxy configs to use with AWS. See the [botocore.config](https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html) docs for details.",
"title": "Aws Proxy"
},
"aws_retry_num": {
"default": 5,
"description": "Number of times to retry failed AWS requests. See the [botocore.retry](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html) docs for details.",
"title": "Aws Retry Num",
"type": "integer"
},
"aws_retry_mode": {
"default": "standard",
"description": "Retry mode to use for failed AWS requests. See the [botocore.retry](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html) docs for details.",
"enum": [
"legacy",
"standard",
"adaptive"
],
"title": "Aws Retry Mode",
"type": "string"
},
"read_timeout": {
"default": 60,
"description": "The timeout for reading from the connection (in seconds).",
"title": "Read Timeout",
"type": "number"
},
"aws_advanced_config": {
"additionalProperties": true,
"description": "Advanced AWS configuration options. These are passed directly to [botocore.config.Config](https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html).",
"title": "Aws Advanced Config",
"type": "object"
}
},
"title": "AwsConnectionConfig",
"type": "object"
},
"BucketDuration": {
"enum": [
"DAY",
"HOUR"
],
"title": "BucketDuration",
"type": "string"
},
"ClassificationConfig": {
"additionalProperties": false,
"properties": {
"enabled": {
"default": false,
"description": "Whether classification should be used to auto-detect glossary terms",
"title": "Enabled",
"type": "boolean"
},
"sample_size": {
"default": 100,
"description": "Number of sample values used for classification.",
"title": "Sample Size",
"type": "integer"
},
"max_workers": {
"default": 4,
"description": "Number of worker processes to use for classification. Set to 1 to disable.",
"title": "Max Workers",
"type": "integer"
},
"table_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns to filter tables for classification. This is used in combination with other patterns in parent config. Specify regex to match the entire table name in `database.schema.table` format. e.g. to match all tables starting with customer in Customer database and public schema, use the regex 'Customer.public.customer.*'"
},
"column_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns to filter columns for classification. This is used in combination with other patterns in parent config. Specify regex to match the column name in `database.schema.table.column` format."
},
"info_type_to_term": {
"additionalProperties": {
"type": "string"
},
"default": {},
"description": "Optional mapping to provide glossary term identifier for info type",
"title": "Info Type To Term",
"type": "object"
},
"classifiers": {
"default": [
{
"type": "datahub",
"config": null
}
],
"description": "Classifiers to use to auto-detect glossary terms. If more than one classifier, infotype predictions from the classifier defined later in sequence take precedance.",
"items": {
"$ref": "#/$defs/DynamicTypedClassifierConfig"
},
"title": "Classifiers",
"type": "array"
}
},
"title": "ClassificationConfig",
"type": "object"
},
"DynamicTypedClassifierConfig": {
"additionalProperties": false,
"properties": {
"type": {
"description": "The type of the classifier to use. For DataHub, use `datahub`",
"title": "Type",
"type": "string"
},
"config": {
"anyOf": [
{},
{
"type": "null"
}
],
"default": null,
"description": "The configuration required for initializing the classifier. If not specified, uses defaults for classifer type.",
"title": "Config"
}
},
"required": [
"type"
],
"title": "DynamicTypedClassifierConfig",
"type": "object"
},
"GEProfilingConfig": {
"additionalProperties": false,
"properties": {
"enabled": {
"default": false,
"description": "Whether profiling should be done.",
"title": "Enabled",
"type": "boolean"
},
"operation_config": {
"$ref": "#/$defs/OperationConfig",
"description": "Experimental feature. To specify operation configs."
},
"limit": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "Max number of documents to profile. By default, profiles all documents.",
"title": "Limit"
},
"offset": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "Offset in documents to profile. By default, uses no offset.",
"title": "Offset"
},
"profile_table_level_only": {
"default": false,
"description": "Whether to perform profiling at table-level only, or include column-level profiling as well.",
"title": "Profile Table Level Only",
"type": "boolean"
},
"include_field_null_count": {
"default": true,
"description": "Whether to profile for the number of nulls for each column.",
"title": "Include Field Null Count",
"type": "boolean"
},
"include_field_distinct_count": {
"default": true,
"description": "Whether to profile for the number of distinct values for each column.",
"title": "Include Field Distinct Count",
"type": "boolean"
},
"include_field_min_value": {
"default": true,
"description": "Whether to profile for the min value of numeric columns.",
"title": "Include Field Min Value",
"type": "boolean"
},
"include_field_max_value": {
"default": true,
"description": "Whether to profile for the max value of numeric columns.",
"title": "Include Field Max Value",
"type": "boolean"
},
"include_field_mean_value": {
"default": true,
"description": "Whether to profile for the mean value of numeric columns.",
"title": "Include Field Mean Value",
"type": "boolean"
},
"include_field_median_value": {
"default": true,
"description": "Whether to profile for the median value of numeric columns.",
"title": "Include Field Median Value",
"type": "boolean"
},
"include_field_stddev_value": {
"default": true,
"description": "Whether to profile for the standard deviation of numeric columns.",
"title": "Include Field Stddev Value",
"type": "boolean"
},
"include_field_quantiles": {
"default": false,
"description": "Whether to profile for the quantiles of numeric columns.",
"title": "Include Field Quantiles",
"type": "boolean"
},
"include_field_distinct_value_frequencies": {
"default": false,
"description": "Whether to profile for distinct value frequencies.",
"title": "Include Field Distinct Value Frequencies",
"type": "boolean"
},
"include_field_histogram": {
"default": false,
"description": "Whether to profile for the histogram for numeric fields.",
"title": "Include Field Histogram",
"type": "boolean"
},
"include_field_sample_values": {
"default": true,
"description": "Whether to profile for the sample values for all columns.",
"title": "Include Field Sample Values",
"type": "boolean"
},
"max_workers": {
"default": 20,
"description": "Number of worker threads to use for profiling. Set to 1 to disable.",
"title": "Max Workers",
"type": "integer"
},
"report_dropped_profiles": {
"default": false,
"description": "Whether to report datasets or dataset columns which were not profiled. Set to `True` for debugging purposes.",
"title": "Report Dropped Profiles",
"type": "boolean"
},
"turn_off_expensive_profiling_metrics": {
"default": false,
"description": "Whether to turn off expensive profiling or not. This turns off profiling for quantiles, distinct_value_frequencies, histogram & sample_values. This also limits maximum number of fields being profiled to 10.",
"title": "Turn Off Expensive Profiling Metrics",
"type": "boolean"
},
"field_sample_values_limit": {
"default": 20,
"description": "Upper limit for number of sample values to collect for all columns.",
"title": "Field Sample Values Limit",
"type": "integer"
},
"max_number_of_fields_to_profile": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "A positive integer that specifies the maximum number of columns to profile for any table. `None` implies all columns. The cost of profiling goes up significantly as the number of columns to profile goes up.",
"title": "Max Number Of Fields To Profile"
},
"profile_if_updated_since_days": {
"anyOf": [
{
"exclusiveMinimum": 0,
"type": "number"
},
{
"type": "null"
}
],
"default": null,
"description": "Profile table only if it has been updated since these many number of days. If set to `null`, no constraint of last modified time for tables to profile. Supported only in `snowflake` and `BigQuery`.",
"schema_extra": {
"supported_sources": [
"snowflake",
"bigquery"
]
},
"title": "Profile If Updated Since Days"
},
"profile_table_size_limit": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": 5,
"description": "Profile tables only if their size is less than specified GBs. If set to `null`, no limit on the size of tables to profile. Supported only in `Snowflake`, `BigQuery` and `Databricks`. Supported for `Oracle` based on calculated size from gathered stats.",
"schema_extra": {
"supported_sources": [
"snowflake",
"bigquery",
"unity-catalog",
"oracle"
]
},
"title": "Profile Table Size Limit"
},
"profile_table_row_limit": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": 5000000,
"description": "Profile tables only if their row count is less than specified count. If set to `null`, no limit on the row count of tables to profile. Supported only in `Snowflake`, `BigQuery`. Supported for `Oracle` based on gathered stats.",
"schema_extra": {
"supported_sources": [
"snowflake",
"bigquery",
"oracle"
]
},
"title": "Profile Table Row Limit"
},
"profile_table_row_count_estimate_only": {
"default": false,
"description": "Use an approximate query for row count. This will be much faster but slightly less accurate. Only supported for Postgres and MySQL. ",
"schema_extra": {
"supported_sources": [
"postgres",
"mysql"
]
},
"title": "Profile Table Row Count Estimate Only",
"type": "boolean"
},
"query_combiner_enabled": {
"default": true,
"description": "*This feature is still experimental and can be disabled if it causes issues.* Reduces the total number of queries issued and speeds up profiling by dynamically combining SQL queries where possible.",
"title": "Query Combiner Enabled",
"type": "boolean"
},
"catch_exceptions": {
"default": true,
"description": "",
"title": "Catch Exceptions",
"type": "boolean"
},
"partition_profiling_enabled": {
"default": true,
"description": "Whether to profile partitioned tables. Only BigQuery and Aws Athena supports this. If enabled, latest partition data is used for profiling.",
"schema_extra": {
"supported_sources": [
"athena",
"bigquery"
]
},
"title": "Partition Profiling Enabled",
"type": "boolean"
},
"partition_datetime": {
"anyOf": [
{
"format": "date-time",
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "If specified, profile only the partition which matches this datetime. If not specified, profile the latest partition. Only Bigquery supports this.",
"schema_extra": {
"supported_sources": [
"bigquery"
]
},
"title": "Partition Datetime"
},
"use_sampling": {
"default": true,
"description": "Whether to profile column level stats on sample of table. Only BigQuery and Snowflake support this. If enabled, profiling is done on rows sampled from table. Sampling is not done for smaller tables. ",
"schema_extra": {
"supported_sources": [
"bigquery",
"snowflake"
]
},
"title": "Use Sampling",
"type": "boolean"
},
"sample_size": {
"default": 10000,
"description": "Number of rows to be sampled from table for column level profiling.Applicable only if `use_sampling` is set to True.",
"schema_extra": {
"supported_sources": [
"bigquery",
"snowflake"
]
},
"title": "Sample Size",
"type": "integer"
},
"profile_external_tables": {
"default": false,
"description": "Whether to profile external tables. Only Snowflake and Redshift supports this.",
"schema_extra": {
"supported_sources": [
"redshift",
"snowflake"
]
},
"title": "Profile External Tables",
"type": "boolean"
},
"tags_to_ignore_sampling": {
"anyOf": [
{
"items": {
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "Fixed list of tags to ignore sampling. If not specified, tables will be sampled based on `use_sampling`.",
"title": "Tags To Ignore Sampling"
},
"profile_nested_fields": {
"default": false,
"description": "Whether to profile complex types like structs, arrays and maps. ",
"title": "Profile Nested Fields",
"type": "boolean"
}
},
"title": "GEProfilingConfig",
"type": "object"
},
"OperationConfig": {
"additionalProperties": false,
"properties": {
"lower_freq_profile_enabled": {
"default": false,
"description": "Whether to do profiling at lower freq or not. This does not do any scheduling just adds additional checks to when not to run profiling.",
"title": "Lower Freq Profile Enabled",
"type": "boolean"
},
"profile_day_of_week": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "Number between 0 to 6 for day of week (both inclusive). 0 is Monday and 6 is Sunday. If not specified, defaults to Nothing and this field does not take affect.",
"title": "Profile Day Of Week"
},
"profile_date_of_month": {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
],
"default": null,
"description": "Number between 1 to 31 for date of month (both inclusive). If not specified, defaults to Nothing and this field does not take affect.",
"title": "Profile Date Of Month"
}
},
"title": "OperationConfig",
"type": "object"
},
"PostgresAuthMode": {
"description": "Authentication mode for PostgreSQL connection.",
"enum": [
"PASSWORD",
"AWS_IAM"
],
"title": "PostgresAuthMode",
"type": "string"
},
"StatefulStaleMetadataRemovalConfig": {
"additionalProperties": false,
"description": "Base specialized config for Stateful Ingestion with stale metadata removal capability.",
"properties": {
"enabled": {
"default": false,
"description": "Whether or not to enable stateful ingest. Default: True if a pipeline_name is set and either a datahub-rest sink or `datahub_api` is specified, otherwise False",
"title": "Enabled",
"type": "boolean"
},
"remove_stale_metadata": {
"default": true,
"description": "Soft-deletes the entities present in the last successful run but missing in the current run with stateful_ingestion enabled.",
"title": "Remove Stale Metadata",
"type": "boolean"
},
"fail_safe_threshold": {
"default": 75.0,
"description": "Prevents large amount of soft deletes & the state from committing from accidental changes to the source configuration if the relative change percent in entities compared to the previous state is above the 'fail_safe_threshold'.",
"maximum": 100.0,
"minimum": 0.0,
"title": "Fail Safe Threshold",
"type": "number"
}
},
"title": "StatefulStaleMetadataRemovalConfig",
"type": "object"
}
},
"additionalProperties": false,
"properties": {
"bucket_duration": {
"$ref": "#/$defs/BucketDuration",
"default": "DAY",
"description": "Size of the time window to aggregate usage stats."
},
"end_time": {
"description": "Latest date of lineage/usage to consider. Default: Current time in UTC",
"format": "date-time",
"title": "End Time",
"type": "string"
},
"start_time": {
"default": null,
"description": "Earliest date of lineage/usage to consider. Default: Last full day in UTC (or hour, depending on `bucket_duration`). You can also specify relative time with respect to end_time such as '-7 days' Or '-7d'.",
"format": "date-time",
"title": "Start Time",
"type": "string"
},
"top_n_queries": {
"default": 10,
"description": "Number of top queries to save to each table.",
"exclusiveMinimum": 0,
"title": "Top N Queries",
"type": "integer"
},
"user_email_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "regex patterns for user emails to filter in usage."
},
"include_operational_stats": {
"default": true,
"description": "Whether to display operational stats.",
"title": "Include Operational Stats",
"type": "boolean"
},
"include_read_operational_stats": {
"default": false,
"description": "Whether to report read operational stats. Experimental.",
"title": "Include Read Operational Stats",
"type": "boolean"
},
"format_sql_queries": {
"default": false,
"description": "Whether to format sql queries",
"title": "Format Sql Queries",
"type": "boolean"
},
"include_top_n_queries": {
"default": true,
"description": "Whether to ingest the top_n_queries.",
"title": "Include Top N Queries",
"type": "boolean"
},
"schema_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [
"information_schema"
],
"ignoreCase": true
}
},
"table_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns for tables to filter in ingestion. Specify regex to match the entire table name in database.schema.table format. e.g. to match all tables starting with customer in Customer database and public schema, use the regex 'Customer.public.customer.*'"
},
"view_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns for views to filter in ingestion. Note: Defaults to table_pattern if not specified. Specify regex to match the entire view name in database.schema.view format. e.g. to match all views starting with customer in Customer database and public schema, use the regex 'Customer.public.customer.*'"
},
"classification": {
"$ref": "#/$defs/ClassificationConfig",
"default": {
"enabled": false,
"sample_size": 100,
"max_workers": 4,
"table_pattern": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"column_pattern": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"info_type_to_term": {},
"classifiers": [
{
"config": null,
"type": "datahub"
}
]
},
"description": "For details, refer to [Classification](../../../../metadata-ingestion/docs/dev_guides/classification.md)."
},
"incremental_lineage": {
"default": false,
"description": "When enabled, emits lineage as incremental to existing lineage already in DataHub. When disabled, re-states lineage on each run.",
"title": "Incremental Lineage",
"type": "boolean"
},
"convert_urns_to_lowercase": {
"default": false,
"description": "Whether to convert dataset urns to lowercase.",
"title": "Convert Urns To Lowercase",
"type": "boolean"
},
"env": {
"default": "PROD",
"description": "The environment that all assets produced by this connector belong to",
"title": "Env",
"type": "string"
},
"platform_instance": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "The instance of the platform that all assets produced by this recipe belong to. This should be unique within the platform. See https://docs.datahub.com/docs/platform-instances/ for more details.",
"title": "Platform Instance"
},
"stateful_ingestion": {
"anyOf": [
{
"$ref": "#/$defs/StatefulStaleMetadataRemovalConfig"
},
{
"type": "null"
}
],
"default": null
},
"options": {
"additionalProperties": true,
"description": "Any options specified here will be passed to [SQLAlchemy.create_engine](https://docs.sqlalchemy.org/en/14/core/engines.html#sqlalchemy.create_engine) as kwargs. To set connection arguments in the URL, specify them under `connect_args`.",
"title": "Options",
"type": "object"
},
"profile_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns to filter tables (or specific columns) for profiling during ingestion. Note that only tables allowed by the `table_pattern` will be considered."
},
"domain": {
"additionalProperties": {
"$ref": "#/$defs/AllowDenyPattern"
},
"default": {},
"description": "Attach domains to databases, schemas or tables during ingestion using regex patterns. Domain key can be a guid like *urn:li:domain:ec428203-ce86-4db3-985d-5a8ee6df32ba* or a string like \"Marketing\".) If you provide strings, then datahub will attempt to resolve this name to a guid, and will error out if this fails. There can be multiple domain keys specified.",
"title": "Domain",
"type": "object"
},
"include_views": {
"default": true,
"description": "Whether views should be ingested.",
"title": "Include Views",
"type": "boolean"
},
"include_tables": {
"default": true,
"description": "Whether tables should be ingested.",
"title": "Include Tables",
"type": "boolean"
},
"include_table_location_lineage": {
"default": true,
"description": "If the source supports it, include table lineage to the underlying storage location.",
"title": "Include Table Location Lineage",
"type": "boolean"
},
"include_view_lineage": {
"default": true,
"description": "Populates view->view and table->view lineage using DataHub's sql parser.",
"title": "Include View Lineage",
"type": "boolean"
},
"include_view_column_lineage": {
"default": true,
"description": "Populates column-level lineage for view->view and table->view lineage using DataHub's sql parser. Requires `include_view_lineage` to be enabled.",
"title": "Include View Column Lineage",
"type": "boolean"
},
"use_file_backed_cache": {
"default": true,
"description": "Whether to use a file backed cache for the view definitions.",
"title": "Use File Backed Cache",
"type": "boolean"
},
"profiling": {
"$ref": "#/$defs/GEProfilingConfig",
"default": {
"enabled": false,
"operation_config": {
"lower_freq_profile_enabled": false,
"profile_date_of_month": null,
"profile_day_of_week": null
},
"limit": null,
"offset": null,
"profile_table_level_only": false,
"include_field_null_count": true,
"include_field_distinct_count": true,
"include_field_min_value": true,
"include_field_max_value": true,
"include_field_mean_value": true,
"include_field_median_value": true,
"include_field_stddev_value": true,
"include_field_quantiles": false,
"include_field_distinct_value_frequencies": false,
"include_field_histogram": false,
"include_field_sample_values": true,
"max_workers": 20,
"report_dropped_profiles": false,
"turn_off_expensive_profiling_metrics": false,
"field_sample_values_limit": 20,
"max_number_of_fields_to_profile": null,
"profile_if_updated_since_days": null,
"profile_table_size_limit": 5,
"profile_table_row_limit": 5000000,
"profile_table_row_count_estimate_only": false,
"query_combiner_enabled": true,
"catch_exceptions": true,
"partition_profiling_enabled": true,
"partition_datetime": null,
"use_sampling": true,
"sample_size": 10000,
"profile_external_tables": false,
"tags_to_ignore_sampling": null,
"profile_nested_fields": false
}
},
"username": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "username",
"title": "Username"
},
"password": {
"anyOf": [
{
"format": "password",
"type": "string",
"writeOnly": true
},
{
"type": "null"
}
],
"default": null,
"description": "password",
"title": "Password"
},
"host_port": {
"description": "host URL",
"title": "Host Port",
"type": "string"
},
"database": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "database (catalog). If set to Null, all databases will be considered for ingestion.",
"title": "Database"
},
"scheme": {
"default": "postgresql+psycopg2",
"description": "database scheme",
"title": "Scheme",
"type": "string"
},
"sqlalchemy_uri": {
"anyOf": [
{
"type": "string"
},
{
"type": "null"
}
],
"default": null,
"description": "URI of database to connect to. See https://docs.sqlalchemy.org/en/14/core/engines.html#database-urls. Takes precedence over other connection parameters.",
"title": "Sqlalchemy Uri"
},
"auth_mode": {
"$ref": "#/$defs/PostgresAuthMode",
"default": "PASSWORD",
"description": "Authentication mode to use for the PostgreSQL connection. Options are 'PASSWORD' (default) for standard username/password authentication, or 'AWS_IAM' for AWS RDS IAM authentication."
},
"aws_config": {
"$ref": "#/$defs/AwsConnectionConfig",
"description": "AWS configuration for RDS IAM authentication (only used when auth_mode is AWS_IAM). Provides full control over AWS credentials, region, profiles, role assumption, retry logic, and proxy settings. If not explicitly configured, boto3 will automatically use the default credential chain and region from environment variables (AWS_DEFAULT_REGION, AWS_REGION), AWS config files (~/.aws/config), or IAM role metadata."
},
"database_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns for databases to filter in ingestion. Note: this is not used if `database` or `sqlalchemy_uri` are provided."
},
"initial_database": {
"default": "postgres",
"description": "Initial database used to query for the list of databases, when ingesting multiple databases. Note: this is not used if `database` or `sqlalchemy_uri` are provided.",
"title": "Initial Database",
"type": "string"
},
"include_stored_procedures": {
"default": true,
"description": "Include ingest of stored procedures.",
"title": "Include Stored Procedures",
"type": "boolean"
},
"procedure_pattern": {
"$ref": "#/$defs/AllowDenyPattern",
"default": {
"allow": [
".*"
],
"deny": [],
"ignoreCase": true
},
"description": "Regex patterns for stored procedures to filter in ingestion.Specify regex to match the entire procedure name in database.schema.procedure_name format. e.g. to match all procedures starting with customer in Customer database and public schema, use the regex 'Customer.public.customer.*'"
},
"include_query_lineage": {
"default": false,
"description": "Enable query-based lineage extraction from pg_stat_statements. Requires the pg_stat_statements extension to be installed and enabled. See documentation for setup instructions.",
"title": "Include Query Lineage",
"type": "boolean"
},
"max_queries_to_extract": {
"default": 1000,
"description": "Maximum number of queries to extract from pg_stat_statements for lineage analysis. Queries are prioritized by execution time and frequency.",
"title": "Max Queries To Extract",
"type": "integer"
},
"min_query_calls": {
"default": 1,
"description": "Minimum number of executions required for a query to be included. Set higher to focus on frequently-used queries.",
"title": "Min Query Calls",
"type": "integer"
},
"query_exclude_patterns": {
"anyOf": [
{
"items": {
"type": "string"
},
"type": "array"
},
{
"type": "null"
}
],
"default": null,
"description": "SQL LIKE patterns to exclude from query extraction. Example: ['%pg_catalog%', '%temp_%'] to exclude catalog and temp tables.",
"title": "Query Exclude Patterns"
},
"include_usage_statistics": {
"default": false,
"description": "Generate usage statistics from query history. Requires include_query_lineage to be enabled. Collects metrics like unique user counts, query frequencies, and column access patterns. Statistics appear in DataHub UI under the Dataset Profile > Usage tab.",
"title": "Include Usage Statistics",
"type": "boolean"
}
},
"required": [
"host_port"
],
"title": "PostgresConfig",
"type": "object"
}
Code Coordinates
- Class Name:
datahub.ingestion.source.sql.postgres.source.PostgresSource - Browse on GitHub
Questions
If you've got any questions on configuring ingestion for Postgres, feel free to ping us on our Slack.