Cosmos Config
This page lists all available Airflow configurations that affect Astronomer Cosmos (astronomer-cosmos) behavior. They can be set in the airflow.cfg file or via environment variables.
Note
For more information, see Setting Configuration Options.
Sections:
[cosmos]
[openlineage]
[cosmos]
- cache_dir:
The directory used for caching Cosmos data.
Default: {TMPDIR}/cosmos (where {TMPDIR} is the system temporary directory)
Environment Variable: AIRFLOW__COSMOS__CACHE_DIR
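In Python terms, the default resolves as follows (a minimal sketch using the standard library; the exact resolution logic inside Cosmos may differ):

    import tempfile
    from pathlib import Path

    # Equivalent of the documented default: {TMPDIR}/cosmos
    default_cache_dir = Path(tempfile.gettempdir()) / "cosmos"
    print(default_cache_dir)  # e.g. /tmp/cosmos on Linux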
- enable_cache:
Enable or disable caching of Cosmos data.
Default: True
Environment Variable: AIRFLOW__COSMOS__ENABLE_CACHE
- enable_cache_dbt_ls:
Enable or disable caching of the dbt ls command output in an Airflow Variable when using LoadMode.DBT_LS. See the example below.
Default: True
Environment Variable: AIRFLOW__COSMOS__ENABLE_CACHE_DBT_LS
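Example (a sketch; the project path, profile details, and DAG id are illustrative):

    from cosmos import DbtDag, LoadMode, ProfileConfig, ProjectConfig, RenderConfig

    dag = DbtDag(
        dag_id="jaffle_shop",  # illustrative
        project_config=ProjectConfig(dbt_project_path="/path/to/jaffle_shop"),
        profile_config=ProfileConfig(
            profile_name="jaffle_shop",
            target_name="dev",
            profiles_yml_filepath="/path/to/profiles.yml",
        ),
        # The dbt ls output used for rendering is cached in an Airflow Variable
        # while enable_cache and enable_cache_dbt_ls are both True (the default)
        render_config=RenderConfig(load_method=LoadMode.DBT_LS),
    )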
- enable_cache_dbt_yaml_selectors:
Enable or disable caching of YAML selectors in an Airflow Variable when using LoadMode.DBT_MANIFEST with RenderConfig.selector. See the example below.
Default: True
Environment Variable: AIRFLOW__COSMOS__ENABLE_CACHE_DBT_YAML_SELECTORS
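Example (a sketch; the manifest path, project name, and selector name are illustrative and assume a selectors.yml entry named nightly):

    from cosmos import LoadMode, ProjectConfig, RenderConfig

    project_config = ProjectConfig(
        manifest_path="/path/to/target/manifest.json",
        project_name="jaffle_shop",
    )
    # The "nightly" YAML selector is resolved from selectors.yml; its resolution
    # is cached in an Airflow Variable while enable_cache_dbt_yaml_selectors is True
    render_config = RenderConfig(load_method=LoadMode.DBT_MANIFEST, selector="nightly")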
- enable_cache_partial_parse:
Enable or disable caching of dbt partial parse files on the local disk.
Default: True
Environment Variable: AIRFLOW__COSMOS__ENABLE_CACHE_PARTIAL_PARSE
- enable_cache_package_lockfile:
Enable or disable caching of the dbt project's package lockfile.
Default: True
Environment Variable: AIRFLOW__COSMOS__ENABLE_CACHE_PACKAGE_LOCKFILE
- propagate_logs:
Whether to propagate logs in the Cosmos module.
Default: True
Environment Variable: AIRFLOW__COSMOS__PROPAGATE_LOGS
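For context, this maps onto standard Python logging propagation (an illustrative snippet of the general mechanism, assuming a parent logger named cosmos; not Cosmos's actual internals):

    import logging

    # With propagation off, records emitted by this logger are no longer
    # forwarded to the root logger's handlers
    logging.getLogger("cosmos").propagate = False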
- dbt_docs_projects:
(Introduced in Cosmos 1.11.0; applicable to Airflow >= 3.1): JSON mapping configuring one or more dbt docs projects for the Airflow 3 UI plugin.
Structure: a mapping of slug to a dict with keys dir (required), index (optional, default index.html), name (optional, label in the menu), and conn_id (optional connection to read remote storage). A "slug" here is a short, URL-safe identifier you choose for each docs project. It is used in the path segment /cosmos/<slug>/… and in the UI menu label mapping. Prefer lowercase letters, numbers, and hyphens/underscores (e.g., core, mart, jaffle-shop).
Example:
    [cosmos]
    dbt_docs_projects = {
        "core": {"dir": "/path/to/core/target", "index": "index.html", "name": "dbt Docs (Core)"},
        "mart": {"dir": "s3://bucket/path/to/mart/target", "conn_id": "aws_default", "name": "dbt Docs (Mart)"}
    }
Environment Variable: AIRFLOW__COSMOS__DBT_DOCS_PROJECTS
    export AIRFLOW__COSMOS__DBT_DOCS_PROJECTS='{"core":{"dir":"/path/to/core/target","index":"index.html","name":"dbt Docs (Core)"},"mart":{"dir":"s3://bucket/path/to/mart/target","conn_id":"aws_default","name":"dbt Docs (Mart)"}}'
- dbt_docs_dir:
(Applicable to Airflow 2): The directory path for dbt documentation.
Default: None
Environment Variable: AIRFLOW__COSMOS__DBT_DOCS_DIR
- dbt_docs_conn_id:
(Applicable to Airflow 2): The connection ID for dbt documentation.
Default: None
Environment Variable: AIRFLOW__COSMOS__DBT_DOCS_CONN_ID
- default_copy_dbt_packages:
(Introduced in Cosmos 1.10.0): By default, Cosmos 1.x either installs dbt deps or creates a symbolic link to the original dbt_packages folder. This configuration changes that behaviour by copying the dbt project's dbt_packages folder instead of creating symbolic links, so Cosmos can run dbt deps incrementally. Can be overridden per DbtDag or DbtTaskGroup via ProjectConfig.copy_dbt_packages, or at the operator level via operator_args={"copy_dbt_packages": True}, as shown below.
Default: False
Environment Variable: AIRFLOW__COSMOS__DEFAULT_COPY_DBT_PACKAGES
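Example of the per-project override described above (the project path is illustrative):

    from cosmos import ProjectConfig

    # Overrides [cosmos] default_copy_dbt_packages for this project only:
    # dbt_packages is copied rather than symlinked, so dbt deps can run incrementally
    project_config = ProjectConfig(
        dbt_project_path="/path/to/jaffle_shop",
        copy_dbt_packages=True,
    )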
- enable_cache_profile:
Enable caching for the dbt profile.
Default: True
Environment Variable: AIRFLOW__COSMOS__ENABLE_CACHE_PROFILE
- pre_dbt_fusion:
Default: False
Environment Variable: AIRFLOW__COSMOS__PRE_DBT_FUSION
- profile_cache_dir_name:
Folder name used to store cached dbt profiles. This will be a sub-folder of cache_dir.
Default: profile
Environment Variable: AIRFLOW__COSMOS__PROFILE_CACHE_DIR_NAME
- remote_cache_dir:
The remote directory to store the dbt cache. Starting with Cosmos 1.6.0, you can store the dbt ls output as cache in a remote location (an alternative to the Airflow Variable cache approach available since Cosmos 1.5.0) using this configuration. The value can be any scheme supported by the Airflow Object Store feature (e.g. s3://your_s3_bucket/cache_dir/, gs://your_gs_bucket/cache_dir/, abfs://your_azure_container/cache_dir, etc.).
This is an experimental feature available since Cosmos 1.6 to gather user feedback, and it will be merged into the cache_dir setting in upcoming releases.
Default: None
Environment Variable: AIRFLOW__COSMOS__REMOTE_CACHE_DIR
- remote_cache_dir_conn_id:
The connection ID for the remote cache directory. If this is not set, the default Airflow connection ID identified for the scheme will be used.
Default: None
Environment Variable: AIRFLOW__COSMOS__REMOTE_CACHE_DIR_CONN_ID
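These values rely on Airflow's Object Store abstraction; the following sketch shows how such a scheme/connection pair resolves to a path object (the bucket and connection ID are illustrative):

    from airflow.io.path import ObjectStoragePath

    # Equivalent of remote_cache_dir = s3://your_s3_bucket/cache_dir/
    # combined with remote_cache_dir_conn_id = aws_default
    cache_root = ObjectStoragePath("s3://your_s3_bucket/cache_dir/", conn_id="aws_default")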
- remote_target_path:
(Introduced in Cosmos 1.7.0): The path to the remote target directory, used to remotely copy and store the files dbt generates in the project's target directory. Although the intent is to cover the whole target directory, Cosmos currently only supports copying files from the compiled directory within the target folder, and only when the execution mode is set to ExecutionMode.AIRFLOW_ASYNC. Future releases will add support for copying additional files from the target directory. The value can be any scheme supported by the Airflow Object Store feature (e.g. s3://your_s3_bucket/target_dir/, gs://your_gs_bucket/target_dir/, abfs://your_azure_container/target_dir, etc.).
Default: None
Environment Variable: AIRFLOW__COSMOS__REMOTE_TARGET_PATH
- remote_target_path_conn_id:
(Introduced in Cosmos 1.7.0): The connection ID for the remote target path. If this is not set, the default Airflow connection ID identified for the scheme will be used.
Default: None
Environment Variable: AIRFLOW__COSMOS__REMOTE_TARGET_PATH_CONN_ID
- enable_setup_async_task:
(Introduced in Cosmos 1.9.0): Enables a setup task for ExecutionMode.AIRFLOW_ASYNC that generates SQL files and uploads them to a remote location (S3/GCS), preventing the run command from being executed on every node. You need to set the remote_target_path and remote_target_path_conn_id configurations so the artifacts can be uploaded to the remote location. See the combined sketch below.
Default: True
Environment Variable: AIRFLOW__COSMOS__ENABLE_SETUP_ASYNC_TASK
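A combined sketch of the async settings above (the bucket, connection ID, profile, and paths are illustrative):

    from cosmos import DbtDag, ExecutionConfig, ExecutionMode, ProfileConfig, ProjectConfig

    # Normally set at deployment level, shown here for illustration:
    # AIRFLOW__COSMOS__REMOTE_TARGET_PATH=s3://your_s3_bucket/target_dir/
    # AIRFLOW__COSMOS__REMOTE_TARGET_PATH_CONN_ID=aws_default

    dag = DbtDag(
        dag_id="jaffle_shop_async",  # illustrative
        project_config=ProjectConfig(dbt_project_path="/path/to/jaffle_shop"),
        profile_config=ProfileConfig(
            profile_name="jaffle_shop",
            target_name="dev",
            profiles_yml_filepath="/path/to/profiles.yml",
        ),
        # With enable_setup_async_task (default True), a setup task compiles the SQL
        # once and uploads it to remote_target_path, instead of running dbt per task
        execution_config=ExecutionConfig(execution_mode=ExecutionMode.AIRFLOW_ASYNC),
    )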
- enable_teardown_async_task:
(Introduced in Cosmos 1.9.0): Enables a teardown task for ExecutionMode.AIRFLOW_ASYNC that deletes the SQL files from the remote location (S3/GCS). You need to set the remote_target_path and remote_target_path_conn_id configurations so the artifacts can be deleted from the remote location.
Default: True
Environment Variable: AIRFLOW__COSMOS__ENABLE_TEARDOWN_ASYNC_TASK
- upload_sql_to_xcom:
(Introduced in Cosmos 1.11.0): Enable this if the setup async task is enabled for ExecutionMode.AIRFLOW_ASYNC and you want to upload the compiled SQL to Airflow XCom instead of a remote location (e.g., S3 or GCS).
Default: True
Environment Variable: AIRFLOW__COSMOS__UPLOAD_SQL_TO_XCOM
- use_dataset_airflow3_uri_standard:
(Introduced in Cosmos 1.10.0): Changes Cosmos Dataset (Asset) URIs to be Airflow 3 compliant. Since this would be a breaking change, it is False by default in Cosmos 1.x.
Default: False
Environment Variable: AIRFLOW__COSMOS__USE_DATASET_AIRFLOW3_URI_STANDARD
- enable_memory_optimised_imports:
(Introduced in Cosmos 1.10.1): Eager imports in cosmos/__init__.py expose all Cosmos classes at the top level, which can significantly increase memory usage, even when Cosmos is merely installed but not actively used. This option disables those eager imports to reduce the memory footprint. When enabled, users must access Cosmos classes via their full module paths, avoiding the overhead of importing unused modules and classes.
Default: False
Environment Variable: AIRFLOW__COSMOS__ENABLE_MEMORY_OPTIMISED_IMPORTS
Note
This option will become the default behavior in Cosmos 2.0.0, where all eager imports will be removed from cosmos/__init__.py. When this option is enabled, imports must use full module paths:
    from cosmos.airflow.dag import DbtDag
    from cosmos.config import ProfileConfig, ProjectConfig
as opposed to the top-level imports available when this option is disabled (the default):
    from cosmos import DbtDag, ProfileConfig, ProjectConfig
- enable_debug_mode:
Enable or disable debug mode. When enabled, Cosmos will track memory utilization for its tasks and push the peak memory usage (in MB) to XCom under the key cosmos_debug_max_memory_mb. This is useful for profiling and optimizing resource allocation for dbt tasks. Requires psutil to be installed.
Default: False
Environment Variable: AIRFLOW__COSMOS__ENABLE_DEBUG_MODE
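Example of reading the recorded peak from a downstream task (a sketch; the task ID is illustrative):

    from airflow.decorators import task

    @task
    def report_peak_memory(ti=None):
        # Pull the value Cosmos pushed under the documented XCom key
        peak_mb = ti.xcom_pull(task_ids="my_model.run", key="cosmos_debug_max_memory_mb")
        print(f"Peak memory used by the dbt task: {peak_mb} MB")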
- debug_memory_poll_interval_seconds:
The interval (in seconds) at which memory utilization is polled when debug mode is enabled. Lower values provide more accurate peak memory measurements but may add slight overhead.
Default: 0.5
Environment Variable: AIRFLOW__COSMOS__DEBUG_MEMORY_POLL_INTERVAL_SECONDS
- watcher_dbt_execution_queue:
(Introduced in Cosmos 1.14.0): When using watcher execution mode, tasks may or may not need to run dbt, depending on their type (producer vs. consumer) and the retry number. When running a dbt command, tasks use more resources (CPU and memory) than when behaving as sensors, and the computational cost can vary widely. For example, a Cosmos watcher sensor consumes approximately 200 MB, compared to 700 MB for a dbt build task running a project with almost 200 dbt models. This configuration lets users define which queue to use when dbt commands are run, optimising their Airflow deployment. Internally, Cosmos leverages the Airflow cluster policy feature (https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/cluster-policies.html); see the sketch after this entry. As of now, tasks are automatically assigned to the specified queue:
- for watcher producer tasks, during their first execution
- for watcher consumer tasks, from their first retry onwards
This behavior is enforced by Cosmos via an Airflow policy (task_instance_mutation_hook) that mutates task_instance.queue at runtime for retry attempts. As a result, the configured watcher_dbt_execution_queue can overwrite any queue set directly on the operator, but only for retries; the initial run continues to use the operator's original queue.
Default: None
Environment Variable: AIRFLOW__COSMOS__WATCHER_DBT_EXECUTION_QUEUE
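For background, the Airflow cluster policy mechanism Cosmos leverages looks like this (an illustrative sketch of a task_instance_mutation_hook in airflow_local_settings.py, not Cosmos's actual implementation; the matching condition and queue name are assumptions):

    def task_instance_mutation_hook(task_instance):
        # Route retry attempts of selected tasks to a dedicated queue, similar in
        # spirit to how Cosmos re-queues watcher tasks for dbt execution
        if task_instance.try_number > 1 and task_instance.task_id.endswith("_watcher"):
            task_instance.queue = "dbt-execution-queue"  # hypothetical queue name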
[openlineage]
- namespace:
The OpenLineage namespace for tracking lineage.
Default: If not set in the Airflow configuration, Cosmos falls back to the environment variable OPENLINEAGE_NAMESPACE; if that is also unset, it uses DEFAULT_OPENLINEAGE_NAMESPACE.
Environment Variable: AIRFLOW__OPENLINEAGE__NAMESPACE
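The documented fallback order can be sketched as follows (illustrative; the value of DEFAULT_OPENLINEAGE_NAMESPACE is an assumption):

    import os

    from airflow.configuration import conf

    DEFAULT_OPENLINEAGE_NAMESPACE = "cosmos"  # assumption: Cosmos's built-in fallback

    namespace = (
        conf.get("openlineage", "namespace", fallback=None)
        or os.getenv("OPENLINEAGE_NAMESPACE")
        or DEFAULT_OPENLINEAGE_NAMESPACE
    )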
Note
For more information, see OpenLineage Configuration Options.
Environment Variables
- LINEAGE_NAMESPACE:
The OpenLineage namespace for tracking lineage.
Default: If not set in the Airflow configuration, Cosmos falls back to the environment variable OPENLINEAGE_NAMESPACE; if that is also unset, it uses DEFAULT_OPENLINEAGE_NAMESPACE.