Merged
Commits
19 commits
6a11f6d  Rename product names: Workflows→Jobs, Delta Live Tables→Spark Declara… (lennartkats-db, Apr 14, 2026)
2f72a10  Regenerate acceptance test outputs for product name changes (lennartkats-db, Apr 14, 2026)
d4c3378  Regenerate JSON schema for annotation changes (lennartkats-db, Apr 14, 2026)
ecf3c20  Fix combinations test: remove !0 unchanged assertion (lennartkats-db, Apr 14, 2026)
766df90  Merge branch 'main' into product-name-changes (simonfaltum, Apr 14, 2026)
37c7dd4  Revert default-python combinations test parameter rename (lennartkats-db, Apr 14, 2026)
4c772d8  Restore deploy-experimental output.txt corrupted by local test run (lennartkats-db, Apr 14, 2026)
05ee748  Simplify pipeline comment, rename dltActions to pipelineActions in de… (lennartkats-db, Apr 16, 2026)
5890418  Regenerate schema and docs from annotations instead of manual edits (lennartkats-db, Apr 16, 2026)
ff8a2dc  Address juliacrawf-db review: fix product naming per style guide (lennartkats-db, Apr 17, 2026)
41d318b  Keep full product name inside link: "create [Spark Declarative Pipeli… (lennartkats-db, Apr 17, 2026)
f6b016e  Apply suggestions from code review (lennartkats-db, Apr 17, 2026)
c2fd8a3  Merge remote-tracking branch 'origin/product-name-changes' into produ… (lennartkats-db, Apr 17, 2026)
3891f3f  Add Lakeflow prefix to remaining Spark Declarative Pipelines references (lennartkats-db, Apr 17, 2026)
d699554  Fix template description and regenerate acceptance test outputs (lennartkats-db, Apr 17, 2026)
220b427  Revert all changes to hidden experimental-jobs-as-code template (lennartkats-db, Apr 17, 2026)
1aa1a31  Simplify pipeline references: use generic 'pipeline' instead of produ… (lennartkats-db, Apr 17, 2026)
92acf39  Merge remote-tracking branch 'origin/main' into product-name-changes (lennartkats-db, Apr 17, 2026)
8eb6e9d  Fix deploy-experimental acceptance test output for direct engine variant (lennartkats-db, Apr 17, 2026)
6 changes: 3 additions & 3 deletions acceptance/bundle/help/bundle-generate-pipeline/output.txt
@@ -1,13 +1,13 @@
 
 >>> [CLI] bundle generate pipeline --help
-Generate bundle configuration for an existing Delta Live Tables pipeline.
+Generate bundle configuration for an existing pipeline.
 
-This command downloads an existing Lakeflow Spark Declarative Pipeline's configuration and any associated
+This command downloads an existing pipeline's configuration and any associated
 notebooks, creating bundle files that you can use to deploy the pipeline to other
 environments or manage it as code.
 
 Examples:
-# Import a production Lakeflow Spark Declarative Pipeline
+# Import a production pipeline
 databricks bundle generate pipeline --existing-pipeline-id abc123 --key etl_pipeline
 
 # Organize files in custom directories
2 changes: 1 addition & 1 deletion acceptance/bundle/help/bundle-open/output.txt
@@ -4,7 +4,7 @@ Open a deployed bundle resource in the Databricks workspace.
 
 Examples:
 databricks bundle open # Prompts to select a resource to open
-databricks bundle open my_job # Open specific job in Workflows UI
+databricks bundle open my_job # Open specific job in Jobs UI
 databricks bundle open my_dashboard # Open dashboard in browser
 
 Use after deployment to quickly navigate to your resources in the workspace.
@@ -9,5 +9,5 @@ resources:
 
 variables:
   notebook_dir:
-    description: Directory with DLT notebooks
+    description: Directory with SDP notebooks
     default: non-existent
@@ -6,5 +6,5 @@ include:
 
 variables:
   notebook_dir:
-    description: Directory with DLT notebooks
+    description: Directory with SDP notebooks
     default: notebooks
4 changes: 2 additions & 2 deletions acceptance/bundle/paths/pipeline_globs/root/databricks.yml
@@ -6,8 +6,8 @@ include:
 
 variables:
   notebook_dir:
-    description: Directory with DLT notebooks
+    description: Directory with SDP notebooks
     default: notebooks
   file_dir:
-    description: Directory with DLT files
+    description: Directory with SDP files
     default: files
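The diff above only touches the variable descriptions; for context, a minimal sketch of how such bundle variables are typically consumed elsewhere in the configuration, assuming the standard `${var.…}` interpolation syntax of Databricks bundles (the pipeline resource key and notebook filename here are invented for illustration):

```yaml
resources:
  pipelines:
    example_pipeline:            # hypothetical resource key
      libraries:
        - notebook:
            # With the defaults above, this resolves to notebooks/ingest
            path: ${var.notebook_dir}/ingest
```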
2 changes: 1 addition & 1 deletion acceptance/bundle/run_as/pipelines_legacy/output.txt
@@ -1,6 +1,6 @@
 
 >>> [CLI] bundle validate -o json
-Warning: You are using the legacy mode of run_as. The support for this mode is experimental and might be removed in a future release of the CLI. In order to run the DLT pipelines in your DAB as the run_as user this mode changes the owners of the pipelines to the run_as identity, which requires the user deploying the bundle to be a workspace admin, and also a Metastore admin if the pipeline target is in UC.
+Warning: You are using the legacy mode of run_as. The support for this mode is experimental and might be removed in a future release of the CLI. In order to run the pipelines in your DABs project as the run_as user this mode changes the owners of the pipelines to the run_as identity, which requires the user deploying the bundle to be a workspace admin, and also a Metastore admin if the pipeline target is in UC.
 at experimental.use_legacy_run_as
 in databricks.yml:8:22
 
2 changes: 1 addition & 1 deletion acceptance/bundle/telemetry/deploy-experimental/output.txt
@@ -1,6 +1,6 @@
 
 >>> [CLI] bundle deploy
-Warning: You are using the legacy mode of run_as. The support for this mode is experimental and might be removed in a future release of the CLI. In order to run the DLT pipelines in your DAB as the run_as user this mode changes the owners of the pipelines to the run_as identity, which requires the user deploying the bundle to be a workspace admin, and also a Metastore admin if the pipeline target is in UC.
+Warning: You are using the legacy mode of run_as. The support for this mode is experimental and might be removed in a future release of the CLI. In order to run the pipelines in your DABs project as the run_as user this mode changes the owners of the pipelines to the run_as identity, which requires the user deploying the bundle to be a workspace admin, and also a Metastore admin if the pipeline target is in UC.
 at experimental.use_legacy_run_as
 in databricks.yml:5:22
 
@@ -102,7 +102,7 @@ on CI/CD setup.
 ## Manually deploying to Databricks with Declarative Automation Bundles
 
 Declarative Automation Bundles can be used to deploy to Databricks and to execute
-dbt commands as a job using Databricks Workflows. See
+dbt commands as a job using Databricks Jobs. See
 https://docs.databricks.com/dev-tools/bundles/index.html to learn more.
 
 Use the Databricks CLI to deploy a development copy of this project to a workspace:
@@ -117,7 +117,7 @@ is optional here.)
 This deploys everything that's defined for this project.
 For example, the default template would deploy a job called
 `[dev yourname] my_dbt_sql_job` to your workspace.
-You can find that job by opening your workpace and clicking on **Workflows**.
+You can find that job by opening your workspace and clicking on **Jobs & Pipelines**.
@@ -21,7 +21,7 @@ The 'my_default_scala' project was generated by using the default-scala template
 This deploys everything that's defined for this project.
 For example, the default template would deploy a job called
 `[dev yourname] my_default_scala_job` to your workspace.
-You can find that job by opening your workspace and clicking on **Workflows**.
+You can find that job by opening your workspace and clicking on **Jobs & Pipelines**.
 
 4. Similarly, to deploy a production copy, type:
 ```
@@ -21,7 +21,7 @@ The 'my_default_sql' project was generated by using the default-sql template.
 This deploys everything that's defined for this project.
 For example, the default template would deploy a job called
 `[dev yourname] my_default_sql_job` to your workspace.
-You can find that job by opening your workpace and clicking on **Workflows**.
+You can find that job by opening your workspace and clicking on **Jobs & Pipelines**.
 
 4. Similarly, to deploy a production copy, type:
 ```
@@ -1,4 +1,4 @@
--- This query is executed using Databricks Workflows (see resources/my_default_sql_sql.job.yml)
+-- This query is executed using Databricks Jobs (see resources/my_default_sql_sql.job.yml)
 
 USE CATALOG {{catalog}};
 USE IDENTIFIER({{schema}});
@@ -1,4 +1,4 @@
--- This query is executed using Databricks Workflows (see resources/my_default_sql_sql.job.yml)
+-- This query is executed using Databricks Jobs (see resources/my_default_sql_sql.job.yml)
 --
 -- The streaming table below ingests all JSON files in /databricks-datasets/retail-org/sales_orders/
 -- See also https://docs.databricks.com/sql/language-manual/sql-ref-syntax-ddl-create-streaming-table.html
@@ -12,7 +12,7 @@ import (
 
 type captureUCDependencies struct{}
 
-// If a user defines a UC schema in the bundle, they can refer to it in DLT pipelines,
+// If a user defines a UC schema in the bundle, they can refer to it in SDP pipelines,
 // UC Volumes, Registered Models, Quality Monitors, or Model Serving Endpoints using the
 // `${resources.schemas.<schema_key>.name}` syntax. Using this syntax allows TF to capture
 // the deploy time dependency this resource has on the schema and deploy changes to the
@@ -110,7 +110,7 @@ func (m *captureUCDependencies) Apply(ctx context.Context, b *bundle.Bundle) diag.Diagnostics {
 		if p == nil {
 			continue
 		}
-		// "schema" and "target" have the same semantics in the DLT API but are mutually
+		// "schema" and "target" have the same semantics in the SDP API but are mutually
 		// exclusive i.e. only one can be set at a time.
 		p.Schema = resolveSchema(b, p.Catalog, p.Schema)
 		p.Target = resolveSchema(b, p.Catalog, p.Target)
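The comment in this mutator describes the `${resources.schemas.<schema_key>.name}` reference syntax. As a minimal sketch, a bundle configuration using that syntax might look like the following (the schema and pipeline resource keys, catalog, and schema name are invented for illustration):

```yaml
resources:
  schemas:
    etl_schema:                  # hypothetical schema key
      catalog_name: main
      name: etl
  pipelines:
    etl_pipeline:
      catalog: main
      # Referencing the schema via interpolation, rather than hardcoding "etl",
      # lets the deployment capture the pipeline's dependency on the schema,
      # as the comment above describes.
      schema: ${resources.schemas.etl_schema.name}
```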
4 changes: 2 additions & 2 deletions bundle/config/mutator/resourcemutator/run_as.go
@@ -183,7 +183,7 @@ func setRunAsForAlerts(b *bundle.Bundle) {
 	}
 }
 
-// Legacy behavior of run_as for DLT pipelines. Available under the experimental.use_run_as_legacy flag.
+// Legacy behavior of run_as for SDP pipelines. Available under the experimental.use_run_as_legacy flag.
 // Only available to unblock customers stuck due to breaking changes in https://github.com/databricks/cli/pull/1233
 func setPipelineOwnersToRunAsIdentity(b *bundle.Bundle) {
 	runAs := b.Config.RunAs
@@ -233,7 +233,7 @@ func (m *setRunAs) Apply(_ context.Context, b *bundle.Bundle) diag.Diagnostics {
 	return diag.Diagnostics{
 		{
 			Severity: diag.Warning,
-			Summary: "You are using the legacy mode of run_as. The support for this mode is experimental and might be removed in a future release of the CLI. In order to run the DLT pipelines in your DAB as the run_as user this mode changes the owners of the pipelines to the run_as identity, which requires the user deploying the bundle to be a workspace admin, and also a Metastore admin if the pipeline target is in UC.",
+			Summary: "You are using the legacy mode of run_as. The support for this mode is experimental and might be removed in a future release of the CLI. In order to run the pipelines in your DABs project as the run_as user this mode changes the owners of the pipelines to the run_as identity, which requires the user deploying the bundle to be a workspace admin, and also a Metastore admin if the pipeline target is in UC.",
 			Paths: []dyn.Path{dyn.MustPathFromString("experimental.use_legacy_run_as")},
 			Locations: b.Config.GetLocations("experimental.use_legacy_run_as"),
 		},
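For context, a sketch of the kind of configuration that triggers this warning, assuming the `experimental.use_legacy_run_as` key shown in the diagnostic path (the service principal name is invented):

```yaml
# databricks.yml (fragment)
run_as:
  service_principal_name: "0000-dummy-sp"   # hypothetical identity
experimental:
  use_legacy_run_as: true   # legacy mode; emits the warning above on validate/deploy
```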