Getting Started with Dataform on Google Cloud Platform (GCP)

Dataform is a powerful tool for managing and automating data workflows, designed specifically for teams working with Google BigQuery. It lets you build efficient data pipelines, organize SQL code, and apply practices such as dependency management and automatic documentation.

In this blog, we'll walk you through how to set up a workspace inside a Dataform repository, explain the default folder structure, explore the main tabs, and provide some JavaScript functions to make your workflow more dynamic and reusable.

Setting Up Your Dataform Workspace

Assuming you already have a Google Cloud Platform (GCP) project and BigQuery set up, the next step is to navigate to Dataform within BigQuery Studio.

Once inside Dataform, you will work within a repository. Within this repository, you can create multiple workspaces for different tasks or environments, such as development, staging, or production.

To get started:

  1. Navigate to your Dataform repository.
  2. Create a new workspace or open an existing one to begin working on your data pipelines.

Default Folder Structure in Dataform

When you create and initialize a new workspace, Dataform sets up a predefined project structure:

  • definitions/: Holds your SQLX and JavaScript files, which define the tables, views, incremental tables, declarations, assertions, and operations in your pipeline.
  • includes/: Holds reusable JavaScript constants and helper functions that can be referenced from any SQLX file in the project.
  • workflow_settings.yaml: The repository configuration file (older repositories use dataform.json instead), defining defaults such as the Google Cloud project and BigQuery dataset.

This structure helps to keep your codebase neat and manageable, especially as your project scales.

Managing Workflow in Dataform

Workflows in Dataform are built from the SQL actions and transformations defined in the definitions/ directory. Dataform automatically generates a dependency graph, which ensures that each transformation runs in the correct order based on table dependencies. This is especially useful for tracking data lineage and understanding how data flows through your pipeline.
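
Dependencies are not wired up by hand: whenever a SQLX file references another table through the ref() function, Dataform records that edge in the graph and builds the upstream table first. A minimal sketch, with hypothetical table and column names:

config {
  type: "table"
}

-- ref("stg_orders") makes this table depend on the stg_orders model,
-- so Dataform always builds stg_orders before this aggregation.
SELECT
  customer_id,
  COUNT(*) AS order_count
FROM ${ref("stg_orders")}
GROUP BY customer_id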

Tabs in Dataform

When working in Dataform, you'll come across several key tabs that make it easy to manage your transformations and workflows:

  • Code: This is where you write, edit, and organize your SQL code, as well as define models and scripts.
  • Compiled graph: This tab shows the dependency graph across all of your tables, letting you visualize and manage dependencies.
  • Executions: This tab lets you view previous runs of your workflow, showing details of each execution's performance and errors.
  • Start Execution: This tab allows you to manually run the entire workflow or specific parts of it. It’s particularly useful when you need to trigger an on-demand refresh of your tables.

Making Workflows Dynamic with JavaScript

Dataform allows you to incorporate JavaScript within your SQL transformations, making it easier to handle repetitive tasks, manage variables, and make your queries more flexible. Below are a few useful functions that you can implement.

Dynamically Declare Tables

Instead of hardcoding table names throughout your scripts, you can store them in an array and declare them dynamically from a single JavaScript file inside definitions/ (for example, definitions/declarations.js). Here's a JavaScript function that does this:


// definitions/declarations.js (example filename)
// Source tables that exist in BigQuery but are not created by this Dataform project.
// In BigQuery terms, database = project ID and schema = dataset.
const tables = [
  { name: "table1", database: "my-database", schema: "my-schema" },
  { name: "table2", database: "my-database", schema: "my-schema" },
  { name: "table3", database: "my-database", schema: "my-schema" },
];

// Declare each source table so it can be referenced downstream with ref().
function generateTableDeclarations() {
  tables.forEach(table => {
    declare({
      database: table.database,
      schema: table.schema,
      name: table.name,
    });
  });
}

// Call the function to generate the declarations.
generateTableDeclarations();

With this function, you can easily add, remove, or update tables without modifying multiple files.
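
Once declared, these sources can be referenced from any SQLX file with ref(), which also links them into the dependency graph. A minimal sketch:

config {
  type: "view"
}

-- "table1" resolves to the declared source my-database.my-schema.table1.
SELECT *
FROM ${ref("table1")}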

Using Variables for Dynamic Queries

You can also use JavaScript to define variables once and reuse them across different queries and transformations. Here's an example:


// includes/constants.js (example filename)
// Constants exported here can be referenced from any SQLX file as
// ${constants.selling_date}; the prefix matches the file name.
const selling_date = "2023-01-01";

module.exports = { selling_date };


You can now reference selling_date in your SQL transformations without hardcoding the value in multiple places.
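
For instance, assuming the file above is saved as includes/constants.js, a SQLX model (with a hypothetical sale_date column) can use the value like this:

config {
  type: "view"
}

-- At compile time, ${constants.selling_date} is replaced with "2023-01-01".
SELECT *
FROM ${ref("table1")}
WHERE sale_date >= '${constants.selling_date}'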

Conclusion

Dataform is a powerful tool for managing and automating data pipelines on GCP, especially when working with BigQuery. By leveraging its workspace structure, the flexibility of JavaScript, and the built-in dependency graph, you can build efficient, dynamic workflows that are easy to manage and scale.

If you liked this post, don’t forget to share and comment below!




How to manage SQL Data Analytics Workflows at Enterprise Level

In today’s fast-paced enterprise environment, managing SQL data analytics workflows effectively is critical to ensure data-driven decision-making. However, many companies still face significant challenges in their data processes, such as:

  • Monolithic SQL Queries: Large, complex queries often grow into unmanageable blocks of code. These monolithic SQL scripts can be difficult to understand, debug, or scale, especially as the business logic becomes more complex with added transformations and KPIs.

  • Lack of Collaboration: It's common for analysts and data engineers to save queries locally on their hard drives or scattered across shared folders. This ad-hoc way of working leads to siloed knowledge, lack of version control, and no easy way for others to collaborate or improve on existing work.

  • No Knowledge Sharing: Without centralized documentation or tools that encourage modularity, knowledge gets trapped in individuals’ heads. If a key analyst leaves the company or moves to a different role, the company often loses valuable institutional knowledge.

  • Ad-hoc Development: Many teams operate in an ad-hoc mode, where queries are quickly hacked together to get immediate results but aren’t built with scalability, reusability, or long-term maintenance in mind. This often leads to duplication of efforts, inconsistent metrics, and brittle workflows.

These challenges not only slow down workflows but also introduce risks in data quality, reporting accuracy, and scalability. To overcome these, companies need structured, modular, and version-controlled workflows that enable collaboration, automation, and knowledge sharing across teams.

In this blog, we’ll explore how to manage two distinct types of data analytics workflows—ETL & Descriptive Report Generation and Exploratory Data Analysis (EDA)—at the enterprise level. We'll look at the tools available on various cloud platforms (Google Cloud Platform, Microsoft Azure, and AWS) as well as cross-platform tools like Databricks and Snowflake, which can address these challenges.


1. ETL & Descriptive Report Generation

ETL (Extract, Transform, Load) and Descriptive Report Generation workflows are essential for transforming raw data into structured reports and KPIs that drive business decisions. These workflows are typically designed around well-defined schemas (star or snowflake) and produce outputs consumed by BI tools like Power BI.

Key Characteristics:

  • Data Transformation-Heavy: The primary focus is on transforming raw, often unstructured data into clean, organized, and structured datasets.
  • KPI & Business Logic: These workflows encode a large amount of business logic to calculate the metrics that are critical for day-to-day operations.
  • Structured Data Schemas: Data is typically organized in star or snowflake schemas for efficient querying and reporting.
  • BI Tool Integration: Outputs are often integrated into BI tools such as Power BI for dashboards, reports, and further analysis by business users.

Common Challenges:

  1. Monolithic SQL Queries: Many ETL workflows still rely on huge SQL scripts with numerous subqueries, Common Table Expressions (CTEs), and hardcoded business logic. These scripts are difficult to maintain and debug as they grow in complexity.

  2. Redundant Code: When SQL logic is not modular, it’s common for different team members to replicate the same transformations or KPIs across different queries, leading to duplication of efforts and inconsistencies in the data.

  3. Hard to Scale: As data volume grows, monolithic queries become slow and less efficient. Without proper modularity, it’s hard to optimize individual components of a query or pipeline.

  4. Collaboration Gaps: Saving queries on local machines or in isolated folders leads to a lack of collaboration. Team members cannot easily share or reuse code, resulting in fragmented knowledge and inconsistent reporting across departments.

Solution: Modular SQL Workflows

To address these challenges, modular SQL tools and workflows break complex SQL logic down into smaller, reusable components, keeping pipelines consistent, maintainable, and scalable. Here's how modular SQL tools help (a short sketch follows the list):

  • Reusability: Centralize business logic, KPIs, and transformations in reusable SQL modules.
  • Collaboration: Use version control systems like Git to manage SQL queries, making them accessible and shareable across teams.
  • Maintainability: Smaller SQL components are easier to debug and improve, reducing the risk of human error and improving performance.
  • Automation: Automate scheduling and execution of these modular SQL workflows, ensuring that reports and transformations are always up-to-date.
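
As one illustration, in a Dataform-style SQLX workflow (model and column names below are hypothetical), a shared staging model such as stg_orders can hold the cleaning logic once, and every report builds on it through ref() instead of re-implementing the same transformations:

config {
  type: "table"
}

-- Reuses the shared, version-controlled stg_orders model instead of
-- repeating its cleaning logic inside every report query.
SELECT
  order_date,
  SUM(order_total) AS daily_revenue
FROM ${ref("stg_orders")}
GROUP BY order_date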


2. Exploratory Data Analysis (EDA)

In contrast to ETL and Descriptive Report Generation, Exploratory Data Analysis (EDA) is an iterative, experiment-driven approach used by data scientists and analysts to uncover insights, trends, and patterns in data. Unlike ETL processes, EDA is not about generating structured outputs for BI tools but rather about flexibility, exploration, and discovery.

Key Characteristics:

  • Statistics-Heavy: EDA often involves statistical techniques to understand data distributions, correlations, and anomalies.
  • Python and Open-Source Libraries: The Python programming language, along with libraries like Pandas, Matplotlib, Seaborn, and Plotly, is commonly used for data wrangling and visualization in EDA workflows.
  • Interactive Visualization: Visualization is critical for uncovering insights in EDA, with tools that allow analysts to quickly plot and visualize datasets.
  • Iterative and Exploratory: The focus here is on flexibility. EDA workflows allow for rapid prototyping and hypothesis testing, with analysts frequently switching between different coding environments.

Common Challenges:

  1. Fragmented Tools: Many enterprises lack unified platforms that allow data scientists to switch seamlessly between SQL, Python, and visualization tools. This fragmentation can slow down the analysis process and hinder collaboration.

  2. Lack of Version Control: Notebooks are often developed ad hoc with little or no version control, which makes it hard to track changes or reproduce results later.

  3. Collaboration Barriers: Unlike SQL-based ETL workflows, EDA can be harder to share across teams due to the reliance on notebooks and Python environments.

Solution: Collaborative Notebook Environments

To support the exploratory nature of EDA workflows, notebook environments such as Databricks, Google Cloud's Vertex AI Notebooks, and Microsoft Fabric can be used. These environments allow data scientists to:

  • Switch Between SQL and Python: Seamlessly integrate SQL queries with Python code for data wrangling and statistics.
  • Visualization: Use Python-based libraries like Matplotlib, Seaborn, and Plotly for quick visualizations.
  • Prototyping: Rapidly experiment with different approaches without needing to build and maintain large SQL workflows.
  • Version Control: Incorporate Git and other version control tools directly into the notebook environment to track changes and collaborate effectively.

Recommended Tools for Enterprise Workflows

1. Google Cloud Platform (GCP)

  • ETL & Descriptive Reports: Use Dataform to modularize SQL queries and manage ETL processes. Dataform enables teams to centralize business logic and automate SQL workflows.
  • Exploratory Data Analysis: Leverage Vertex AI Notebooks for an interactive, exploratory environment that integrates SQL and Python, with support for version control and collaboration.

2. Microsoft Azure

  • ETL & Descriptive Reports: Microsoft Fabric can be used to manage data pipelines and build modular SQL workflows that feed into Power BI.
  • Exploratory Data Analysis: Microsoft Fabric also provides notebook environments similar to Databricks, allowing teams to conduct EDA, statistical analysis, and rapid prototyping.

3. Amazon Web Services (AWS)

  • ETL & Descriptive Reports: AWS offers AWS Glue for ETL operations and Amazon Redshift with stored procedures for modular SQL workflows. Redshift integrates well with Power BI.
  • Exploratory Data Analysis: SageMaker Studio Notebooks offer an integrated environment for Python-heavy EDA workflows with support for multiple languages, making it easy to conduct interactive analyses.

4. Cross-Cloud Tools: Databricks & Snowflake

  • Databricks: Works across all major cloud platforms (GCP, Azure, AWS) and offers robust notebook environments for both ETL and EDA. Databricks allows modular SQL and Python to be seamlessly combined in one workflow, making it a versatile tool for data analytics.
  • Snowflake: Snowflake is another cross-cloud solution that provides excellent support for modular SQL workflows, making it ideal for structured ETL processes. It also supports Python-based data manipulation through its Snowpark feature.

Conclusion

Effectively managing SQL data analytics workflows at an enterprise level requires the right tools for the right challenges. Modularizing SQL queries, enabling collaboration and version control, and adopting flexible notebook environments are critical to building scalable, maintainable data workflows. With cloud providers like GCP, Azure, and AWS offering robust solutions, and cross-cloud platforms like Databricks and Snowflake providing flexibility and advanced functionality, companies have powerful tools at their disposal to optimize both ETL and Exploratory Data Analysis workflows.
