
Data Quality Made Easy with Soda


The term data quality generally describes the degree to which data corresponds to the real-world things or facts it represents. Since it is often difficult or impossible in practice to assess the quality of data against this definition directly, it is usually estimated by evaluating the deviation from predefined assumptions. These assumptions originate from the specific domain and must first be identified and recorded, e.g. “The measured temperature is always between -10 °C and +50 °C due to the technical limitations of the sensor.” Assumptions can refer to the semantic or syntactic correctness of a data set as well as its timeliness or completeness. Besides specifying assumptions, ensuring data quality also involves continuously updating and regularly validating them, as well as establishing a process for handling anomalies. A detailed introduction to data quality can be found here.

Soda as a Tool for Ensuring Data Quality

To use Soda for ensuring data quality, you can choose between Soda Core and Soda Cloud. While Soda Core is free to use, Soda Cloud is a subscription-based SaaS offering that is accessed via the Soda Library. The range of functions and the support for external systems differ significantly between the two. With the Soda Checks Language (SodaCL), Soda offers a YAML-based domain-specific language that can be used, among other things, to define assumptions about data. More than 25 metrics* are already provided for this purpose, covering data types, missing values, data set size, and much more. This low-code approach allows Soda to be used without extensive programming knowledge and decouples the assumptions about a specific data set from the generic validation process.

In cases where these metrics are not sufficient, user-defined checks can be implemented based on SQL. Soda checks are defined in YAML format and provide the basis for validations (Soda scans). With alert levels and fail conditions*, SodaCL offers additional options for implementing more complex strategies for handling anomalies. Last but not least, Soda offers a wide range of integration options*, e.g. for common databases and communication systems. Besides popular cloud data warehouses such as Snowflake, BigQuery, and Redshift, relational database systems such as PostgreSQL, MySQL, and Microsoft SQL Server are also supported. In addition, Soda provides integrations* for dbt and Spark as well as for Slack, Jira, and GitHub.

* Limited functionality in the Soda Core OS version compared to Soda Cloud

Soda Core, Soda Cloud, and Soda Library

While the open-source Soda Core is limited to ad-hoc analyses, Soda Cloud and the associated Soda Library allow analysis results to be stored in the cloud and kept permanently available for retrieval. To comply with data protection regulations, you can choose between two storage regions, the EU and the USA.

Soda Cloud Dashboard (home page)

Soda Cloud comes with the Soda Cloud Dashboard, which makes it easy to track your historic data quality checks. Beyond monitoring, Soda Cloud also enables more complex data quality checks that build on the history of previous runs, for example anomaly detection and change-over-time checks, which are only available in Soda Cloud.

Checks overview in the Soda Cloud Dashboard

To identify the cause of failed checks, Soda Library can send a sample of the affected data records to Soda Cloud for viewing and analysis. However, it should be noted that potentially sensitive data may be transferred to Soda Cloud.

Datasets overview in the Soda Cloud Dashboard

Soda Cloud offers insights into your data quality at different levels (e.g. the overall data quality state, individual datasets, and individual checks), which makes it easy to keep an overview.

Soda-hosted and self-hosted Agents

With Soda Agents, it is possible to run scheduled and regular data quality checks. A Soda Agent includes a complete installation of the Soda Library, is ready to run, and can be managed in Soda Cloud. The user can choose between two variants:

The Soda-hosted agent, which can be provisioned with just a few clicks, runs in Soda Cloud and can only be used with publicly accessible data sources (e.g. MySQL, PostgreSQL, Snowflake, and BigQuery are supported). Credentials for the data sources must be stored in Soda Cloud. This variant is very easy to use, as Soda takes care of the infrastructure and operations, i.e. setup, maintenance, and scaling of the agent. The costs for this variant are calculated by usage (i.e. pay-per-use).

Self-hosted agents, on the other hand, run within your own data infrastructure and support all data sources. In this case, credentials and data sources remain in your own infrastructure and do not need to be stored in Soda Cloud. The advantages of the self-hosted variant are greater control over operations and database credentials and the flexibility to adapt the agent to your own needs. The disadvantage is the effort and cost of operating it yourself.

Demo: Ensuring Data Quality with Soda

In the following, the Soda Checks Language (SodaCL) is used to define assumptions in the form of Soda checks, which are then validated with data scans. As preparation for this demo, a local PostgreSQL instance was used, populated with data from the DVD Rental sample database.

Installation and Configuration of Soda Core

First, the necessary Python packages for the respective data source need to be installed. To integrate PostgreSQL, the Python package soda-core-postgres is required and can be installed as follows:
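```bash
pip install soda-core-postgres
```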

Furthermore, Soda requires a configuration to ensure database connectivity, depending on the data source. This is created in YAML format and looks as follows in our case:
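```yaml
# configuration.yml – sketch of a connection to the local PostgreSQL instance;
# host, credentials, and the data source name "my_datasource" are placeholders
data_source my_datasource:
  type: postgres
  host: localhost
  port: 5432
  username: postgres
  password: ${POSTGRES_PASSWORD}
  database: dvdrental
  schema: public
```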

Optionally, to establish a connection with Soda Cloud, an API key and a secret must be provided in the configuration, which can be generated in the Soda Cloud account under Your Avatar > Profile > API Keys.
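The corresponding section in the configuration file could look roughly like this (the host depends on the chosen region, and the environment variables are placeholders):

```yaml
soda_cloud:
  host: cloud.soda.io  # cloud.us.soda.io for the US region
  api_key_id: ${SODA_API_KEY_ID}
  api_key_secret: ${SODA_API_KEY_SECRET}
```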

To check the configuration and the connection to the data source, Soda CLI offers the following command:
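```bash
# verify the connection to the data source defined in configuration.yml
soda test-connection -d my_datasource -c configuration.yml
```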

Creation and Execution of Soda Checks

Similar to the configuration files defined above for the database connection, YAML configuration files are also used for the soda checks.

Value Checks

In the data model of the DVD Rental data set, the actor table contains information about movie actors. In addition to the ID of each actor, the table also includes the first and last names of the actors, as well as a metadata column last_update with the last update of the corresponding rows.

We start by defining checks to verify the presence of the first name and surname in each record:
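```yaml
# checks.yml – sketch: no record may lack a first or last name
checks for actor:
  - missing_count(first_name) = 0
  - missing_count(last_name) = 0
```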

We can soften the requirement that not a single first name may be missing by making use of the alert levels warn and fail:
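```yaml
# sketch: warn for up to 10 missing first names, fail above that
checks for actor:
  - missing_count(first_name):
      warn: when > 0
      fail: when > 10
```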

If the number of missing first names in the actor table is greater than 0 but less than or equal to 10, this now only appears as a warning in the validation result. The validation itself still returns a positive result in this case. Only if more than 10 first names are missing in the actor table does the validation fail.

In the table customer, we can find the column email, which contains the email addresses of the customers. In the following, we define a check to examine whether a unique email address in a valid format has been provided for each customer:
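```yaml
# sketch: every customer needs a unique email address in a valid format
checks for customer:
  - missing_count(email) = 0
  - duplicate_count(email) = 0
  - invalid_count(email) = 0:
      valid format: email
```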

By default, the check is carried out for all records in the customer table. If only a subset of the data is to be checked, this can be implemented with a filter configuration, either for a single check (in-check filter) or for an entire table (dataset filter). In the following, a check is defined analogously to the example above to validate the email column; however, only active customers are taken into account, identified by the value 1 in the active column:
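```yaml
# sketch: in-check filter restricting the check to active customers
checks for customer:
  - invalid_count(email) = 0:
      valid format: email
      filter: active = 1
```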

Here you can see that in-check filters are defined for each check and cannot be reused. Dataset filters, on the other hand, are defined per table and can be reused in several checks. The same check can be implemented as a dataset filter as follows:
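```yaml
# sketch: dataset filter – the filter name "active" is our choice
filter customer [active]:
  where: active = 1

checks for customer [active]:
  - invalid_count(email) = 0:
      valid format: email
```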

An advantage of this is that you only have to make changes to the filter condition in one central location and these changes will affect all associated checks.

Custom Checks

In addition to the built-in Soda checks, you can also define your own custom checks. Custom checks use SQL queries to evaluate a user-defined metric. In the following, a custom check is created for the rental table to check whether the rental date is before the return date:
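```yaml
# sketch: user-defined metric plus a failed rows query; metric and check names are our choice
checks for rental:
  # custom metric: number of rentals whose return date lies before the rental date
  - invalid_rental_period = 0:
      invalid_rental_period query: |
        SELECT COUNT(*)
        FROM rental
        WHERE return_date < rental_date
  # failed rows query that collects the affected records for Soda Cloud
  - failed rows:
      name: Rental date must be before return date
      fail query: |
        SELECT *
        FROM rental
        WHERE return_date < rental_date
```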

One disadvantage of custom checks is that they do not support dataset filters at the moment. In the example above, a failed rows query was defined in addition to the custom check, which sends invalid data records to Soda Cloud, where they can be viewed.

Schema Checks

In addition to checking table values, table schemas can also be checked in Soda. Schema checks are particularly useful at the beginning of ETL pipelines to check the presence and data types of critical columns. A schema check for the actor table could look like this:
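```yaml
# sketch: expected columns and types for the actor table;
# type names must match those reported by the data source and may need adjustment
checks for actor:
  - schema:
      fail:
        when required column missing: [actor_id, first_name, last_name]
        when wrong column type:
          actor_id: integer
          first_name: varchar
          last_name: varchar
      warn:
        when required column missing: [last_update]
        when wrong column type:
          last_update: timestamp
```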

Similar to the previous example, the two alert levels fail and warn are used to differentiate between the severity of the different violations: If actor_id, first_name or last_name are missing or the data type of these columns is incorrect, Soda recognizes this as an error and returns a negative result. If the last_update column does not exist or the format is incorrect, Soda only issues a warning.

If you want to execute the previously defined checks, the following Soda CLI command can be used:
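```bash
# run all checks defined in checks.yml against the configured data source
soda scan -d my_datasource -c configuration.yml checks.yml
```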

A successful scan returns the following result:

If there is a duplicate among the email addresses in the customer table, the duplicate check returns a negative result. The scan result in the CLI would then look as follows:

Programmatic Scans with Python

Data validation can also be done programmatically using soda scans with Python. The functionality of the Python library is similar to that of the CLI.

Configure and Execute Scan Object

In the following example, a soda scan is created and then executed. We configure the scan using existing YAML files in this case. It would also be possible to configure it directly in the code using YAML strings.
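A sketch based on the Scan class of the Soda Core Python API (file names and the data source name are those used above):

```python
from soda.scan import Scan

# create the scan and configure it from the existing YAML files
scan = Scan()
scan.set_data_source_name("my_datasource")
scan.add_configuration_yaml_file("configuration.yml")
scan.add_sodacl_yaml_file("checks.yml")

# execute the scan; the exit code indicates errors and failed checks
exit_code = scan.execute()
print(scan.get_logs_text())
```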

Save and Process Scan Results

We now use the scan object to analyze the result of the validation. Using the functions of the scan object, we can react to the presence of errors and warnings:
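```python
# sketch: method names follow the documented Scan API and may differ between versions
if scan.has_error_logs():
    # scan-level errors, e.g. an invalid configuration
    print(scan.get_error_logs_text())

if scan.has_check_fails():
    # summary of all failed checks
    print(scan.get_checks_fail_text())

for check in scan.get_checks_warn_or_fail():
    # individual results of checks that warned or failed
    print(check)
```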

If you access the individual check results via get_checks_warn_or_fail, information that is not visible in the log can be inspected, such as totalRowCount (i.e. the number of invalid records) for custom checks. By default, individual records that have not passed a check can only be viewed in Soda Cloud. However, to view these records without Soda Cloud, it is possible to implement a custom sampler class that replaces the standard sampler.
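A rough sketch of such a sampler, following the pattern from the Soda documentation (class and variable names are our choice):

```python
from soda.scan import Scan
from soda.sampler.sampler import Sampler
from soda.sampler.sample_context import SampleContext


class PrintingSampler(Sampler):
    # called for every check that produced failed-row samples
    def store_sample(self, sample_context: SampleContext):
        rows = sample_context.sample.get_rows()
        print(rows)


scan = Scan()
scan.sampler = PrintingSampler()
```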

Conclusion

Soda offers extensive options for ensuring data quality, both in terms of the range of checks and the supported backends. Particularly worth highlighting is the intuitive usage concept, which stands out positively in comparison to its competitor, Great Expectations. The framework already appears to be quite mature, although the open-source version Soda Core is limited to the core functionality, as the name suggests. To implement more complex mechanisms for ensuring data quality without major implementation effort, it is a good idea to use the SaaS version Soda Cloud. This also offers built-in dashboards to monitor the results of validations, including their history.

At the moment, the pricing for Soda Cloud is not publicly available but is subject to individual offers. When using SaaS offerings, special caution is required with regard to data protection, especially when processing personal data. For example, the transfer of personal data to third-party providers must be contractually regulated and requires the consent of the data subjects.
