Manage Expectations
You can manually create Expectations via both the UI and API, and use several different GX Cloud features to generate Expectations via the UI.
This page provides instructions for working with Expectations. To learn about Expectation types and options, see the Expectations overview. To learn about the Expectation changelog at the Data Asset level, visit Manage Data Assets.
Create an Expectation
- UI
- API
You must have a Data Asset before creating an Expectation.
1. In GX Cloud, select the relevant Workspace and then click Data Assets.
2. In the Data Assets list, click the Data Asset name.
3. Click New Expectation.
4. Select a data quality issue to test for or an option for writing your own test.

   Options for accelerating test coverage: If you are using a supported Data Source, you can use the following to speed up test creation:

   - If you selected the Schema, Volume, or Completeness data quality issue, you will have the Automatic option to generate Expectations for Anomaly Detection. Generated Expectations will default to warning severity, which you can edit later. If you instead want to create your own rules, click Manual.
   - If you selected custom SQL, you will have the option to Generate SQL with ExpectAI. You can write your own SQL if you prefer.

5. Select an Expectation type.
6. Complete the mandatory and optional fields for the Expectation.
7. Click Save, or click Save & Add More and then repeat steps 4 through 7 to add additional Expectations.
8. Optional. Run an ad hoc Validation.
9. Optional. Configure recurring Validations. See Manage schedules.
You must have the following prerequisites fulfilled before creating an Expectation:
- Python version 3.10 to 3.13.
- An installation of the Great Expectations Python library.
- A Data Context connected to your GX Cloud organization.
1. Choose an Expectation to create.

   GX Cloud comes with many Expectations to cover your data quality needs. You can find a catalog of these Expectations in the Expectation Gallery. When browsing the Expectation Gallery, you can filter the available Expectations by the data quality issue they address and by the Data Sources they support. There is also a search bar that lets you filter Expectations by matching text in their name or description.

2. Determine the Expectation's required parameters.

   To determine the parameters your Expectation uses to evaluate data, reference the Expectation's entry in the Expectation Gallery. Under the Args section you will find a list of the parameters that are necessary for the Expectation to be evaluated, along with a description of the value(s) that should be provided.

3. Optional. Determine the Expectation's other parameters.

   In addition to the parameters that are required for an Expectation to evaluate data, Expectations also support some optional parameters. In the Expectation Gallery these are found under each Expectation's Other Parameters section.

   Examples of these parameters are:

   - `meta`: A dictionary of user-supplied metadata to store with an Expectation. This dictionary can be used to add notes about the purpose and intended use of an Expectation.
   - `mostly`: A special argument that allows for fuzzy validation based on a percentage of successfully validated rows. If the percentage is at least the value set in the `mostly` parameter, the Expectation will return a `success` value of `true`.
   - `severity`: Indicates the impact of the Expectation failing. Accepted values are `critical`, `warning`, or `info`. Defaults to `critical` if not explicitly set. You can trigger Actions based on severity levels, or you can condition your data pipeline with the `get_maximum_severity_failure` helper method in the `ExpectationSuiteValidationResult` class. Note that if an Expectation fails to execute, the failure will be recorded as critical, regardless of the Expectation configuration, to bring your attention to the fact that your data is not being tested as intended.
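The `mostly` rule can be illustrated in plain Python. This sketch is not GX library code; it only mirrors the documented behavior, where a fraction of passing rows at or above the `mostly` threshold yields success:

```python
# Illustrative sketch of the documented `mostly` semantics (not GX library code):
# an Expectation with mostly=0.9 succeeds when at least 90% of rows pass.

def mostly_success(row_results: list[bool], mostly: float = 1.0) -> bool:
    """Return True when the fraction of passing rows is at least `mostly`."""
    if not row_results:
        return True  # no rows evaluated, so nothing failed
    passing_fraction = sum(row_results) / len(row_results)
    return passing_fraction >= mostly

# 9 of 10 rows pass: succeeds with mostly=0.9, fails with the default mostly=1.0.
results = [True] * 9 + [False]
print(mostly_success(results, mostly=0.9))  # True
print(mostly_success(results, mostly=1.0))  # False
```

With the default `mostly=1.0`, a single failing row fails the Expectation; lowering the threshold tolerates a bounded share of bad rows.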
Restrict an Expectation to specific rows
To restrict an Expectation to a subset of the data retrieved in a Batch, use the `row_condition` argument. The `row_condition` argument takes a boolean expression built with Python objects. Rows will be validated for the Expectation when the `row_condition` expression evaluates to `True`. Conversely, if the `row_condition` evaluates to `False`, the corresponding row will not be validated for the Expectation.

To support complex business use cases, logical clauses can be combined with AND/OR relationships within the `row_condition` argument.

```python
from great_expectations.expectations.row_conditions import Column

# Create condition statements with column references and Python comparisons.
statement_1 = Column("tenure") > 2
statement_2 = Column("salary") <= 50000
statement_3 = Column("department") == "Sales"

# Combine condition statements with an AND relationship into condition blocks.
block_1 = statement_1 & statement_2
block_2 = statement_3

# Combine condition blocks with OR.
row_condition = block_1 | block_2
```

An Expectation can have up to 100 condition statements grouped in any number of condition blocks.
Here are some examples of how to create common patterns in row conditions:
- A and B.

  ```python
  # Two condition statements within a single condition block.
  statement_1 = Column("A") == "a"
  statement_2 = Column("B") == "b"
  block_1 = statement_1 & statement_2
  row_condition = block_1
  ```

- A or B.

  ```python
  # Two condition statements, each in its own condition block.
  statement_1 = Column("A") == "a"
  statement_2 = Column("B") == "b"
  block_1 = statement_1
  block_2 = statement_2
  row_condition = block_1 | block_2
  ```

- (A and B) or (C and D).

  ```python
  # Two condition statements in one condition block and two statements in another block.
  statement_1 = Column("A") == "a"
  statement_2 = Column("B") == "b"
  statement_3 = Column("C") == "c"
  statement_4 = Column("D") == "d"
  block_1 = statement_1 & statement_2
  block_2 = statement_3 & statement_4
  row_condition = block_1 | block_2
  ```

- A and (B or C). This pattern is not supported verbatim, but you can achieve the same result with (A and B) or (A and C).

  ```python
  # Repeat statement_1 in both condition blocks to distribute A over the OR.
  statement_1 = Column("A") == "a"
  statement_2 = Column("B") == "b"
  statement_3 = Column("C") == "c"
  block_1 = statement_1 & statement_2
  block_2 = statement_1 & statement_3
  row_condition = block_1 | block_2
  ```
The following comparison operators are supported: `==`, `!=`, `>`, `<`, `>=`, `<=`, `is_in`, `is_not_in`, `is_null`, `is_not_null`. Here are some examples of using different kinds of operators:

```python
from datetime import datetime, timezone

# Single value comparisons: ==, !=, >, <, >=, <=
statement_1 = Column("count") == 1
statement_2 = Column("date") > datetime(year=2025, month=1, day=31, tzinfo=timezone.utc)

# Set comparisons: is_in, is_not_in
statement_3 = Column("department").is_in(["sales", "finance"])

# Nullity checks: is_null, is_not_null
statement_4 = Column("name").is_null()
```

Expectations that have different row conditions are treated as unique, even if they are of the same type, apply to the same column, and belong to the same Expectation Suite. This allows you to validate your data through multiple lenses.

Note that the following Expectations do not accept the `row_condition` argument:

- `expect_column_to_exist`
- `expect_query_results_to_match_comparison`
- `expect_table_columns_to_match_ordered_list`
- `expect_table_columns_to_match_set`
- `expect_table_column_count_to_be_between`
- `expect_table_column_count_to_equal`
- `unexpected_rows_expectation`
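The evaluation rule for composed row conditions (statements within a block are combined with AND, and blocks are combined with OR) can be mimicked on plain Python rows to build intuition. This is an illustration only, not GX library code, and the `evaluate` helper is a hypothetical name:

```python
# Plain-Python sketch of the documented row-condition semantics
# (illustration only, not GX library code): statements within a block
# are ANDed together, and the blocks themselves are ORed.

def evaluate(row: dict, blocks: list[list]) -> bool:
    """A row is selected when every statement in at least one block holds."""
    return any(all(statement(row) for statement in block) for block in blocks)

# row_condition = (tenure > 2 AND salary <= 50000) OR (department == "Sales")
blocks = [
    [lambda r: r["tenure"] > 2, lambda r: r["salary"] <= 50000],
    [lambda r: r["department"] == "Sales"],
]

print(evaluate({"tenure": 3, "salary": 40000, "department": "Eng"}, blocks))    # True
print(evaluate({"tenure": 1, "salary": 40000, "department": "Sales"}, blocks))  # True
print(evaluate({"tenure": 1, "salary": 40000, "department": "Eng"}, blocks))    # False
```

Only rows for which the combined condition holds are passed to the Expectation for validation; the rest are skipped, not failed.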
4. Create the Expectation.

   Using the Expectation class you picked and the parameters you determined when referencing the Expectation Gallery, you can create your Expectation.

   In this example, the `ExpectColumnMaxToBeBetween` Expectation is created with a range of acceptable values that will be evaluated inclusively.

   ```python
   expectation = gx.expectations.ExpectColumnMaxToBeBetween(
       column="passenger_count", min_value=1, max_value=6, severity="warning"
   )
   ```
5. Create or get an Expectation Suite.

   An Expectation Suite is used to group Expectations. All Expectations need to be added to an Expectation Suite before they can be associated with a Data Asset via a Validation Definition. All of the Expectations that are grouped within an Expectation Suite will be evaluated together whenever the Validation Definition runs.

   Create an Expectation Suite and add it to your Data Context:

   ```python
   suite_name = "my_expectation_suite"
   suite = gx.ExpectationSuite(name=suite_name)
   context.suites.add(suite)
   ```

   Optional. If you already have an API-managed Expectation Suite, get it from your Data Context:

   ```python
   existing_suite_name = (
       "my_expectation_suite"  # replace this with the name of your Expectation Suite
   )
   suite = context.suites.get(name=existing_suite_name)
   ```
6. Add the Expectation to the Expectation Suite.

   ```python
   suite.add_expectation(expectation)
   suite.save()
   ```
Next Steps
If you have created a new Expectation Suite, you will need to associate it with a Data Asset before you can run Validations. Visit Run Validations to learn how to do so.
Save time with ExpectAI
ExpectAI is an analytical AI tool that you can use to generate tests.
Generate Expectations
To accelerate test coverage, you can use ExpectAI to generate recommended Expectations for a Data Asset. These will be personalized based on an analysis of a sample of your data.
Keep the following requirements in mind when working with ExpectAI:
- Your organization must be using a fully-hosted deployment.
- The Data Asset's Data Source must be AlloyDB, Amazon Aurora PostgreSQL, Citus, Databricks SQL, Neon, PostgreSQL, Redshift, or Snowflake.
- Generated Expectations will default to warning severity, which you can edit later.
To add AI-recommended Expectations:
- In GX Cloud, select the relevant Workspace and then click Data Assets.
- In the Data Assets list, click the Data Asset name.
- Click Generate Expectations.

  This might take a few minutes: ExpectAI may take a few minutes to analyze your data and recommend personalized Expectations. You can navigate away from the page while ExpectAI works in the background. GX will send an email alert when your recommended Expectations are ready for review.
- Review the recommended Expectations and Approve (✓) or Reject (✗) them within 48 hours. After 48 hours, any remaining recommendations will be discarded.
- Optional. Run an ad hoc Validation.
- Optional. Edit AI-generated Expectations based on the insights you get from running a Validation and your data quality needs.
Generate SQL
To simplify working with custom SQL Expectations, you can use ExpectAI to generate a SQL query based on a natural language prompt you provide and a data profile GX Cloud automatically provides.
For example, imagine you have a New York City taxi trip dataset with columns named `pickup_borough`, `vehicle_type`, and `passenger_count`. If you add a custom SQL Expectation with a Prompt for SQL generation like "sedan rides in Manhattan shouldn't have more than 4 passengers", then ExpectAI would generate a SQL query similar to the following:
```sql
SELECT
  *
FROM
  {batch}
WHERE
  pickup_borough = 'Manhattan'
  AND vehicle_type = 'Sedan'
  AND passenger_count > 4
```
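A custom SQL Expectation of this shape selects the rows that violate the rule, so the Expectation passes only when the query returns no rows. The following sketch runs the generated query against an in-memory SQLite table; the `taxi_trips` table name (standing in for the `{batch}` placeholder) and the sample rows are made up for illustration:

```python
import sqlite3

# The query selects the *violating* rows: the Expectation would pass
# only when it returns zero rows. `taxi_trips` stands in for `{batch}`;
# the sample data below is fabricated for illustration.
query = """
    SELECT * FROM taxi_trips
    WHERE pickup_borough = 'Manhattan'
      AND vehicle_type = 'Sedan'
      AND passenger_count > 4
"""

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE taxi_trips "
    "(pickup_borough TEXT, vehicle_type TEXT, passenger_count INTEGER)"
)
conn.executemany(
    "INSERT INTO taxi_trips VALUES (?, ?, ?)",
    [
        ("Manhattan", "Sedan", 3),  # compliant
        ("Manhattan", "Sedan", 6),  # violates the rule
        ("Brooklyn", "Sedan", 6),   # out of scope: not a Manhattan pickup
    ],
)

violations = conn.execute(query).fetchall()
print(violations)            # [('Manhattan', 'Sedan', 6)]
print(len(violations) == 0)  # False: the Expectation would fail on this data
```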
Keep the following requirements in mind when working with ExpectAI:
- Your organization must be using a fully-hosted deployment.
- The Data Asset's Data Source must be AlloyDB, Amazon Aurora PostgreSQL, Citus, Databricks SQL, Neon, PostgreSQL, Redshift, or Snowflake.
Edit an Expectation
1. In GX Cloud, select the relevant Workspace and then click Data Assets.
2. In the Data Assets list, click the Data Asset name.
3. Find the Expectation that you want to edit.
4. Click Edit Expectation for the Expectation that you want to edit.
5. Edit the Expectation configuration.
6. Click Save.
If you edit the Severity of an Expectation, note that historical validation results will continue to indicate the severity that was recorded at the time of an Expectation failure. The newly assigned severity will apply to future validation failures only.
Delete an Expectation
1. In GX Cloud, select the relevant Workspace and then click Data Assets.
2. In the Data Assets list, click the Data Asset name.
3. Find the Expectation that you want to delete.
4. Click Delete Expectation for the Expectation that you want to delete.

   You can delete Expectations in bulk: If you want to delete all Expectations that test for a certain data quality issue, you can instead click Bulk-delete Expectations for the relevant category.

5. Click Delete.