First of all, we can extract the current TestCase
implementation into an interface:
```go
type Test interface {
	Test(t *testing.T, ctx *TestingContext)
}
```
Then we can define a few different interfaces that a test can implement to provide additional information, which can be used to determine whether a test should be executed:
```go
// we can define a set of categories
type Category string

// an interface to provide category info
type CategorisedTest interface {
	Category() Category
}

// we can define a list of environments
type Environment string

// List the environments the test should be running against.
// If the current environment is in the list, the test will run.
type RequireEnvironmentTest interface {
	RequireEnvironments() []Environment
}

// List the environments the test should not be running against.
// If the current environment is in the list, the test will not run.
type ExcludeEnvironmentTest interface {
	ExcludeEnvironments() []Environment
}

// We can specify a list of profiles: managed vs. unmanaged
type Profile string

// List the profiles the test supports.
// If the current profile is in the list, the test will run.
type ForProfileTest interface {
	ForProfile() []Profile
}
```
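For completeness, these marker types need concrete values somewhere. A minimal sketch could look like the following (all of the constant names and string values below are made up for illustration; the real lists would come from the project):

```go
// Marker types repeated from the proposal so this sketch stands on its own.
type Category string
type Environment string
type Profile string

// Hypothetical values, purely for illustration.
const (
	CategoryInstallation Category = "installation"
	CategoryMonitoring   Category = "monitoring"

	EnvironmentOSD   Environment = "osd"
	EnvironmentLocal Environment = "local"

	ProfileManaged   Profile = "managed"
	ProfileUnmanaged Profile = "unmanaged"
)
```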
Obviously, more interfaces can be defined if additional information is needed.
Each test can implement any combination of the above interfaces. Here is an example:
```go
type CRDExistsTest struct {
	Description string
}

func (c *CRDExistsTest) Test(t *testing.T, ctx *TestingContext) {
	// run test to check if the CRD exists
	TestIntegreatlyCRDExists(t, ctx)
}

func (c *CRDExistsTest) Category() Category {
	return CategoryInstallation // as an example
}

func (c *CRDExistsTest) ExcludeEnvironments() []Environment {
	return []Environment{EnvironmentOSD} // as an example
}

test := &CRDExistsTest{Description: "Verify RHMI CRD Exists"}
```
Then, in the main test, we just need to filter the test cases based on this information, so we can define a few filters:
```go
type TestFilter interface {
	Filter(input []Test) []Test
}

// Filter based on test categories
type CategorisedTestFilter struct {
	Categories []Category
}

func (c *CategorisedTestFilter) Filter(input []Test) []Test {
	out := []Test{}
	for _, t := range input {
		ct, ok := t.(CategorisedTest)
		if !ok {
			// tests without category info are always kept
			out = append(out, t)
			continue
		}
		// keep the test only if c.Categories contains ct.Category()
		for _, category := range c.Categories {
			if category == ct.Category() {
				out = append(out, t)
				break
			}
		}
	}
	return out
}

// Filter based on the current environment
type EnvironmentFilter struct {
}

func (e *EnvironmentFilter) Filter(input []Test) []Test {
	// filter tests based on environments (left as a stub here; it returns
	// the input unchanged rather than nil so the pipeline below still works)
	return input
}
```
and finally we can filter the tests using something like this:
```go
// Filter tests to decide which tests should run, based on
// command line parameters or environment variables
func filterTests(input []Test) []Test {
	// retrieve category values from either the command line parameters or environment variables
	f1 := &CategorisedTestFilter{}
	// retrieve environment values from either the command line parameters or environment variables
	f2 := &EnvironmentFilter{}
	filters := []TestFilter{f1, f2}
	out := input
	for _, f := range filters {
		out = f.Filter(out)
	}
	return out
}
```
There are a few benefits to this approach:
- Each test can define its own conditions for running without changing the main test
- None of the existing tests need to change if a test doesn't require any conditions
@wei-lee Hi Wei, I like this idea. I have one question though.
Soon we might need to keep automated tests and test cases synchronized (regarding the categories, environments, etc.). This means that we probably need to define a single source of truth that both automated tests and test cases would refer to (for generating manual test cases, and for running the tests). This matters, for example, when a test category changes or a new category/environment is added.
One way to do that would be to have a JSON file with the collected metadata about test cases, generated by a script each time the test cases are modified.
Then, each time the Golang test suite is run (with parameters specifying the environments and categories), this filter would just select the required tests based on what is in the JSON file. If a test doesn't have "full_test_name" specified, it would be skipped. This is just an idea and it might not be ideal, but keeping the tests and test cases in sync is also important, and maybe we should have a clear idea of how to achieve this.
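As a rough sketch of reading such a file: only "full_test_name" comes from the suggestion above, while the other field names and the helper are guesses about what the generated metadata might contain.

```go
import "encoding/json"

// TestCaseMetadata is one possible shape for an entry in the generated file.
type TestCaseMetadata struct {
	FullTestName string   `json:"full_test_name"`
	Categories   []string `json:"categories"`   // assumed field
	Environments []string `json:"environments"` // assumed field
}

// parseMetadata loads all entries from the generated JSON file's contents.
func parseMetadata(data []byte) ([]TestCaseMetadata, error) {
	var out []TestCaseMetadata
	if err := json.Unmarshal(data, &out); err != nil {
		return nil, err
	}
	return out, nil
}
```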
Wdyt? Any other ideas?