tests.ssg_test_suite package
tests.ssg_test_suite.combined module
- class tests.ssg_test_suite.combined.CombinedChecker(test_env)[source]
Bases:
RuleChecker
Combined mode works pretty much like the Rule mode - for every rule selected in a profile:
Alter the system.
Run the scan and check that the result meets expectations. If the test scenario passed as requested, return True; if it failed or passed unexpectedly, return False.
The following sequence applies if the initial scan has failed as expected:
If there are no remediations, return True.
Run remediation, return False if it failed.
Return the result of the final scan of the remediated system.
If a rule doesn’t have any test scenario, it is skipped. Skipped rules are reported at the end.
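The flow can be pictured with the following sketch; the environment object and every name in it are illustrative, not the actual CombinedChecker API:

    def check_scenario(env, scenario):
        # Hypothetical sketch of the per-scenario flow described above.
        env.alter_system(scenario)                 # alter the system
        result = env.scan()                        # initial scan
        if result != scenario.expected_result:
            return False                           # failed, or passed unexpectedly
        if scenario.expected_result == "pass":
            return True                            # passed as requested
        # The initial scan failed as expected:
        if not env.has_remediation(scenario):
            return True
        if not env.remediate(scenario):
            return False                           # remediation failed
        return env.scan() == "pass"                # result of the final scan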
tests.ssg_test_suite.common module
- class tests.ssg_test_suite.common.RuleResult(result_dict=None)[source]
Bases:
object
Result of a test suite run testing a rule under a scenario.
Supports ordering by success - the most successful run orders first.
- STAGE_STRINGS = {'final_scan', 'initial_scan', 'preparation', 'remediation'}
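The ordering by success can be illustrated with a self-contained sketch; the class and sort key below are hypothetical, not the real comparison logic:

    import functools

    @functools.total_ordering
    class IllustrativeResult(object):
        # Hedged illustration only: a run that succeeded, or that got
        # further before failing, compares as "smaller" and sorts first.
        def __init__(self, success, stage_reached):
            self.success = success
            self.stage_reached = stage_reached

        def _key(self):
            return (not self.success, -self.stage_reached)

        def __eq__(self, other):
            return self._key() == other._key()

        def __lt__(self, other):
            return self._key() < other._key()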
- class tests.ssg_test_suite.common.Scenario_conditions(backend, scanning_mode, remediated_by, datastream)
Bases:
tuple
- backend
Alias for field number 0
- datastream
Alias for field number 3
- remediated_by
Alias for field number 2
- scanning_mode
Alias for field number 1
- class tests.ssg_test_suite.common.Scenario_run(rule_id, script)
Bases:
tuple
- rule_id
Alias for field number 0
- script
Alias for field number 1
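Both classes are plain namedtuples, so instances can be built positionally or by keyword; the field values below are made up:

    from tests.ssg_test_suite.common import Scenario_conditions, Scenario_run

    # Field order matches the alias numbers listed above.
    conditions = Scenario_conditions(
        backend="libvirt",              # field 0
        scanning_mode="online",         # field 1
        remediated_by="bash",           # field 2
        datastream="ssg-rhel9-ds.xml",  # field 3
    )
    run = Scenario_run(rule_id="sshd_disable_root_login",  # field 0
                       script="correct.pass.sh")           # field 1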
- class tests.ssg_test_suite.common.Stage[source]
Bases:
object
- FINAL_SCAN = 4
- INITIAL_SCAN = 2
- NONE = 0
- PREPARATION = 1
- REMEDIATION = 3
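Because the stage constants are plain integers, progress through the stages can be compared numerically; an illustrative use, not code from the suite:

    from tests.ssg_test_suite.common import Stage

    # A run that reached remediation got further than one that stopped
    # after the initial scan.
    assert Stage.INITIAL_SCAN < Stage.REMEDIATION < Stage.FINAL_SCAN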
- tests.ssg_test_suite.common.create_tarball(test_content_by_rule_id)[source]
Create a tarball which contains all test scenarios and additional content for every rule that is selected to be tested. The tarball contains one directory per rule, named after the rule's short ID. There is no tree structure.
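A minimal sketch of the described layout, assuming for illustration that each rule's test content is already a directory on disk (the real function's input differs):

    import tarfile

    def create_flat_tarball(test_content_dirs_by_rule_id, output_path):
        # One top-level directory per rule, named by the short rule ID;
        # no deeper tree structure, mirroring the layout described above.
        with tarfile.open(output_path, "w") as tar:
            for short_rule_id, content_dir in test_content_dirs_by_rule_id.items():
                tar.add(content_dir, arcname=short_rule_id)
        return output_path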
- tests.ssg_test_suite.common.fetch_all_templated_tests_paths(rule_template)[source]
Builds a dictionary mapping test case relative paths to test case absolute paths.
Here, we want to know the relative path on disk under the tests/ subdirectory (such as “installed.pass.sh”), along with the actual absolute path.
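A hedged sketch of the shape of the result; the real function works from the rule's template argument, while this sketch just walks a directory:

    import os

    def map_tests(tests_dir):
        # Relative path under the tests/ subdirectory -> absolute path,
        # e.g. "installed.pass.sh" -> "/abs/path/tests/installed.pass.sh".
        mapping = {}
        for root, _dirs, files in os.walk(tests_dir):
            for name in files:
                absolute = os.path.join(root, name)
                mapping[os.path.relpath(absolute, tests_dir)] = absolute
        return mapping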
- tests.ssg_test_suite.common.get_product_context(product_id=None)[source]
Returns a product YAML context if any product is specified. Hard-coded to assume a debug build.
- tests.ssg_test_suite.common.load_rule_and_env(rule_dir_path, env_yaml, product=None)[source]
Loads a rule and returns the combination of the RuleYAML class and the corresponding local environment for that rule.
- tests.ssg_test_suite.common.load_templated_tests(templated_tests_paths, template, local_env_yaml)[source]
- tests.ssg_test_suite.common.retry_with_stdout_logging(command, args, log_file, max_attempts=5)[source]
tests.ssg_test_suite.log module
- class tests.ssg_test_suite.log.LogHelper[source]
Bases:
object
Provide a focal point for logging. LOG_DIR is useful when the output of a script is saved into a file. Log preloading is a way to log an outcome before the output itself.
- FORMATTER = <logging.Formatter object>
- INTERMEDIATE_LOGS = {'fail': [], 'notapplicable': [], 'pass': []}
- LOG_DIR = None
- LOG_FILE = None
- classmethod add_console_logger(logger, level)[source]
Convenience function to set defaults for a console logger.
- classmethod add_logging_dir(logger, _dirname)[source]
Convenience function to set up default logging into a file.
Also sets LOG_DIR and LOG_FILE.
- static find_name(original_path, suffix='')[source]
Find a file name that is not yet present in the given directory.
Returns path – original_path + number + suffix.
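The behavior is roughly the following; the exact numbering scheme in LogHelper.find_name may differ:

    import os

    def find_free_name(original_path, suffix=""):
        # Append an increasing number until the resulting path is not
        # present in the directory yet.
        candidate = original_path + suffix
        number = 0
        while os.path.exists(candidate):
            number += 1
            candidate = "{0}{1}{2}".format(original_path, number, suffix)
        return candidate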
tests.ssg_test_suite.oscap module
- class tests.ssg_test_suite.oscap.AnsibleProfileRunner(environment, profile, datastream, benchmark_id)[source]
Bases:
ProfileRunner
- class tests.ssg_test_suite.oscap.AnsibleRuleRunner(environment, profile, datastream, benchmark_id, rule_id, script_name, dont_clean, no_reports, manual_debug)[source]
Bases:
RuleRunner
- class tests.ssg_test_suite.oscap.BashProfileRunner(environment, profile, datastream, benchmark_id)[source]
Bases:
ProfileRunner
- class tests.ssg_test_suite.oscap.BashRuleRunner(environment, profile, datastream, benchmark_id, rule_id, script_name, dont_clean, no_reports, manual_debug)[source]
Bases:
RuleRunner
- class tests.ssg_test_suite.oscap.GenericRunner(environment, profile, datastream, benchmark_id)[source]
Bases:
object
- property get_command
- class tests.ssg_test_suite.oscap.OscapProfileRunner(environment, profile, datastream, benchmark_id)[source]
Bases:
ProfileRunner
- class tests.ssg_test_suite.oscap.OscapRuleRunner(environment, profile, datastream, benchmark_id, rule_id, script_name, dont_clean, no_reports, manual_debug)[source]
Bases:
RuleRunner
- class tests.ssg_test_suite.oscap.ProfileRunner(environment, profile, datastream, benchmark_id)[source]
Bases:
GenericRunner
- class tests.ssg_test_suite.oscap.RuleRunner(environment, profile, datastream, benchmark_id, rule_id, script_name, dont_clean, no_reports, manual_debug)[source]
Bases:
GenericRunner
- tests.ssg_test_suite.oscap.get_file_remote(test_env, verbose_path, local_dir, remote_path)[source]
Download a file from the VM.
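A hedged sketch of the operation using scp; the real helper takes the test environment and logs into verbose_path:

    import subprocess

    def fetch_remote_file(host, remote_path, local_dir):
        # Copy a single file from the VM into a local directory;
        # returns True on success. Illustrative only.
        rc = subprocess.call(
            ["scp", "root@{0}:{1}".format(host, remote_path), local_dir])
        return rc == 0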
- tests.ssg_test_suite.oscap.is_virtual_oscap_profile(profile)[source]
Test whether the profile belongs to the so-called “virtual” category of OpenSCAP profiles. It can be (all) or any other ID we might come up with in the future; it just needs to be enclosed in parentheses, for example “(custom_profile)”.
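The check itself can be a hedged one-liner reimplementing the description above:

    def is_virtual_profile(profile):
        # A "virtual" profile ID is simply one wrapped in parentheses.
        return profile.startswith("(") and profile.endswith(")")

    assert is_virtual_profile("(all)")
    assert not is_virtual_profile("ospp")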
- tests.ssg_test_suite.oscap.run_stage_remediation_ansible(run_type, test_env, formatting, verbose_path)[source]
Returns False on error, or True in case of a successful Ansible playbook run.
- tests.ssg_test_suite.oscap.run_stage_remediation_bash(run_type, test_env, formatting, verbose_path)[source]
Returns False on error, or True in case of a successful run of the bash scripts.
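Both helpers share this boolean contract; a hedged sketch of how the Ansible variant might map an exit code onto it (the real function builds its command from run_type and formatting):

    import subprocess

    def run_playbook(playbook_path, log_path):
        # Run the playbook locally, append all output to the verbose log,
        # and map the exit code onto the documented True/False contract.
        with open(log_path, "a") as log:
            rc = subprocess.call(
                ["ansible-playbook", "-i", "localhost,", "-c", "local",
                 playbook_path],
                stdout=log, stderr=subprocess.STDOUT)
        return rc == 0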
tests.ssg_test_suite.profile module
- class tests.ssg_test_suite.profile.ProfileChecker(test_env)[source]
Bases:
Checker
Iterate over the profiles in the data stream and scan the unaltered system using every profile according to the input. Also perform a remediation run. The return value is not defined; the textual output and the generated reports are the result.
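In outline, with all names hypothetical:

    def check_all_profiles(env, profiles):
        # Scan the unaltered system with every profile, then run the
        # remediation; the logs and reports are the result, not a value.
        for profile in profiles:
            env.scan(profile)
            env.remediate(profile)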
tests.ssg_test_suite.rule module
- class tests.ssg_test_suite.rule.Rule(directory, id, short_id, template, local_env_yaml, rule)
Bases:
tuple
- directory
Alias for field number 0
- id
Alias for field number 1
- local_env_yaml
Alias for field number 4
- rule
Alias for field number 5
- short_id
Alias for field number 2
- template
Alias for field number 3
- class tests.ssg_test_suite.rule.RuleChecker(test_env)[source]
Bases:
Checker
Rule checks generally work like this - for every profile that supports the rule:
Alter the system.
Run the scan and check that the result meets expectations. If the test scenario passed as requested, return True; if it failed or passed unexpectedly, return False.
The following sequence applies if the initial scan has failed as expected:
If there are no remediations, return True.
Run remediation, return False if it failed.
Return the result of the final scan of the remediated system.
- class tests.ssg_test_suite.rule.RuleTestContent(scenarios, other_content)
Bases:
tuple
- other_content
Alias for field number 1
- scenarios
Alias for field number 0
tests.ssg_test_suite.test_env module
- class tests.ssg_test_suite.test_env.ContainerTestEnv(scanning_mode, image_name)[source]
Bases:
TestEnv
- property current_container
- property current_image
- class tests.ssg_test_suite.test_env.DockerTestEnv(mode, image_name)[source]
Bases:
ContainerTestEnv
- name = 'docker-based'
- class tests.ssg_test_suite.test_env.PodmanTestEnv(scanning_mode, image_name)[source]
Bases:
ContainerTestEnv
- name = 'podman-based'
- class tests.ssg_test_suite.test_env.TestEnv(scanning_mode)[source]
Bases:
object
tests.ssg_test_suite.virt module
tests.ssg_test_suite.xml_operations module
- tests.ssg_test_suite.xml_operations.benchmark_get_applicable_platforms(datastream, benchmark_id, logging=None)[source]
Returns a set of CPEs the given benchmark is applicable to.
- tests.ssg_test_suite.xml_operations.find_checks_in_rule(datastream, benchmark_id, rule_id)[source]
Return the check types for the given rule from the benchmark.
- tests.ssg_test_suite.xml_operations.find_fix_in_benchmark(datastream, benchmark_id, rule_id, fix_type='bash', logging=None)[source]
Return the fix from the benchmark, or None if not found.
- tests.ssg_test_suite.xml_operations.find_rule_in_benchmark(datastream, benchmark_id, rule_id, logging=None)[source]
Returns rule node from the given benchmark.
- tests.ssg_test_suite.xml_operations.get_all_profiles_in_benchmark(datastream, benchmark_id, logging=None)[source]
- tests.ssg_test_suite.xml_operations.get_all_rule_ids_in_profile(datastream, benchmark_id, profile_id, logging=None)[source]
- tests.ssg_test_suite.xml_operations.get_all_rule_selections_in_profile(datastream, benchmark_id, profile_id, logging=None)[source]
- tests.ssg_test_suite.xml_operations.get_all_rules_in_benchmark(datastream, benchmark_id, logging=None)[source]
Returns all rule IDs in the given benchmark.
- tests.ssg_test_suite.xml_operations.get_oscap_supported_cpes()[source]
Obtain a list of CPEs that the scanner supports.
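These helpers all boil down to namespace-aware searches over the source data stream; a minimal, hedged illustration of a rule lookup (the real find_rule_in_benchmark also restricts the search to the requested benchmark):

    import xml.etree.ElementTree as ET

    XCCDF_NS = "http://checklists.nist.gov/xccdf/1.2"

    def find_rule(datastream_path, rule_id):
        # Return the first xccdf:Rule element with a matching id, or None.
        root = ET.parse(datastream_path).getroot()
        for rule in root.iter("{%s}Rule" % XCCDF_NS):
            if rule.get("id") == rule_id:
                return rule
        return None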