tests.ssg_test_suite package

tests.ssg_test_suite.combined module

class tests.ssg_test_suite.combined.CombinedChecker(test_env)

Bases: RuleChecker

Combined mode works pretty much like the Rule mode - for every rule selected in a profile:

  • Alter the system.

  • Run the scan and check that the result meets expectations. If the test scenario passed as requested, return True; if it failed or passed unexpectedly, return False.

The following sequence applies if the initial scan has failed as expected:

  • If there are no remediations, return True.

  • Run remediation, return False if it failed.

  • Return the result of the final scan of the remediated system.

If a rule doesn’t have any test scenario, it is skipped. Skipped rules are reported at the end.
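
A minimal sketch of the per-scenario flow described above; check_scenario and its callable parameters are illustrative placeholders, not the real methods of this class:

    # Sketch of the decision flow described above (hypothetical helper).
    def check_scenario(expected_initial, remediations_available,
                       alter_system, run_scan, run_remediation):
        """Return True if the scenario behaves as the test suite expects."""
        alter_system()

        initial = run_scan()
        if initial != expected_initial:
            return False                     # failed or passed unexpectedly
        if expected_initial == "pass":
            return True                      # passed as requested

        # The initial scan failed as expected - try to remediate.
        if not remediations_available:
            return True
        if not run_remediation():
            return False
        return run_scan() == "pass"          # result of the final scan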

test_rule(state, rule, scenarios)
tests.ssg_test_suite.combined.perform_combined_check(options)

tests.ssg_test_suite.common module

class tests.ssg_test_suite.common.RuleResult(result_dict=None)

Bases: object

Result of the test suite testing a rule under a scenario.

Supports ordering by success - the most successful run sorts first.

STAGE_STRINGS = {'final_scan', 'initial_scan', 'preparation', 'remediation'}

load_from_dict(data)
record_stage_result(stage, successful)
relative_conditions_to(other)
save_to_dict()
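
A hypothetical usage of the stage bookkeeping described above, assuming record_stage_result() accepts one of the STAGE_STRINGS values and that save_to_dict() produces a plain dictionary:

    # Hypothetical usage; the stage names come from RuleResult.STAGE_STRINGS.
    from tests.ssg_test_suite.common import RuleResult

    result = RuleResult()
    result.record_stage_result("preparation", True)
    result.record_stage_result("initial_scan", True)
    result.record_stage_result("remediation", False)   # remediation failed

    data = result.save_to_dict()          # serializable summary of the run
    copy = RuleResult(data)               # or RuleResult().load_from_dict(data)

    # RuleResult supports ordering by success, so the best run sorts first:
    # best = sorted(all_results)[0]
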
class tests.ssg_test_suite.common.Scenario_conditions(backend, scanning_mode, remediated_by, datastream)

Bases: tuple

property backend

Alias for field number 0

property datastream

Alias for field number 3

property remediated_by

Alias for field number 2

property scanning_mode

Alias for field number 1

class tests.ssg_test_suite.common.Scenario_run(rule_id, script)

Bases: tuple

property rule_id

Alias for field number 0

property script

Alias for field number 1

class tests.ssg_test_suite.common.Stage

Bases: object

FINAL_SCAN = 4
INITIAL_SCAN = 2
NONE = 0
PREPARATION = 1
REMEDIATION = 3
tests.ssg_test_suite.common.cpes_to_platform(cpes)
tests.ssg_test_suite.common.create_tarball(test_content_by_rule_id)

Create a tarball that contains all test scenarios and additional content for every rule selected for testing. The tarball contains one directory per rule, named after the rule's short ID; there is no deeper tree structure.
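
The flat layout can be illustrated with a small sketch; the rule IDs, file names and helper dictionary below are made up for the example and are not the real input of this function:

    # Illustrative sketch of the flat layout: one directory per rule,
    # named after the short rule ID, with the scenario scripts inside.
    import io
    import tarfile

    content_by_short_id = {
        "accounts_tmout": {"correct_value.pass.sh": b"#!/bin/bash\n"},
        "sshd_disable_root_login": {"wrong_value.fail.sh": b"#!/bin/bash\n"},
    }

    with tarfile.open("tests.tar.gz", "w:gz") as tar:
        for short_id, files in content_by_short_id.items():
            for name, data in files.items():
                info = tarfile.TarInfo("%s/%s" % (short_id, name))
                info.size = len(data)
                tar.addfile(info, io.BytesIO(data))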

tests.ssg_test_suite.common.fetch_all_templated_tests_paths(rule_template)

Builds a dictionary mapping each test case's relative path to its absolute path.

Here, we want to know the relative path on disk (under the tests/ subdirectory), such as "installed.pass.sh", along with the actual absolute path.
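
The returned mapping has roughly this shape (the paths below are made up):

    # Illustrative shape of the returned mapping (paths are made up):
    {
        "installed.pass.sh":
            "/abs/path/to/templates/package_installed/tests/installed.pass.sh",
        "removed.fail.sh":
            "/abs/path/to/templates/package_installed/tests/removed.fail.sh",
    }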

tests.ssg_test_suite.common.fetch_local_tests_paths(tests_dir)
tests.ssg_test_suite.common.fetch_templated_tests_paths(rule_namedtuple, product_yaml)
tests.ssg_test_suite.common.file_known_as_useless(file_name)
tests.ssg_test_suite.common.get_cpe_of_tested_os(test_env, log_file)
tests.ssg_test_suite.common.get_prefixed_name(state_name)
tests.ssg_test_suite.common.get_product_context(product=None)

Returns a product YAML context if any product is specified. Hard-coded to assume a debug build.

tests.ssg_test_suite.common.get_test_dir_config(test_dir, product_yaml)
tests.ssg_test_suite.common.install_packages(test_env, packages)
tests.ssg_test_suite.common.load_local_tests(local_tests_paths, local_env_yaml)
tests.ssg_test_suite.common.load_rule_and_env(rule_dir_path, env_yaml, product=None)

Loads a rule and returns the combination of the RuleYAML class and the corresponding local environment for that rule.

tests.ssg_test_suite.common.load_templated_tests(templated_tests_paths, template, local_env_yaml)
tests.ssg_test_suite.common.load_test(absolute_path, rule_template, local_env_yaml)
tests.ssg_test_suite.common.matches_platform(scenario_platforms, benchmark_cpes)
tests.ssg_test_suite.common.retry_with_stdout_logging(command, args, log_file, max_attempts=5)
tests.ssg_test_suite.common.run_cmd_local(command, verbose_path, env=None)
tests.ssg_test_suite.common.run_with_stdout_logging(command, args, log_file)
tests.ssg_test_suite.common.select_templated_tests(test_dir_config, available_scenarios_basenames)
tests.ssg_test_suite.common.send_scripts(test_env, test_content_by_rule_id)
tests.ssg_test_suite.common.walk_through_benchmark_dirs(product=None)
tests.ssg_test_suite.common.write_rule_test_content_to_dir(rule_dir, test_content)

tests.ssg_test_suite.log module

class tests.ssg_test_suite.log.LogHelper

Bases: object

Provide a focal point for logging. LOG_DIR is useful when the output of a script is saved into a file. Log preloading is a way to log an outcome before the output itself.

FORMATTER = <logging.Formatter object>
INTERMEDIATE_LOGS = {'fail': [], 'notapplicable': [], 'pass': []}
LOG_DIR = None
LOG_FILE = None
classmethod add_console_logger(logger, level)

Convenience function to set defaults for console logger

classmethod add_logging_dir(logger, _dirname)

Convenience function to set up default logging into a file.

Also sets LOG_DIR and LOG_FILE

static find_name(original_path, suffix='')

Find a file name that is not yet present in the given directory

Returns path – original_path + number + suffix
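
The general idea can be sketched as follows; the numbering scheme of the real method may differ:

    # Sketch of the unique-name search; the real numbering scheme may differ.
    import os

    def find_free_name(original_path, suffix=""):
        number = 0
        while True:
            candidate = "{0}-{1}{2}".format(original_path, number, suffix)
            if not os.path.exists(candidate):
                return candidate
            number += 1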

classmethod log_preloaded(log_target)

Log messages preloaded in one of the named buffers. Wipe out all buffers afterwards.

classmethod preload_log(log_level, log_line, log_target=None)

Save a log line for later use. Fill the named buffer `log_target` with the log line.

Special case: if log_target is left at its default, i.e. None, all buffers will be filled with the same log line.
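
A hypothetical illustration of the preloading mechanism; the message texts are examples, while the buffer names correspond to the keys of INTERMEDIATE_LOGS:

    import logging

    from tests.ssg_test_suite.log import LogHelper

    logger = logging.getLogger()
    LogHelper.add_console_logger(logger, logging.INFO)

    # Queue one verdict line per possible outcome into its named buffer.
    LogHelper.preload_log(logging.INFO, "rule xyz: scenario passed", "pass")
    LogHelper.preload_log(logging.ERROR, "rule xyz: scenario failed", "fail")

    # ... run the scenario, decide the outcome ...
    LogHelper.log_preloaded("pass")   # emits the 'pass' buffer, wipes all buffers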

tests.ssg_test_suite.oscap module

class tests.ssg_test_suite.oscap.AnsibleProfileRunner(environment, profile, datastream, benchmark_id)

Bases: ProfileRunner

initial()
remediation()
class tests.ssg_test_suite.oscap.AnsibleRuleRunner(environment, profile, datastream, benchmark_id, rule_id, script_name, dont_clean, no_reports, manual_debug)

Bases: RuleRunner

initial()
remediation()
class tests.ssg_test_suite.oscap.BashProfileRunner(environment, profile, datastream, benchmark_id)

Bases: ProfileRunner

initial()
remediation()
class tests.ssg_test_suite.oscap.BashRuleRunner(environment, profile, datastream, benchmark_id, rule_id, script_name, dont_clean, no_reports, manual_debug)

Bases: RuleRunner

initial()
remediation()
class tests.ssg_test_suite.oscap.Checker(test_env)

Bases: object

finalize()
run_test_for_all_profiles(profiles, test_data=None)
start()
test_target()
class tests.ssg_test_suite.oscap.GenericRunner(environment, profile, datastream, benchmark_id)

Bases: object

analyze(stage)
final()
property get_command
initial()
make_oscap_call()
prepare_online_scanning_arguments()
remediation()
run_stage(stage)
class tests.ssg_test_suite.oscap.OscapProfileRunner(environment, profile, datastream, benchmark_id)

Bases: ProfileRunner

remediation()
class tests.ssg_test_suite.oscap.OscapRuleRunner(environment, profile, datastream, benchmark_id, rule_id, script_name, dont_clean, no_reports, manual_debug)

Bases: RuleRunner

final()

There is no need to run the final scan again - the result won't differ from what we already have from the remediation step.

remediation()
class tests.ssg_test_suite.oscap.ProfileRunner(environment, profile, datastream, benchmark_id)

Bases: GenericRunner

final()
make_oscap_call()
class tests.ssg_test_suite.oscap.RuleRunner(environment, profile, datastream, benchmark_id, rule_id, script_name, dont_clean, no_reports, manual_debug)

Bases: GenericRunner

final()
make_oscap_call()
run_stage_with_context(stage, context)
tests.ssg_test_suite.oscap.analysis_to_serializable(analysis)
tests.ssg_test_suite.oscap.find_result_id_in_output(output)
tests.ssg_test_suite.oscap.generate_fixes_remotely(test_env, formatting, verbose_path)
tests.ssg_test_suite.oscap.get_file_remote(test_env, verbose_path, local_dir, remote_path)

Download a file from the VM.

tests.ssg_test_suite.oscap.get_result_id_from_arf(arf_path, verbose_path)
tests.ssg_test_suite.oscap.is_virtual_oscap_profile(profile)

Test whether the profile belongs to the so-called virtual category of OpenSCAP's available profiles. It can be (all) or another ID we might come up with in the future; it just needs to be enclosed in parentheses, for example "(custom_profile)".
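
A minimal sketch of such a check, relying only on the parentheses convention described above (not necessarily the exact implementation):

    # Minimal sketch of the parentheses convention; not necessarily
    # the exact implementation used by the test suite.
    def looks_like_virtual_profile(profile):
        return profile.startswith("(") and profile.endswith(")")

    assert looks_like_virtual_profile("(all)")
    assert not looks_like_virtual_profile("ospp")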

tests.ssg_test_suite.oscap.process_profile_id(profile)
tests.ssg_test_suite.oscap.run_stage_remediation_ansible(run_type, test_env, formatting, verbose_path)

Returns False on error, or True if the Ansible playbook run succeeded.

tests.ssg_test_suite.oscap.run_stage_remediation_bash(run_type, test_env, formatting, verbose_path)

Returns False on error, or True if the bash scripts ran successfully.

tests.ssg_test_suite.oscap.save_analysis_to_json(analysis, output_fname)
tests.ssg_test_suite.oscap.send_arf_to_remote_machine_and_generate_remediations_there(run_type, test_env, formatting, verbose_path)
tests.ssg_test_suite.oscap.send_files_remote(verbose_path, remote_dir, domain_ip, *files)

Upload files to the VM.

tests.ssg_test_suite.oscap.single_quote_string(input)
tests.ssg_test_suite.oscap.triage_xml_results(fname)

tests.ssg_test_suite.profile module

class tests.ssg_test_suite.profile.ProfileChecker(test_env)

Bases: Checker

Iterate over the profiles in the datastream and scan the unaltered system using every profile, according to the input. Also perform a remediation run. The return value is not defined; the textual output and generated reports are the result.

tests.ssg_test_suite.profile.perform_profile_check(options)

tests.ssg_test_suite.rule module

class tests.ssg_test_suite.rule.Rule(directory, id, short_id, template, local_env_yaml, rule)

Bases: tuple

property directory

Alias for field number 0

property id

Alias for field number 1

property local_env_yaml

Alias for field number 4

property rule

Alias for field number 5

property short_id

Alias for field number 2

property template

Alias for field number 3

class tests.ssg_test_suite.rule.RuleChecker(test_env)

Bases: Checker

Rule checks generally work like this - for every profile that supports the rule under test:

  • Alter the system.

  • Run the scan and check that the result meets expectations. If the test scenario passed as requested, return True; if it failed or passed unexpectedly, return False.

The following sequence applies if the initial scan has failed as expected:

  • If there are no remediations, return True.

  • Run remediation, return False if it failed.

  • Return the result of the final scan of the remediated system.
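
A minimal sketch of the outer loop implied above; check_rule and run_scenario are hypothetical helpers, not the real methods of this class:

    # Hypothetical outer loop: every profile that supports the rule is
    # exercised with every test scenario; failures are collected rather
    # than aborting the run.
    def check_rule(profiles, scenarios, run_scenario):
        success = True
        for profile in profiles:
            for scenario in scenarios:
                if not run_scenario(profile, scenario):
                    success = False
        return success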

copy_of_datastream(new_filename=None)
finalize()
test_rule(state, rule, scenarios)
class tests.ssg_test_suite.rule.RuleTestContent(scenarios, other_content)

Bases: tuple

property other_content

Alias for field number 1

property scenarios

Alias for field number 0

class tests.ssg_test_suite.rule.Scenario(script, script_contents)

Bases: object

matches_platform(benchmark_cpes)
matches_regex(scenarios_regex)
matches_regex_and_platform(scenarios_regex, benchmark_cpes)
override_profile(scenarios_profile)
tests.ssg_test_suite.rule.generate_xslt_change_value_template(value_short_id, new_value)
tests.ssg_test_suite.rule.get_viable_profiles(selected_profiles, datastream, benchmark, script=None)

Read the datastream and return the intersection of the given benchmark's profiles with those provided in the selected_profiles parameter.
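
Conceptually this is a set intersection; a small sketch with made-up profile IDs:

    # Conceptual sketch: viable profiles are the intersection of what the
    # benchmark offers and what the caller selected (IDs are made up).
    benchmark_profiles = {"ospp", "pci-dss", "standard"}
    selected_profiles = ["ospp", "cis"]

    viable = [p for p in selected_profiles if p in benchmark_profiles]
    # -> ["ospp"]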

tests.ssg_test_suite.rule.perform_rule_check(options)

tests.ssg_test_suite.test_env module

class tests.ssg_test_suite.test_env.ContainerTestEnv(scanning_mode, image_name)

Bases: TestEnv

property current_container
property current_image
finalize()

Perform the environment cleanup and shut it down.

get_ip_address()
get_ssh_additional_options()
get_ssh_port()
image_stem2fqn(stem)
offline_scan(args, verbose_path)
reset_state_to(state_name, new_running_state_name)
run_container(image_name, container_name='running')
start()

Run the environment and ensure that the environment will not be permanently modified by subsequent procedures.

class tests.ssg_test_suite.test_env.DockerTestEnv(mode, image_name)

Bases: ContainerTestEnv

get_ip_address()
name = 'docker-based'
class tests.ssg_test_suite.test_env.PodmanTestEnv(scanning_mode, image_name)

Bases: ContainerTestEnv

extract_port_map(podman_network_data)
get_ip_address()
name = 'podman-based'
class tests.ssg_test_suite.test_env.SavedState(environment, name)

Bases: object

classmethod create_from_environment(environment, state_name)
map_on_top(function, args_list)
class tests.ssg_test_suite.test_env.TestEnv(scanning_mode)

Bases: object

arf_to_html(arf_filename)
execute_ssh_command(command, log_file, error_msg_template=None)
Args:
  • command: Command to execute remotely as a single string

  • log_file

  • error_msg_template: A string that can contain references to: command, remote_dest, rc, and stderr
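
An illustrative call, assuming the template references are str.format-style placeholders (the exact formatting mechanism is an assumption here):

    # Illustrative call; test_env is an instantiated TestEnv subclass and
    # log_file an open log file. The placeholder syntax is an assumption.
    test_env.execute_ssh_command(
        "oscap --version",
        log_file,
        error_msg_template="Command '{command}' failed with rc={rc}: {stderr}",
    )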

finalize()

Perform the environment cleanup and shut it down.

get_ip_address()
get_ssh_additional_options()
get_ssh_port()
offline_scan(args, verbose_path)
online_scan(args, verbose_path)
refresh_connection_parameters()
reset_state_to(state_name, new_running_state_name)
save_state(state_name)
scan(args, verbose_path)
scp_download_file(source, destination, log_file, error_msg=None)
scp_transfer_file(source, destination, log_file, error_msg=None)
scp_upload_file(source, destination, log_file, error_msg=None)
start()

Run the environment and ensure that the environment will not be permanently modified by subsequent procedures.

class tests.ssg_test_suite.test_env.VMTestEnv(mode, hypervisor, domain_name, keep_snapshots)

Bases: TestEnv

finalize()

Perform the environment cleanup and shut it down.

get_ip_address()
has_test_suite_prefix(snapshot_name)
name = 'libvirt-based'
offline_scan(args, verbose_path)
reboot()
reset_state_to(state_name, new_running_state_name)
snapshot_lookup(snapshot_name)
snapshots_cleanup()
start()

Run the environment and ensure that the environment will not be permanently modified by subsequent procedures.

tests.ssg_test_suite.virt module

tests.ssg_test_suite.xml_operations module

tests.ssg_test_suite.xml_operations.add_platform_to_benchmark(root, cpe_regex)
tests.ssg_test_suite.xml_operations.add_product_to_fips_certified(root, product='fedora')
tests.ssg_test_suite.xml_operations.benchmark_get_applicable_platforms(datastream, benchmark_id, logging=None)

Returns a set of CPEs the given benchmark is applicable to.

tests.ssg_test_suite.xml_operations.datastream_root(ds_location, save_location=None)
tests.ssg_test_suite.xml_operations.find_elements(root, element_spec=None)
tests.ssg_test_suite.xml_operations.find_fix_in_benchmark(datastream, benchmark_id, rule_id, fix_type='bash', logging=None)

Return the fix from the benchmark, or None if not found.

tests.ssg_test_suite.xml_operations.find_rule_in_benchmark(datastream, benchmark_id, rule_id, logging=None)

Returns the rule node from the given benchmark.

tests.ssg_test_suite.xml_operations.get_all_profiles_in_benchmark(datastream, benchmark_id, logging=None)
tests.ssg_test_suite.xml_operations.get_all_rule_ids_in_profile(datastream, benchmark_id, profile_id, logging=None)
tests.ssg_test_suite.xml_operations.get_all_rule_selections_in_profile(datastream, benchmark_id, profile_id, logging=None)
tests.ssg_test_suite.xml_operations.get_all_rules_in_benchmark(datastream, benchmark_id, logging=None)

Returns all rule IDs in the given benchmark.

tests.ssg_test_suite.xml_operations.get_all_xccdf_ids_in_datastream(datastream)
tests.ssg_test_suite.xml_operations.get_oscap_supported_cpes()

Obtain a list of CPEs that the scanner supports

tests.ssg_test_suite.xml_operations.infer_benchmark_id_from_component_ref_id(datastream, ref_id)
tests.ssg_test_suite.xml_operations.instance_in_platforms(inst, platforms)
tests.ssg_test_suite.xml_operations.remove_ansible_machine_remediation_condition(root)
tests.ssg_test_suite.xml_operations.remove_bash_machine_remediation_condition(root)
tests.ssg_test_suite.xml_operations.remove_machine_platform(root)
tests.ssg_test_suite.xml_operations.remove_machine_remediation_condition(root)
tests.ssg_test_suite.xml_operations.remove_ocp4_platforms(root)
tests.ssg_test_suite.xml_operations.remove_platforms(root)
tests.ssg_test_suite.xml_operations.remove_platforms_from_element(root, element_spec=None, platforms=None)

Module contents