NAME

       Config::Model::Tester - Test framework for Config::Model

VERSION

       version 4.007

SYNOPSIS

       In your test file (typically "t/model_test.t"):

        use warnings;
        use strict;

        use Config::Model::Tester ;
        use ExtUtils::testlib;

        run_tests() ;

       Run tests with:

        perl t/model_test.t [ --log ] [--error] [--trace] [ subtest [ test_case ] ]

DESCRIPTION

       This class provides a way to test configuration models with test files. This class was designed to test
       several models and run several test cases per model.

       A specific layout for test files must be followed.

   Sub test specification
       Each subtest is defined in a file like:

        t/model_tests.d/<app-name>-test-conf.pl

       This file specifies that "app-name" (which is defined in the "lib/Config/Model/*.d" directory) is used
       for the test cases defined in the "*-test-conf.pl" file. The model to test is inferred from the
       application name.

       This file contains a list of test cases (explained below) and expects a set of files used as test data.
       The layout of these test data files is explained in the next section.

   Simple test file layout
       Each test case is represented by a configuration file (not a directory) in the "*-examples" directory.
       This configuration file is used by the model to test and is copied as "$confdir/$conf_file_name" using
       the test data structure explained below.

       In the example below, we have one app model to test, "lcdproc", and two test cases. The app name matches
       the file specified in the "lib/Config/Model/*.d" directory. In this case, the app name matches
       "lib/Config/Model/system.d/lcdproc".

        t
        |-- model_test.t
        \-- model_tests.d           # do not change directory name
            |-- lcdproc-test-conf.pl   # subtest specification for lcdproc app
            \-- lcdproc-examples
                |-- t0              # test case t0
                \-- LCDD-0.5.5      # test case for older LCDproc

       The subtest specification is written in the "lcdproc-test-conf.pl" file (i.e. this module looks for files
       named like "<app-name>-test-conf.pl").

       Subtest data is provided as files in the "lcdproc-examples" directory (i.e. this module looks for test
       data in the "<app-name>-examples" directory). "lcdproc-test-conf.pl" contains instructions so that each
       file is used as a "/etc/LCDd.conf" file during each test case.

       "lcdproc-test-conf.pl" can contain specifications for more test cases. Each test case requires a new file
       in "lcdproc-examples" directory.

       See "Examples" for a link to the actual LCDproc model tests

   Test file layout for multi-file configuration
       When a configuration is spread over several files, each test case is provided in a sub-directory. This
       sub-directory is copied into "conf_dir" (a test parameter, as explained below).

       In the example below, the test specification is written in "dpkg-test-conf.pl". Dpkg layout requires
       several files per test case.  "dpkg-test-conf.pl" contains instructions so that each directory under
       "dpkg-examples" is used.

        t/model_tests.d
        \-- dpkg-test-conf.pl         # subtest specification
        \-- dpkg-examples
            \-- libversion            # example subdir, used as test case name
                \-- debian            # directory used by the test case
                    |-- changelog
                    |-- compat
                    |-- control
                    |-- copyright
                    |-- rules
                    |-- source
                    |   \-- format
                    \-- watch

       See "Examples" for a link to the (many) Dpkg model tests

   More complex file layout
       Each test case is a sub-directory in the "*-examples" directory and contains several files. The
       destination of the test files may depend on the system (e.g. the OS). For instance, the system wide
       "ssh_config" is stored in "/etc/ssh" on Linux, and directly in "/etc" on MacOS.

       These files are copied into a test directory using a "setup" parameter in the test case specification.

       Let's consider this example of a test case for ssh:

        t/model_tests.d/
        |-- ssh-test-conf.pl
        |-- ssh-examples
            \-- basic
                |-- system_ssh_config
                \-- user_ssh_config

       Unfortunately, "user_ssh_config" is a user file, so you need to specify where the home directory of the
       test is located with another global parameter:

         home_for_test => '/home/joe' ;

       For Linux only, the "setup" parameter is:

        setup => {
          system_ssh_config => '/etc/ssh/ssh_config',
          user_ssh_config   => "~/.ssh/config"
        }

       On the other hand, the location of the system wide config file is different on MacOS, so the test file
       must be copied to the correct location. When the value of the "setup" hash is another hash, the keys of
       this inner hash specify the target location for each OS (as returned by the Perl $^O variable):

             setup => {
               'system_ssh_config' => {
                   'darwin' => '/etc/ssh_config',
                   'default' => '/etc/ssh/ssh_config',
               },
               'user_ssh_config' => "~/.ssh/config"
             }

       "systemd" is another beast where configuration files can be symlinks to "/dev/null" or other files. To
       emulate this situation, use an array as setup target:

         setup => {
             # test data file => [ link (may be repeated) ..., link target that contains the test data ]
             'ssh.service' => [ '/etc/systemd/system/sshd.service', '/lib/systemd/system/ssh.service' ]
         }

       This will result in a symlink like:

          wr_root/model_tests/test-sshd-service/etc/systemd/system/sshd.service
          -> /absolute_path_to/wr_root/model_tests/test-sshd-service/lib/systemd/system/ssh.service

       See the actual Ssh and Sshd model tests
       <https://github.com/dod38fr/config-model-openssh/tree/master/t/model_tests.d>.

   Basic test specification
       Each model subtest is specified in "<app>-test-conf.pl". This file must return a data structure
       containing the test specifications. Each test data structure contains global parameters (applied to all
       test cases) and test case parameters (applied to a single test case):

        use strict;
        use warnings;
        {
          # global parameters

          # config file name (used to copy the test case into the wr_root/model_tests directory)
          conf_file_name => "fstab",
          # config dir where to copy the file (optional)
          conf_dir => "etc",
          # home directory for this test
          home_for_test => '/home/joe',

          tests =>  [
            {
              # test case 1
              name => 'my_first_test',
              # other test case parameters
            },
            {
              # test case 2
              name => 'my_second_test',
              # other test case parameters
            },
            # ...
          ],
        };

        # do not add 1; at the end of the file

       In the example below, "t0" file is copied in "wr_root/model_tests/test-t0/etc/fstab".

        use strict;
        use warnings;
        {
          # list of tests.
          tests => [
            {
              # test name
              name => 't0',
              # add optional specification here for t0 test
            },
            {
              name => 't1',
              # add optional specification here for t1 test
            },
          ]
        };

       You can suppress warnings by specifying "no_warnings => 1" in each test case. On the other hand, you may
       also want to check for warnings specific to your model. In this case, you should avoid specifying
       "no_warnings" here and specify warning tests or warning filters as mentioned below.

       See the actual fstab test
       <https://github.com/dod38fr/config-model/blob/master/t/model_tests.d/fstab-test-conf.pl>.

   Skip a test
       A test file can be skipped using the "skip" global test parameter.

       In this example, the test is skipped when not running on a Debian system:

        eval { require AptPkg::Config; };
        my $skip = ( $@ or not -r '/etc/debian_version' ) ? 1 : 0;

        {
          skip => $skip,
          tests => [ ] ,
        };

   Internal tests or backend tests
       Some tests require the creation of a configuration class dedicated to tests (typically to test corner
       cases of a backend).

       This test class can be created directly in the test specification by specifying test classes in the
       "config_classes" global test parameter as an array ref. Each array element is a data structure that uses
       create_config_class parameters. See for instance the layer test
       <https://github.com/dod38fr/config-model/blob/master/t/model_tests.d/layer-test-conf.pl> or the test for
       the shellvar backend
       <https://github.com/dod38fr/config-model/blob/master/t/model_tests.d/backend-shellvar-test-conf.pl>.

       In this case, no application exists for such classes, so the model to test must be specified in a global
       test parameter:

         return {
           config_classes => [ { name => "Foo", element => ... } , ... ],
           model_to_test => "Foo",
           tests => [ ... ]
         };
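
       For instance, a minimal sketch of a dedicated test class, assuming a single "uniline" leaf element (see
       create_config_class in Config::Model for the full list of accepted parameters):

         return {
           config_classes => [
             {
               name    => "Foo",
               element => [
                 bar => {
                   type       => 'leaf',
                   value_type => 'uniline',
                 },
               ],
             },
           ],
           model_to_test => "Foo",
           tests => [ { name => 't0' } ],
         };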

   Test specification with arbitrary file names
       In some models, like "Multistrap", the config file is chosen by the user. In this case, the file name
       must be specified for each tests case:

        {
          tests => [ {
              name        => 'arm',
              config_file => '/home/foo/my_arm.conf',
              check       => {},
           }]
        };

       See the actual multistrap test
       <https://github.com/dod38fr/config-model/blob/master/t/model_tests.d/multistrap-test-conf.pl>.

   Backend argument
       Some applications, like systemd, require a backend argument specified by the user (e.g. a service name
       for systemd). The parameter "backend_arg" can be specified to emulate this behavior.
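
       For instance, a minimal sketch (the test and service names are illustrative):

         tests => [
           {
             name        => 'my-service',
             backend_arg => 'foo.service',   # hypothetical service name passed to the backend
           },
         ]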

   Re-use test data
       When the input data for a test is quite complex (several files), it may be interesting to re-use this
       data for other test cases. Knowing that test names must be unique, you can re-use test data with the
       "data_from" parameter. For instance:

         tests => [
           {
               name  => 'some-test',
               # ...
           },
           {
               name  => 'some-other-test',
               data_from  => 'some-test',    # re-use data from test above
               # ...
           },
         ]

       See the plainfile backend test
       <https://github.com/dod38fr/config-model/blob/master/t/model_tests.d/backend-plainfile-test-conf.pl> for
       a real life example.

       Likewise, it may be useful to re-use test data from another group of tests. Let's see this example from
       "systemd-service-test-conf.pl":

           {
               name => 'transmission',
               data_from_group => 'systemd', # i.e from ../systemd-examples
           }

       "data_from" and "data_from_group" can be together.

   Test scenario
       Each subtest follows a sequence explained below. Each step of this sequence may be altered by adding test
       case parameters in "<model-to-test>-test-conf.pl":

       •   Set up the test in "wr_root/model_tests/<subtest name>/". If your configuration file layout depends
           on the target system, you will have to specify the path using the "setup" parameter. See "More
           complex file layout".

       •   Create a configuration instance, load the config data and check its validity. Use "load_check =>
           'no'" if your file is not valid.
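
           For instance, a minimal sketch (the test name is illustrative):

               {
                 name       => 'invalid-conf',
                 load_check => 'no',   # the input file is known to be invalid
               },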

       •   Check for config data warnings. You should pass the list of expected warnings that are emitted
           through Log::Log4perl. The array ref is passed as is to the "expect" function of
           Test::Log::Log4perl. E.g:

               log4perl_load_warnings => [
                    [ 'Tree.Node', (warn => qr/deprecated/) x 2 ]  ,
                    [ 'Tree.Element.Value' , ( warn => qr/skipping/) x 2 ]
               ]

           The Log classes are specified in "cme/Logging".

           Log levels below "warn" are ignored.

           Note that log tests are disabled when the "--log" option is used, hence all warnings triggered by the
           tests are shown.

           Config::Model  is currently transitioning from traditional "warn" to warn logs. To avoid breaking all
           tests  based  on  this  module,  the  warnings  are   emitted   through   Log::Log4perl   only   when
           $::_use_log4perl_to_warn  is  set.  This  hack  will be removed once all warnings checks in tests are
           ported to log4perl checks.

       •   DEPRECATED. Check for config data warnings. You should pass the list of expected warnings. E.g.

               load_warnings => [ qr/Missing/, (qr/deprecated/) x 3 , ],

           Use an empty array ref to mask load warnings.
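
           For instance:

               load_warnings => [],   # mask all load warnings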

       •   Optionally run update command:

            update => {
               returns => 'foo' , # optional
               no_warnings => [ 0 | 1 ], # default 0
              quiet => [ 0 | 1 ], # default 0, passed to update method
              log4perl_update_warnings => [ ... ] # Test::Log::Log4perl::expect arguments
            }

           Where:

           •   "returns" is the expected return value (optional).

           •   "no_warnings" can be used to suppress the warnings coming from  Config::Model::Value.  Note  that
               "no_warnings => 1" may be useful for less verbose test.

           •   "quiet" to suppress progress messages during update.

           •   "log4perl_update_warnings"  is used to check the warnings produced during update. The argument is
               passed to "expect" function of Test::Log::Log4perl. See "load_warnings" parameter above for  more
               details.

           •   DEPRECATED.  "update_warnings"  is  an  array ref of quoted regexp (See qr operator) to check the
               warnings produced during update. Please use "log4perl_update_warnings" instead.

           All other arguments are passed to "update" method.

       •   Optionally load configuration data. You should design this config  data  to  suppress  any  error  or
           warning mentioned above. E.g:

               load => 'binary:seaview Synopsis="multiplatform interface for sequence alignment"',

           See Config::Model::Loader for the syntax of the string accepted by "load" parameter.

       •   Optionally,  run  a  check  before  running  apply_fix (if any). This step is useful to check warning
           messages:

              check_before_fix => {
                 dump_errors   => [ ... ], # optional, see below
                 log4perl_dump_warnings => [ ... ], # optional, see below
              }

           Use "dump_errors" if you expect issues:

             check_before_fix => {
               dump_errors =>  [
                   # the issues  and a way to fix the issue using Config::Model::Node::load
                   qr/mandatory/ => 'Files:"*" Copyright:0="(c) foobar"',
                   qr/mandatory/ => ' License:FOO text="foo bar" ! Files:"*" License short_name="FOO" '
               ],
             }

           Likewise, specify any expected warnings:

             check_before_fix => {
                   log4perl_dump_warnings => [ ... ],
             }

           "log4perl_dump_warnings" passes the array ref content to "expect" function of Test::Log::Log4perl.

           Both "log4perl_dump_warnings" and "dump_errors" can be specified in "check_before_fix" hash.

       •   Optionally, call apply_fixes:

               apply_fix => 1,

       •   Call dump_tree to check the validity of the data after the optional "apply_fix". This step is not
           optional.

           As with "check_before_fix", both "dump_errors" and "log4perl_dump_warnings" can be specified in the
           "full_dump" parameter:

            full_dump => {
                log4perl_dump_warnings => [ ... ], # optional
                dump_errors            => [ ... ], # optional
            }

       •   Run specific content check to verify that configuration data was retrieved correctly:

               check => {
                   'fs:/proc fs_spec' => "proc",
                   'fs:/proc fs_file' => "/proc",
                   'fs:/home fs_file' => "/home",
               },

           The keys of the hash point to the values to be checked using the syntax described in "grab" in
           Config::Model::Role::Grab.

           Multiple checks on the same item can be applied with an array ref:

               check => [
                   Synopsis => 'fix undefined path_max for st_size zero',
                   Description => [ qr/^The downstream/,  qr/yada yada/ ]
               ]

           You can run checks using different check modes (see "fetch" in Config::Model::Value) by passing a
           hash ref instead of a scalar:

               check  => {
                   'sections:debian packages:0' => { mode => 'layered', value => 'dpkg-dev' },
               'sections:base packages:0'   => { mode => 'layered', value => 'gcc-4.2-base' },
               },

           The whole hash content (except "value") is passed to grab and fetch.

           A regexp can also be used to check a value:

              check => {
                 "License text" => qr/gnu/i,
              }

           And specifications can nest hash or array styles:

              check => {
                 "License:0 text" => qr/gnu/i,
                 "License:1 text" => [ qr/gnu/i, qr/Stallman/ ],
                 "License:2 text" => { mode => 'custom', value => [ qr/gnu/i , qr/Stallman/ ] },
                 "License:3 text" => [ qr/General/], { mode => 'custom', value => [ qr/gnu/i , qr/Stallman/ ] },
              }

       •   Verify if a hash contains one or more keys (or keys matching a regexp):

            has_key => [
               'sections' => 'debian', # sections must point to a hash element
               'control' => [qw/source binary/],
               'copyright Files' => qr/.c$/,
           'copyright Files' => [ qr/\.h$/, qr/\.c$/ ],
            ],

       •   Verify that a hash does not have a key (or a key matching a regexp):

            has_not_key => [
           'copyright Files' => qr/.virus$/ # silly, isn't it?
            ],

       •   Verify annotation extracted from the configuration file comments:

               verify_annotation => {
                       'source Build-Depends' => "do NOT add libgtk2-perl to build-deps (see bug #554704)",
                       'source Maintainer' => "what a fine\nteam this one is",
                   },

       •   Write back the config data in "wr_root/model_tests/<subtest name>/".  Note that write back is forced,
           so  the tested configuration files are written back even if the configuration values were not changed
           during the test.

           You can skip warnings when writing back with the global parameter:

               no_warnings => 1,

       •   Check the content of the written file(s) with Test::File::Contents. Tests can be grouped in an array
           ref:

              file_contents => {
                       "/home/foo/my_arm.conf" => "really big string" ,
                       "/home/bar/my_arm.conf" => [ "really big string" , "another"], ,
                   }

              file_contents_like => {
                       "/home/foo/my_arm.conf" => [ qr/should be there/, qr/as well/ ] ,
              }

              file_contents_unlike => {
                       "/home/foo/my_arm.conf" => qr/should NOT be there/ ,
              }

       •   Check the mode of the written files:

             file_mode => {
                "~/.ssh/ssh_config"     => oct(600), # better than 0600
                "debian/stuff.postinst" => oct(755),
             }

           Only the last four octal digits of the mode are tested, i.e. the test is done with "$file_mode &
           oct(7777)".

           Note: this test is skipped on Windows.

       •   Check added or removed configuration files. If you expect changes, specify a subref to alter the file
           list:

               file_check_sub => sub {
                   my $list_ref = shift ;
                   # file added during tests
                   push @$list_ref, "/debian/source/format" ;
               },

           Note that actual and expected file lists are sorted before the check; adding a file can be done with
           "push".

       •   Copy all config data from "wr_root/model_tests/<subtest name>/" to "wr_root/model_tests/<subtest
           name>-w/". This step is necessary to check that the configuration written back has the same content
           as the original configuration.

       •   Create a second configuration instance to read the conf file that was just copied (configuration
           data is checked).

       •   You can skip the load check if the written file still contains errors (e.g. some errors were ignored
           and cannot be fixed) with "load_check2 => 'no'".

       •   Optionally load configuration data in the second instance. You should design this config data to
           suppress any error or warning that occurs in the steps below. E.g:

               load2 => 'binary:seaview',

           See Config::Model::Loader for the syntax of the string accepted by "load2" parameter.

       •   Compare the data read back with the original data.

       •   Run  specific  content check on the written config file to verify that configuration data was written
           and retrieved correctly:

               wr_check => {
                   'fs:/proc fs_spec' =>          "proc" ,
                   'fs:/proc fs_file' =>          "/proc",
                   'fs:/home fs_file' =>          "/home",
               },

           Like the "check" item explained above, you can run "wr_check" using different check modes.

   Running the test
       Run all tests with one of these commands:

        prove -l t/model_test.t :: [ --trace ] [ --log ] [ --error ] [ <model_name> [ <regexp> ]]
        perl -Ilib t/model_test.t  [ --trace ] [ --log ] [ --error ] [ <model_name> [ <regexp> ]]

       By default, all tests are run on all models.

       You can pass arguments to "t/model_test.t":

       •   Optional parameters: "--trace" to get test traces, "--error" to get a stack trace in case of errors,
           "--log" to get logs. E.g.

             # run with log and error traces
             prove -lv t/model_test.t :: --error --log

       •   The model name to test. E.g.:

             # run only fstab tests
             prove -lv t/model_test.t :: fstab

       •   A regexp to filter subtests. E.g.:

             # run only fstab tests foobar subtest
             prove -lv t/model_test.t :: fstab foobar

             # run only fstab tests foo subtest
             prove -lv t/model_test.t :: fstab '^foo$'

Examples

       Some of these examples may still use global variables (which is deprecated). Such files may be considered
       buggy after Aug 2019. Please warn the author if you find one.

       •   LCDproc <http://lcdproc.org> has a single configuration file: "/etc/LCDd.conf". Here's the LCDproc
           test layout <https://github.com/dod38fr/config-model-lcdproc/tree/master/t/model_tests.d> and the
           test specification
           <https://github.com/dod38fr/config-model-lcdproc/blob/master/t/model_tests.d/lcdd-test-conf.pl>

       •   Dpkg packages are constructed from several files. These files are handled like configuration files
           by Config::Model::Dpkg
           <https://salsa.debian.org/perl-team/modules/packages/libconfig-model-dpkg-perl>. The test layout
           <https://salsa.debian.org/perl-team/modules/packages/libconfig-model-dpkg-perl/-/tree/master/t/model_tests.d>
           features tests with multiple files in dpkg-examples
           <https://salsa.debian.org/perl-team/modules/packages/libconfig-model-dpkg-perl/-/tree/master/t/model_tests.d/dpkg-examples>.
           The test is specified in
           <https://salsa.debian.org/perl-team/modules/packages/libconfig-model-dpkg-perl/-/blob/master/t/model_tests.d/dpkg-test-conf.pl>

       •   multistrap-test-conf.pl
           <https://github.com/dod38fr/config-model/blob/master/t/model_tests.d/multistrap-test-conf.pl> and
           multistrap-examples
           <https://github.com/dod38fr/config-model/tree/master/t/model_tests.d/multistrap-examples> specify a
           test where the configuration file name is not imposed by the application. The file name must then be
           set in the test specification.

       •   backend-shellvar-test-conf.pl
           <https://github.com/dod38fr/config-model/blob/master/t/model_tests.d/backend-shellvar-test-conf.pl>
           is a more complex example showing how to test a backend. The test is done by creating a dummy model
           within the test specification.

CREDITS

       In alphabetical order:

       •   Cyrille Bollu

       Many thanks for your help.

SEE ALSO

       •   Config::Model

       •   Test::More

AUTHOR

       Dominique Dumont

COPYRIGHT AND LICENSE

       This software is Copyright (c) 2013-2020 by Dominique Dumont.

       This is free software, licensed under:

         The GNU Lesser General Public License, Version 2.1, February 1999

SUPPORT

   Websites
       The  following websites have more information about this module, and may be of help to you. As always, in
       addition to those websites please use your favorite search engine to discover more resources.

       •   CPANTS

           The CPANTS is a website that analyzes the Kwalitee (code metrics) of a distribution.

           <http://cpants.cpanauthors.org/dist/Config-Model-Tester>

       •   CPAN Testers

           The CPAN  Testers  is  a  network  of  smoke  testers  who  run  automated  tests  on  uploaded  CPAN
           distributions.

           <http://www.cpantesters.org/distro/C/Config-Model-Tester>

       •   CPAN Testers Matrix

           The  CPAN  Testers  Matrix  is  a  website  that provides a visual overview of the test results for a
           distribution on various Perls/platforms.

           <http://matrix.cpantesters.org/?dist=Config-Model-Tester>

       •   CPAN Testers Dependencies

           The CPAN Testers Dependencies is a website that shows a chart of the test results of all dependencies
           for a distribution.

           <http://deps.cpantesters.org/?module=Config::Model::Tester>

   Bugs / Feature Requests
       Please report any bugs or feature requests by  email  to  "ddumont  at  cpan.org",  or  through  the  web
       interface  at <https://github.com/dod38fr/config-model-tester/issues>. You will be automatically notified
       of any progress on the request by the system.

   Source Code
       The code is open to the world, and available for you to hack on. Please feel free to browse it  and  play
       with  it,  or  whatever. If you want to contribute patches, please send me a diff or prod me to pull from
       your repository :)

       <http://github.com/dod38fr/config-model-tester.git>

         git clone git://github.com/dod38fr/config-model-tester.git
