[WARNING]: Collection infra.leapp does not support Ansible version 2.14.18
[WARNING]: running playbook inside collection infra.leapp
ansible-playbook [core 2.14.18]
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.9/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible-playbook
  python version = 3.9.25 (main, Nov 10 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-11)] (/usr/bin/python3)
  jinja version = 3.1.2
  libyaml = True
Using /etc/ansible/ansible.cfg as config file
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.

PLAYBOOK: tests_default.yml ****************************************************
1 plays in /root/.ansible/collections/ansible_collections/infra/leapp/roles/common/tests/tests_default.yml

PLAY [Test] ********************************************************************

TASK [Gathering Facts] *********************************************************
task path: /root/.ansible/collections/ansible_collections/infra/leapp/roles/common/tests/tests_default.yml:2
ok: [managed-node01]

TASK [infra.leapp.common : Log directory exists] *******************************
task path: /root/.ansible/collections/ansible_collections/infra/leapp/roles/common/tasks/main.yml:3
ok: [managed-node01] => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "owner": "root", "path": "/var/log/ripu", "secontext": "unconfined_u:object_r:var_log_t:s0", "size": 22, "state": "directory", "uid": 0}

TASK [infra.leapp.common : Check for existing log file] ************************
task path: /root/.ansible/collections/ansible_collections/infra/leapp/roles/common/tasks/main.yml:11
ok: [managed-node01] => {"changed": false, "stat": {"atime": 1764757117.9514542, "attr_flags": "", "attributes": [], "block_size": 4096, "blocks": 8, "charset": "us-ascii", "checksum": "ceb5b2cb466c1f4cdeb9e0c0b41a0c84cf86c0ea", "ctime": 1764757121.94645, "dev": 51716, "device_type": 0, "executable": false, "exists": true, "gid": 0, "gr_name": "root", "inode": 142620612, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "mimetype": "text/plain", "mode": "0644", "mtime": 1764757121.94645, "nlink": 1, "path": "/var/log/ripu/ripu.log", "pw_name": "root", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 105, "uid": 0, "version": "4081930022", "wgrp": false, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}

TASK [infra.leapp.common : Fail if log file already exists] ********************
task path: /root/.ansible/collections/ansible_collections/infra/leapp/roles/common/tasks/main.yml:16
fatal: [managed-node01]: FAILED! => {"changed": false, "msg": "Another RIPU playbook job is already running. See /var/log/ripu/ripu.log for details.
If the previous job was aborted, rename the log file to clear this failure and try again."}

PLAY RECAP *********************************************************************
managed-node01             : ok=3    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Dec 03 05:19:49 managed-node01 python3[13346]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 03 05:19:50 managed-node01 python3[13508]: ansible-ansible.builtin.file Invoked with path=/var/log/ripu state=directory owner=root group=root mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 03 05:19:50 managed-node01 python3[13639]: ansible-ansible.builtin.stat Invoked with path=/var/log/ripu/ripu.log follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 03 05:19:51 managed-node01 sshd-session[13663]: Accepted publickey for root from 10.31.41.216 port 59652 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Dec 03 05:19:51 managed-node01 systemd-logind[677]: New session 8 of user root.
░░ Subject: A new session 8 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 8 has been created for the user root.
░░
░░ The leading process of the session is 13663.
Dec 03 05:19:51 managed-node01 systemd[1]: Started session-8.scope - Session 8 of User root.
░░ Subject: A start job for unit session-8.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-8.scope has finished successfully.
░░
░░ The job identifier is 1633.
Dec 03 05:19:51 managed-node01 sshd-session[13663]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
Dec 03 05:19:51 managed-node01 sshd-session[13666]: Received disconnect from 10.31.41.216 port 59652:11: disconnected by user
Dec 03 05:19:51 managed-node01 sshd-session[13666]: Disconnected from user root 10.31.41.216 port 59652
Dec 03 05:19:51 managed-node01 sshd-session[13663]: pam_unix(sshd:session): session closed for user root
Dec 03 05:19:51 managed-node01 systemd[1]: session-8.scope: Deactivated successfully.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ The unit session-8.scope has successfully entered the 'dead' state.
Dec 03 05:19:51 managed-node01 systemd-logind[677]: Session 8 logged out. Waiting for processes to exit.
Dec 03 05:19:51 managed-node01 systemd-logind[677]: Removed session 8.
░░ Subject: Session 8 has been terminated
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A session with the ID 8 has been terminated.
Dec 03 05:19:51 managed-node01 sshd-session[13689]: Accepted publickey for root from 10.31.41.216 port 59662 ssh2: RSA SHA256:9j1blwt3wcrRiGYZQ7ZGu9axm3cDklH6/z4c+Ee8CzE
Dec 03 05:19:51 managed-node01 systemd-logind[677]: New session 9 of user root.
░░ Subject: A new session 9 has been created for user root
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░ Documentation: sd-login(3)
░░
░░ A new session with the ID 9 has been created for the user root.
░░
░░ The leading process of the session is 13689.
Dec 03 05:19:51 managed-node01 systemd[1]: Started session-9.scope - Session 9 of User root.
░░ Subject: A start job for unit session-9.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://access.redhat.com/support
░░
░░ A start job for unit session-9.scope has finished successfully.
░░
░░ The job identifier is 1740.
Dec 03 05:19:51 managed-node01 sshd-session[13689]: pam_unix(sshd:session): session opened for user root(uid=0) by root(uid=0)
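
The fatal task above acts as a lock-file guard: the run aborts because /var/log/ripu/ripu.log already exists from a previous (or still running) RIPU job. Below is a minimal sketch of that guard pattern, based only on the task names, paths, and module arguments visible in the output above; the registered variable name and the when condition are assumptions, not the role's actual source:

# Sketch of a lock-file guard like the one reported above.
# Paths and module arguments are taken from the log; the registered
# variable name (ripu_log_stat) and the when condition are illustrative.
- name: Log directory exists
  ansible.builtin.file:
    path: /var/log/ripu
    state: directory
    owner: root
    group: root
    mode: "0755"

- name: Check for existing log file
  ansible.builtin.stat:
    path: /var/log/ripu/ripu.log
  register: ripu_log_stat

- name: Fail if log file already exists
  ansible.builtin.fail:
    msg: >-
      Another RIPU playbook job is already running.
      See /var/log/ripu/ripu.log for details.
      If the previous job was aborted, rename the log file to clear this
      failure and try again.
  when: ripu_log_stat.stat.exists

If the earlier job is confirmed to be dead, renaming the existing log file (for example to /var/log/ripu/ripu.log.old) clears the check, as the error message suggests, and the playbook can be rerun.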