[PATCH pve-manager] test: remove logs and add a .gitignore file
Jing Luo
jing at jing.rocks
Thu Sep 12 13:55:17 CEST 2024
Oops, sorry. This is for pve-manager.
On 2024-09-12 20:49, Jing Luo wrote:
> Over the years, three log files have been committed to the git repo. Let's
> remove them and add a .gitignore file.
>
> Signed-off-by: Jing Luo <jing at jing.rocks>
> ---
> test/.gitignore | 1 +
> test/replication_test4.log | 25 ---------------
> test/replication_test5.log | 64 --------------------------------------
> test/replication_test6.log | 8 -----
> 4 files changed, 1 insertion(+), 97 deletions(-)
> create mode 100644 test/.gitignore
> delete mode 100644 test/replication_test4.log
> delete mode 100644 test/replication_test5.log
> delete mode 100644 test/replication_test6.log
>
> diff --git a/test/.gitignore b/test/.gitignore
> new file mode 100644
> index 00000000..397b4a76
> --- /dev/null
> +++ b/test/.gitignore
> @@ -0,0 +1 @@
> +*.log
> diff --git a/test/replication_test4.log b/test/replication_test4.log
> deleted file mode 100644
> index caefa0de..00000000
> --- a/test/replication_test4.log
> +++ /dev/null
> @@ -1,25 +0,0 @@
> -1000 job_900_to_node2: new job next_sync => 900
> -1000 job_900_to_node2: start replication job
> -1000 job_900_to_node2: end replication job with error: faked replication error
> -1000 job_900_to_node2: changed config next_sync => 1300
> -1000 job_900_to_node2: changed state last_node => node1, last_try => 1000, fail_count => 1, error => faked replication error
> -1300 job_900_to_node2: start replication job
> -1300 job_900_to_node2: end replication job with error: faked replication error
> -1300 job_900_to_node2: changed config next_sync => 1900
> -1300 job_900_to_node2: changed state last_try => 1300, fail_count => 2
> -1900 job_900_to_node2: start replication job
> -1900 job_900_to_node2: end replication job with error: faked replication error
> -1900 job_900_to_node2: changed config next_sync => 2800
> -1900 job_900_to_node2: changed state last_try => 1900, fail_count => 3
> -2800 job_900_to_node2: start replication job
> -2800 job_900_to_node2: end replication job with error: faked replication error
> -2800 job_900_to_node2: changed config next_sync => 4600
> -2800 job_900_to_node2: changed state last_try => 2800, fail_count => 4
> -4600 job_900_to_node2: start replication job
> -4600 job_900_to_node2: end replication job with error: faked replication error
> -4600 job_900_to_node2: changed config next_sync => 6400
> -4600 job_900_to_node2: changed state last_try => 4600, fail_count => 5
> -6400 job_900_to_node2: start replication job
> -6400 job_900_to_node2: end replication job with error: faked replication error
> -6400 job_900_to_node2: changed config next_sync => 8200
> -6400 job_900_to_node2: changed state last_try => 6400, fail_count => 6
> diff --git a/test/replication_test5.log b/test/replication_test5.log
> deleted file mode 100644
> index 928feca3..00000000
> --- a/test/replication_test5.log
> +++ /dev/null
> @@ -1,64 +0,0 @@
> -1000 job_900_to_node2: new job next_sync => 900
> -1000 job_900_to_node2: start replication job
> -1000 job_900_to_node2: guest => VM 900, running => 0
> -1000 job_900_to_node2: volumes => local-zfs:vm-900-disk-1
> -1000 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_1000__' on local-zfs:vm-900-disk-1
> -1000 job_900_to_node2: using secure transmission, rate limit: none
> -1000 job_900_to_node2: full sync 'local-zfs:vm-900-disk-1' (__replicate_job_900_to_node2_1000__)
> -1000 job_900_to_node2: end replication job
> -1000 job_900_to_node2: changed config next_sync => 1800
> -1000 job_900_to_node2: changed state last_node => node1, last_try => 1000, last_sync => 1000
> -1000 job_900_to_node2: changed storeid list local-zfs
> -1840 job_900_to_node2: start replication job
> -1840 job_900_to_node2: guest => VM 900, running => 0
> -1840 job_900_to_node2: volumes => local-zfs:vm-900-disk-1
> -1840 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_1840__' on local-zfs:vm-900-disk-1
> -1840 job_900_to_node2: using secure transmission, rate limit: none
> -1840 job_900_to_node2: incremental sync 'local-zfs:vm-900-disk-1' (__replicate_job_900_to_node2_1000__ => __replicate_job_900_to_node2_1840__)
> -1840 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_1000__' on local-zfs:vm-900-disk-1
> -1840 job_900_to_node2: end replication job
> -1840 job_900_to_node2: changed config next_sync => 2700
> -1840 job_900_to_node2: changed state last_try => 1840, last_sync => 1840
> -2740 job_900_to_node2: start replication job
> -2740 job_900_to_node2: guest => VM 900, running => 0
> -2740 job_900_to_node2: volumes => local-zfs:vm-900-disk-1,local-zfs:vm-900-disk-2
> -2740 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_2740__' on local-zfs:vm-900-disk-1
> -2740 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_2740__' on local-zfs:vm-900-disk-2
> -2740 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_2740__' on local-zfs:vm-900-disk-1
> -2740 job_900_to_node2: end replication job with error: no such volid 'local-zfs:vm-900-disk-2'
> -2740 job_900_to_node2: changed config next_sync => 3040
> -2740 job_900_to_node2: changed state last_try => 2740, fail_count => 1, error => no such volid 'local-zfs:vm-900-disk-2'
> -3040 job_900_to_node2: start replication job
> -3040 job_900_to_node2: guest => VM 900, running => 0
> -3040 job_900_to_node2: volumes => local-zfs:vm-900-disk-1,local-zfs:vm-900-disk-2
> -3040 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_3040__' on local-zfs:vm-900-disk-1
> -3040 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_3040__' on local-zfs:vm-900-disk-2
> -3040 job_900_to_node2: using secure transmission, rate limit: none
> -3040 job_900_to_node2: incremental sync 'local-zfs:vm-900-disk-1' (__replicate_job_900_to_node2_1840__ => __replicate_job_900_to_node2_3040__)
> -3040 job_900_to_node2: full sync 'local-zfs:vm-900-disk-2' (__replicate_job_900_to_node2_3040__)
> -3040 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_1840__' on local-zfs:vm-900-disk-1
> -3040 job_900_to_node2: end replication job
> -3040 job_900_to_node2: changed config next_sync => 3600
> -3040 job_900_to_node2: changed state last_try => 3040, last_sync => 3040, fail_count => 0, error =>
> -3640 job_900_to_node2: start replication job
> -3640 job_900_to_node2: guest => VM 900, running => 0
> -3640 job_900_to_node2: volumes => local-zfs:vm-900-disk-1,local-zfs:vm-900-disk-2
> -3640 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_3640__' on local-zfs:vm-900-disk-1
> -3640 job_900_to_node2: create snapshot '__replicate_job_900_to_node2_3640__' on local-zfs:vm-900-disk-2
> -3640 job_900_to_node2: using secure transmission, rate limit: none
> -3640 job_900_to_node2: incremental sync 'local-zfs:vm-900-disk-1' (__replicate_job_900_to_node2_3040__ => __replicate_job_900_to_node2_3640__)
> -3640 job_900_to_node2: incremental sync 'local-zfs:vm-900-disk-2' (__replicate_job_900_to_node2_3040__ => __replicate_job_900_to_node2_3640__)
> -3640 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_3040__' on local-zfs:vm-900-disk-1
> -3640 job_900_to_node2: delete previous replication snapshot '__replicate_job_900_to_node2_3040__' on local-zfs:vm-900-disk-2
> -3640 job_900_to_node2: end replication job
> -3640 job_900_to_node2: changed config next_sync => 4500
> -3640 job_900_to_node2: changed state last_try => 3640, last_sync => 3640
> -3700 job_900_to_node2: start replication job
> -3700 job_900_to_node2: guest => VM 900, running => 0
> -3700 job_900_to_node2: volumes => local-zfs:vm-900-disk-1,local-zfs:vm-900-disk-2
> -3700 job_900_to_node2: start job removal - mode 'full'
> -3700 job_900_to_node2: delete stale replication snapshot '__replicate_job_900_to_node2_3640__' on local-zfs:vm-900-disk-1
> -3700 job_900_to_node2: delete stale replication snapshot '__replicate_job_900_to_node2_3640__' on local-zfs:vm-900-disk-2
> -3700 job_900_to_node2: job removed
> -3700 job_900_to_node2: end replication job
> -3700 job_900_to_node2: vanished job
> diff --git a/test/replication_test6.log b/test/replication_test6.log
> deleted file mode 100644
> index 91754544..00000000
> --- a/test/replication_test6.log
> +++ /dev/null
> @@ -1,8 +0,0 @@
> -1000 job_900_to_node1: new job next_sync => 1
> -1000 job_900_to_node1: start replication job
> -1000 job_900_to_node1: guest => VM 900, running => 0
> -1000 job_900_to_node1: volumes => local-zfs:vm-900-disk-1
> -1000 job_900_to_node1: start job removal - mode 'full'
> -1000 job_900_to_node1: job removed
> -1000 job_900_to_node1: end replication job
> -1000 job_900_to_node1: vanished job
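
For reviewers who want to sanity-check the new ignore rule after applying the
patch, something like the following should do (untested here; the exact
check-ignore output line may differ slightly, but it should point at
test/.gitignore and the *.log pattern):

    $ touch test/replication_test4.log          # e.g. a log regenerated by a test run
    $ git check-ignore -v test/replication_test4.log
    test/.gitignore:1:*.log	test/replication_test4.log
    $ git status --short test/                  # should show no untracked *.log files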
--
Jing Luo
About me: https://jing.rocks/about/
GPG Fingerprint: 4E09 8D19 00AA 3F72 1899 2614 09B3 316E 13A1 1EFC