* include/libwget.h.in: Add function wget_list_getnext().
* libwget/list.c: Add function wget_list_getnext().
* libwget/robots.c: Fix memory leak.
* src/host.c (host_remove_job): Clean up the queue after downloading and scanning robots.txt.
* src/job.h (struct JOB): Add flag 'requested_by_user'.
* src/wget.c (add_url_to_queue): Set 'requested_by_user', (add_url): Fix checking for disallowed paths.
* tests/Makefile.am: Add test 'test-robots'.
* tests/test-robots.c: New test to prove robots functionality.

Special handling for automatic robots.txt jobs
==============================================

With --recursive and --span-hosts, a document from hostA may contain links to hostB. All of these links can end up in the hostB queue before hostB's robots.txt has been downloaded and parsed. To avoid downloading 'disallowed' documents, the hostB queue has to be cleaned up right after robots.txt has been downloaded and parsed. Any links that were explicitly requested by the user are still downloaded. A minimal sketch of this cleanup idea follows below.
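
The following is a minimal, self-contained sketch of the queue cleanup described above, not the actual wget2 code: the names JOB, requested_by_user (only this flag name is taken from the entry above), path_is_disallowed and remove_disallowed_jobs are simplified stand-ins, and the real implementation works on libwget's host/queue structures instead of this toy linked list.

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct JOB {
	char *path;                 /* URL path of the queued download */
	int requested_by_user;      /* set when the user gave this URL explicitly */
	struct JOB *next;
} JOB;

/* Return 1 if 'path' matches one of the robots.txt 'Disallow' prefixes. */
static int path_is_disallowed(const char *path, const char **disallowed, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (strncmp(path, disallowed[i], strlen(disallowed[i])) == 0)
			return 1;
	return 0;
}

/*
 * Called right after robots.txt for a host has been downloaded and parsed:
 * drop every queued job whose path is disallowed, unless the user asked
 * for that URL explicitly.
 */
static JOB *remove_disallowed_jobs(JOB *queue, const char **disallowed, size_t n)
{
	JOB **pp = &queue;

	while (*pp) {
		JOB *job = *pp;

		if (!job->requested_by_user && path_is_disallowed(job->path, disallowed, n)) {
			*pp = job->next;    /* unlink and free the disallowed job */
			free(job->path);
			free(job);
		} else {
			pp = &job->next;    /* keep the job, move on */
		}
	}

	return queue;
}

int main(void)
{
	const char *disallowed[] = { "/private/", "/tmp/" };
	JOB *queue = NULL;

	/* Queue three jobs: two crawled links and one user-requested URL. */
	const char *paths[] = { "/index.html", "/private/secret.html", "/private/wanted.html" };
	int by_user[] = { 0, 0, 1 };

	for (int i = 2; i >= 0; i--) {
		JOB *job = calloc(1, sizeof(JOB));
		job->path = strdup(paths[i]);
		job->requested_by_user = by_user[i];
		job->next = queue;
		queue = job;
	}

	queue = remove_disallowed_jobs(queue, disallowed, 2);

	for (JOB *job = queue; job; job = job->next)
		printf("still queued: %s\n", job->path);

	return 0;
}

Running the sketch keeps /index.html (allowed) and /private/wanted.html (disallowed but explicitly requested), while the crawled /private/secret.html is dropped, which mirrors the intended behaviour of the cleanup in host_remove_job.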