Crawl/Scan
- What sites are uncrawlable or difficult to crawl?
- Can I download the scan results as a file?
- If I do a second scan, can I copy and reuse the scan information?
- I want to remove the admin panel from the crawl/scan target.
- I want to run scans only during the day. Can I specify a specific time?
- Can I add comments to the scan results?
- What values should be entered into the form during auto-crawl?
- Why do the results differ when I scan the same site multiple times?
- How do I set up authentication for a different domain?
- What are the maximum crawl depth and the maximum number of pages I can crawl?
- I want to specify specific methods to be allowed or excluded in the "Crawl/Scan Target Settings".
- Is there a function to check if the target site is accessible after the scan is set up?
- Looking at the crawl results, robots.txt and sitemap.xml appear to have been crawled.
- Does the max number of pages include pages determined to be similar?
- How many manual crawl imports can be registered for one scan?
- Is there a way to check the execution history of a scheduled crawl/scan?
- What are the requirements for a domain to be added to the external domain list?
- I am getting input errors after crawling. Is there any way to change the input values on the form and re-crawl?
- Is there a way to check on which page the form was filled out incorrectly?
- Are parameters that need to be carried over between page transitions taken over automatically?
- Can I import multiple manual crawl results in one batch?
- Is it possible to stop a crawl or scan midway and start over from the crawl?
- Is there a way to check if the API URL is a target URL for scanning?
- Is it possible to stop a crawl midway and scan only the pages crawled so far?
- Can I add comments (notes) to the page?
- How do I transition from the page list to the screen transition diagram?
- Can more than one user crawl and scan simultaneously?
- I want to set times during which crawls/scans must not run.
- How can I exclude certain pages from being crawled or scanned?