Some sites cannot be crawled and the spider terminates automatically. Look at the log output:
2019-01-05 21:57:21 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2019-01-05 21:57:21 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://dig.chouti.com/robots.txt> (referer: None)
2019-01-05 21:57:21 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt: <GET https://dig.chouti.com/>
2019-01-05 21:57:22 [scrapy.core.engine] INFO: Closing spider (finished)
The cause is the robots.txt protocol: Scrapy obeys it by default, so sites that disallow crawlers cannot be crawled. Note the "Forbidden by robots.txt" line in the log above.
The Robots protocol (also called the crawler protocol or robot protocol), formally the Robots Exclusion Protocol, is how a website tells search engines and other crawlers which pages may be fetched and which may not.
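To see for yourself what a site's rules forbid, you can parse its robots.txt with Python's standard-library urllib.robotparser. A minimal sketch, using the URL and the "*" wildcard user agent from the log above:

import urllib.robotparser

# Download and parse the site's robots.txt
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://dig.chouti.com/robots.txt")
rp.read()

# True means the rules allow fetching this URL, False means it is forbidden
print(rp.can_fetch("*", "https://dig.chouti.com/"))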
The fix: in the project's settings.py, change the value of ROBOTSTXT_OBEY to False:
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
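If you only want to ignore robots.txt for one spider instead of the whole project, Scrapy also supports overriding settings per spider through the custom_settings class attribute. A sketch, assuming a hypothetical spider for the site from the log (the name and parse logic are placeholders):

import scrapy

class ChoutiSpider(scrapy.Spider):
    name = "chouti"  # placeholder spider name
    start_urls = ["https://dig.chouti.com/"]

    # Override the project-wide setting for this spider only
    custom_settings = {"ROBOTSTXT_OBEY": False}

    def parse(self, response):
        # Placeholder callback: just log the status of each fetched page
        self.logger.info("Fetched %s with status %s", response.url, response.status)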