avoid-ban

According to the Scrapy official documentation (http://doc.scrapy.org/en/master/topics/practices.html#avoiding-getting-banned), the main strategies for keeping Scrapy from getting banned are:

  • Rotate the user agent dynamically
  • Disable cookies
  • Set a download delay
  • Use Google cache
  • Use a pool of IP addresses (the Tor project, VPNs, and proxy IPs)
  • Use Crawlera

1. Create middlewares.py

In Scrapy, both proxy IP and user agent rotation are controlled through DOWNLOADER_MIDDLEWARES. Let's create a middlewares.py file with the following content:

```python
import random
import base64

from cnblogs.settings import PROXIES  # the project package is named cnblogs


class RandomUserAgent(object):
    """Pick a random User-Agent from USER_AGENTS for each request."""

    def __init__(self, agents):
        self.agents = agents

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings.getlist('USER_AGENTS'))

    def process_request(self, request, spider):
        request.headers.setdefault('User-Agent', random.choice(self.agents))


class ProxyMiddleware(object):
    """Route each request through a random proxy from PROXIES."""

    def process_request(self, request, spider):
        proxy = random.choice(PROXIES)
        request.meta['proxy'] = "http://%s" % proxy['ip_port']
        if proxy['user_pass'] is not None:
            # The proxy requires authentication: attach a Basic auth header
            encoded_user_pass = base64.b64encode(
                proxy['user_pass'].encode('utf-8')).decode('ascii')
            request.headers['Proxy-Authorization'] = 'Basic ' + encoded_user_pass
```
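For reference, middlewares.py sits at the top level of the Scrapy project package, next to settings.py (the layout below is an illustration based on the cnblogs project named later in this post):

```
cnblogs/
├── scrapy.cfg
└── cnblogs/
    ├── __init__.py
    ├── settings.py
    ├── middlewares.py   # the file created above
    └── spiders/
```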

The RandomUserAgent class picks the user agent dynamically; the USER_AGENTS list is configured in settings.py.

The ProxyMiddleware class rotates proxies; the PROXIES list is likewise configured in settings.py.
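As a quick sanity check, the selection logic of RandomUserAgent can be exercised outside Scrapy with a stand-in request object (the FakeRequest stub and the UA strings below are illustrative, not part of Scrapy):

```python
import random


class FakeRequest:
    """Minimal stand-in for a Scrapy Request: just a headers dict."""
    def __init__(self):
        self.headers = {}


class RandomUserAgent:
    """Same selection logic as the middleware above."""
    def __init__(self, agents):
        self.agents = agents

    def process_request(self, request, spider):
        # setdefault: only sets the header if it is not already present
        request.headers.setdefault('User-Agent', random.choice(self.agents))


agents = ["UA-1", "UA-2", "UA-3"]
mw = RandomUserAgent(agents)
req = FakeRequest()
mw.process_request(req, spider=None)
print(req.headers['User-Agent'] in agents)  # True
```

Because setdefault is used, a User-Agent already set on the request is never overwritten on later passes.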

2. Modify the settings.py configuration

  • Add USER_AGENTS

```python
USER_AGENTS = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
    "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
    "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
    "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
    "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
    "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
]
```
  • Add the proxy IP list PROXIES

```python
PROXIES = [
    {'ip_port': '117.136.234.9:80', 'user_pass': None},
    {'ip_port': '117.136.234.7:80', 'user_pass': None},
    {'ip_port': '117.136.234.10:80', 'user_pass': None},
    {'ip_port': '117.136.234.18:80', 'user_pass': None},
]
```

You can find proxy IPs by searching online; the ones above came from http://www.xici.net.co/.
If you're unsure what these proxy IPs are: the IPs listed on that site's front page are all usable, so if the ones above stop working, just copy a few more from there. None of them require a password, so user_pass can stay None.
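Free proxies like these die quickly, so it can be worth filtering the list before each crawl. A minimal liveness check using only the standard library (the test URL and timeout below are my own choices, not from the original post):

```python
import urllib.request


def proxy_url(entry):
    """Build the proxy URL string used in request.meta['proxy']."""
    return "http://%s" % entry['ip_port']


def is_alive(entry, test_url="http://httpbin.org/ip", timeout=5):
    """Return True if a fetch through the proxy succeeds within the timeout."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({'http': proxy_url(entry)}))
    try:
        opener.open(test_url, timeout=timeout)
        return True
    except Exception:
        return False


# Usage (hits the network, so shown as a comment here):
# live = [p for p in PROXIES if is_alive(p)]
```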

  • Disable cookies

```python
COOKIES_ENABLED = False
```

  • Set a download delay

```python
DOWNLOAD_DELAY = 3
```

(By default Scrapy additionally randomizes the actual wait between requests to 0.5x to 1.5x of DOWNLOAD_DELAY, via the RANDOMIZE_DOWNLOAD_DELAY setting, which makes the timing harder to fingerprint.)
  • Finally, set DOWNLOADER_MIDDLEWARES

```python
DOWNLOADER_MIDDLEWARES = {
    'cnblogs.middlewares.RandomUserAgent': 1,    # random user agent
    # Scrapy's built-in proxy middleware (located at
    # scrapy.contrib.downloadermiddleware.httpproxy on pre-1.0 Scrapy)
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
    'cnblogs.middlewares.ProxyMiddleware': 100,  # needed for proxy rotation
}
```
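For proxies that do require credentials, the Proxy-Authorization header that ProxyMiddleware builds is simply "Basic " plus the Base64 encoding of "user:password". A small illustration (the credentials are made up):

```python
import base64

user_pass = "user:pass"  # hypothetical credentials for an authenticated proxy
token = base64.b64encode(user_pass.encode('utf-8')).decode('ascii')
header = 'Basic ' + token
print(header)  # Basic dXNlcjpwYXNz
```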

These measures can be used individually or in combination; I've put all of them into the project that crawls cnblogs blog posts (saving them as JSON).

The project's GitHub address: https://github.com/lowkeynic4/crawl/tree/master/cnblogs%28%E9%98%B2ban%29

Reposted from: http://www.cnblogs.com/rwxwsblog/p/4575894.html