
Continuing my crawler tinkering: today I'm posting code that scrapes the photos under the 「美女」 (beauty) tag on diandian.com.

# -*- coding: utf-8 -*-
#---------------------------------------
# Program:  diandian.com photo crawler
# Version:  0.2
# Author:   sys.linux.d
# Date:     2013-09-07
# Language: Python 2.7
# Note:     the number of pages to download is configurable
#---------------------------------------
import urllib2
import urllib
import re

# Capture the big-image URL inside each feed entry.
pat = re.compile(r'<div class="feed-big-img">\n.*?img src="(ht.*?)".*?')
# Derive a local filename: ten word characters plus an extension.
fnp = re.compile(r'(\w{10}\.\w+)$')
nexturl1 = "http://www.diandian.com/tag/%E7%BE%8E%E5%A5%B3?page="
count = 1
while count < 2:  # raise the bound here to download more pages
    print "Page " + str(count) + "\n"
    myurl = nexturl1 + str(count)
    myres = urllib2.urlopen(myurl)
    mypage = myres.read()
    ucpage = mypage.decode("utf-8")  # decode the page bytes to Unicode
    mat = pat.findall(ucpage)
    if len(mat):
        cnt = 1
        for item in mat:
            print "Page " + str(count) + " No." + str(cnt) + " url: " + item + "\n"
            cnt += 1
            fnr = fnp.findall(item)
            if fnr:
                fname = fnr[0]
                urllib.urlretrieve(item, fname)  # save the image to the current directory
    else:
        print "no data"
    count += 1
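A note on the filename pattern: `(\w{10}\.\w+)$` only matches URLs whose last path component is exactly ten word characters plus an extension, so images named differently are silently skipped. A quick check with made-up URLs:

```python
import re

# Same filename pattern as in the script: exactly ten word characters
# followed by a dot and an extension, anchored at the end of the URL.
fnp = re.compile(r'(\w{10}\.\w+)$')

print(fnp.findall("http://img.example.com/abcde12345.jpg"))  # matches: ten chars before ".jpg"
print(fnp.findall("http://img.example.com/short.jpg"))       # no match: only five chars
```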

Usage is the same as with the earlier crawlers.
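Since Python 2.7 is long past end-of-life, here is a rough Python 3 sketch of the same logic, with the extraction steps pulled into testable helpers. The page URL and markup are assumptions carried over from the original script; diandian.com may no longer serve this tag at all.

```python
# -*- coding: utf-8 -*-
# Hypothetical Python 3 port of the crawler above. The site URL and
# the HTML structure it expects are assumptions from the original.
import re
import urllib.request

PAGE_URL = "http://www.diandian.com/tag/%E7%BE%8E%E5%A5%B3?page="
IMG_PAT = re.compile(r'<div class="feed-big-img">\n.*?img src="(ht.*?)"')
NAME_PAT = re.compile(r'(\w{10}\.\w+)$')

def extract_image_urls(html):
    """Return all big-image URLs found in one page of HTML."""
    return IMG_PAT.findall(html)

def filename_for(url):
    """Derive a local filename from the last path component, or None."""
    m = NAME_PAT.search(url)
    return m.group(1) if m else None

def crawl(pages=1):
    """Download every matching image from the first `pages` tag pages."""
    for page in range(1, pages + 1):
        html = urllib.request.urlopen(PAGE_URL + str(page)).read().decode("utf-8")
        for url in extract_image_urls(html):
            name = filename_for(url)
            if name:
                urllib.request.urlretrieve(url, name)

if __name__ == "__main__":
    crawl(pages=1)
```

The only real API changes are `urllib2.urlopen` becoming `urllib.request.urlopen` and `urllib.urlretrieve` becoming `urllib.request.urlretrieve`; the regex logic is unchanged.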
