The Tumblr crawler you've been asking for

2016-10-28 23:58:30 +08:00
 tumbzzc

I wrote this several months ago, and the code is rather crude.
It doesn't crawl everything in a blog: it was originally meant for a website, and fetching all of a blog's content would make users wait too long.

# -*- coding=utf-8 -*-
from threading import Thread
import Queue
import requests
import re
import os
import sys
import time


api_url = 'http://%s.tumblr.com/api/read?&num=50&start='
UQueue = Queue.Queue()


def getpost(uid, queue):
    # Fetch the first API page to learn the blog's total post count,
    # then enqueue one paginated URL (50 posts per page) for every offset.
    url = 'http://%s.tumblr.com/api/read?&num=50' % uid
    page = requests.get(url).content
    total = int(re.findall('<posts start="0" total="(.*?)">', page)[0])
    ul = api_url % uid
    for offset in range(0, total, 50):
        queue.put(ul + str(offset))


# Match the URL of the largest (1280px) copy of each photo, i.e. the text
# between '<photo-url max-width="1280">' and '</photo-url>'.
extractpicre = re.compile(r'(?<=<photo-url max-width="1280">).+?(?=</photo-url>)', flags=re.S)
# Match the id of each embedded mp4 video.
extractvideore = re.compile('/tumblr_(.*?)" type="video/mp4"')

video_links = []
pic_links = []
vhead = 'https://vt.tumblr.com/tumblr_%s.mp4'

class Consumer(Thread):
    """Worker thread: pull API page URLs off the queue and collect media links."""

    def __init__(self, l_queue):
        super(Consumer, self).__init__()
        self.queue = l_queue

    def run(self):
        session = requests.Session()
        while 1:
            try:
                # Non-blocking get so the worker exits once the queue is drained
                # instead of blocking forever on an empty queue.
                link = self.queue.get_nowait()
            except Queue.Empty:
                break
            print 'start parse post: ' + link
            try:
                content = session.get(link).content
                videos = extractvideore.findall(content)
                video_links.extend([vhead % v for v in videos])
                pic_links.extend(extractpicre.findall(content))
            except Exception:
                print 'url: %s parse failed\n' % link


def main():
    # Start up to 10 consumer threads and wait for all of them to finish.
    task = []
    for i in range(min(10, UQueue.qsize())):
        t = Consumer(UQueue)
        task.append(t)
    for t in task:
        t.start()
    for t in task:
        t.join()


def write():
    # Drop the '/480' segment some captured video ids carry, then dump
    # all collected links to text files.
    videos = [i.replace('/480', '') for i in video_links]
    pictures = pic_links
    with open('pictures.txt', 'w') as f:
        for i in pictures:
            f.write('%s\n' % i)
    with open('videos.txt', 'w') as f:
        for i in videos:
            f.write('%s\n' % i)


if __name__ == '__main__':
    #name=sys.argv[1]
    #name=name.strip()
    name = 'mzcyx2011'  # blog subdomain to crawl, i.e. <name>.tumblr.com
    getpost(name, UQueue)
    main()
    write()
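
The script only collects links into pictures.txt and videos.txt; the actual download is left to you (for example wget -i videos.txt). Below is a minimal downloader sketch along the same lines; the download() helper and the output directory names are my own additions, not part of the original script.

# -*- coding=utf-8 -*-
# Minimal downloader sketch (not part of the original script): read the link
# lists produced by write() above and fetch each file with requests.
import os
import requests


def download(listfile, outdir):
    if not os.path.exists(outdir):
        os.makedirs(outdir)
    with open(listfile) as f:
        urls = [line.strip() for line in f if line.strip()]
    for url in urls:
        path = os.path.join(outdir, url.split('/')[-1])
        if os.path.exists(path):
            continue  # already downloaded
        try:
            r = requests.get(url, timeout=30)
            with open(path, 'wb') as out:
                out.write(r.content)
        except Exception:
            print 'download failed: %s' % url


if __name__ == '__main__':
    download('pictures.txt', 'pictures')
    download('videos.txt', 'videos')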
17984 views
Node: Python
52 replies
tumbzzc
2016-10-29 11:52:06 +08:00
@exoticknight Nothing I can do about the name
guokeke
2016-10-29 11:54:29 +08:00
Mark
cevincheung
2016-10-29 11:58:33 +08:00
So then you can just wget everything?
exalex
2016-10-29 12:09:26 +08:00
Could you briefly describe what the crawler actually does...
guonning
2016-10-29 16:51:33 +08:00
Bookmarked
LeoEatle
2016-10-29 20:34:31 +08:00
What should I change name to? Could you share a list? :)
yangonee
2016-10-29 21:12:00 +08:00
Requesting a name_list
lycos
2016-10-29 23:48:36 +08:00
mark
leetom
2016-10-30 00:07:26 +08:00
@cszhiyue

Halfway through the download it fails like this:

Traceback (most recent call last):
  File "turmla.py", line 150, in <module>
    for square in tqdm(pool.imap_unordered(download_base_dir, urls), total=len(urls)):
  File "/home/leetom/.pyenv/versions/2.7.10/lib/python2.7/site-packages/tqdm/_tqdm.py", line 713, in __iter__
    for obj in iterable:
  File "/home/leetom/.pyenv/versions/2.7.10/lib/python2.7/multiprocessing/pool.py", line 668, in next
    raise value
Exception: Unexpected response.
thinks
2016-10-30 10:22:00 +08:00
Mark. Heh, the old drivers really don't hesitate to start the ride.
sangmong
2016-10-30 21:59:24 +08:00
mark
errorlife
2016-10-31 01:58:11 +08:00
Nobody seems to know about www.tumblrget.com
mozutaba
2016-10-31 04:13:10 +08:00
@errorlife It doesn't work
errorlife
2016-10-31 09:13:51 +08:00
@mozutaba You need a proxy to get over the wall =。=
Nutlee
2016-10-31 09:52:35 +08:00
Strategic mark
iewgnaw
2016-10-31 11:12:14 +08:00
Isn't there already a ready-made API?
tumbzzc
2016-10-31 11:53:43 +08:00
@iewgnaw This is exactly what it does, it uses the API
znoodl
2016-10-31 12:19:27 +08:00
I crawled it with golang too... then it got blocked by the wall and I gave up
Layne
2016-10-31 13:01:29 +08:00
Quietly upvoting :)
itqls
2016-10-31 14:57:08 +08:00
Always up to something
