Web Scraping Notes: the requests module

Reference course: "4. requests模块巩固深入案例之破解百度翻译" (bilibili)

P5 Fetching the Sogou homepage

import requests

if __name__ == '__main__':
    # target page: the Sogou navigation/home page
    url = 'https://123.sogou.com/'
    # send a GET request for the page
    response = requests.get(url=url)
    # .text gives the response body as a string (the page's HTML source)
    page_text = response.text
    print(page_text)
    # persist the page source so it can be opened locally in a browser or VS Code
    with open('./sogou.html', 'w', encoding='utf-8') as fp:
        fp.write(page_text)
    print('Scraping finished!!!')

Reference: How to create and run an HTML file in VS Code (CSDN blog).

P7 A simple web page collector

import requests

if __name__ == '__main__':
    # UA camouflage: identify as a normal desktop browser
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36 Edg/123.0.0.0'
    }
    url = 'https://cn.bing.com/search'

    kw = input('enter a word:')
    # Bing expects the search keyword in the 'q' query parameter
    param = {
        'q': kw
    }

    response = requests.get(url=url, params=param, headers=headers)
    page_text = response.text

    # save the result page under the keyword's name
    filename = kw + '.html'
    with open(filename, 'w', encoding='utf-8') as fp:
        fp.write(page_text)
    print(filename, 'saved successfully!!!')

Hmm, this did not work for me in the Edge browser.
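When a saved search page does not look right, it helps to inspect the response itself before blaming the browser. A minimal check, which is my own addition rather than part of the course (the placeholder User-Agent must be replaced with a real one), could look like this:

import requests

headers = {'User-Agent': 'paste your own browser User-Agent here'}
response = requests.get('https://cn.bing.com/search',
                        params={'q': 'python'}, headers=headers)
print(response.status_code)  # 200 means the request itself succeeded
print(response.url)          # the final URL after any redirects
print(response.encoding)     # the encoding requests guessed for the page

If the status code is not 200, or the final URL points somewhere unexpected, the problem is on the request side rather than in how Edge renders the saved file.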

P8 Cracking Baidu Translate

import requests
import json

if __name__ == '__main__':
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36 Edg/123.0.0.0'
    }
    # Baidu Translate's suggestion (sug) Ajax interface; it takes POST form data
    url = 'https://fanyi.baidu.com/sug'

    word = input('enter a word:')
    data = {
        'kw': word
    }

    response = requests.post(url=url, data=data, headers=headers)

    # parse the JSON body into a Python dict (only valid when the server returns JSON)
    dic_obj = response.json()

    # ensure_ascii=False keeps the Chinese characters readable in the output file
    filename = word + '.json'
    with open(filename, 'w', encoding='utf-8') as fp:
        json.dump(dic_obj, fp=fp, ensure_ascii=False)
    print('over!!!')

Do not just copy and paste the User-Agent above; use the one from your own browser. A quick way to check what is actually being sent is sketched below.
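If you are unsure which User-Agent string your script ends up sending, an echo service can confirm it. A small sketch of my own (it assumes httpbin.org is reachable from your machine):

import requests

headers = {
    # paste the User-Agent copied from your browser's developer tools
    # (Network tab -> any request -> Request Headers -> User-Agent)
    'User-Agent': 'Mozilla/5.0 (your own browser UA goes here)'
}
# httpbin.org/user-agent echoes back the User-Agent header it received
print(requests.get('https://httpbin.org/user-agent', headers=headers).json())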

P9 Douban movies

import requests
import json

if __name__ == '__main__':
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/123.0.0.0 Safari/537.36 Edg/123.0.0.0'
    }
    # Douban's chart ranking Ajax endpoint; it returns JSON rather than HTML
    url = 'https://movie.douban.com/j/chart/top_list'

    # query parameters copied from the XHR request on the chart page
    param = {
        'type': '24',             # movie category id
        'interval_id': '100:90',  # rating interval shown on the chart page
        'action': '',
        'start': '0',             # index of the first movie to fetch
        'limit': '20'             # number of movies per request
    }

    response = requests.get(url=url, params=param, headers=headers)

    # the endpoint returns a JSON list of movie entries
    list_data = response.json()
    print(list_data)

    with open('./douban.json', 'w', encoding='utf-8') as fp:
        json.dump(list_data, fp=fp, ensure_ascii=False)
    print('over!!!')

Note: make sure there are no stray spaces in the param values, otherwise list_data comes back as an empty list. A small defensive sketch follows.
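Since a single stray space is enough to get an empty list back, one defensive option (my own sketch, not from the course; replace the placeholder User-Agent with your own) is to strip every parameter value before the request goes out:

import requests

headers = {'User-Agent': 'paste your own browser User-Agent here'}
url = 'https://movie.douban.com/j/chart/top_list'
param = {
    'type': '24 ',             # note the accidental trailing space
    'interval_id': '100:90',
    'action': '',
    'start': '0',
    'limit': '20'
}
# strip whitespace from every value so a stray space cannot break the query
param = {k: v.strip() for k, v in param.items()}
response = requests.get(url=url, params=param, headers=headers)
print(len(response.json()))   # expect 20 entries rather than an empty list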
