Code
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @File : HtmlParser.py
# @Author: 赵路仓
# @Date : 2020/3/17
# @Desc :
# @Contact : 398333404@qq.com

from lxml import etree
import requests
from bs4 import BeautifulSoup

url = "https://search.jd.com/Search"

head = {
    'authority': 'search.jd.com',
    'method': 'GET',
    'path': '/s_new.php',
    # The rest of the header dict was garbled in the source text; a browser-like
    # User-Agent is usually required for JD to return real markup (assumed value):
    'user-agent': 'Mozilla/5.0',
}


def page(page):
    # The function signature was lost in the source; reconstructed from the
    # call `page("5")` below. Note: `page` is concatenated directly onto the
    # search URL, so it is expected to carry the full query string
    # (e.g. "?keyword=ps4&page=5"), not just a page number.
    print("开始")  # "start"
    url = "https://search.jd.com/Search" + page + "&s=181&click=0"
    r = requests.get(url, timeout=3, headers=head)
    r.encoding = r.apparent_encoding
    # print(r.text)
    b = BeautifulSoup(r.text, "html.parser")
    # print(b.prettify())
    _element = etree.HTML(r.text)
    datas = _element.xpath('//li[contains(@class,"gl-item")]')
    print(datas)
    for data in datas:
        p_price = data.xpath('div/div[@class="p-price"]/strong/i/text()')
        p_comment = data.xpath('div/div[5]/strong/a/text()')
        p_name = data.xpath('div/div[@class="p-name p-name-type-2"]/a/em/text()')
        p_href = data.xpath('div/div[@class="p-name p-name-type-2"]/a/@href')
        comment = ' '.join(p_comment)
        name = ' '.join(p_name)
        price = ' '.join(p_price)
        href = ' '.join(p_href)
        print(name, price, comment, href)


if __name__ == "__main__":
    page("5")
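The network request depends on JD's live pages and headers, but the XPath extraction step can be exercised offline. Below is a minimal sketch that runs the same XPath expressions against a simplified, hypothetical HTML snippet mirroring the `gl-item` / `p-price` / `p-name p-name-type-2` class names the scraper targets (the sample data and the `parse_items` helper are not from the original article):

```python
from lxml import etree

# Hypothetical stand-in for JD's search-result markup, using the same
# class names the scraper's XPath expressions look for.
SAMPLE_HTML = """
<ul>
  <li class="gl-item">
    <div>
      <div class="p-price"><strong><i>2899.00</i></strong></div>
      <div class="p-name p-name-type-2">
        <a href="//item.jd.com/123.html"><em>PS4 Pro 1TB</em></a>
      </div>
    </div>
  </li>
</ul>
"""


def parse_items(html):
    """Extract (name, price, href) tuples with the scraper's XPath queries."""
    element = etree.HTML(html)
    items = []
    for data in element.xpath('//li[contains(@class,"gl-item")]'):
        price = ' '.join(data.xpath('div/div[@class="p-price"]/strong/i/text()'))
        name = ' '.join(data.xpath('div/div[@class="p-name p-name-type-2"]/a/em/text()'))
        href = ' '.join(data.xpath('div/div[@class="p-name p-name-type-2"]/a/@href'))
        items.append((name, price, href))
    return items


print(parse_items(SAMPLE_HTML))
```

Because the XPath paths are relative to each `li`, each product's fields stay grouped together, which is why the original loops over `datas` rather than querying the whole document at once.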
Crawl results
The above is the full walkthrough of scraping PS4 sale listings on JD.com with a Python crawler. For more material on Python crawlers, see the other related articles.