In Python, you can use the ThreadPoolExecutor class from the concurrent.futures module to manage the thread pool of a multi-threaded crawler. Here is a simple example:
- First, import the required libraries:
```python
import requests
from bs4 import BeautifulSoup
from concurrent.futures import ThreadPoolExecutor, as_completed
```
- Define a function that fetches and parses a single URL (a more robust variant with a timeout and a per-thread session is sketched after the steps):
```python
def fetch_and_parse(url):
    try:
        response = requests.get(url)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, 'html.parser')
        # Extract the data you need here; the page title is used as a placeholder
        data = soup.title.string
        return data
    except requests.exceptions.RequestException as e:
        print(f"Error fetching {url}: {e}")
        return None
```
- Define a function that fetches and parses multiple URLs:
```python
def fetch_and_parse_urls(urls):
    results = []
    with ThreadPoolExecutor(max_workers=10) as executor:
        future_to_url = {executor.submit(fetch_and_parse, url): url for url in urls}
        for future in as_completed(future_to_url):
            url = future_to_url[future]
            try:
                data = future.result()
                if data:
                    results.append((url, data))
            except Exception as e:
                print(f"Error processing {url}: {e}")
    return results
```
- Prepare the list of URLs to crawl:
```python
urls = [
    "https://www.example.com",
    "https://www.example2.com",
    "https://www.example3.com",
    # More URLs...
]
```
- Call the fetch_and_parse_urls function to process these URLs:
```python
results = fetch_and_parse_urls(urls)
for url, data in results:
    print(f"URL: {url}, Data: {data}")
```
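As mentioned in step 2, a real crawler usually benefits from a request timeout and connection reuse. The sketch below is an assumption rather than part of the original example: it keeps one requests.Session per worker thread via threading.local and passes a 10-second timeout, and the name fetch_and_parse_with_session is chosen only for illustration.

```python
import threading

import requests
from bs4 import BeautifulSoup

# One Session per worker thread (assumed helper, not part of the original example)
thread_local = threading.local()

def get_session():
    # Lazily create a Session the first time each thread calls this
    if not hasattr(thread_local, "session"):
        thread_local.session = requests.Session()
    return thread_local.session

def fetch_and_parse_with_session(url):
    try:
        # The timeout keeps a slow server from tying up a worker thread indefinitely
        response = get_session().get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, 'html.parser')
        return soup.title.string if soup.title else None
    except requests.exceptions.RequestException as e:
        print(f"Error fetching {url}: {e}")
        return None
```

It can be passed to executor.submit in place of fetch_and_parse without changing anything else in fetch_and_parse_urls.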
In the example above, we use ThreadPoolExecutor to create a thread pool with a maximum of 10 worker threads. The fetch_and_parse_urls function takes a list of URLs and uses the thread pool to process them in parallel. The as_completed function yields each future as soon as its task finishes, so results are collected in completion order. Finally, we print the results.
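If you do not need to handle results the moment each task finishes, executor.map is a simpler alternative to submit plus as_completed. The sketch below is only an illustration: it assumes the fetch_and_parse function and urls list defined above, and the name fetch_and_parse_urls_ordered is hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_and_parse_urls_ordered(urls):
    # executor.map yields results in the same order as the input list,
    # blocking until each result is ready
    with ThreadPoolExecutor(max_workers=10) as executor:
        return [(url, data)
                for url, data in zip(urls, executor.map(fetch_and_parse, urls))
                if data]
```

The trade-off is that a single slow URL delays every result that comes after it in the list, whereas as_completed lets you handle fast responses immediately.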