Recursive Use Of Scrapy To Scrape Webpages From A Website
I have recently started to work with Scrapy. I am trying to gather some info from a large list which is divided into several pages (about 50). I can easily extract what I want from
Solution 1:
Use urllib2 to download a page. Then use either re (regular expressions) or BeautifulSoup (an HTML parser) to find the link to the next page you need. Download that page with urllib2. Rinse and repeat.
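A minimal sketch of that rinse-and-repeat loop, using the Python 3 stdlib (`urllib.request` is the Python 3 name for `urllib2`) and `re` to spot the next-page link. The `rel="next"` attribute and the URLs here are assumptions for illustration; adjust the pattern to the actual markup of the site you are scraping:

```python
import re
import urllib.request  # Python 3 equivalent of the urllib2 mentioned above

def find_next_link(html):
    """Return the href of the 'next page' anchor, or None when there is none.
    Matching on rel="next" is an assumption; tailor the regex to your site."""
    m = re.search(r'<a[^>]+rel="next"[^>]+href="([^"]+)"', html)
    return m.group(1) if m else None

# Demonstration on a small in-memory snippet (no network needed):
sample = '<div class="pager"><a rel="next" href="/list?page=2">Next</a></div>'
print(find_next_link(sample))  # -> /list?page=2

# The download loop itself would look like this (hypothetical starting URL):
# url = "https://example.com/list?page=1"
# while url:
#     html = urllib.request.urlopen(url).read().decode("utf-8")
#     ...extract the items you want from html...
#     url = find_next_link(html)  # None ends the loop on the last page
```

If the site emits relative links, join them back onto the base URL with `urllib.parse.urljoin` before the next download.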
Scrapy is great, but you don't need it to do what you're trying to do.