[–] stoneridge link

Haven't tried this[0] yet, but Scrapy should be able to handle JavaScript sites with the JavaScript rendering service Splash[1]. scrapy-splash[2] is the plugin to integrate Scrapy and Splash.

[0] https://blog.scrapinghub.com/2015/03/02/handling-javascript-...

[1] https://splash.readthedocs.io/en/stable/index.html

[2] https://github.com/scrapy-plugins/scrapy-splash
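
For reference, wiring Splash into a Scrapy project is mostly a settings.py change. This is a sketch based on the scrapy-splash README; the local Splash URL is an assumption about your setup:

```python
# settings.py (sketch): point Scrapy at a running Splash instance
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'

# In a spider, requests then go through Splash for JS rendering:
#   yield SplashRequest(url, self.parse, args={'wait': 1.0})
```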


[–] arien link

I've recently made a little project with scrapy (for crawling) and BeautifulSoup (for parsing HTML), and it works out great. One more thing to add to the above list is pipelines; they make downloading files quite easy.


[–] harperlee link

I've had mixed results with scrapy, probably due more to my inexperience than anything else, but for example retrieving a posting on idealista.com with vanilla scrapy yields an error page, whereas a basic wget command retrieves the correct page.

So the learning curve for simple things makes me jump to bash scripts; scrapy might prove more valuable when your project starts to scale.

But also of course: normally the best tool is the one you already know!


[–] Bromskloss link

Would you still recommend Scrapy if the task wasn't specifically crawling?


[–] ddorian43 link

Would you recommend it for scalable projects? Like crawling Twitter or Tumblr?


[–] sharmi link

If you are a programmer, scrapy[0] is a good bet. It can handle robots.txt, request throttling by IP, request throttling by domain, proxies, and all the other common nitty-gritty of crawling. The only drawback is handling pure JavaScript sites. You have to manually dig into the API or add a headless browser invocation within the scrapy handler.

Scrapy also has the ability to pause and restart crawls [1], run crawlers distributed [2], etc. It is my go-to option.

[0] https://scrapy.org/

[1] https://doc.scrapy.org/en/latest/topics/jobs.html

[2] https://github.com/rmax/scrapy-redis
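
The throttling features mentioned above are plain Scrapy settings; a settings.py sketch (the specific values here are arbitrary examples):

```python
# settings.py (sketch): built-in politeness / throttling knobs
ROBOTSTXT_OBEY = True                # honor robots.txt
DOWNLOAD_DELAY = 1.0                 # base delay between requests
CONCURRENT_REQUESTS_PER_DOMAIN = 2   # per-domain throttling
AUTOTHROTTLE_ENABLED = True          # adapt the delay to server latency
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
```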


[–] Bromskloss link

> BeautifulSoup / lxml

When should one use one or the other, would you say?


[–] jackschultz link

BeautifulSoup. The difference is that lxml can run a little faster in certain cases for a huge scrape, but you'll rarely if ever need that. It's interesting and probably worthwhile to try both and know the difference, but BeautifulSoup is definitely where to start.


[–] darpa_escapee link

BeautifulSoup has a friendly API, but it is slow. It has an lxml backend, however.

If you're familiar with writing XPath queries, lxml is great.
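
To illustrate the XPath side, a minimal lxml sketch (the markup and queries here are made up):

```python
from lxml import html

# Parse an inline snippet and pull data out with XPath expressions.
doc = html.fromstring("""
<ul>
  <li class="item"><a href="/a">First</a></li>
  <li class="item"><a href="/b">Second</a></li>
</ul>
""")

# Text content and attributes come back as plain Python lists.
titles = doc.xpath('//li[@class="item"]/a/text()')
links = doc.xpath('//li[@class="item"]/a/@href')
```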


[–] jackschultz link

I've actually written about this! General tips I've found from doing more than a few projects [0], and then an overview of the Python libraries I use [1].

If you don't want to click on the links: requests and BeautifulSoup / lxml are all you need 90% of the time. Throw gevent in there and you can get a lot of scraping done in less time than you'd think.

And as long as we're talking about web scraping, I'm a huge fan of it. There's so much data out there that's not easily accessible and needs to be cleaned and organized. When running a learning algorithm, for example, a very hard part that isn't talked about a lot is getting the data before throwing it into a learning function or library. Of course, there's the legal side of it if companies are not happy with people being able to scrape, but that's a different topic.

I'll keep going. The best way to learn which tools are best is to do a project on your own and test them all out. Then you'll know what suits you. That's absolutely the best way to learn something about programming -- doing it instead of reading about it.

[0] https://bigishdata.com/2017/05/11/general-tips-for-web-scrap...

[1] https://bigishdata.com/2017/06/06/web-scraping-with-python-p...
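
A minimal requests + BeautifulSoup sketch of that 90% case. The URL and CSS selector are hypothetical; parsing is kept separate from fetching so it can be tested on saved pages:

```python
import requests
from bs4 import BeautifulSoup

def extract_headlines(page_html):
    # Parse side: pull text out of a (made-up) h2.headline selector.
    soup = BeautifulSoup(page_html, "html.parser")
    return [h.get_text(strip=True) for h in soup.select("h2.headline")]

def fetch(url):
    # Network side: a plain GET with a sane timeout.
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    return resp.text
```

Usage would be `extract_headlines(fetch("https://example.com/news"))`; keeping the two halves separate also makes it easy to drop gevent in around `fetch` later.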


[–] mapster link

Mind if I ask what info/data you are scraping and for what ends?


[–] frik link

> We use Redis to send task (update / discovery) to our crawlers.

Some kind of queue implemented with Redis? How does it work?


[–] samtc link

It's a simple Redis list containing JSON tasks. We have a custom Scrapy Spider hooked to next_request and item_scraped [1]. It checks (lpop) for update/discovery tasks in the list and builds a Request [2]. We only crawl at most ~1 request per second, so performance is not an issue.

For every website we crawl we implement a custom discovery/update logic.

Discovery can be, for example, crawl a specific date range, seq number, postal code.... We usually seed discovery based on the actual data we have, like highest_company_number + 1000, so we get the newly registered companies.

Update is to update a single document. Like crawl document for company number 1234. We generate a Request [2] to crawl only that document.

[1] https://doc.scrapy.org/en/latest/topics/signals.html

[2] https://doc.scrapy.org/en/latest/topics/request-response.htm...
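
A framework-free sketch of that pattern, with a deque standing in for the Redis list (task fields and URLs are hypothetical; in production the pop would be `redis.lpop("tasks")`):

```python
import json
from collections import deque

# Stand-in for the Redis list of JSON tasks.
queue = deque()

def push_task(task):
    queue.append(json.dumps(task))          # like RPUSH

def pop_task():
    return json.loads(queue.popleft()) if queue else None   # like LPOP

def build_request(task):
    # Map an update/discovery task to a crawl request (URLs are made up).
    if task["type"] == "update":
        return {"url": f"https://example.com/company/{task['company_number']}"}
    return {"url": f"https://example.com/registry?page={task['page']}"}

push_task({"type": "update", "company_number": 1234})
req = build_request(pop_task())
```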


[–] thibaut_barrere link

See https://sidekiq.org for instance.


[–] CGamesPlay link

Probably not what the GP uses, but Resque does this in Ruby land.


[–] bdcravens link

Sidekiq has emerged as a better option than Resque.


[–] CGamesPlay link

I have a similar set up! How do you monitor for failures and deal with the scrape target changing?


[–] samtc link

We monitor exceptions with Sentry. We store raw data so we don't have to hurry to fix the ETL, we only have to fix navigation logic and we keep crawling.


[–] Launchr link

Sorry if it's a stupid question/example/comparison, just trying to understand better: you're storing the full HTML data instead of reaching into the specific divs for the data you might need, thereby separating the fetching from the parsing?

I'm a scraping rookie, and I usually fetch + parse in the same call, this might resolve some issues for me :) thanks!


[–] jimsmart link

When I've done scraping, I've always taken this approach also: I decouple my process into paired fetch-to-local-cache-folder and process-cached-files stages.

I find this useful for several reasons, but particularly if you want to recrawl the same site for new/updated content, or if you decide to grab extra data from the pages (or, indeed, if your original parsing goes wrong or meets pages it wasn't designed for).

Related: As well as any pages I cache, I generally also have each stage output a CSV (requested url, local file name, status, any other relevant data or metadata), which can be used to drive later stages, or may contain the final output data.

Requesting all of the pages is the biggest time sink when scraping — it's good to avoid having to do any portion of that again, if possible.


[–] samtc link

I maintain ~30 different crawlers. Most of them are using Scrapy. Some are using PhantomJS/CasperJS but they are called from Scrapy via a simple web service.

All data (zip files, PDF, HTML, XML, JSON) we collect is stored as-is (/path/to/<dataset name>/<unique key>/<timestamp>) and processed later using a Spark pipeline. lxml.html is WAY faster than BeautifulSoup and less prone to exceptions.

We have cronjobs (cron + Jenkins) that trigger dataset update and discovery. For example, we scrape a corporate registry, so every day we update the 20k oldest company versions. We also implement "discovery" logic in all of our crawlers so they can find new data (e.g. newly registered companies). We use Redis to send tasks (update / discovery) to our crawlers.


[–] hydragit link

> Python 3, AFAIK, doesn't have anything as handy as Ruby/Perl's Mechanize. But using the web developer tools you can usually figure out the requests made by the browser and then use the Session object in the Requests library to deal with stateful requests

You could also use the WebOOB (http://weboob.org) framework. It's built on requests + lxml and provides a Browser class usable like Mechanize's (with the ability to access the doc, select HTML forms, etc.).

It also has nice companion features, like associating URL patterns with custom Page classes, where you declare what data to retrieve when a page matching that URL pattern is browsed.


[–] djtriptych link

All great advice. I've written dozens of small purpose-built scrapers and I love your last point.

It's pretty much always a great idea to completely separate the parts that perform the HTTP fetches and the part that figures out what those payloads mean.


[–] Buttons840 link

lxml has good xpath support too; the best I've seen. I miss good xpath support in some of the other scraping options I've tried in other languages.


[–] upofadown link

>Python 3, AFAIK, doesn't have anything as handy as Ruby/Perl's Mechanize.

Did the version of Mechanize written in Py2 stop being supported?


[–] danso link

Looks like it's recently been updated but no big announcement that it's Python 3 ready: https://github.com/python-mechanize/mechanize

I've also seen these alternatives:

- https://robobrowser.readthedocs.io/en/latest/

- https://github.com/MechanicalSoup/MechanicalSoup

MechanicalSoup seems well maintained, but the last time I tried these libraries, they were either buggy (and/or I was ignorant) and I just couldn't get things to work the way I was used to with Ruby and Mechanize.


[–] sebcat link

lxml can be hit-or-miss on HTML5 docs. I've had greater success with a modified version of gumbo-parser.


[–] danso link

Ah very cool, had seen various python libraries about HTML5, but not gumbo (or at least I had starred it).


Is the modified version you use a personal version or a well-known fork?


[–] sebcat link

> Is the modified version you use a personal version or a well-known fork?

I had a specific thing I needed to do, gumbo-parser was a good match, I poked at it a little and moved on. It started with this[1] commit, then I did some other work locally which was not pushed because google/gumbo-parser is without an owner/maintainer. There are a couple of forks, but no/little adoption it seems.

[1] https://github.com/sebcat/gumbo-parser/commit/c158f8090c2df0...


[–] danso link

Always fascinated by how diverse the discussion and answers are for HN threads on web scraping. Goes to show that "web scraping" has a ton of connotations, everything from automated fetching of URLs via wget or cURL, to data management via something like scrapy.

Scrapy is a whole framework that may be worthwhile, but if I were just starting out for a specific task, I would use:

- requests http://docs.python-requests.org/en/master/

- lxml http://lxml.de/

- cssselect https://cssselect.readthedocs.io/en/latest/

Python 3, AFAIK, doesn't have anything as handy as Ruby/Perl's Mechanize. But using the web developer tools you can usually figure out the requests made by the browser and then use the Session object in the Requests library to deal with stateful requests:
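
For instance, a minimal sketch of that pattern (the endpoints and form fields here are hypothetical; replay whatever the dev tools actually show):

```python
import requests

# A Session carries cookies and default headers across requests, which is
# usually enough to mimic a logged-in browser once the right POST is replayed.
session = requests.Session()
session.headers.update({"User-Agent": "my-scraper/0.1"})

def fetch_logged_in(session):
    # Hypothetical login endpoint and field names.
    session.post("https://example.com/login",
                 data={"user": "me", "password": "secret"}, timeout=10)
    # The session cookie set by the login persists into this request.
    return session.get("https://example.com/dashboard", timeout=10).text
```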


I usually just download pages/data/files as raw files and worry about parsing/collating them later. I try to focus on the HTTP mechanics and, if needed, the HTML parsing, before worrying about data extraction.


[–] pteredactyl link

I second this. I built with Beautiful Soup before and found Puppeteer much easier for interacting with the web, especially nasty .NET sites.


[–] elyrly link

Simple and straightforward, +1


[–] marvinpinto link

I would recommend using Headless Chrome along with a library like puppeteer[0]. You get the advantage of using a real browser with which you run pages' javascript, load custom extensions, etc.

[0]: https://github.com/GoogleChrome/puppeteer


[–] beernutz link

The absolute best tool I have found for scraping is Visual Web Ripper.

It is not open source, and runs on Windows only, but it is one of the easiest-to-use tools I have found. I can set up scrapes entirely visually, and it handles complex cases like infinite-scroll pages, highly JavaScript-dependent pages, and the like. I really wish there were an open source solution as good as this one.

I use it with one of my clients professionally. Their support is VERY good btw.



[–] hydragit link

WebOOB [0] is a good Python framework for scraping websites. It's mostly used to aggregate data from multiple websites by having each site backend implement an abstract interface (for example, the CapBank abstract interface for banking sites), but it can be used without that part.

On the pure scraping side, it has a "declarative parsing" style to avoid painful plain-old procedural code [1]. You can parse pages by simply specifying a bunch of XPaths and indicating a few filters from the library to apply to those XPath elements, for example CleanText to remove whitespace nonsense, Lower (to lower-case), Regexp, CleanDecimal (to parse as a number) and a lot more. URL patterns can be associated with a Page class of such declarative parsing. If declarative becomes too verbose, it can always be replaced locally with a plain-old Python method.

A set of applications is provided to visualize extracted data, and other niceties are provided to ease debugging. Simply put: « Wonderful, Efficient, Beautiful, Outshining, Omnipotent, Brilliant: meet WebOOB ».

[0] http://weboob.org/

[1] http://dev.weboob.org/guides/module.html#parsing-of-pages


[–] zapperdapper link

No one has mentioned it so I will: consider Lynx, the text-mode web browser. Being command-line, you can automate it with Bash or even Python. I have used it quite happily to crawl largeish static sites (10,000+ web pages per site). Do a `man lynx`; the options of interest are -crawl, -traversal, and -dump. Pro tip: use it in conjunction with HTML Tidy prior to the parsing phase (see below).

I have also used custom written Python crawlers in a lot of cases.

The other thing I would emphasize is that a web scraper has multiple parts, such as crawling (downloading pages) and then actually parsing the page for data. The systems I've set up in the past typically are structured like this:

1. crawl - download pages to file system
2. clean then parse (extract data)
3. ingest extracted data into database
4. query - run ad-hoc queries on database

One of the trickiest things in my experience is managing updates. When new articles/content are added to the site, you only want to fetch and add those to your database, rather than crawl the whole site again. Detecting updated content can also be tricky. The brute-force approach, of course, is just to crawl the whole site again and rebuild the database -- not ideal, though!

Of course, this all depends really on what you are trying to do!


[–] mping link

I use nightmarejs (https://github.com/segmentio/nightmare), which is based on Electron; I recommend it if you're on JS.


[–] phsource link

For someone on a Javascript stack, I highly recommend combining a requester (e.g., "request" or "axios") with Cheerio, a server-side jQuery clone. Having a familiar, well-known interface for selection helps a lot.

We use this stack at WrapAPI (https://wrapapi.com), which we highly recommend as a tool to turn webpages into APIs. It doesn't completely do all the scraping (you still need to write a script), but it does make turning an HTML page into a JSON structure much easier.


[–] baldfat link

I use R, since that is the language I use: mostly httr and rvest. Edit: I missed typing rvest; thanks for the comments, you use the two together.



[–] Risse link

If you use PHP, Simple HTML DOM[0] is an awesome and simple scraping library.

[0] http://simplehtmldom.sourceforge.net/


[–] levi_n link

I use a combination of Selenium and Python packages (BeautifulSoup). I'm primarily interested in scraping data that is supplied via JavaScript, and I find Selenium to be the most reliable way to scrape that info. I use BS when the scraped page has a lot of data, which slows Selenium down, and I pipe the page source from Selenium, with all JavaScript rendered, into BS.

I use explicit waits exclusively (no direct calls like `driver.find_foo_by_bar`), and find it vastly improves selenium reliability. (Shameless plug) I have a python package, Explicit[1], that makes it easier to use explicit waits.

[1] https://pypi.python.org/pypi/explicit


[–] giarc link

For non-coders, import.io is great. However, they used to have a generous free plan that has since gone away (you are limited to 500 records now). Still a great product; the problem is they don't have a small plan (it starts at $299/month and goes up to $9,999).


[–] indescions_2017 link

Headless Chrome, Puppeteer, NodeJS (jsdom), and MongoDB. Fantastic stack for web data mining. Async based using promises for explicit user input flow automation.


[–] jmkni link

I've had a surprising amount of success with the HTML Agility Pack in .NET; if you have a decent understanding of HTML, it's pretty usable.


[–] Doctor_Fegg link

If you speak Ruby, mechanize is good: https://github.com/sparklemotion/mechanize


[–] polote link

I maintain about 8 crawlers and I use only vanilla Python

I have a function to help me search :

    def find_r(value, ind, array, stop_word):
        # Walk past each marker in `array` in order, then capture
        # everything up to `stop_word`; also return where we stopped.
        indice = ind
        for i in array:
            indice = value.find(i, indice) + 1
        end = value.find(stop_word, indice)
        return value[indice:end], end

You can use it like that :

    resulting_text, end_index = find_r(string, start_index, ["<td", ">"], "</td")

To find text it is quite fast, and you don't need to master a framework.


[–] CGamesPlay link

If you can get away without a JS environment, do so. Something like scrapy will be much easier than a full browser environment. If you cannot, don't bother going halfway: go straight for headless Chrome or Firefox. Unfortunately, Selenium seems to be past its useful life, as Firefox dropped support and Chrome has a ChromeDriver that wraps around it. PhantomJS is woefully out of date, and since it's a different environment than the one your target site was designed for, it just leads to problems.


[–] deathemperor link

I've just finished my research on web scraping for my company (took me about 7 days). I started with import.io and scrapinghub.com for point-and-click scraping, to see if I could do it without writing code. Ultimately, UI point-and-click scraping is for non-technical users. There is a lot of data you would find hard to scrape. For example, lazada.com.my stores the product's SKU inside an attribute that looks like <div data-sku-simple="SKU11111"></div>, which I couldn't get. import.io's pricing is also something: paying $999 a month for API access is just too high.

So I decided to use scrapy, the core of scrapinghub.com.

I haven't written much Python before, but scrapy was very easy to learn. I wrote 2 spiders and ran them on scrapinghub (their serverless cloud). Scrapinghub supports job scheduling and many other things, at a cost. I prefer scrapinghub because our team doesn't have DevOps. It also supports Crawlera to prevent IP banning, Portia for point-and-click (still in beta; it was still hard to use), and Splash for SPA websites, though it's buggy and the GitHub repo is not under active maintenance.

For DOM querying I use BeautifulSoup4. I love it. It's jQuery for Python.

For SPA websites I wrote a scrapy middleware which uses Puppeteer. Puppeteer is deployed on AWS Lambda (1M free requests for the first 365 days, more than enough for scraping) using this: https://github.com/sambaiz/puppeteer-lambda-starter-kit

I am planning to use Amazon RDS to store scraped data.


[–] cholmon link

I recently stumbled across http://go-colly.org/, that looks well thought out and simple to use. It seems like a slimmed down Go version of Scrapy.


[–] elchief link

Anyone who suggests a tool that can't understand JavaScript doesn't know what they are talking about

You should be using Headless Chrome or Headless Firefox with a library that can control them in a user-friendly manner


[–] austincheney link

This is perhaps the fastest way to screenscrape a dynamically executed website.

1. First go get and run this code, which allows immediate gathering of all text nodes from the DOM: https://github.com/prettydiff/getNodesByType/blob/master/get...

2. Extract the text content from the text nodes and ignore nodes that contain only white space:

    let text = document.getNodesByType(3),
        a = 0,
        b = text.length,
        output = [];
    do {
        if ((/^(\s+)$/).test(text[a].textContent) === false) {
            output.push(text[a].textContent);
        }
        a = a + 1;
    } while (a < b);
    output;

That will gather ALL text from the page. Since you are working from the DOM directly you can filter your results by various contextual and stylistic factors. Since this code is small and executes stupid fast it can be executed by bots easily.

Test this out in your browser console.
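
Outside the browser, the same "collect every non-whitespace text node" idea can be approximated with Python's stdlib html.parser. This is a hedged sketch, not a real DOM (no script execution, so it only works on server-rendered markup):

```python
from html.parser import HTMLParser

class TextNodes(HTMLParser):
    # Collect every non-whitespace text node, skipping script/style bodies,
    # roughly mirroring a DOM walk over nodeType 3 nodes.
    def __init__(self):
        super().__init__()
        self.out = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.out.append(data.strip())

parser = TextNodes()
parser.feed("<p> Hello </p><script>var x=1;</script><p>world</p>")
```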


[–] khuknows link

Shameless plug - I built this tiny API for scraping and it works a treat for my uses: https://jsonify.link/

A few similar tools also exist, like https://page.rest/.


[–] dsacco link

I've done this professionally in an infrastructure processing several terabytes per day. A robust, scalable scraping system comprises several distinct parts:

1. A crawler, for retrieving resources over HTTP, HTTPS and sometimes other protocols a bit higher or lower on the network stack. This handles data ingestion. It will need to be sophisticated these days - sometimes you'll need to emulate a browser environment, sometimes you'll need to perform a JavaScript proof of work, and sometimes you can just do regular curl commands the old fashioned way.

2. A parser, for correctly extracting specific data from JSON, PDF, HTML, JS, XML (and other) formatted resources. This handles data processing. Naturally you'll want to parse JSON wherever you can, because parsing HTML and JS is a pain. But sometimes you'll need to parse images, or outdated protocols like SOAP.

3. A RDBMS, with databases for both the raw and normalized data, and columns that provide some sort of versioning to the data in a particular point in time. This is quite important, because if you collect the raw data and store it, you can re-parse it in perpetuity instead of needing to retrieve it again. This will happen somewhat frequently if you come across new data while scraping that you didn't realize you'd need or could use. Furthermore, if you're updating the data on a regular cadence, you'll need to maintain some sort of "retrieved_at", "updated_at" awareness in your normalized database. MySQL or PostgreSQL are both fine.

4. A server and event management system, like Redis. This is how you'll allocate scraping jobs across available workers and handle outgoing queuing for resources. You want a centralized terminal for viewing and managing a) the number of outstanding jobs and their resource allocations, b) the ongoing progress of each queue, c) problems or blockers for each queue.

5. A scheduling system, assuming your data is updated in batches. Cron is fine.

6. Reverse engineering tools, so you can find mobile APIs and scrape from them instead of using web targets. This is important because mobile API endpoints a) change far less frequently than web endpoints, and b) are far more likely to be JSON formatted, instead of HTML or JS, because the user interface code is offloaded to the mobile client (iOS or Android app). The mobile APIs will be private, so you'll typically have to reverse engineer the HMAC request signing algorithm, but that is virtually always trivial, with the exception of companies that really put effort into obfuscating the code. apktool, jadx and dex2jar are typically sufficient for this if you're working with an Android device.

7. A proxy infrastructure, so that you're not constantly pinging a website from the same IP address. Even if you're being fairly innocuous with your scraping, you probably want this, because many websites have been burned by excessive spam and will conscientiously and automatically ban any IP address that issues even nominally more requests than a regular user, regardless of volume. Your proxies come in several flavors: datacenter, residential and private. Datacenter proxies are the first to be banned, but they're cheapest. These are proxies resold from datacenter IP ranges. Residential IP addresses are IP addresses that are not associated with spam activity and which come from ISP IP ranges, like Verizon Fios. Private IP addresses are IP addresses that have not been used for spam activity before and which are reserved for use by only your account. Naturally this is in order from lower to greater expense; it's also in order from most likely to least likely to be banned by a scraping target. NinjaProxies, StormProxies, Microleaf, etc are all good options. Avoid Luminati, which offers residential IP addresses contributed by users who don't realize their IP addresses are being leased through the use of Hola VPN.

Each website you intend to scrape is given a queue. Each queue is assigned a specific allotment of workers for processing scraping jobs in that queue. You'll write a bunch of crawling, parsing and database querying code in an "engine" class to manage the bulk of the work. Each scraping target will then have its own file which inherits functionality from the core class, with the specific crawling and parsing requirements in that file. For example, implementations of the POST requests, user agent requirements, which type of parsing code needs to be called, which database to write to and read from, which proxies should be used, asynchronous and concurrency settings, etc should all be in here.

Once triggered in a job, the individual scraping functions will call to the core functionality, which will build the requests and hand them off to one of a few possible functions. If your code is scraping a target that has sophisticated requirements, like a JavaScript proof of work system or browser emulation, it will be handed off to functionality that implements those requirements. Most of the time, this won't be needed and you can just make your requests look as human as possible - then it will be handed off to what is basically a curl script.

Each request to the endpoint is a job, and the queue will manage them as such: the request is first sent to the appropriate proxy vendor via the proxy's API, then the response is sent back through the proxy. The raw response data is stored in the raw database, then normalized data is processed out of the raw data and inserted into the normalized database, with corresponding timestamps. Then a new job is sent to a free worker. Updates to the normalized data will be handled by something like cron, where each queue is triggered at a specific time on a specific cadence.

You'll want to optimize your workflow to use endpoints which change infrequently and which use lighter resources. If you are sending millions of requests, loading the same boilerplate HTML or JS data is a waste. JSON resources are preferable, which is why you should invest some amount of time before choosing your endpoint into seeing if you can identify a usable mobile endpoint. For the most part, your custom code is going to be in middleware and the parsing particularities of each target; BeautifulSoup, QueryPath, Headless Chrome and JSDOM will take you 80% of the way in terms of pure functionality.
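
Point 3 above (raw plus normalized storage, with timestamps) can be sketched with stdlib sqlite3; the schema and table names here are illustrative, not the poster's actual design:

```python
import sqlite3
import time

# Keep the raw payload forever (re-parseable in perpetuity) and maintain
# timestamped, normalized rows beside it.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE raw_pages (url TEXT, body TEXT, retrieved_at INTEGER);
    CREATE TABLE companies (
        company_number TEXT PRIMARY KEY,
        name TEXT,
        retrieved_at INTEGER,
        updated_at INTEGER
    );
""")

def store_raw(url, body):
    # Append-only: every fetch is kept, so re-parsing never needs the network.
    db.execute("INSERT INTO raw_pages VALUES (?, ?, ?)",
               (url, body, int(time.time())))

def upsert_company(number, name, retrieved_at):
    # One current row per company in the normalized table; history lives in raw_pages.
    db.execute("INSERT OR REPLACE INTO companies VALUES (?, ?, ?, ?)",
               (number, name, retrieved_at, int(time.time())))
```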


[–] mmmnt link

For very simple tasks Listly seems to be a fast and good solution: http://www.listly.io/

If you need more power, I heard good stuff about http://80legs.com/ though never tried them myself.

If you really need to do crazy shit like crawling the iOS App Store really fast and keeping things up to date, I suggest using AWS Lambda and a custom Python parser. Though Lambda is not meant for this kind of thing, it works really well and is super scalable at a reasonable price.


[–] jppope link

Headless chrome in the form of puppeteer (https://github.com/GoogleChrome/puppeteer) or Chromeless (https://github.com/graphcool/chromeless) or for smaller gigs use nightmare.js (http://www.nightmarejs.org/).

scrapy is fine, but Selenium, Phantom, etc. are all outdated IMO


[–] ravenstine link

It depends on what you're trying to do.

For most things, I use Node.js with the Cheerio library, which is basically a stripped-down version of jQuery without the need for a browser environment. I find using the jQuery API far more desirable than the clunky, hideous Beautiful Soup or Nokogiri APIs.

For something that requires an actual DOM or code execution, PhantomJS with Horseman works well, though everyone is talking about headless Chrome these days, so IDK. I've not had nearly as many bad experiences with PhantomJS as others reportedly have.


[–] btb link

We have been using Kapow Robosuite for close to 10 years now. It's a commercial GUI-based tool which has worked well for us; it saves us a lot of maintenance time compared to our previous hand-rolled code-extraction pipeline. The only problem is that it's very expensive (pricing seems catered towards very large enterprises).

So I was really hoping this thread would have revealed some newer commercial GUI-based alternatives (on-premise, not SaaS), because I don't ever want to go back to the maintenance hell of hand-rolled robots again :)


[–] kanishkalinux link

For mostly static pages, requests/pycurl + BeautifulSoup is more than sufficient. For advanced scraping, take a look at scrapy.

For JavaScript-heavy pages most people rely on Selenium WebDriver. However, you can also try hlspy (https://github.com/kanishka-linux/hlspy), a little utility I made a while ago for dealing with JavaScript-heavy pages in simple use cases.


[–] bootcat link

One of the important avenues for scraping AJAX-heavy websites that keep PhantomJS out is Google Chrome's extension support. An extension can mirror the DOM and send it to an external server for processing, where we can use Python's lxml to XPath to the appropriate nodes. This worked for me to scrape Google, before we hit the captcha. If anyone is interested, I can share the code I wrote to scrape websites!

Can you scrape the findthecompany database? I have done it successfully!


[–] etatoby link

If you need to scrape content from complex JS apps (eg. React) where it doesn't pay to reverse engineer their backend API (or worse, it's encrypted/obfuscated) you may want to look at CasperJS.

It's a very easy to use frontend to PhantomJS. You can code your interactions in JS or CoffeeScript and scrape virtually anything with a few lines of code.

If you need crawling, just pair a CasperJS script with any spider library like the ones mentioned around here.


[–] theden link

I've had good success with scrapy (https://scrapy.org/) for my personal projects


[–] jacinda link

If you're specifically looking at news articles, go for the Python library Newspaper: http://newspaper.readthedocs.io/en/latest/

It auto-detects languages and will automatically give you things like the following:

    >>> article.parse()

    >>> article.authors
    [u'Leigh Ann Caldwell', 'John Honway']

    >>> article.text
    u"Washington (CNN) -- Not everyone subscribes to a New Year's resolution..."

    >>> article.top_image
    u'http://someCDN.com/blah/blah/blah/file.png'

    >>> article.movies
    [u'http://youtube.com/path/to/link.com', ...]


[–] mrskitch link

I’d recommend puppeteer or some other Chrome driver. It’s fast and resilient even on single page apps.

If you’re looking to run it on a Linux machine also take a look at https://browserless.io (full disclosure I’m the creator of that site).


[–] riekus link

Depends on your skill set and the data you want to scrape. I am testing the waters for a new business that relies on scraped data. As a non-programmer, I had good success testing stuff with Content Grabber. import.io also gets mentioned a lot. I tried out Octoparse, but it wasn't stable with the scraping.


[–] pwaai link

hey I'm working on this thing called BAML (browser automation markup language) and it looks something like this:

    OPEN http://asdf.com
    CRAWL a
    EXTRACT {'title': '.title'}
It's meant to be super simple and built from ground up to support crawling Single Page Applications.

I'm also creating a terminal client (early ver: https://imgur.com/a/RYx5g) for it, which will launch a Chrome browser and scrape everything. http://export.sh is still very early in the works; I'd appreciate any feedback (email in profile, the contact form doesn't work).


[–] vrathee link

If you are looking for SaaS or managed services, Try https://www.agenty.com/

Agenty is a cloud-hosted web scraping app. You can set up scraping agents using their point-and-click CSS selector Chrome extension to extract anything from HTML, with these 3 modes:

- TEXT: simple clean text

- HTML: outer or inner HTML

- ATTR: any attribute of an HTML tag, like an image src or hyperlink href

Or advanced modes like REGEX, XPATH, etc.

You can then save the scraping agent and execute it on the cloud-hosted app, with features like batch crawling, scheduling, and scraping multiple websites simultaneously, without worrying about IP-address blocks or speed.


[–] doominasuit link

If you need to interpret JavaScript, or otherwise simulate regular browsing as closely as possible, consider running a browser inside a container and controlling it with Selenium. I have found running inside a container is necessary if you do not have a desktop environment.

This is better suited to specific use cases than to mass collection, because running a full browser stack is slower than operating only at the HTTP layer. I have found that alternatives like PhantomJS are hard to debug; consider opening VNC on the container for debugging. Containers like this that I know of are SeleniumHQ and elgalu/selenium.


[–] hmottestad link

If you know Java, then my go to library is Jsoup https://jsoup.org/

It lets you use jQuery-like selectors to extract data.

Like this:

    Elements newsHeadlines = doc.select("#mp-itn b a");


[–] cdolan link

Outwit Hub, specifically the advanced or enterprise levels.

It has a GUI that is not designed very well, and documentation that is complete but hard to search...

But it can do just about any type of scrape, including getting started from a command line script


[–] jpetersonmn link

I used to use a combo of Python tools, mostly Requests and BeautifulSoup. However, the last few things I've built used Selenium to drive headless Chrome browsers. That lets me run the JavaScript most sites use these days.
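For the static-HTML side of that combo, here's a minimal sketch (the HTML snippet, class names, and field names are made up for illustration; in a real scraper the markup would come from a `requests.get(...)` call):

```python
from bs4 import BeautifulSoup

# Stand-in for a fetched page, e.g. requests.get(url).text
html = """
<html><body>
  <article>
    <h2 class="title">First post</h2>
    <a class="more" href="/posts/1">Read more</a>
  </article>
  <article>
    <h2 class="title">Second post</h2>
    <a class="more" href="/posts/2">Read more</a>
  </article>
</body></html>
"""

# The stdlib parser works everywhere; lxml is faster if installed.
soup = BeautifulSoup(html, "html.parser")

# One dict per article: title text plus the "read more" link.
posts = [
    {"title": a.find("h2", class_="title").get_text(strip=True),
     "url": a.find("a", class_="more")["href"]}
    for a in soup.find_all("article")
]
print(posts)
```

The same selectors keep working regardless of whether the HTML came from Requests or from a Selenium-driven browser's `page_source`.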


[–] jancurn link

Apify (https://www.apify.com) is a web scraping and automation platform where you can extract data from any website using a few simple lines of JavaScript. It uses headless browsers, so you can extract data from pages that have complex structure or dynamic content, or that employ pagination.

Recently the platform added support for headless Chrome and Puppeteer, you can even run jobs written in Scrapy or any other library as long as it can be packaged as Docker container.

Disclaimer: I'm a co-founder of Apify


[–] servitor link

I agree with others: with curl and the like, you will hit insurmountable roadblocks sooner or later. It's better to go full headless browser from the start.

I use a python->selenium->chrome stack. The Page Object Model [0] has been a revelation for me. My scripts went from being a mess of spaghetti code to something that's a pleasure to write and maintain.

[0] https://www.guru99.com/page-object-model-pom-page-factory-in...
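As a rough illustration of the pattern (all class names, locators, and the `FakeDriver` stand-in are hypothetical; with real Selenium the driver calls would be `find_element(...).send_keys(...)` and `.click()`):

```python
# Page Object Model sketch: locators live in one place per page, so a markup
# change means editing one class, not every script. FakeDriver just records
# actions so the sketch runs without a browser.

class FakeDriver:
    """Records actions instead of driving a real browser."""
    def __init__(self):
        self.actions = []

    def type(self, selector, text):
        self.actions.append(("type", selector, text))

    def click(self, selector):
        self.actions.append(("click", selector))

class LoginPage:
    # All CSS locators for this page, in one place.
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USERNAME, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)

driver = FakeDriver()
LoginPage(driver).login("alice", "hunter2")
print(driver.actions)
```

Scripts then read as a sequence of page-level intents (`LoginPage(driver).login(...)`) rather than a pile of raw selectors, which is exactly what makes them maintainable.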


[–] sl0wik link

I had great experience with www.apify.com.


[–] mfontani link

Whatever you end up using for scraping, I beg you to pick a unique user agent that allows a webmaster to understand which crawler it is, to better let it pass through (or be banned, depending).

Don't stick with the default "scrapy" or "Ruby" or "Jakarta Commons-HttpClient/...", which end up (justly) being banned more easily than unique ones, like "ABC/2.0 - https://example.com/crawler" or the like.
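In Python's standard library, for example, setting such a user agent is a single header (the crawler name and URL below are the placeholder ones from above):

```python
import urllib.request

# A unique, identifying user agent (name and URL are placeholders).
USER_AGENT = "ABC/2.0 - https://example.com/crawler"

req = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": USER_AGENT},
)
# urllib stores header names capitalized, hence "User-agent" here.
print(req.get_header("User-agent"))
```

The same one-liner works in requests (`requests.get(url, headers={"User-Agent": USER_AGENT})`) and nearly every scraping framework has an equivalent setting.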


[–] Softcadbury link

With Node, you can use cheerio [0]. It lets you parse HTML pages with a jQuery-like syntax. I use it in production on my project [1].

[0] https://github.com/cheeriojs/cheerio [1] https://github.com/Softcadbury/football-peek/blob/master/ser...


[–] colinchartier link

We had a really tough time scraping dynamic web content using scrapy, and both scrapy and selenium require you to write a program (and maintain it) for every separate website that you have to scrape. If the website's structure changes you need to debug your scraper. Not fun if you need to manage more than 5 scrapers.

It was so hard that we made our own company JUST to scrape stuff easily without requiring programming. Take a look at https://www.parsehub.com


[–] 256cats link

I use Node and either puppeteer[0] or plain Curl[1]. IMO Curl is years ahead of any Node.js request lib. For proxies I use (shameless plug!) https://gimmeproxy.com .

[0] https://github.com/GoogleChrome/puppeteer

[1] https://github.com/JCMais/node-libcurl


[–] mitchtbaum link

I made this https://www.drupal.org/project/example_web_scraper and wrote the underlying code many years ago. The idea is to map XPath queries to your data model and use some reusable infrastructure to simply apply it. It was very good, imho (for what it was). (I'm writing this comment since I don't see any other comments with the words "map" or "model". :/)
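The same idea can be sketched in a few lines of Python with lxml (the field names, XPath queries, and sample HTML are all illustrative):

```python
from lxml import html

# Map XPath queries to fields of your data model; one generic loop then
# applies the map to every page.
FIELD_MAP = {
    "title": "//h1/text()",
    "price": "//span[@class='price']/text()",
    "sku": "//div[@id='product']/@data-sku",
}

# Stand-in for a fetched product page.
page = html.fromstring("""
<div id="product" data-sku="A-123">
  <h1>Blue Widget</h1>
  <span class="price">9.99</span>
</div>
""")

record = {field: page.xpath(query) for field, query in FIELD_MAP.items()}
# xpath() returns lists; take the first match for scalar fields.
record = {k: v[0] if v else None for k, v in record.items()}
print(record)
```

Adapting the scraper to a new site then means writing a new query map, not new code.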


[–] bbayer link

I am really surprised nobody has mentioned pyspider. It is simple, has a web dashboard, and can handle JS pages. It can store data to a database of your choice, and it handles scheduling and recrawling. I have used it to crawl Google Play. A $5 DigitalOcean VPS with pyspider installed could handle millions of pages crawled, processed, and saved to a database.



[–] OzzyB link

A good host xD

Preferably one that doesn't mind giving you a bunch of IPs, and if they do, don't charge a fortune for them.

Then you can worry about what software you're gonna use.


[–] mrkeen link

I made a crawler https://github.com/jahaynes/crawler

It outputs to the warc file format (https://en.wikipedia.org/wiki/Web_ARChive), in case your workflow is to gather web pages and then process them afterwards.
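A WARC record is simple enough to sketch by hand (a minimal, incomplete writer for illustration only; real archives typically use a library such as warcio and include more headers):

```python
import uuid
from datetime import datetime, timezone

def warc_record(uri: str, payload: bytes) -> bytes:
    """Build a minimal WARC/1.0 'response' record: CRLF-separated headers,
    a blank line, the captured payload, then a blank line ending the record."""
    headers = "\r\n".join([
        "WARC/1.0",
        "WARC-Type: response",
        f"WARC-Target-URI: {uri}",
        f"WARC-Date: {datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%SZ')}",
        f"WARC-Record-ID: <urn:uuid:{uuid.uuid4()}>",
        "Content-Type: application/http; msgtype=response",
        f"Content-Length: {len(payload)}",
    ])
    return headers.encode() + b"\r\n\r\n" + payload + b"\r\n\r\n"

rec = warc_record("http://example.com/", b"HTTP/1.1 200 OK\r\n\r\nhello")
print(rec.decode())
```

Storing the raw HTTP exchange like this is what makes the gather-now, process-later workflow possible: you can re-run extraction against the archive without re-crawling.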


[–] ngneer link

https://github.com/featurist/coypu is nice for browser automation. A related question: what are good tools for database scraping, meaning replicating a backend database via a web interface (not referring to compromising the application, rather using allowed queries to fully extract the database).


[–] dineshr93 link

If you know Java, then Jsoup will be very handy. [1] https://jsoup.org/


[–] charlus link

For a little diversity of tools: if you're looking for something quick whose data others can access easily, a Google Apps Script in a Google Sheet can be quite useful.



[–] buildops link

Why are you looking to scrape? Here's a list of some scraper bots: https://www.incapsula.com/blog/web-scraping-bots.html

What about Botscraper: http://www.botscraper.com/


[–] wiradikusuma link

I tinkered with Apache Nutch (http://nutch.apache.org/), but I found it overkill. In the end, since I use Scala, I use https://github.com/ruippeixotog/scala-scraper


[–] laktek link

One of the challenges with modern-day scraping is that you need to account for client-side JS rendering.

If you prefer an API as a service that can pre-render pages, I built Page.REST (https://www.page.rest). It allows you to get rendered page content via CSS selectors as a JSON response.


[–] Jeaye link

I've written a bit on web scraping with Clojure and Enlive here: https://blog.jeaye.com/2017/02/28/clojure-apartments/

That's what I'd use, if I had to scrape again (no JS support).


[–] 0xdeadbeefbabe link

The best tool for web scraping, for me, is something easy to deploy and redeploy; and something that doesn't rely on three working programs--eliminating selenium sounds great.

For those reasons I like https://github.com/knq/chromedp


[–] blueadept111 link

Jaunt [http://jaunt-api.com] is a good java tool.


[–] ksahin link

I wrote some blog post about Java web scraping here : https://ksah.in/introduction-to-web-scraping-with-java/

As others have said, PhantomJS (and now headless Chrome) are good tools for dealing with JS-heavy websites.


[–] tmaly link

I just tried puppeteer yesterday for the first time. It seems to work very well. My only complaint is that it is very new and does not yet have a plethora of examples.

I previously used WWW::Mechanize in the Perl world, but single-page applications with JavaScript really require something with a browser engine.


[–] RandomBookmarks link

The "best tool" is different for web developers and non-coders. If you are a non-technical person that just needs some data there is:

(1) hosted services like mozenda

(2) visual automation tools like Kantu Web Automation (which includes OCR)

(3) and last but not least outsourcing the scraping on sites like Freelancer.com


[–] brycematheson link

Shameless plug. I wrote a blog post on how I use Powershell to scrape sites: http://brycematheson.io/webscraping-with-powershell/


[–] thallian link

I used CasperJS[0] in the past to scrape a JavaScript-heavy forum (ProBoards) and it worked well. But that was a few years ago; I have no idea what new strategies have come up in the meantime.

[0] http://casperjs.org/


[–] undefined link


[–] tn_ link

Check out Heritrix if you're looking for an open-source webscraping archival tool: https://webarchive.jira.com/wiki/spaces/Heritrix


[–] frausto link

Been getting blocked by reCAPTCHA more and more. Do any of these tools handle that, or have workarounds by default? I've tried routing through proxies, swapping IP addresses, slowing down, etc. Any specific ways people get around it?


[–] Karupan link

I've had some success using Portia[1]. It's a visual wrapper over scrapy, but is actually quite useful.



[–] jschuur link

If you want to extract content and specific meta data, you might find the Mercury Web Parser useful:



[–] traviswingo link

I've been using puppeteer to scrape and it's been fantastic. Since it's a headless browser, it handles SPAs just as well as traditional server-side-rendered websites. It's also incredibly easy to use with async/await.


[–] askz link

A friend released a little tool to scrape only the HTML from websites, with Tor and proxy chaining.



[–] freeslugs link

If you need simple scraping, I like traditional http request lib. For more robust scraping (ie clicking buttons / filling text), use capybara and either phantomjs or chromedriver - easy to install using homebrew!


[–] thegrif link

A ton of people recommended Scrapy - and I am always looking for senior Scrapy resources that have experience scraping at scale. Please feel free to reach out - contact info is in my profile.


[–] mateuszf link

`clj-http`, `enlive`, `cheshire` in case of `clojure` worked fine for me


[–] sananth12 link

If you are looking for image scraping: https://github.com/sananth12/ImageScraper


[–] pudo link

We're about to announce a new Python scraping toolkit, memorious: https://github.com/alephdata/memorious - it's a pretty lightweight toolkit, using YAML config files to glue together pre-built and custom-made components into flexible and distributed pipelines. A simple web UI helps track errors and execution can be scheduled via celery.

We looked at scrapy, but it just seemed like the wrong type of framing for the type of scrapers we build: requests, some html/xml parser, and output into a service API or a SQL store.

Maybe some people will enjoy it.


[–] kbd link

For simple tasks, curl into pup is very convenient.



[–] kopos link

Scrapy [https://github.com/scrapy/scrapy] works really well.


[–] dor_jack link

If you need to perform a web-scale crawl I strongly recommend https://www.mixnode.com.


[–] Lxr link

Python requests + lxml, with Selenium as a last resort.


[–] bantersaurus link



[–] fazkan link

scrapy and BS4 for serious stuff. Selenium for automating logging in and other UI-related stuff; you can even play games with it.


[–] crispytx link

I did a little web scraping project a few years ago using:

* cURL

* regex
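That combo still works for quick one-off jobs. A stdlib-only sketch of the regex half (the HTML and pattern are illustrative; an HTML parser is more robust in general, since regexes break on attribute reordering and nesting):

```python
import re

# Stand-in for a page fetched with cURL / urllib.
html = '<a href="/item/1">One</a> <a href="/item/2">Two</a>'

# Two capture groups -> findall returns (href, text) tuples.
links = re.findall(r'<a href="([^"]+)">([^<]+)</a>', html)
print(links)
```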


[–] thejosh link

If you are scraping specific pages on a site, curl. Then transform that into the language you use.


[–] cm2012 link

For non developers dexi.io is great.


[–] novaleaf link

i wrote a tool: PhantomJsCloud.com

it's getting a little long in the tooth, but I will be updating it soon to use a Chrome based renderer. If you have any suggestions, you can leave it here or PM me :)


[–] aaronhoffman link

This tool takes a list of URIs and crawls each site for contact info. Phone, email, twitter, etc



[–] jpepinho link

WebDriver.io using Selenium and PhantomJS would be a good way to go!


[–] kzisme link

So in general, what do most people use web scraping for? Is it building up their own database of things not available via an API, or something? It always sounds interesting, but the need for it is what confuses me.


[–] greyfox link

I did a quick search and didn't see this listed here:



[–] undefined link


[–] etattva link

Scrapy and Jsoup are the best combination


[–] tomc1985 link

Perl or Ruby and Regular Expressions


[–] herbst link



[–] vsupalov link

That really depends on your project and tech stack. If you're into Python and are going to deal with relatively static HTML, then the Python modules Scrapy [1] and BeautifulSoup [2] and the whole Python data-crunching ecosystem are at your disposal. There are lots of great posts about getting such a stack off the ground and using it in the wild [3]. It can get you pretty darn far, the architecture is solid, and there are lots of services and plugins which probably do everything you need.

Here's where I hit the limit with that setup: dynamic websites. If you're looking at something like Discourse-powered communities or similar, and feel a bit too lazy to dig into all the ways requests are expected to look, it's no fun anymore. Luckily, there's lots of JS goodness which can handle dynamic websites, inject your JavaScript for convenience, and more [4].

The recently published headless Chrome [5] and puppeteer [6] (a Node API for it) are really promising for many kinds of tasks, scraping among them. You can get a first impression in this article [7]. The ecosystem does not seem to be as mature yet, but I think this will be the foundation of the next go-to scraping tech stack.

If you want to try it yourself, I've written a brief intro [8] and published a simple dockerized development environment [9], so you can give it a go without cluttering your machine or find out what dependencies you need and how the libraries are called.

[1] https://scrapy.org/

[2] https://www.crummy.com/software/BeautifulSoup/bs4/doc/

[3] http://sangaline.com/post/advanced-web-scraping-tutorial/

[4] https://franciskim.co/dont-need-no-stinking-api-web-scraping...

[5] https://developers.google.com/web/updates/2017/04/headless-c...

[6] https://github.com/GoogleChrome/puppeteer

[7] https://blog.phantombuster.com/web-scraping-in-2017-headless...

[8] https://vsupalov.com/headless-chrome-puppeteer-docker/

[9] https://github.com/vsupalov/docker-puppeteer-dev


[–] 21stio link



[–] 11235813213455 link



[–] traitormonkey link

regex (runs)


[–] Focalise link

What kind of things are you guys scraping?


[–] webfolder link
[–] ajimix link

I've been doing scraping for many years, and in the end it's always the same: you build a lot of stuff to bypass site restrictions and finally, once you are done, you can start scraping. It all goes fine until the site you are scraping bans you... So what do I use now?

- proxycrawl: https://proxycrawl.com

- node: http://nodejs.org

With proxycrawl I don't need to worry about bans or blocks and I can crawl sites like amazon, google and facebook without problems by just calling their API.

With node I can do all my work async and with low memory footprint using simple http.get calls and some logic.

So no framework, no tools, nothing other than those two things


[–] ianertson link

I am currently building a platform for renting scrapers. It's not released yet but stay tuned.



[–] teremin link

I use Colly[0][1] which is a young but decent scraping framework for Golang.

[0] http://go-colly.org/ [1] https://github.com/gocolly/colly