
The full code for the completed scraper can be found in the companion repository on GitHub.

I wouldn’t really consider web scraping one of my hobbies or anything but I guess I sort of do a lot of it. It just seems like many of the things that I work on require me to get my hands on data that isn’t available any other way. I need to do static analysis of games for Intoli and so I scrape the Google Play Store to find new ones and download the apks. The Pointy Ball extension requires aggregating fantasy football projections from various sites and the easiest way was to write a scraper. When I think about it, I’ve probably written about 40-50 scrapers. I’m not quite at the point where I’m lying to my family about how many terabytes of data I’m hoarding away… but I’m close.

I’ve tried out x-ray/cheerio, nokogiri, and a few others but I always come back to my personal favorite: scrapy. In my opinion, scrapy is an excellent piece of software. I don’t throw such unequivocal praise around lightly but it feels incredibly intuitive and has a great learning curve.

You can read The Scrapy Tutorial and have your first scraper running within minutes. Then, when you need to do something more complicated, you’ll most likely find that there’s a built in and well documented way to do it. There’s a lot of power built in but the framework is structured so that it stays out of your way until you need it. When you finally do need something that isn’t there by default, say a Bloom filter for deduplication because you’re visiting too many URLs to store in memory, then it’s usually as simple as subclassing one of the components and making a few small changes. Everything just feels so easy and that’s really a hallmark of good software design in my book.
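To make that concrete, here’s a toy stdlib-only Bloom filter of the sort you might wire into a custom dupe filter. In scrapy you’d subclass the built-in RFPDupeFilter to use it; this standalone sketch just shows the data structure itself.

```python
import hashlib


class BloomFilter:
    """A minimal Bloom filter: constant memory, no false negatives,
    and a tunable (small) false-positive rate."""

    def __init__(self, size_bits=2**20, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # derive `num_hashes` independent bit positions from sha256
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))


seen = BloomFilter()
seen.add("http://zipru.to/torrents.php?page=1")
print("http://zipru.to/torrents.php?page=1" in seen)  # True
print("http://zipru.to/torrents.php?page=2" in seen)  # False
```

The trade-off is that membership answers of “yes” are only probably correct, which is fine for a dupe filter: the worst case is skipping an occasional URL, never re-crawling one.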

I’ve toyed with the idea of writing an advanced scrapy tutorial for a while now. Something that would give me a chance to show off some of its extensibility while also addressing realistic challenges that come up in practice. As much as I’ve wanted to do this, I just wasn’t able to get past the fact that it seemed like a decidedly dick move to publish something that could conceivably result in someone’s servers getting hammered with bot traffic.

I can sleep pretty well at night scraping sites that actively try to prevent scraping as long as I follow a few basic rules. Namely, I keep my request rate comparable to what it would be if I were browsing by hand and I don’t do anything distasteful with the data. That makes running a scraper basically indistinguishable from collecting data manually in any way that matters. Even if I were to personally follow these rules, it would still feel like a step too far to do a how-to guide for a specific site that people might actually want to scrape.

And so it remained just a vague idea in my head until I encountered a torrent site called Zipru. It has multiple mechanisms in place that require advanced scraping techniques but its robots.txt file allows scraping. Furthermore, there is no reason to scrape it. It has a public API that can be used to get all of the same data. If you’re interested in getting torrent data then just use the API; it’s great for that.

In the rest of this article, I’ll walk you through writing a scraper that can handle captchas and various other challenges that we’ll encounter on the Zipru site. The code won’t work exactly as written because Zipru isn’t a real site but the techniques employed are broadly applicable to real-world scraping and the code is otherwise complete. I’m going to assume that you have basic familiarity with python but I’ll try to keep this accessible to someone with little to no knowledge of scrapy. If things are going too fast at first then take a few minutes to read The Scrapy Tutorial which covers the introductory stuff in much more depth.

We’ll work within a virtualenv which lets us encapsulate our dependencies a bit. Let’s start by setting up a virtualenv in ~/scrapers/zipru and installing scrapy.
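The setup would be something along these lines (assuming virtualenv is installed; on python 3 you could use `python3 -m venv env` instead):

```shell
mkdir -p ~/scrapers/zipru
cd ~/scrapers/zipru
virtualenv env
. env/bin/activate
pip install scrapy
```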


The terminal that you ran those in will now be configured to use the local virtualenv. If you open another terminal then you’ll need to run . ~/scrapers/zipru/env/bin/activate again (otherwise you may get errors about commands or modules not being found).

You can now create a new project scaffold by running
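That command is scrapy’s standard project generator:

```shell
scrapy startproject zipru_scraper
```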

which will create the following directory structure.
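For the scrapy 1.3 series that scaffold looks roughly like this (the exact file list may vary slightly between versions):

```
zipru_scraper/
├── scrapy.cfg
└── zipru_scraper
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py
    └── spiders
        └── __init__.py
```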

Most of these files aren’t actually used at all by default, they just suggest a sane way to structure our code. From now on, you should think of ~/scrapers/zipru/zipru_scraper as the top-level directory of the project. That’s where any scrapy commands should be run and is also the root of any relative paths.

We’ll now need to add a spider in order to make our scraper actually do anything. A spider is the part of a scrapy scraper that handles parsing documents to find new URLs to scrape and data to extract. I’m going to lean pretty heavily on the default Spider implementation to minimize the amount of code that we’ll have to write. Things might seem a little automagical here but much less so if you check out the documentation.

First, create a file named zipru_scraper/spiders/zipru_spider.py with the following contents.
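A minimal reconstruction of that file, closely following the standard pattern from the scrapy docs (the URL is illustrative since Zipru isn’t a real site):

```python
import scrapy


class ZipruSpider(scrapy.Spider):
    name = 'zipru'
    start_urls = ['http://zipru.to/torrents.php?category=TV']
```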

Our spider inherits from scrapy.Spider which provides a start_requests() method that will go through start_urls and use them to begin our search. We’ve provided a single URL in start_urls that points to the TV listings. They look something like this.

At the top there, you can see that there are links to other pages. We’ll want our scraper to follow those links and parse them as well. To do that, we’ll first need to identify the links and find out where they point.

The DOM inspector can be a huge help at this stage. If you were to right click on one of these page links and look at it in the inspector then you would see that the links to other listing pages look like this
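Reconstructed, the anchor elements would be shaped something like this (the attribute values are illustrative):

```html
<a href="/torrents.php?category=TV&page=2" title="page 2">2</a>
```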

Next we’ll need to construct selector expressions for these links. There are certain types of searches that seem like a better fit for either css or xpath selectors and so I generally tend to mix and chain them somewhat freely. I highly recommend learning xpath if you don’t know it, but it’s unfortunately a bit beyond the scope of this tutorial. I personally find it to be pretty indispensable for scraping, web UI testing, and even just web development in general. I’ll stick with css selectors here though because they’re probably more familiar to most people.

To select these page links we can look for <a> tags with “page” in the title using a[title ~= page] as a css selector. If you press ctrl-f in the DOM inspector then you’ll find that you can use this css expression as a search query (this works for xpath too!). Doing so lets you cycle through and see all of the matches. This is a good way to check that an expression works but also isn’t so vague that it matches other things unintentionally. Our page link selector satisfies both of those criteria.
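The ~= combinator matches elements whose attribute contains the given whitespace-separated word. Here’s a stdlib-only sketch of that predicate so you can see exactly what it does (in scrapy itself you’d just evaluate response.css('a[title ~= page]'); the HTML here is made up):

```python
from html.parser import HTMLParser


class PageLinkFinder(HTMLParser):
    """Collect hrefs of <a> tags whose title contains the word 'page'."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # `~=` semantics: the attribute, split on whitespace, contains the word
        if tag == 'a' and 'page' in attrs.get('title', '').split():
            self.links.append(attrs.get('href'))


finder = PageLinkFinder()
finder.feed('<a href="/torrents.php?page=2" title="page 2">2</a>'
            '<a href="/faq" title="frontpage FAQ">FAQ</a>')
print(finder.links)  # ['/torrents.php?page=2']
```

Note that “frontpage” does not match: the word has to stand alone, which is what keeps this selector from being too vague.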

To tell our spider how to find these other pages, we’ll add a parse(response) method to ZipruSpider like so
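A sketch of what that method might look like, using the selector from above (this is a reconstruction, not canonical code):

```python
import scrapy


class ZipruSpider(scrapy.Spider):
    name = 'zipru'
    start_urls = ['http://zipru.to/torrents.php?category=TV']

    def parse(self, response):
        # yield requests for all of the other listing pages
        for page_url in response.css('a[title ~= page]::attr(href)').extract():
            page_url = response.urljoin(page_url)
            yield scrapy.Request(url=page_url, callback=self.parse)
```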

When we start scraping, the URL that we added to start_urls will automatically be fetched and the response fed into this parse(response) method. Our code then finds all of the links to other listing pages and yields new requests which are attached to the same parse(response) callback. These requests will be turned into response objects and then fed back into parse(response) so long as the URLs haven’t already been processed (thanks to the dupe filter).

Our scraper can already find and request all of the different listing pages but we still need to extract some actual data to make this useful. The torrent listings sit in a <table> with class='lista2t' and then each individual listing is within a <tr> with class='lista2'. Each of these rows in turn contains 8 <td> tags that correspond to “Category”, “File”, “Added”, “Size”, “Seeders”, “Leechers”, “Comments”, and “Uploaders”. It’s probably easiest to just see the other details in code, so here’s our updated parse(response) method.
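An updated parse(response) along those lines might look like this; the column positions follow the eight <td> fields above, but treat the details as a sketch of the fictional site’s markup:

```python
    def parse(self, response):
        # proceed to the other listing pages
        for page_url in response.css('a[title ~= page]::attr(href)').extract():
            page_url = response.urljoin(page_url)
            yield scrapy.Request(url=page_url, callback=self.parse)

        # extract one item per torrent row
        for tr in response.css('table.lista2t tr.lista2'):
            tds = tr.css('td')
            link = tds[1].css('a')[0]
            yield {
                'title': link.css('::attr(title)').extract_first(),
                'url': response.urljoin(link.css('::attr(href)').extract_first()),
                'date': tds[2].css('::text').extract_first(),
                'size': tds[3].css('::text').extract_first(),
                'seeders': int(tds[4].css('::text').extract_first()),
                'leechers': int(tds[5].css('::text').extract_first()),
                'uploader': tds[7].css('::text').extract_first(),
            }
```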

Our parse(response) method now also yields dictionaries which will automatically be differentiated from the requests based on their type. Each dictionary will be interpreted as an item and included as part of our scraper’s data output.

We would be done right now if we were just scraping most websites. We could just run
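That command would be

```shell
scrapy crawl zipru -o torrents.jl
```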

and a few minutes later we would have a nice JSON Lines formatted torrents.jl file with all of our torrent data. Instead we get this (along with a lot of other stuff)

Drats! We’re going to have to be a little more clever to get our data (which we could totally just get from the public API and would never actually scrape).

Our first request gets a 403 response that’s ignored and then everything shuts down because we only seeded the crawl with one URL. The same request works fine in a web browser, even in incognito mode with no session history, so this has to be caused by some difference in the request headers. We could use tcpdump to compare the headers of the two requests but there’s a common culprit here that we should check first: the user agent.

Scrapy identifies as “Scrapy/1.3.3 (+http://scrapy.org)” by default and some servers might block this or even whitelist only a limited number of user agents. You can find lists of the most common user agents online and using one of these is often enough to get around basic anti-scraping measures. Pick your favorite and then open up zipru_scraper/settings.py and replace

with
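a mainstream browser string. Any common user agent will do; this one is just an example desktop Chrome string:

```python
USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'
```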

You might notice that the default scrapy settings did a little bit of scrape-shaming there. Opinions differ on the matter but I personally think it’s OK to identify as a common web browser if your scraper acts like somebody using a common web browser. So let’s slow down the request rate a little bit by also adding
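a couple of throttling settings to settings.py. The exact values are illustrative; tune them to the site:

```python
CONCURRENT_REQUESTS = 1
DOWNLOAD_DELAY = 5
AUTOTHROTTLE_ENABLED = True  # adapt delays to server load
```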

which will create a somewhat realistic browsing pattern thanks to the AutoThrottle extension. Our scraper will also respect robots.txt by default so we’re really on our best behavior.

Now running the scraper again with scrapy crawl zipru -o torrents.jl should produce

That’s real progress! We got two 200 statuses and a 302 that the downloader middleware knew how to handle. Unfortunately, that 302 pointed us towards a somewhat ominous sounding threat_defense.php. Unsurprisingly, the spider found nothing good there and the crawl terminated.

It will be helpful to learn a bit about how requests and responses are handled in scrapy before we dig into the bigger problems that we’re facing. When we created our basic spider, we produced scrapy.Request objects and then these were somehow turned into scrapy.Response objects corresponding to responses from the server. A big part of that “somehow” is downloader middleware.

Downloader middlewares inherit from scrapy.downloadermiddlewares.DownloaderMiddleware and implement both process_request(request, spider) and process_response(request, response, spider) methods. You can probably guess what those do from their names. There are actually a whole bunch of these middlewares enabled by default. Here’s what the standard configuration looks like (you can of course disable things, add things, or rearrange things).
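The defaults live in the DOWNLOADER_MIDDLEWARES_BASE setting and, for the scrapy 1.3 series, look approximately like this (consult your installed version’s settings reference for the authoritative list):

```python
DOWNLOADER_MIDDLEWARES_BASE = {
    'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware': 100,
    'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware': 300,
    'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware': 350,
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': 400,
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 500,
    'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware': 550,
    'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware': 580,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 590,
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': 600,
    'scrapy.downloadermiddlewares.cookies.CookiesMiddleware': 700,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 750,
    'scrapy.downloadermiddlewares.stats.DownloaderStats': 850,
    'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware': 900,
}
```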

As a request makes its way out to a server, it bubbles through the process_request(request, spider) method of each of these middlewares. This happens in sequential numerical order such that the RobotsTxtMiddleware processes the request first and the HttpCacheMiddleware processes it last. Then once a response has been generated it bubbles back through the process_response(request, response, spider) methods of any enabled middlewares. This happens in reverse order this time so the higher numbers are always closer to the server and the lower numbers are always closer to the spider.

One particularly simple middleware is the CookiesMiddleware. It basically checks the Set-Cookie header on incoming responses and persists the cookies. Then when a request is on its way out it sets the Cookie header appropriately so the cookies are included on outgoing requests. It’s a little more complicated than that because of expirations and stuff but you get the idea.

Another fairly basic one is the RedirectMiddleware which handles, wait for it… 3XX redirects. This one lets any non-3XX status code responses happily bubble through but what if there is a redirect? The only way that it can figure out how the server responds to the redirect URL is to create a new request, so that’s exactly what it does. When the process_response(request, response, spider) method returns a request object instead of a response then the current response is dropped and everything starts over with the new request. That’s how the RedirectMiddleware handles the redirects and it’s a feature that we’ll be using shortly.

If it was surprising at all to you that there are so many downloader middlewares enabled by default then you might be interested in checking out the Architecture Overview. There’s actually kind of a lot of other stuff going on but, again, one of the great things about scrapy is that you don’t have to know anything about most of it. Just like you didn’t even need to know that downloader middlewares existed to write a functional spider, you don’t need to know about these other parts to write a functional downloader middleware.

Getting back to our scraper, we found that we were being redirected to some threat_defense.php?defense=1&... URL instead of receiving the page that we were looking for. When we visit this page in the browser, we see something like this for a few seconds

before getting redirected to a threat_defense.php?defense=2&... page that looks more like this

A look at the source of the first page shows that there is some javascript code responsible for constructing a special redirect URL and also for manually constructing browser cookies. If we’re going to get through this then we’ll have to handle both of these tasks.
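To make the mechanism concrete, here’s what pulling the redirect target out of that kind of script by hand might look like. The javascript snippet, variable names, and token below are all made up for illustration, not Zipru’s actual code:

```python
import re

# A made-up stand-in for the kind of obfuscated snippet the page serves.
js = '''
var value = "c655d9d2e26a3fe1";
setTimeout(function () {
    document.location.href = "threat_defense.php?defense=2&r=" + value;
}, 5000);
'''

# find the href assignment and which variable gets appended to it
match = re.search(r'document\.location\.href\s*=\s*"([^"]+)"\s*\+\s*(\w+)', js)
prefix, var_name = match.groups()

# then look up that variable's value elsewhere in the script
value = re.search(r'var\s+%s\s*=\s*"([^"]+)"' % var_name, js).group(1)
redirect_url = prefix + value
print(redirect_url)  # threat_defense.php?defense=2&r=c655d9d2e26a3fe1
```

This works for one particular obfuscation, which is exactly why the approach is fragile: any change to the script breaks the regexes.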

Then, of course, we also have to solve the captcha and submit the answer. If we happen to get it wrong then we’re sometimes redirected to another captcha page and other times we end up on a page that looks like this

where we need to click on the “Click here” link to start the whole redirect cycle over. Piece of cake, right?

All of our problems sort of stem from that initial 302 redirect and so a natural place to handle them is within a customized version of the redirect middleware. We want our middleware to act like the normal redirect middleware in all cases except for when there’s a 302 to the threat_defense.php page. When it does encounter that special 302, we want it to bypass all of this threat defense stuff, attach the access cookies to the session, and finally re-request the original page. If we can pull that off then our spider doesn’t have to know about any of this business and requests will “just work.”

So open up zipru_scraper/middlewares.py and replace the contents with
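A reconstruction of that middleware might look something like the following; the threat defense URL check and the logging details are my own sketch:

```python
import logging

import dryscrape
from scrapy.downloadermiddlewares.redirect import RedirectMiddleware


class ThreatDefenceRedirectMiddleware(RedirectMiddleware):
    def _redirect(self, redirected, request, spider, reason):
        # act normally if this isn't a threat defense redirect
        if not self.is_threat_defense_url(redirected.url):
            return super()._redirect(redirected, request, spider, reason)

        logging.debug('Zipru threat defense triggered for %s', request.url)
        request.cookies = self.bypass_threat_defense(redirected.url)
        request.dont_filter = True  # prevent the original link being marked a dupe
        return request

    def is_threat_defense_url(self, url):
        return '://zipru.to/threat_defense.php' in url
```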

You’ll notice that we’re subclassing RedirectMiddleware instead of DownloaderMiddleware directly. This allows us to reuse most of the built in redirect handling and insert our code into _redirect(redirected, request, spider, reason) which is only called from process_response(request, response, spider) once a redirect request has been constructed. We just defer to the super-class implementation here for standard redirects but the special threat defense redirects get handled differently. We haven’t implemented bypass_threat_defense(url) yet but we can see that it should return the access cookies which will be attached to the original request and that the original request will then be reprocessed.

To enable our new middleware we’ll need to add the following to zipru_scraper/settings.py.
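The settings change disables the stock middleware and registers ours at priority 600, the slot RedirectMiddleware normally occupies:

```python
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.redirect.RedirectMiddleware': None,
    'zipru_scraper.middlewares.ThreatDefenceRedirectMiddleware': 600,
}
```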

This disables the default redirect middleware and plugs ours in at the exact same position in the middleware stack. We’ll also have to install a few additional packages that we’re importing but not actually using yet.
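Those installs would be along the lines of

```shell
pip install dryscrape Pillow pytesseract
```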

Note that all three of these are packages with external dependencies that pip can’t handle.If you run into errors then you may need to visit the dryscrape, Pillow, and pytesseract installation guides to follow platform specific instructions.

Our middleware should be functioning in place of the standard redirect middleware behavior now; we just need to implement bypass_threat_defense(url). We could parse the javascript to get the variables that we need and recreate the logic in python but that seems pretty fragile and is a lot of work. Let’s take the easier, though perhaps clunkier, approach of using a headless webkit instance. There are a few different options but I personally like dryscrape (which we already installed).

First off, let’s initialize a dryscrape session in our middleware constructor.
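That initializer might look like this; the xvfb check is there so webkit can run headlessly on linux (a sketch, not canonical code):

```python
import sys

import dryscrape
from scrapy.downloadermiddlewares.redirect import RedirectMiddleware


class ThreatDefenceRedirectMiddleware(RedirectMiddleware):
    def __init__(self, settings):
        super().__init__(settings)

        # start xvfb so webkit can render without a display server
        if 'linux' in sys.platform:
            dryscrape.start_xvfb()

        self.dryscrape_session = dryscrape.Session(base_url='http://zipru.to')
```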

You can think of this session as a single browser tab that does all of the stuff that a browser would typically do (e.g. fetch external resources, execute scripts). We can navigate to new URLs in the tab, click on things, enter text into inputs, and all sorts of other things. Scrapy supports concurrent requests and item processing but the response processing is single threaded. This means that we can use this single dryscrape session without having to worry about being thread safe.

So now let’s sketch out the basic logic of bypassing the threat defense.
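Sketched out as methods on our middleware, that logic might look like the following; the css selectors are illustrative guesses at the page structure, and the code assumes `time` is imported at the top of the module:

```python
    def bypass_threat_defense(self, url=None):
        # only navigate if an explicit url is provided
        if url:
            self.dryscrape_session.visit(url)

        # solve the captcha if there is one
        captcha_images = self.dryscrape_session.css('img[src *= captcha]')
        if len(captcha_images) > 0:
            return self.solve_captcha(captcha_images[0])

        # click on any explicit retry links
        retry_links = self.dryscrape_session.css('a[href *= threat_defense]')
        if len(retry_links) > 0:
            return self.bypass_threat_defense(retry_links[0].get_attr('href'))

        # otherwise we're on a redirect page, so wait for it and start over
        self.wait_for_redirect()
        return self.bypass_threat_defense()

    def wait_for_redirect(self, url=None, timeout=10.0):
        url = url or self.dryscrape_session.url()
        deadline = time.time() + timeout
        while time.time() < deadline:
            if self.dryscrape_session.url() != url:
                return self.dryscrape_session.url()
            time.sleep(0.1)
        raise Exception('Timed out waiting for a redirect.')
```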

This handles all of the different cases that we encountered in the browser and does exactly what a human would do in each of them. The action taken at any given point only depends on the current page so this approach handles the variations in sequences somewhat gracefully.

The one last piece of the puzzle is to actually solve the captcha. There are captcha solving services out there with APIs that you can use in a pinch, but this captcha is simple enough that we can just solve it using OCR. Using pytesseract for the OCR, we can finally add our solve_captcha(img) method and complete the bypass_threat_defense() functionality.
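Here’s a sketch of what that might look like; the form element ids are hypothetical, the cookie parsing is simplified, and the module is assumed to import os, tempfile, pytesseract, and PIL’s Image at the top:

```python
    def solve_captcha(self, img, width=1280, height=800):
        # take a screenshot of the page
        self.dryscrape_session.set_viewport_size(width, height)
        filename = tempfile.mktemp('.png')
        self.dryscrape_session.render(filename, width, height)

        # inject javascript to find the bounds of the captcha image
        js = 'document.querySelector("img[src *= captcha]").getBoundingClientRect()'
        rect = self.dryscrape_session.eval_script(js)
        box = (int(rect['left']), int(rect['top']),
               int(rect['right']), int(rect['bottom']))

        # crop the captcha out of the screenshot and run OCR on it
        image = Image.open(filename)
        os.unlink(filename)
        captcha = pytesseract.image_to_string(image.crop(box))

        # submit the answer (element ids here are made up)
        self.dryscrape_session.xpath('//input[@id = "solve_string"]')[0].set(captcha)
        url = self.dryscrape_session.url()
        self.dryscrape_session.xpath('//button[@id = "button_submit"]')[0].click()

        # if we land back on a threat defense page, the answer was wrong; retry
        if self.is_threat_defense_url(self.wait_for_redirect(url)):
            return self.bypass_threat_defense()

        # otherwise hand back the access cookies as a dict
        cookies = {}
        for cookie_string in self.dryscrape_session.cookies():
            if 'domain=zipru.to' in cookie_string:
                key, value = cookie_string.split(';')[0].split('=')
                cookies[key] = value
        return cookies
```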

You can see that if the captcha solving fails for some reason, this delegates back to the bypass_threat_defense() method. This grants us multiple captcha attempts where necessary because we can always keep bouncing around through the verification process until we get one right.

This should be enough to get our scraper working but instead it gets caught in an infinite loop.

It at least looks like our middleware is successfully solving the captcha and then reissuing the request. The problem is that the new request is triggering the threat defense again. My first thought was that I had some bug in how I was parsing or attaching the cookies but I triple checked this and the code is fine. This is another of those “the only things that could possibly be different are the headers” situations.

The headers for scrapy and dryscrape are obviously both bypassing the initial filter that triggers 403 responses because we’re not getting any 403 responses. This must somehow be caused by the fact that their headers are different. My guess is that one of the encrypted access cookies includes a hash of the complete headers and that a request will trigger the threat defense if it doesn’t match. The intention here might be to help prevent somebody from just copying the cookies from their browser into a scraper but it also just adds one more thing that you need to get around.

So let’s specify our headers explicitly in zipru_scraper/settings.py like so.
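A plausible set of headers, mirroring what a desktop browser would send (the values are illustrative):

```python
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, sdch',
    'Accept-Language': 'en-US,en;q=0.8',
    'Connection': 'keep-alive',
    'User-Agent': USER_AGENT,
}
```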

Note that we’re explicitly adding the User-Agent header here to USER_AGENT which we defined earlier. This was already being added automatically by the user agent middleware but having all of these in one place makes it easier to duplicate the headers in dryscrape. We can do that by modifying our ThreatDefenceRedirectMiddleware initializer like so.
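The modified initializer might look like this, extending the earlier constructor; skipping accept-encoding is a workaround I’d expect to need with webkit-server, so treat that detail as an assumption:

```python
    def __init__(self, settings):
        super().__init__(settings)

        # start xvfb so webkit can render without a display server
        if 'linux' in sys.platform:
            dryscrape.start_xvfb()

        self.dryscrape_session = dryscrape.Session(base_url='http://zipru.to')
        for key, value in settings['DEFAULT_REQUEST_HEADERS'].items():
            # webkit-server seems to mishandle accept-encoding, so skip it
            if key.lower() != 'accept-encoding':
                self.dryscrape_session.set_header(key, value)
```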

Now, when we run our scraper again with scrapy crawl zipru -o torrents.jl we see a steady stream of scraped items and our torrents.jl file records it all. We’ve successfully gotten around all of the threat defense mechanisms!

We’ve walked through the process of writing a scraper that can overcome four distinct threat defense mechanisms:

  1. User agent filtering.
  2. Obfuscated javascript redirects.
  3. Captchas.
  4. Header consistency checks.

Our target website Zipru may have been fictional but these are all real anti-scraping techniques that you’ll encounter on real sites. Hopefully you’ll find the approach we took useful in your own scraping adventures.
