Web scraping for business – 6 non-obvious use cases.


Modern web-based businesses are data-hungry organisms, constantly gorging on an array of structured datasets to keep their operations running.

The target datasets may be purpose-built, well-formed collections intended for programmatic processing; examples include commercially available data such as national mapping, demographic, contact, or geo-referencing data. Alternatively, open collections may be required, for example World Bank economic data or historical stock market prices. But vast amounts of valuable data are also available only as content on the web: in theory fully downloadable, in practice a technical challenge to reliably and repeatably acquire. And this is where the fun starts – on a simple, static, well-structured HTML page the challenge would seem trivial, but when was the last time you saw one of those?

Collecting large quantities of web-hosted data and structuring it for ingestion and analysis is colloquially called “scraping.” There are many valid business reasons for indulging in a bit of data scraping – let’s look at a few web-scraping ideas, some obvious and some a bit esoteric:

1. Monitoring competitor websites for changes to prices, stock, and special offers (a minimal sketch of this follows the list).

2. Analysing the SEO performance of your website across major search engines by monitoring ranking changes (positive or negative) in real time.

3. Lead and contact acquisition - by monitoring relevant blogs, commentary, and reviews for potential customers.

4. Pricing compliance - monitoring the pricing behaviour of agents and outlets to ensure they comply with the pricing policy in their distribution agreements.

5. Acquiring labeled data sets for training machine learning models - a lack of labeled data is one of the most significant obstacles in creating an AI system, and many websites hold curated collections of images, text, or sound files.

6. Time series data for predictive analysis - doesn’t everyone want to predict share price and currency movements? Low-latency data is typically complicated and expensive to acquire, but many financial websites expose highly granular data that can be extracted with the right scraping approach.
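To make use case 1 concrete, here is a minimal Python sketch using the requests and BeautifulSoup libraries. The product URL and the CSS selector are hypothetical placeholders; any real target site will need its own.

# Minimal price-monitoring sketch using requests + BeautifulSoup.
# The URL and CSS selector below are hypothetical placeholders;
# every real target site needs its own selectors.
import requests
from bs4 import BeautifulSoup

PRODUCT_URL = "https://example.com/product/123"  # placeholder target

def fetch_price(url):
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Assumes the price sits in an element like <span class="price">...</span>
    tag = soup.select_one("span.price")
    return tag.get_text(strip=True) if tag else None

if __name__ == "__main__":
    print("Current price:", fetch_price(PRODUCT_URL))

Run on a schedule and diffed against yesterday’s value, even something this small becomes a competitor price monitor.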

But – most of this data, however valuable, however available, is not structured for easy retrieval. You’ll have to deal with complex stateful web pages, 3rd-party web applications, online databases with arbitrary APIs, and so on, all of which complicate the reliable extraction and parsing of the target data sources.


But wait… that’s not all!

It is not unknown for some websites to deliberately obfuscate or frequently change the presentation of their data specifically to make scraping operations harder. This is particularly common with online bookmakers and betting sites, whose odds portfolio is their competitive edge. Many web pages are stateful and rely on tracking cookies for their operation; this leads to additional complexity in designing an appropriate stateful parser.
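As a flavour of what handling statefulness can involve, here is a minimal Python sketch using requests.Session, which persists cookies across calls automatically. The endpoints shown are hypothetical placeholders.

# Sketch of a stateful scrape: requests.Session keeps cookies
# (session IDs, tracking tokens) across requests automatically.
# The endpoints used here are hypothetical placeholders.
import requests

session = requests.Session()

# Step 1: hit the landing page so the server sets its session cookies.
session.get("https://example.com/", timeout=10)

# Step 2: subsequent requests carry those cookies, so stateful pages
# (paginated results, per-session tokens) behave as they would for a browser.
page = session.get("https://example.com/odds?page=2", timeout=10)
print(page.status_code, len(page.text))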

Ensuring that any scrape/data extraction is complete and accurate requires a careful, experienced, and thoughtful approach, remembering that there is no “test” platform, only a 3rd-party host. While web scraping publicly accessible information is not illegal, it is prudent to undertake these operations without unduly stressing the target server. Some systems are proofed against intensive scraping operations and take action to protect themselves – for instance, denying access to particularly active source hosts or to visitors from particular territories – necessitating the use of multiple scrape-server addresses, a pool of proxies, or a VPN.
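A rough sketch of what “not unduly stressing the server” can look like in code: jittered delays between requests, simple retries, and rotation through a pool of proxies. The delays, retry count, and proxy addresses below are illustrative assumptions only.

# Sketch of polite, throttled fetching with optional proxy rotation.
# Delays, retry counts, and the proxy list are illustrative values only.
import itertools
import random
import time

import requests

PROXIES = [  # hypothetical proxy pool
    {"https": "http://proxy1.example.net:8080"},
    {"https": "http://proxy2.example.net:8080"},
]
proxy_cycle = itertools.cycle(PROXIES)

def polite_get(url, max_retries=3):
    for attempt in range(max_retries):
        # Jittered delay so requests don't arrive in a rigid pattern.
        time.sleep(random.uniform(2.0, 5.0))
        try:
            resp = requests.get(url, proxies=next(proxy_cycle), timeout=10)
            if resp.status_code == 200:
                return resp
        except requests.RequestException:
            pass  # swallow transient errors and retry
    return None  # caller decides how to handle a failed fetch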

So, the target host service must be carefully examined; protocols, formats, and constraints understood; parsers created; and an extraction plan developed and tested.
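Part of that examination is often as simple as reading the target’s robots.txt before planning anything. A minimal sketch using Python’s standard urllib.robotparser, with a placeholder host and user-agent name:

# Sketch: check a target's robots.txt before planning an extraction.
# urllib.robotparser is part of the Python standard library.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder host
rp.read()

# can_fetch() reports whether a given user agent may visit a path.
print(rp.can_fetch("MyScraperBot", "https://example.com/products/"))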

After a first extraction pass, the acquired data needs to be carefully examined and verified, both for completeness and for any formatting errors that may have crept in; if necessary, the extraction algorithm is adjusted and re-run. The final dataset then needs to be versioned and provided to the end user for onward processing.
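What that verification pass might look like, sketched in Python: the required field names, the price-format check, and the expected row count are hypothetical stand-ins for whatever the real dataset demands.

# Sketch of post-extraction validation: completeness and format checks.
# The required fields and the expected-count heuristic are hypothetical.
REQUIRED_FIELDS = {"name", "price", "timestamp"}

def validate(records, expected_count):
    problems = []
    if len(records) < expected_count:
        problems.append(f"only {len(records)} of {expected_count} rows")
    for i, rec in enumerate(records):
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            problems.append(f"row {i}: missing {sorted(missing)}")
        elif not str(rec["price"]).replace(".", "", 1).isdigit():
            problems.append(f"row {i}: malformed price {rec['price']!r}")
    return problems

sample = [{"name": "Widget", "price": "9.99", "timestamp": "2024-01-01"}]
print(validate(sample, expected_count=1))  # an empty list means a clean pass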

Of course, all of the above presupposes that the extraction process is a one-off or rare event.

As we saw earlier, many of the use cases we identified require that scraping be an ongoing activity (for example, watching the odds variance on a competitor’s online betting site). In these instances, the scraping platform can be thought of as an inbound API gateway, translating the complex “API” of the competitor website into a well-formed data stream for your own systems.
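Conceptually, such a gateway is just a loop that polls on a schedule, normalises what it finds, and hands well-formed records downstream. In this toy Python sketch, fetch_odds() and publish() are hypothetical stand-ins for the site-specific parser and the real data sink, and the 60-second interval is illustrative.

# Toy sketch of scraping-as-a-gateway: poll on a schedule, normalise
# each page into well-formed records, and hand them downstream.
# fetch_odds() and publish() stand in for real site- and pipeline-
# specific code; the 60-second interval is illustrative.
import json
import time

def fetch_odds():
    # Placeholder: a real implementation would parse the target site.
    return [{"market": "match_winner", "selection": "Home", "odds": 2.10}]

def publish(record):
    # Placeholder downstream sink: message queue, database, webhook...
    print(json.dumps(record))

while True:
    for record in fetch_odds():
        record["captured_at"] = time.time()  # attach provenance
        publish(record)
    time.sleep(60)  # poll interval tuned to the use case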


Where do we go from here?

So – whatever data your business craves, be it intelligence on your competition, content, reporting, or machine-learning training data, the likelihood is that it’s out there on the web somewhere. The challenge of retrieving it quickly, efficiently, and accurately is often non-trivial. Umbric is here to help. With extensive experience in web scraping for businesses, we can manage both straightforward and complex scrapes; we are your trusted partner.

Grab a slot on our calendar – we’re looking forward to learning more about your individual needs!


About Umbric Data Services

Forget knowledge; data is power – especially when hooked up to custom web applications leveraging the latest in big data, machine learning, and AI to deliver profitable results for business.

Umbric Data Services combines the latest in tech with good old-fashioned customer service to deliver innovative, efficient software that drives productive business insight and revenues at the click of a button. Isn’t it time you worked smart, not hard? Find out more about how we help businesses to grow – visit umbric.com today.
