Lesson Overview
This lesson focuses on how Artificial Intelligence can be used in data scraping and web scraping. The KM-07 documents define this topic through the following learning outcomes: concept and definition, purpose of data scraping, data scraping tools, legal issues, web scraping procedure, and libraries used for web scraping.
Web scraping involves writing a software robot that can automatically collect data from webpages. The documents explain that simple bots may do basic extraction, while more sophisticated bots use AI to find the correct data on a page and copy it into suitable data fields for processing by analytics applications. AI and ML can enhance the web scraping value chain, especially where the work is tedious, repetitive, and requires governance and quality assurance.
1. Concept and Definition of Data Scraping
Data scraping, in its general form, refers to a technique in which a computer program extracts data from output generated by another program. In the KM-07 documents, this is closely linked with web scraping, which is the process of using an application or bot to extract useful information from a website.
Web scraping is more than just copying what appears on a page. The documents explain that, unlike screen scraping, which only copies visible pixels displayed on a screen, web scraping extracts the underlying HTML code and, with it, data stored in a database. Once extracted, this data can be replicated, reformatted, stored, and used elsewhere.
This means web scraping is not simply a manual reading exercise. It is a technical process that allows machines to gather structured and unstructured information from websites automatically.
2. Is Web Scraping Part of AI?
The documents directly explain that AI and ML can be used to enhance several processes along the web scraping value chain. This is especially useful in tasks that are time-consuming and tedious, and in tasks that require governance and quality assurance.
In practical terms, this means AI can help scraping systems:
- identify the correct content on complex pages,
- distinguish useful data from irrelevant page content,
- improve extraction accuracy,
- automate repetitive scraping processes,
- support analytics once the data is collected.
So while traditional web scraping can be rule-based, AI-enhanced scraping can make the process smarter, faster, and more adaptive.
3. Purpose of Data Scraping
The KM-07 documents explain that one of the reasons scraping exists is because many companies do not want their unique content to be downloaded and reused for unauthorized purposes. As a result, they do not always expose all data through APIs or other easy-to-consume sources. Scraper bots, however, try to obtain website data despite those limitations. This creates what the documents describe as a cat-and-mouse game between scraper bots and website protection strategies.
Scraper bots may be created for several purposes, including:
Content scraping
This is when content is pulled from a website and reused elsewhere. The documents give the example of scraping reviews from a site like Yelp and reproducing them on another site.
Price scraping
Competitors may scrape pricing information in order to compare prices and create a market advantage.
Contact scraping
Scrapers may collect email addresses and phone numbers from websites such as online directories. The documents note that this is often used for bulk mailing lists, robocalls, spam, or malicious social engineering attempts.
Beyond these examples, the documents also show a simpler educational example: scraping product information from an e-commerce site into an Excel spreadsheet.
4. Why Scrape Website Data?
The KM-07 material explains that scraping is often used because websites do not always provide their data in a directly consumable format. If an organization wants data for analysis, business intelligence, market comparison, product monitoring, or research, scraping may be the only practical method of collecting that information at scale.
The importance of scraping website data includes:
- collecting large amounts of web information quickly,
- reducing manual copying,
- enabling structured analysis,
- supporting decision-making,
- gathering public market or product data,
- feeding AI and analytics systems with real-world data.
5. Legal Issues in Web Scraping
The documents make it clear that web scraping is not illegal by itself: scraping data that is publicly available on the internet is generally permitted. However, certain kinds of data are protected by regulations, so caution is required when scraping:
- personal data,
- intellectual property,
- confidential data.
The material also explains that developers should respect target websites and use empathy to create ethical scrapers. This means legality depends not only on the act of scraping, but also on:
- what kind of data is being scraped,
- whether the data is protected,
- how the scraped data will be used,
- whether website rules or regulations are being violated.
So learners must understand that scraping has both technical and ethical dimensions.
6. The Web Scraping Procedure
The KM-07 documents present a clear scraping workflow. To extract data using web scraping with Python, the basic steps are: find the URL, inspect the page, find the data to extract, write the code, run the code, and store the data in the required format.
Let’s break that down in detail.
Step 1: Find the URL to scrape
The first step is to identify the exact webpage that contains the target information. Without the correct URL, the scraper has no source to work from.
Step 2: Inspect the page
The developer then inspects the page structure, usually using browser developer tools, in order to understand the HTML layout and identify where the target data is stored.
Step 3: Find the data you want to extract
Once the page is inspected, the relevant HTML elements must be located. This may include product titles, prices, tables, reviews, headings, or links.
Step 4: Write the code
A scraping script is then written using suitable libraries or tools. This code sends requests to the website, retrieves the page content, and parses it for the required data.
Step 5: Run the code and extract the data
The script is executed so the program can fetch and process the content automatically.
Step 6: Store the data in the required format
Finally, the extracted data is saved in a useful structure such as a spreadsheet, CSV file, database, or another required format.
The facilitator guide also summarizes the scraping process in three broader stages:
- the scraper bot sends an HTTP GET request,
- the website responds and the scraper parses the HTML,
- the extracted data is converted into the required output format.
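The six steps above can be sketched end to end. The snippet below is a minimal illustration that uses only the Python standard library; the product-page HTML is an invented stand-in for what an HTTP GET request to the target URL would return, so it runs without touching the network. It parses the page (steps 3-5) and stores the result as CSV (step 6).

```python
import csv
import io
from html.parser import HTMLParser

# Invented HTML standing in for the response to a GET request (steps 1-2).
PAGE = """
<html><body>
  <div class="product"><span class="name">Kettle</span><span class="price">199.99</span></div>
  <div class="product"><span class="name">Toaster</span><span class="price">249.50</span></div>
</body></html>
"""

class ProductParser(HTMLParser):
    """Collects the text inside <span class="name"> and <span class="price"> tags."""
    def __init__(self):
        super().__init__()
        self.current = None   # class of the span we are currently inside, if any
        self.rows = []        # extracted (name, price) rows

    def handle_starttag(self, tag, attrs):
        if tag == "span":
            cls = dict(attrs).get("class")
            if cls in ("name", "price"):
                self.current = cls

    def handle_data(self, data):
        if self.current == "name":
            self.rows.append([data, None])
        elif self.current == "price":
            self.rows[-1][1] = data
        self.current = None

parser = ProductParser()
parser.feed(PAGE)

# Step 6: store the extracted data in CSV format.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["name", "price"])
writer.writerows(parser.rows)
print(buf.getvalue())  # a two-column CSV with a header row
```

In a real scraper, the embedded string would be replaced by the body of an HTTP response, and the CSV would typically be written to a file instead of an in-memory buffer.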
7. Types of Data Extracted Through Web Scraping
The documents explain that web scraping extracts underlying HTML code and data stored in a database, rather than just visible content.
Examples of data that can be scraped include:
- product information,
- prices,
- reviews,
- contact information,
- website content,
- data tables,
- text from HTML pages.
Because the scraper works with HTML and database-backed page data, the extracted information can be much richer and more structured than simple copy-and-paste.
8. Scraping a URL
The KM-07 summative memorandum defines scraping a URL as using bots to extract content and data from a website. It emphasizes that web scraping works on the underlying HTML and stored data, not just what appears on the surface.
This is important because it shows that a URL is not just a webpage address; it is the entry point into a structured information source that can be processed programmatically.
9. Code Scraping
The KM-07 summative memorandum defines code scraping, or data scraping, as a technique where a computer program extracts data from human-readable output produced by another program.
This means that scraping is not limited to webpages alone. It can also apply to other software outputs, as long as the data can be captured and transformed into a useful format.
10. Libraries Used for Web Scraping
The KM-07 learner guide includes a section on Python libraries used for web scraping. It notes that web scraping can extract both structured and unstructured data from the web and export it into a useful format.
Requests
Requests is described as the most basic Python library for web scraping. It is used for making HTTP requests such as GET and POST. It is simple and easy to use, which is why it is sometimes described as “HTTP for Humans.” However, Requests does not parse HTML on its own.
Advantages of Requests:
- simple,
- basic/digest authentication,
- international domains and URLs,
- chunked requests,
- HTTP(S) proxy support.
Disadvantages of Requests:
- retrieves only static content,
- cannot parse HTML,
- cannot handle websites built purely with JavaScript.
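A minimal sketch of how Requests is typically used, assuming the library is installed (the URL is a hypothetical placeholder). To stay runnable without network access, the example only builds and inspects a prepared GET request; the commented lines show how the request would actually be sent.

```python
import requests

# Build a GET request with query parameters (hypothetical URL).
req = requests.Request("GET", "https://example.com/products", params={"page": 2})
prepared = req.prepare()
print(prepared.method, prepared.url)

# In a real scraper the request would be sent directly:
#   resp = requests.get("https://example.com/products", params={"page": 2})
#   resp.status_code  # e.g. 200
#   resp.text         # the raw HTML -- Requests returns it unparsed
```

Note that the response body is plain text: as the list above says, Requests cannot parse HTML, so it is usually paired with a parser such as lxml or Beautiful Soup.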
lxml
The learner guide explains that lxml is a fast, production-quality HTML and XML parsing library. It works especially well when scraping large datasets and is often combined with Requests. It supports XPath and CSS selectors for extracting information.
Advantages of lxml:
- faster than many other parsers,
- lightweight,
- uses element trees,
- Pythonic API.
Disadvantages of lxml:
- does not work well with poorly designed HTML,
- official documentation may be difficult for beginners.
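A minimal sketch of lxml's element-tree and XPath interface, assuming lxml is installed. The HTML fragment is invented for illustration; in practice it would come from a Requests response.

```python
from lxml import html

# Invented HTML fragment standing in for a fetched page.
fragment = """
<ul id="prices">
  <li><span class="item">Kettle</span> <span class="price">199.99</span></li>
  <li><span class="item">Toaster</span> <span class="price">249.50</span></li>
</ul>
"""

tree = html.fromstring(fragment)
# XPath expressions select the text of spans by their class attribute.
items = tree.xpath('//span[@class="item"]/text()')
prices = tree.xpath('//span[@class="price"]/text()')
print(dict(zip(items, prices)))
```

The same elements could be selected with CSS selectors via `tree.cssselect(...)`, which lxml also supports.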
Beautiful Soup
Beautiful Soup is described as one of the most widely used Python libraries for web scraping. It builds a parse tree for HTML and XML documents and is considered beginner-friendly. It can also be combined with other parsers like lxml.
The guide notes that Beautiful Soup is easier to work with, has strong documentation, and works well with poorly designed HTML, but it is slower than pure lxml.
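A minimal sketch of Beautiful Soup's parse-tree interface, assuming the `bs4` package is installed. The markup is invented and deliberately left unfinished to show that the parser copes with imperfect HTML.

```python
from bs4 import BeautifulSoup

# Invented HTML with unclosed <body> and <html> tags.
messy = "<html><body><p class='review'>Great kettle</p><p class='review'>Too noisy</p>"

soup = BeautifulSoup(messy, "html.parser")
# find_all builds a list of matching elements from the parse tree.
reviews = [p.get_text() for p in soup.find_all("p", class_="review")]
print(reviews)
```

Here Beautiful Soup uses Python's built-in `html.parser`; passing `"lxml"` instead (with lxml installed) gives the speed of lxml with Beautiful Soup's friendlier API.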
Selenium
The learner guide explains that Selenium is especially useful for dynamically populated websites, where data is loaded through JavaScript. Other libraries may struggle with such pages, but Selenium can render web pages, click elements, fill forms, scroll pages, and perform actions much like a human user.
Advantages of Selenium:
- beginner-friendly,
- automated web scraping,
- can scrape dynamically populated pages,
- automates browsers,
- can do many actions on a web page.
Disadvantages of Selenium:
- very slow,
- difficult to set up,
- high CPU and memory usage,
- not ideal for large projects.
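A minimal, illustrative sketch of Selenium driving a headless browser, assuming `selenium` and a matching Chrome/Chromium driver are installed; the URL and CSS selector are hypothetical placeholders. The try/except lets the sketch degrade gracefully in environments with no browser available.

```python
try:
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")      # run without a visible window
    driver = webdriver.Chrome(options=options)
    driver.get("https://example.com/products")  # hypothetical JS-rendered page
    # Real scrapers should use WebDriverWait so JavaScript content can finish loading.
    titles = [el.text for el in driver.find_elements(By.CSS_SELECTOR, ".product-title")]
    driver.quit()
except Exception:
    titles = []  # no Selenium or browser driver available: nothing scraped
print(titles)
```

Because Selenium renders the page in a real browser, it sees content that JavaScript inserted after the initial HTML load, which is exactly what Requests, lxml, and Beautiful Soup cannot do on their own.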
Lesson Summary
This lesson explained how AI can be used in data scraping and web scraping. The documents define web scraping as the use of software robots to automatically collect data from webpages, and they show that AI and ML can improve this process where accuracy, governance, and efficiency are important.
The lesson also covered:
- the purpose of data scraping,
- why websites are scraped,
- legal and ethical considerations,
- the web scraping procedure,
- types of data extracted,
- code scraping,
- and key Python scraping libraries such as Requests, lxml, Beautiful Soup, and Selenium.