Top 10 Benefits of Using ScrapingBee for Web Scraping

Jennie Lee
6 min read · Apr 9, 2024

Looking for a Postman alternative?

Try APIDog, the Most Customizable Postman Alternative, where you can connect to thousands of APIs right now!

Introduction to ScrapingBee and Web Scraping

ScrapingBee is a web scraping API that provides tools and services for extracting data from websites. Web scraping is the process of automating the extraction of data from websites, and it is widely used by businesses and individuals for purposes such as market research, lead generation, and data analysis.
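
As a quick illustration of what a web scraping API looks like in practice, the sketch below calls ScrapingBee's HTTP endpoint with Python's requests library. The API key is a placeholder, and the render_js parameter is based on ScrapingBee's documented options; treat this as a minimal sketch rather than a complete integration.

import requests

# Fetch a page through the ScrapingBee API (YOUR_API_KEY is a placeholder)
response = requests.get(
    'https://app.scrapingbee.com/api/v1/',
    params={
        'api_key': 'YOUR_API_KEY',
        'url': 'https://example.com',
        'render_js': 'false',  # set to 'true' for JavaScript-heavy pages
    },
)

print(response.status_code)
print(response.text[:500])  # raw HTML of the scraped page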

Web Scraping 101 with JavaScript and Node.js

In this tutorial, we will explore the basics of web scraping using JavaScript and Node.js. Web scraping involves writing code to automate the extraction of data from websites. We will use the Cheerio library, which provides a way to manipulate the DOM structure of a webpage similar to jQuery.

To get started, make sure you have Node.js installed on your machine. You can download it from the official Node.js website or install it with your operating system's package manager.

Next, create a new directory for your project and navigate to it in the terminal. Initialize a new Node.js project by running the npm init command and following the prompts.

Once your project is set up, you can install the necessary dependencies by running the following command:

npm install axios cheerio

We will use the Axios library for making HTTP requests to the website we want to scrape, and Cheerio for parsing and manipulating the HTML response.

Now, let’s write some code to scrape a website. Create a new file called scrape.js and open it in your favorite text editor. Add the following code:

const axios = require('axios');
const cheerio = require('cheerio');
const fs = require('fs');

axios.get('https://example.com')
  .then(response => {
    // Load the HTML response into Cheerio
    const $ = cheerio.load(response.data);
    const data = [];

    // Extract the text of every <h2> element
    $('h2').each((index, element) => {
      const title = $(element).text();
      data.push({ title });
    });

    // Save the extracted data to a JSON file
    fs.writeFileSync('data.json', JSON.stringify(data, null, 2));
  })
  .catch(error => {
    console.log(error);
  });

This code demonstrates a simple web scraping example. We make a GET request to https://example.com using Axios, and then load the HTML response into Cheerio. We use Cheerio's $ function, which is similar to jQuery, to select and manipulate elements on the webpage.

In this example, we extract the text content of all <h2> elements on the page and store them in an array. Finally, we save the array of data to a JSON file using Node.js's fs module.

To run the code, open a terminal and navigate to the directory where scrape.js is located. Run the following command:

node scrape.js

This will execute the script and scrape the website, saving the extracted data to a data.json file in the same directory.

This is just a basic example to get you started with web scraping using JavaScript and Node.js. There are many more advanced techniques and libraries you can explore to enhance your scraping capabilities.

Easy Web Scraping With Scrapy

Scrapy is a powerful Python framework for web scraping. It provides an easy-to-use API and a robust set of features that make scraping websites a breeze.

To get started with Scrapy, make sure you have Python installed on your machine. You can download it from the official Python website or use a distribution like Anaconda.

Once Python is installed, you can install Scrapy by running the following command:

pip install scrapy

Now, let’s create a new Scrapy project. In your terminal, navigate to the directory where you want to create the project and run the following command:

scrapy startproject myproject

This will create a new directory called myproject with the basic structure of a Scrapy project.

Next, navigate to the myproject directory and create a new Spider. A Spider is a Scrapy component that defines the scraping logic for a website. Run the following command:

cd myproject
scrapy genspider example example.com

This will create a new Spider called example in the spiders directory. Open the newly created Spider file in your text editor and update the start_urls variable with the URL of the website you want to scrape.

Now, let's write some code to extract data from the website. Open the Spider file and update it so that it looks like this:

import scrapy

class ExampleSpider(scrapy.Spider):
    name = 'example'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com/']

    def parse(self, response):
        data = []

        # Extract data using CSS selectors
        for element in response.css('h2'):
            title = element.css('::text').get()
            data.append({'title': title})

        # Yield the extracted data
        yield {'data': data}

In this code, we use Scrapy’s CSS selectors to select and extract data from the webpage. The response object represents the HTML response of the website. We use the css method to select elements with the h2 tag, and then use the ::text pseudo-selector to extract the text content.

Finally, we yield the extracted data as a dictionary. By default, Scrapy prints yielded items to the terminal; its feed exports can also write them to a file.

To run the Spider, open a terminal and navigate to the myproject directory. Run the following command:

scrapy crawl example

This will start the scraping process and output the extracted data to the terminal.
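
If you want to save the output instead of just printing it, Scrapy's feed exports can write the yielded items straight to a file. For example, the -O flag (available in recent Scrapy versions) overwrites the output file and infers the JSON format from the extension:

scrapy crawl example -O data.json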

Scrapy provides many advanced features and options for configuring requests, handling pagination, following links, handling cookies, and more. Make sure to check out the official Scrapy documentation for more information and examples.
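
As one concrete illustration of following links, a parse method can yield a new request for the next page and let Scrapy handle the rest. This is only a sketch of the pattern, not part of the project above, and the a.next-page selector is a hypothetical placeholder for whatever pagination link your target site uses:

    def parse(self, response):
        # Extract items from the current page
        for element in response.css('h2'):
            yield {'title': element.css('::text').get()}

        # Follow the pagination link, if there is one ('a.next-page' is a placeholder selector)
        next_page = response.css('a.next-page::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)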

Practical XPath for Web Scraping

XPath is a powerful query language for selecting elements in an XML or HTML document. It provides a concise syntax and a wide range of functions for navigating and querying the DOM tree.

To use XPath for web scraping in Python, we can use the lxml library, which provides XPath support. To install lxml, run the following command:

pip install lxml

Now, let’s write some code to scrape a website using XPath. Create a new Python file called scrape.py and open it in your text editor. Add the following code:

import requests
from lxml import html

# Make a request to the website
response = requests.get('https://example.com')

# Create an HTML tree from the response content
tree = html.fromstring(response.content)

# Extract data using XPath expressions
titles = tree.xpath('//h2//text()')

# Print the extracted data
for title in titles:
    print(title)

In this example, we use the requests library to make a GET request to the website we want to scrape. We then create an HTML tree from the response content using the fromstring function of the html module.

Next, we use XPath expressions to select and extract data from the HTML tree. In this case, we select all text nodes (//text()) inside the <h2> elements (//h2).

Finally, we print the extracted data to the console. You can modify the code to save the data to a file or perform any other processing as needed.
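
For instance, writing the titles to a JSON file only takes a couple of extra lines (titles.json is just an example filename):

import json

# Save the extracted titles to a JSON file
with open('titles.json', 'w', encoding='utf-8') as f:
    json.dump([{'title': title} for title in titles], f, indent=2, ensure_ascii=False)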

To run the code, open a terminal and navigate to the directory where scrape.py is located. Run the following command:

python scrape.py

This will execute the script and output the extracted data to the console.

XPath provides a powerful and flexible way to select elements in an HTML document. It supports a wide range of operators, functions, and axes for advanced querying and navigation. Make sure to check out the official XPath documentation for more information and examples.
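
To give a feel for that flexibility, here are a few more expressions you could try against the tree object from scrape.py. They are purely illustrative, and example.com itself will not match most of them:

# Select the href attribute of every link
hrefs = tree.xpath('//a/@href')

# Select <h2> elements whose class attribute contains "post"
posts = tree.xpath('//h2[contains(@class, "post")]')

# Select the text of the first <p> element following each <h2>
intros = tree.xpath('//h2/following-sibling::p[1]/text()')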

