By the Autom Team

Scrape Bing Search using Python

Google may dominate the search market, but Bing has developed strengths that make it valuable in its own right. It often provides different results, ranks pages uniquely, and excels in areas such as image search, news discovery, and certain commercial queries. For teams monitoring visibility beyond Google, Bing offers an alternative perspective that is important to consider.

Those who work with search data are already aware of this. Bing’s data is utilized for keyword tracking, competitor research, content analysis, and training internal systems, where a diverse range of search signals is crucial. If you're reading this, you likely recognize that relying solely on one search engine can create blind spots, and Bing helps to address those gaps.

In this guide, I will walk you through the process of scraping Bing search results using Python. We will extract common elements from the results, such as titles, links, snippets, and rankings, starting from a single page and scaling up across multiple result pages. 

The goal is straightforward: to help you collect structured Bing search data in a practical, repeatable manner that can be easily adapted to your own use case.

Why Use Python to Scrape Bing Search Results?

Python is often the first choice for web scraping because it’s easy to pick up. You don’t need advanced programming knowledge to start fetching pages and extracting useful information from them.

It’s also supported by a large, active community. If something breaks or doesn’t behave as expected, chances are the issue has already been discussed. And if you don’t find an answer right away, you can post your question on community forums or Reddit and usually get helpful responses within minutes.

As your needs grow, Python grows with you. A simple script written for learning or testing can later be expanded to handle more keywords, more pages, or larger datasets, which is why Python is widely used for bigger scraping tasks as well.

Scraping Bing search results with Python

Now that we’ve covered why Python is a good fit, let’s move into the actual scraping process.

To keep things simple, we’ll start by scraping just one Bing search results page. Once that’s clear, we’ll expand the same logic to cover multiple pages. By the end, you’ll have a reusable script that works for any search query.

From each search result, we’ll collect:

  • Page title

  • Result URL

  • Short description (snippet)

  • Position in search results


Step 1: Project Setup

Before we start writing any code, make sure Python is already installed on your machine. If you’ve worked with Python before, you’re good to go.

Create a new folder for this script and move into it:

mkdir bing_scraper

cd bing_scraper

Next, install the libraries we’ll use in this tutorial:

pip install requests beautifulsoup4


Once the dependencies are installed, create a new Python file where we’ll write the scraping logic:

bing_scrape.py

Once done, we are ready to scrape Bing.

Step 2: Fetch a Bing Search Results Page

We’ll begin by sending a request to Bing and loading the HTML content.

import requests
from bs4 import BeautifulSoup

l = []
o = {}

target_url = "https://www.bing.com/search?q=sydney&rdr=1"
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"}

resp = requests.get(target_url, headers=headers)

What’s in the code: We define a search keyword, pass it to Bing as a query parameter, and set a browser-like User-Agent header to help avoid basic blocks.

Step 3: Parsing and Extracting Bing Search Result Data

So far, we have used the requests library to send an HTTP GET request to our target URL. Now, we will use BeautifulSoup (BS4) to parse the HTML and prepare it for data extraction.

This can also be achieved using XPath, but for this tutorial, we will continue using BS4.

soup = BeautifulSoup(resp.text, 'html.parser')

completeData = soup.find_all("li", {"class": "b_algo"})

The soup variable now contains the parsed HTML structure of the entire page, which we will use to locate and extract the data we need. The completeData variable stores all the elements that match the search result container, which are the elements we are going to scrape.

You can verify this by inspecting the page structure in your browser.
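Before running the scraper against live pages, it can help to confirm the parsing logic on a small, hand-written HTML fragment that mimics Bing’s b_algo containers. The snippet below is purely illustrative; it is not real Bing markup, just the same class names used in this tutorial:

```python
from bs4 import BeautifulSoup

# Illustrative HTML mimicking Bing's result structure (not real Bing output)
sample_html = """
<ol id="b_results">
  <li class="b_algo">
    <h2><a href="https://example.com/sydney">Sydney - Example</a></h2>
    <div class="b_caption"><p>A snippet about Sydney.</p></div>
  </li>
  <li class="b_algo">
    <h2><a href="https://example.org/guide">Sydney Travel Guide</a></h2>
    <div class="b_caption"><p>Another snippet.</p></div>
  </li>
</ol>
"""

soup = BeautifulSoup(sample_html, "html.parser")
results = soup.find_all("li", {"class": "b_algo"})
print(len(results))  # → 2, one per result container
```

If this prints the expected count, the same find_all call will work on the live page, as long as Bing is still using the b_algo class for result containers.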


It’s time to identify the location of each element and extract it.

Step 4: Extracting Result Titles from Bing SERP


The result title sits inside the anchor tag within each b_algo container, and the completeData list acts as the data source for extracting titles. Inside the extraction loop we’ll assemble shortly, the title is pulled like this:

o["Title"] = completeData[i].find("a").text

Step 5: Scraping URLs and Description from Bing Search Results

Along with the title, we also need to extract the result URL and description from each search result container.


The description text is located inside a div element with the class b_caption. We can use this container to extract the snippet shown in the search results.

o["Description"] = completeData[i].find("div", {"class": "b_caption"}).text
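One caveat: find() returns None when an element is absent, and some Bing results occasionally lack a b_caption block, so calling .text directly can raise an AttributeError. A defensive sketch (the HTML fragment here is illustrative, not real Bing markup):

```python
from bs4 import BeautifulSoup

# Illustrative fragment: the second result has no b_caption div
html = """
<li class="b_algo"><a href="https://example.com">One</a>
  <div class="b_caption">First snippet</div></li>
<li class="b_algo"><a href="https://example.org">Two</a></li>
"""
soup = BeautifulSoup(html, "html.parser")

descriptions = []
for result in soup.find_all("li", {"class": "b_algo"}):
    caption = result.find("div", {"class": "b_caption"})
    # Fall back to an empty string instead of crashing on .text
    descriptions.append(caption.text if caption else "")

print(descriptions)  # → ['First snippet', '']
```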

Now, let’s combine all the extraction steps into a loop so that we can collect data from every search result on the page and store it in the l list.

for i in range(0, len(completeData)):
    o["Title"] = completeData[i].find("a").text
    o["link"] = completeData[i].find("a").get("href")
    o["Description"] = completeData[i].find("div", {"class": "b_caption"}).text
    o["Position"] = i + 1

    l.append(o)
    o = {}

print(l)

This loop goes through each search result container, extracts the title, URL, description, and position, and stores the structured data inside a list.

At this point, we have successfully extracted the data from the first page of Bing search results.
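In practice, the collected data is usually more useful written to a file than printed. Here is a minimal sketch that saves the same list-of-dicts structure to CSV using Python’s standard csv module; the sample rows and the filename bing_results.csv are just examples:

```python
import csv

# Sample rows in the same shape the scraper produces
l = [
    {"Title": "Sydney - Example", "link": "https://example.com",
     "Description": "A snippet.", "Position": 1},
    {"Title": "Sydney Guide", "link": "https://example.org",
     "Description": "Another snippet.", "Position": 2},
]

# Write one CSV row per search result, with a header row first
with open("bing_results.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["Title", "link", "Description", "Position"])
    writer.writeheader()
    writer.writerows(l)
```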


Next, we will extend this logic so that we can extract results from multiple pages for any given search query.

Step 6: Scraping Multiple Bing Search Result Pages

When you move from the first page of Bing search results to the next page, you will notice a change in the URL. Bing automatically adds a query parameter to load the next set of results.

For example:


Page 1 URL - https://www.bing.com/search?q=sydney&rdr=1&first=1
Page 2 URL - https://www.bing.com/search?q=sydney&rdr=1&first=11
Page 3 URL - https://www.bing.com/search?q=sydney&rdr=1&first=21

From this pattern, we can see that the value of the first parameter increases by 10 for each new page. We can use this behavior to automatically generate new URLs inside a loop.

To achieve this, we will use a loop that increments the page offset by 10 on every iteration.

Here, we will dynamically update the target_url by modifying the value of the first parameter. This allows us to fetch a new results page every time the loop runs. For this example, we will limit the total pages to ten.
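The URL generation itself is plain string formatting. Here is the pattern in isolation, producing the ten page URLs for this example:

```python
# Offsets 0, 10, 20, ... 90 map to first=1, 11, 21, ... 91
urls = []
for offset in range(0, 100, 10):
    urls.append("https://www.bing.com/search?q=sydney&rdr=1&first={}".format(offset + 1))

print(urls[0])   # ends with first=1
print(urls[-1])  # ends with first=91
```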


Just by changing the search keyword inside the URL, you can reuse this script to scrape data for any query.

Step 7: Complete Code for Multi-Page Scraping

Below is the combined version of the script that scrapes multiple Bing result pages.

import requests
from bs4 import BeautifulSoup

l = []
o = {}

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) \
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"}

for page_offset in range(0, 100, 10):
    target_url = "https://www.bing.com/search?q=sydney&rdr=1&first={}".format(page_offset + 1)

    print(target_url)

    resp = requests.get(target_url, headers=headers)

    soup = BeautifulSoup(resp.text, 'html.parser')

    completeData = soup.find_all("li", {"class": "b_algo"})

    # Use a separate inner loop variable so it doesn't shadow the page offset
    for i in range(0, len(completeData)):
        o["Title"] = completeData[i].find("a").text
        o["link"] = completeData[i].find("a").get("href")
        o["Description"] = completeData[i].find("div", {"class": "b_caption"}).text
        o["Position"] = page_offset + i + 1  # overall rank, assuming ~10 results per page
        l.append(o)
        o = {}

print(l)

This script now loops through multiple Bing result pages and extracts search result data for each page automatically.

How to Scale Bing Search Result Scraping Without Getting Blocked

When you start scraping Bing search results using direct scripts, things usually work well at a small scale. But as you increase the number of keywords, pages, or request frequency, you may notice inconsistent responses, CAPTCHAs, or outright request failures. Search engines are built to detect automated patterns, which makes large-scale scraping difficult to maintain using only basic scripts.

This is why many teams move to a search scraping API when they want to scale reliably. Instead of managing IP rotation, retries, headers, and request limits manually, an API handles these challenges in the background. This allows you to focus on collecting and using search data rather than maintaining scraping infrastructure.

A practical option is using Autom’s Bing Search API. You can check out the documentation to learn more about how it works and how to integrate it into your workflow.

It is built for structured search data extraction and works well for automation workflows, internal tools, and data pipelines. You can sign up and start with 1,000 free credits, which makes it easy to test your use case before scaling. As your data needs grow, you can move to a premium plan based on your usage. 

If you are building internal tools, dashboards, or automation workflows using search data, moving to an API early can save significant engineering time.

Wrapping Up

Scraping Bing search results using Python is a great way to understand how search data extraction works. While direct scraping can work for small-scale use cases, using an API is the best approach when you want reliability, scalability, and structured data without handling blocks or infrastructure issues.

If you want to scrape Bing search results at scale without getting blocked, a Bing search API will work best for you. It lets you move from basic scripts to production-ready data pipelines much faster. Sign up to Autom to test the Bing Search API in your workflow.

To learn more and explore additional automation use cases, check out these resources:

SERP API

Discover why Autom is the preferred API provider for developers.