Simple Selenium Chrome Crawler (Python)

In this article, we show how you can create a simple crawler in Python that leverages Google Chrome and Selenium. The crawler we create will be able to take as input a list of urls to crawl, and to save as output the list of links it encountered during the crawl.

Installing Selenium and Chromedriver

First, go to the ChromeDriver download page and download the chromedriver version that matches your Google Chrome version. If you don't know which version of Chrome you are using, click on the Chrome menu (the "..." icon at the top right of your screen), then click on "Help" and "About Google Chrome".

The file you download is a zip archive containing the chromedriver executable. Go to the directory you have created for your crawler and extract the zip file into it. After extraction, you should obtain a file called chromedriver or chromedriver.exe, depending on your OS.
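Since the executable name depends on the OS, a small helper can pick the right filename. This is a hypothetical convenience, not part of the article's crawler, which simply hardcodes './chromedriver':

```python
import os
import platform

def chromedriver_path(directory="."):
    # pick the chromedriver executable name for the current OS
    # (hypothetical helper; the rest of the article hardcodes './chromedriver')
    name = "chromedriver.exe" if platform.system() == "Windows" else "chromedriver"
    return os.path.join(directory, name)
```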

To install Selenium, you can simply run one of the following commands:
pip install selenium
pip3 install selenium

Loading a page with Selenium and Chrome

To ensure everything's installed properly, we create a simple version of our crawler that only loads a page, waits 5 seconds, and then closes the browser.
from selenium import webdriver
import time

driver = webdriver.Chrome('./chromedriver') # or './chromedriver.exe' on Windows
driver.get('https://www.example.com') # loads a page (any url works here)
time.sleep(5) # waits 5 seconds
driver.quit() # closes the browser

Run your program and verify that a Chrome window opens, stays open for 5 seconds, and then closes.

Crawling a list of urls with Selenium Chrome

Now that we have verified that Selenium and chromedriver are properly installed, we modify our crawler to add more features. We create a `read_urls` function that takes as input the path to a file containing the list of urls to crawl, and returns those urls as a list.
def read_urls(file_path):
  urls = []
  with open(file_path, 'r') as file_urls:
      for line in file_urls:
          url = line.replace("\n", "")
          urls.append(url)
  return urls
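If the file may contain blank lines or stray whitespace, a slightly more defensive version of the same function can skip them. This is an optional refinement, not required for the tutorial:

```python
def read_urls(file_path):
    # defensive variant: strips surrounding whitespace and skips empty lines
    urls = []
    with open(file_path, 'r') as file_urls:
        for line in file_urls:
            url = line.strip()
            if url:
                urls.append(url)
    return urls
```

On a clean file, it behaves exactly like the version above.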

We also create a text file called urls.txt that contains 5 urls to crawl.

We modify the code of our crawler so that it reads the file containing the urls, and then, for each url, visits the page, gets all the links present on the page, and saves them in a file. To get the list of links on a page, we use driver.find_elements_by_tag_name('a').
urls = read_urls('./urls.txt') # read the list of urls to crawl
driver = webdriver.Chrome('./chromedriver')

links_crawled = []
for idx, url in enumerate(urls):
  print("Crawling {} ({}/{})".format(url, idx + 1, len(urls)))
  driver.get(url) # visits the page
  a_elts = driver.find_elements_by_tag_name('a')
  for a_elt in a_elts:
      link = a_elt.get_attribute('href')
      links_crawled.append(link)

driver.quit() # closes the browser

Finally, we create a function to save the links crawled in a file:

def save_links_crawled(links_crawled, file_path):
  with open(file_path, 'w+') as file_links:
      for link in links_crawled:
          file_links.write("{}\n".format(link))
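Before wiring it into the crawler, the function can be sanity-checked on its own with a temporary file (the links below are placeholders, not part of the crawl):

```python
import os
import tempfile

def save_links_crawled(links_crawled, file_path):
    # same function as above, repeated so this snippet runs standalone
    with open(file_path, 'w+') as file_links:
        for link in links_crawled:
            file_links.write("{}\n".format(link))

links = ["https://example.com/a", "https://example.com/b"]
path = os.path.join(tempfile.mkdtemp(), "links_crawled.txt")
save_links_crawled(links, path)
with open(path) as f:
    saved = f.read().splitlines()
print(saved) # one link per line, in the original order
```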

We just need to call it after we close the browser.

save_links_crawled(links_crawled, './links_crawled.txt')

Running the program generates a file called links_crawled.txt that contains the list of links crawled.
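One caveat: some `a` elements have no href, so get_attribute('href') can return None, and the same link often appears on several pages. A small hypothetical helper (not part of the crawler above) can drop empty and duplicate links before saving them:

```python
def clean_links(links):
    # drop None/empty hrefs and duplicates, preserving first-seen order
    seen = set()
    cleaned = []
    for link in links:
        if link and link not in seen:
            seen.add(link)
            cleaned.append(link)
    return cleaned
```

Calling clean_links(links_crawled) just before save_links_crawled keeps the output file free of blanks and repeats.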