Scrape Zalando utilising Python, Selenium & Residential Proxies


Dependencies

BeautifulSoup
webdriver_manager
selenium
extension (local module included in this repository as extension.py)
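
A minimal sketch of the imports these dependencies provide, assuming `extension` refers to the local extension.py module shipped with the repository:

```python
# Third-party packages: pip install beautifulsoup4 webdriver-manager selenium
from bs4 import BeautifulSoup
from selenium import webdriver
from webdriver_manager.chrome import ChromeDriverManager

# Local helper module included in this repository (extension.py)
import extension
```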

Authentication

You can create, edit, and delete proxy users in our Dashboard > Residential > Proxy setup page.

Zalando Selenium Scraper

This is a Python script that uses the selenium and BeautifulSoup libraries to scrape product information from Zalando.

The script uses the Chrome web browser, controlled by the selenium library, to navigate to the website and retrieve its source code. The source code is then parsed using BeautifulSoup to extract specific information about each product on the page.
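
A minimal sketch of that flow, assuming Selenium 4 and an illustrative Zalando category URL:

```python
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

# Install a matching chromedriver and launch Chrome under Selenium's control.
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))

# Load a category page (URL is illustrative) and parse its source with BeautifulSoup.
driver.get("https://en.zalando.de/mens-shoes/")
soup = BeautifulSoup(driver.page_source, "html.parser")
driver.quit()
```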

To route traffic through a proxy, the script uses the "webdriver_manager" library to install the Chrome driver and the "extension" module to configure the Chrome browser to use a proxy server. The proxy credentials (username, password, endpoint, and port) are passed as arguments to the "proxies" function from the "extension" module (extension.py).
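
A hedged sketch of the proxy wiring, assuming the local `proxies` helper returns a Chrome extension that can be attached with `add_extension`; the endpoint, port, and credentials shown are placeholders:

```python
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

from extension import proxies  # local extension.py

# Placeholder credentials; replace with your own proxy user and endpoint.
proxy_extension = proxies("username", "password", "gate.smartproxy.com", 7000)

# Assumption: proxies() returns an extension file that Chrome can load,
# which is how the proxy credentials reach the browser.
chrome_options = webdriver.ChromeOptions()
chrome_options.add_extension(proxy_extension)

driver = webdriver.Chrome(
    service=Service(ChromeDriverManager().install()),
    options=chrome_options,
)
```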

The script then uses BeautifulSoup to search the page source for specific tags containing the product information. Each product's details are stored in a dictionary and appended to a list named "data".
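
A sketch of the extraction step; the tag names below are illustrative placeholders, since Zalando's markup changes over time and the real selectors live in the script itself:

```python
data = []

# "article", "h3", and "span" are assumed selectors; inspect the live page
# source to confirm which elements actually hold the product details.
for product in soup.find_all("article"):
    name_tag = product.find("h3")
    price_tag = product.find("span")
    data.append({
        "name": name_tag.get_text(strip=True) if name_tag else None,
        "price": price_tag.get_text(strip=True) if price_tag else None,
    })
```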

Finally, the script saves the "data" list to a JSON file named "data.json".
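
For reference, the save step amounts to a standard json.dump call:

```python
import json

# Persist the scraped products; indent and ensure_ascii keep the file readable.
with open("data.json", "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False, indent=4)
```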