In this tutorial, you’re going to learn how to extract all links from a given website or URL using BeautifulSoup and requests. If you’re new to web scraping, I would recommend starting first with a beginner tutorial to web scraping and then moving to this one once you are comfortable with the basics.

How Do We Extract All Links?

We will use the requests library to get the raw HTML page from the website, and then we are going to use BeautifulSoup to extract all the links from that HTML page.

Requirements

To follow through with this tutorial, you need to have the requests and Beautiful Soup libraries installed.

Installation

$ pip install requests
$ pip install beautifulsoup4

Below is a script that will prompt you to enter a link to a website. It will use requests to send a GET request to the server for the HTML page, and then use BeautifulSoup to extract all link (<a>) tags from the HTML.

import requests
from bs4 import BeautifulSoup

def extract_all_links(site):
    # Download the raw HTML of the page
    html = requests.get(site).text
    # Parse the HTML and collect the href of every <a> tag
    soup = BeautifulSoup(html, 'html.parser')
    links = [link.get('href') for link in soup.find_all('a')]
    return links

site_link = input('Enter URL of the site : ')
all_links = extract_all_links(site_link)
print(all_links)

Output

kalebu@kalebu-PC:~/$ python3 link_spider.py
Enter URL of the site: https://kalebujordan.com/
['#main-content', 'mailto://kalebjordan.kj@gmail.com', 'https://web.facebook.com/kalebu.jordan', 'https://twitter.com/j_kalebu', 'https://kalebujordan.com/', .....]

I hope you found this useful; feel free to share it with your fellow developers.

Previously published here: https://kalebujordan.com/learn-how-to-extract-all-links-from-a-website-in-python/
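One thing you may notice in output like the above is that href values mix absolute URLs, page fragments such as '#main-content', and mailto: links. As a possible extension (this is my own sketch, not part of the original script; the function name and sample HTML are made up for illustration), you can resolve every href against the page URL with urllib.parse.urljoin, and skip <a> tags that have no href at all:

```python
from urllib.parse import urljoin

from bs4 import BeautifulSoup


def extract_absolute_links(html, base_url):
    """Return every <a> href in `html`, resolved against `base_url`.

    Hypothetical helper: takes HTML you have already downloaded
    (e.g. with requests.get(base_url).text) plus the page URL.
    """
    soup = BeautifulSoup(html, 'html.parser')
    links = []
    for link in soup.find_all('a'):
        href = link.get('href')
        if href is None:
            # <a> tags without an href (e.g. named anchors) are skipped
            continue
        # urljoin leaves absolute URLs untouched and resolves relative
        # paths ('/about') and fragments ('#main-content') against base_url
        links.append(urljoin(base_url, href))
    return links
```

Separating the parsing from the download like this also makes the function easy to test on a small HTML snippet without touching the network.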