Extract all links from a file
Problem
You want to extract all links (URLs) from a text file.
Solution
import re

def extract_urls(fname):
    with open(fname) as f:
        return re.findall(r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', f.read())
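A quick usage sketch (the file name links.txt is a made-up example):

for url in extract_urls('links.txt'):  # hypothetical input file containing URLs
    print url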
Extract all links from a web page
Problem
You want to extract all the links from a web page. You need the links in absolute path format since you want to further process the extracted links.
Solution
Unix commands have a very nice philosophy: “do one thing and do it well”. Keeping that in mind, here is my link extractor:
#!/usr/bin/env python

# get_links.py

import re
import sys
import urllib
import urlparse

from BeautifulSoup import BeautifulSoup

class MyOpener(urllib.FancyURLopener):
    version = 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.15) Gecko/20110303 Firefox/3.6.15'

def process(url):
    myopener = MyOpener()
    #page = urllib.urlopen(url)
    page = myopener.open(url)

    text = page.read()
    page.close()

    soup = BeautifulSoup(text)

    for tag in soup.findAll('a', href=True):
        tag['href'] = urlparse.urljoin(url, tag['href'])
        print tag['href']
# process(url)

def main():
    if len(sys.argv) == 1:
        print "Jabba's Link Extractor v0.1"
        print "Usage: %s URL [URL]..." % sys.argv[0]
        sys.exit(-1)
    # else, if at least one parameter was passed
    for url in sys.argv[1:]:
        process(url)
# main()

if __name__ == "__main__":
    main()
You can find the up-to-date version of the script here.
The script will print the links to the standard output. The output can be refined with grep, for instance.
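The conversion to absolute paths is done with urlparse.urljoin. Here is a minimal sketch of its behavior (the URLs are made-up examples):

import urlparse

base = 'http://example.com/r/Python'                 # the page being processed
print urlparse.urljoin(base, '/about')               # http://example.com/about
print urlparse.urljoin(base, 'comments')             # http://example.com/r/comments
print urlparse.urljoin(base, 'http://other.org/x')   # absolute links are left unchanged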
Troubleshooting
The HTML parsing is done with the BeautifulSoup (BS) library. If you get an error, e.g. BeautifulSoup cannot parse a tricky page, download the latest version of BS and put BeautifulSoup.py in the same directory where get_links.py is located. I had a problem with the version that came with Ubuntu 10.10, but I solved it by upgrading to the latest version of BeautifulSoup.
Update (20110414): To update BS, first remove the package python-beautifulsoup with Synaptic, then install the latest version from PyPI: sudo pip install beautifulsoup.
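If you are unsure which version you have, BS 3.x exposes a __version__ attribute; a quick check:

import BeautifulSoup
print BeautifulSoup.__version__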
Examples
Basic usage: get all links on a given page.
./get_links.py http://www.reddit.com/r/Python
Basic usage: get all links from an HTML file. Yes, it also works on local files.
./get_links.py index.html
Count the number of links.
./get_links.py http://www.reddit.com/r/Python | wc -l
Filter the result and keep only the links you are interested in.
./get_links.py http://www.beach-hotties.com/ | grep -i jpg
Eliminate duplicates.
./get_links.py http://www.beach-hotties.com/ | sort | uniq
Note: if the URL contains the special character “&”, put the URL between quotes.
./get_links.py "http://www.google.ca/search?hl=en&source=hp&q=python&aq=f&aqi=g10&aql=&oq="
Open (some of) the extracted links in your web browser. Here I use the script “open_in_tabs.py” that I introduced in this post. You can also download “open_in_tabs.py” here.
./get_links.py http://www.beach-hotties.com/ | grep -i jpg | sort | uniq | ./open_in_tabs.py
Update (20110507): You might be interested in another script called “get_images.py” that extracts all image links from a webpage. Available here.