Posts Tagged ‘extract’

extract all links from a file

June 17, 2014

You want to extract all links (URLs) from a text file.


import re

def extract_urls(fname):
    """Return the list of URLs found in the given text file."""
    with open(fname) as f:
        return re.findall(r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+',
                          f.read())
Categories: python

Extract all links from a web page

March 10, 2011


You want to extract all the links from a web page. You need the links in absolute path format since you want to further process the extracted links.


Unix commands have a very nice philosophy: “do one thing and do it well”. Keeping that in mind, here is my link extractor:

#!/usr/bin/env python


import re
import sys
import urllib
import urlparse
from BeautifulSoup import BeautifulSoup

class MyOpener(urllib.FancyURLopener):
    version = 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv: Gecko/20110303 Firefox/3.6.15'

def process(url):
    myopener = MyOpener()
    #page = urllib.urlopen(url)
    page =

    text = page.read()
    page.close()

    soup = BeautifulSoup(text)

    for tag in soup.findAll('a', href=True):
        tag['href'] = urlparse.urljoin(url, tag['href'])
        print tag['href']
# process(url)

def main():
    if len(sys.argv) == 1:
        print "Jabba's Link Extractor v0.1"
        print "Usage: %s URL [URL]..." % sys.argv[0]
        sys.exit(1)
    # else, if at least one parameter was passed
    for url in sys.argv[1:]:
        process(url)
# main()

if __name__ == "__main__":
    main()
You can find the up-to-date version of the script here.

The script will print the links to the standard output. The output can be refined with grep for instance.


The HTML parsing is done with the BeautifulSoup (BS) library. If you get an error, i.e. BeautifulSoup cannot parse a tricky page, download the latest version of BS and put it in the same directory where the script is located. I had a problem with the version that came with Ubuntu 10.10, but I solved it by upgrading to the latest version of BeautifulSoup.
Update (20110414): To update BS, first remove the package python-beautifulsoup with Synaptic, then install the latest version from PyPI: sudo pip install beautifulsoup.


Basic usage: get all links on a given page.


Basic usage: get all links from an HTML file. Yes, it also works on local files.

./ index.html

Number of links.

./ | wc -l

Filter result and keep only those links that you are interested in.

./ | grep -i jpg

Eliminate duplicates.

./ | sort | uniq

Note: if the URL contains the special character “&“, then put the URL between quotes.

./ ""

Open (some) extracted links in your web browser. Here I use the script "open_in_tabs.py" that I introduced in this post. You can also download "open_in_tabs.py" here.

./ | grep -i jpg | sort | uniq | ./

Update (20110507)

You might be interested in another script that extracts all image links from a webpage. It is available here.

Categories: python