Download Wikipedia pages using API (in Python)

Job Description

I have a list of 644 Wikipedia pages (CSV file attached) which you must process using Python.
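
The exact column layout is in the attached CSV; as a minimal sketch, assuming a one-column file of page titles (the filename pages.csv is a placeholder):

import csv

# Minimal sketch: load the page titles to process.
# Assumes a one-column CSV of titles; "pages.csv" is a placeholder name.
with open("pages.csv", newline="", encoding="utf-8") as f:
    titles = [row[0] for row in csv.reader(f) if row]

print(len(titles))  # expected: 644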

For each page, you must provide me with:

1. an XML file listing all revision ids (a sketch of the API call follows this list)

2. for each revision, download the corresponding Wikipedia page and store it as an HTML document
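
As a minimal sketch of point 1, assuming the standard requests library and the English Wikipedia endpoint https://en.wikipedia.org/w/api.php (not confirmed by the posting); pages with more than 500 revisions would also need the API's continuation parameters:

import requests

API_URL = "https://en.wikipedia.org/w/api.php"  # assumed endpoint

def fetch_revision_xml(title):
    """Return the API's XML listing of revision ids for a single page."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "ids|timestamp",
        "rvlimit": "max",  # up to 500 revisions per request; longer
                           # histories need rvcontinue paging (omitted)
        "format": "xml",
    }
    response = requests.get(API_URL, params=params)
    response.raise_for_status()
    return response.text

This XML string is exactly what gets written to disk in the layout below.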

The final output is organized as follows:

1. a folder for each Wikipedia page (e.g. ./Pope_John_Paul_II/)

2. an XML file (Pope_John_Paul_II.xml) inside that folder, containing the list of revisions (use the following API: http://www.mediawiki.org/wiki/API:Properties#revisions_.2F_rv)

3. a revisions folder containing an HTML file named after each revision id (e.g. ./Pope_John_Paul_II/revisions/552708734.html), downloaded from Wikipedia's mobile version; in this case http://en.m.wikipedia.org/w/index.php?oldid=552708734

4. Python scripts for this entire process, which I will use to replicate your data collection (a sketch of the download and folder layout follows this list).
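
A companion sketch of that layout for a single page, assuming requests and an XML string shaped like the API output from the sketch above; a real run would loop over all 644 titles and add retries and continuation paging:

import os
import time
import xml.etree.ElementTree as ET
import requests

MOBILE_URL = "http://en.m.wikipedia.org/w/index.php"

def save_page_revisions(title, revision_xml):
    # 1. a folder per page, e.g. ./Pope_John_Paul_II/
    page_dir = os.path.join(".", title)
    rev_dir = os.path.join(page_dir, "revisions")
    os.makedirs(rev_dir, exist_ok=True)

    # 2. the revision list stored as <title>.xml inside that folder
    with open(os.path.join(page_dir, title + ".xml"), "w", encoding="utf-8") as f:
        f.write(revision_xml)

    # 3. one HTML file per revision id, fetched from the mobile site
    for rev in ET.fromstring(revision_xml).iter("rev"):
        rev_id = rev.get("revid")
        page = requests.get(MOBILE_URL, params={"oldid": rev_id})
        page.raise_for_status()
        with open(os.path.join(rev_dir, rev_id + ".html"), "wb") as f:
            f.write(page.content)
        time.sleep(0.5)  # be polite to Wikipedia's servers

Calling save_page_revisions("Pope_John_Paul_II", fetch_revision_xml("Pope_John_Paul_II")) with the hypothetical helper from the earlier sketch produces the ./Pope_John_Paul_II/ tree described in points 1-3.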
