Internet Archive (Wayback) Site Extractor

We run a group of sites that make money from advertising.

We constantly need non-duplicate content.

We therefore came up with the idea of extracting old versions of sites, from several years back, whose content has changed completely since then.

Here is what we need:

A tool that will extract archived sites from [url removed, login to view]

If you search for, let's say, [url removed, login to view], it will bring up many different versions of the site from over the years.

We will manually pick the one we want.

Let's say we pick Sep 20, 2003.

We will then enter its URL into our extractor tool.

In this case it is [url removed, login to view]://[url removed, login to view]

The extractor tool will then extract the entire site, including all images and files, and save it to a folder.

That is it.

However, the archive injects a small block of code into every page, which makes this difficult to do.

You will have to find a way around that code.

Additionally, the final extracted files must contain no mention of the [url removed, login to view] site anywhere in the code.
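One way to satisfy both requirements is a clean-up pass over each saved page: delete the injected block, then strip the archive prefix from every rewritten link. A sketch under stated assumptions (the BEGIN/END marker comments and the host name are guesses at how the archive labels its insert; adjust them to what the real pages contain):

```python
import re

# Assumption: the injected block is bracketed by these marker comments.
TOOLBAR = re.compile(
    r"<!--\s*BEGIN WAYBACK TOOLBAR INSERT\s*-->.*?"
    r"<!--\s*END WAYBACK TOOLBAR INSERT\s*-->",
    re.S,
)

# Assumption: rewritten links carry a host + /web/<14-digit timestamp>/
# prefix (optionally followed by a short modifier such as "id_").
ARCHIVE_PREFIX = re.compile(r"https?://web\.archive\.org/web/\d{14}[a-z_]*/")

def scrub(html: str) -> str:
    """Remove the injected block and every reference to the archive,
    restoring the links to their original targets."""
    html = TOOLBAR.sub("", html)
    return ARCHIVE_PREFIX.sub("", html)
```

After this pass the output contains only the original site's own markup and URLs, which meets the "no mention of the archive" requirement.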

We will pay using GAF escrow.

Skills: ASP, Java, Linux, Perl, Script Install


About the employer:
( 7 reviews ) Far Rockaway, Israel

Project ID: #199559