Hi there,
To implement your use case, I propose the following setup:
1. Since you only need to crawl one URL, the quickest way is to fetch the page with urllib or requests.
2. Then extract the fields you need with BeautifulSoup.
3. Finally, save the results to a CSV or Excel file.
Since you already know the script can be finished in an hour or two, you clearly understand the scope. I am a proud Python developer who has been doing web development and scraping for 2 years now.
I can finish this job in anywhere from under an hour to 5 days; the time depends on the URL, because some sites are hard to crawl.
You can check my portfolio for more information. If you want to know more about me, please don't hesitate to ask.
Yours sincerely,