Hi Akshay,

It sounds like you need to execute the page's JavaScript, not just fetch
the HTML. There are several ways to do this. A simple approach is to open
the page with the Chrome Inspector's Network tab and copy the URLs that
the JavaScript requests. Another approach would be to drive a headless
browser like PhantomJS through Selenium, or to use Selenium with a regular
browser, as explained in this tutorial
<http://thiagomarzagao.com/2013/11/12/webscraping-with-selenium-part-1/>.
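If it helps, the Selenium route can be sketched roughly like this. This is
a hypothetical sketch, not a tested recipe: it assumes the `selenium`
package is installed along with a matching chromedriver on your PATH, and
the URL in the usage comment is a placeholder.

```python
def rendered_html(url):
    """Fetch a page with headless Chrome and return its HTML
    after the JavaScript has executed.

    Hypothetical sketch: assumes the selenium package and a
    matching chromedriver are installed and on PATH.
    """
    # Imports live inside the function so the sketch still parses
    # on machines where selenium is not installed.
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    options = Options()
    options.add_argument("--headless")  # no visible browser window
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        return driver.page_source  # DOM serialized after JS has run
    finally:
        driver.quit()

# Usage (placeholder URL):
# html = rendered_html("http://example.com/")
```

Once you have the rendered HTML, your existing tinyCss2-based parsing can
run on it unchanged.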

That said, all of this might still fail if the JavaScript content is truly
dynamic, e.g. if the requests depend on a server-side token. So, good luck! :)
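For the static half of the problem, by the way, the standard library alone
can collect the asset URLs that appear in the HTML source. The sketch below
(illustrative only, with a made-up page and hypothetical class name) also
shows why requests issued from inside JS code never show up in this kind of
parsing:

```python
from html.parser import HTMLParser

class AssetURLParser(HTMLParser):
    """Collect URLs a page requests via static tags.

    Only sees what is written in the HTML itself; requests issued
    from JavaScript code will not appear here.
    """
    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and "src" in attrs:
            self.urls.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            self.urls.append(attrs.get("href"))
        elif tag == "img" and "src" in attrs:
            self.urls.append(attrs["src"])

# Made-up page for illustration.
page = """
<html><head>
  <link rel="stylesheet" href="/static/style.css">
  <script src="/static/app.js"></script>
</head><body>
  <img src="/images/logo.png">
  <script>new Image().src = '/tracker.gif';  // invisible to static parsing</script>
</body></html>
"""

parser = AssetURLParser()
parser.feed(page)
print(parser.urls)
# -> ['/static/style.css', '/static/app.js', '/images/logo.png']
# Note /tracker.gif is missing: it is only requested when the JS runs.
```

That missing /tracker.gif is exactly the class of request you will only
see by executing the JavaScript (Selenium) or by reading the Network tab.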

Cheers,
Arun


On Tue, Jul 1, 2014 at 3:49 PM, Akshay Verma <verak...@gmail.com> wrote:

> Hi,
>
> I am currently using Python 3. I have made a basic program which fetches
> HTML response from a URL and then parses it to get JS, CSS (using tinyCss2)
> and Image GET Requests URLs that the page should make to display properly.
>
> Problem here is that I am not able to get/locate all the requests as some
> requests are done by executing JS code. Could anyone suggest how to go
> ahead or a better approach?
>
> Best Regards,
> Akshay Verma.
> _______________________________________________
> BangPypers mailing list
> BangPypers@python.org
> https://mail.python.org/mailman/listinfo/bangpypers
>