Re: Visit All URLs with selenium python
In <43f70312-83ba-457e-a83f-7b46e5d2a...@googlegroups.com> Nicole writes:

> it just visit first url not all .. Can anybody help how to fix that..

Have you tried some basic debugging, for example printing p_links to verify
that it contains what you expected, and then printing each url in the loop?

--
John Gordon                   A is for Amy, who fell down the stairs
gor...@panix.com              B is for Basil, assaulted by bears
                                  -- Edward Gorey, "The Gashlycrumb Tinies"

--
https://mail.python.org/mailman/listinfo/python-list
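[John's print-debugging suggestion can be tried without a live browser. The sketch below is illustrative only: FakeElement is an invented stub standing in for a Selenium WebElement, so the filter-and-collect logic from the thread's script can be run and inspected on its own.]

```python
# Stub standing in for a Selenium WebElement (invented for illustration).
class FakeElement:
    def __init__(self, text, href):
        self.text = text
        self._href = href

    def get_attribute(self, name):
        # Only the "href" attribute is modelled here.
        return self._href if name == "href" else None

# Pretend these came from browser.find_elements_by_css_selector('div > h3 > a')
p_links = [
    FakeElement("Rashmi Custom Tailors - Home", "http://example.com/a"),
    FakeElement("Unrelated result", "http://example.com/b"),
    FakeElement("About Rashmi Custom Tailors", "http://example.com/c"),
]

# Step 1 of the suggested debugging: verify the selector matched anything.
print("selector matched", len(p_links), "elements")

url_list = []
for element in p_links:
    if "Rashmi Custom Tailors" in element.text:
        url_list.append(element.get_attribute("href"))

# Step 2: print each url before the browser.get() call would run.
for url in url_list:
    print("about to visit:", url)
```

If `url_list` prints only one entry, the problem is in the filtering or the selector, not in the visiting loop.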
Re: Visit All URLs with selenium python
I have tried sleep times from 0 to 10 seconds but it is not working. Please
help me, otherwise my assignment will get a mark of 0.
Re: Visit All URLs with selenium python
browser.get('https://www.google.co.uk/search?q=Rashmi=Rashmi=chrome..69i57j69i60l3.6857j0j1=chrome=UTF-8#q=Rashmi+Custom+Tailors')
time.sleep(5)

try:
    p_links = browser.find_elements_by_css_selector('div > h3 > a')
    url_list = []
    for urls in p_links:
        if "Rashmi Custom Tailors" in urls.text:
            url = urls.get_attribute("href")
            url_list.append(url)
    for links in url_list:
        browser.get(links)
        time.sleep(4)
except:
    pass
Re: Visit All URLs with selenium python
Actually that's not it. You said there could be a problem in the HTML, which
is why I tested it on a new URL, but it still just visits the first URL, not
all of them. Please help.
RE: Visit All URLs with selenium python
Nicole wrote, on Wednesday, April 12, 2017 11:30 PM
>
> Here you can see now
>
> from selenium.webdriver.firefox.firefox_profile import FirefoxProfile
> import random
> from selenium import webdriver
> from selenium.webdriver.common.keys import Keys
>
> browser.get('https://www.google.co.uk/search?q=Rashmi=Rashmi=chrome..69i57j69i60l3.6857j0j1=chrome=UTF-8#q=Mobiles+in+london')
> time.sleep(5)
>
> try:
>     p_links = browser.find_elements_by_css_selector('div > h3 > a')
>     url_list = []
>     for urls in p_links:
>         if "London" in urls.text:
>             urls.get_attribute("href")
>             url_list.append(urls)
>     for links in url_list:
>         browser.get(links)
>         time.sleep(4)
> except:
>     browser.close()

Ok, I'm sure you changed the search terms; I don't know if you changed the
Google URL, but otherwise the code looks the same. I'm headed away from the
computer for the night, but try using a shorter sleep time, like
time.sleep(1).
RE: Visit All URLs with selenium python
Nicole wrote, on Wednesday, April 12, 2017 11:05 PM
>
> Hi Deborah,
> I checked again selecting css there found 11 URLS and I
> printed it is printing all urls but it visits the first url not all..

Hmm. Sounds like you've changed your code in some way: either changing the
web page you're pointing to, changing the css selector, or something I
can't guess, because in your last msg you said you were seeing just the
opposite.
RE: Visit All URLs with selenium python
Nicole wrote, on Wednesday, April 12, 2017 11:05 PM
>
> Hi Deborah,
> I checked again selecting css there found 11 URLS and I
> printed it is printing all urls but it visits the first url not all..

I'm just guessing again, but time.sleep(4) could be too long a time to
sleep, especially if you're on a fast network and you don't have many
browser windows open before you run your code. It might be opening the
first url, printing all the others, and exiting the for loop before
time.sleep(4) expires.
RE: Visit All URLs with selenium python
Nicole wrote, on Wednesday, April 12, 2017 11:03 PM
>
> from selenium.webdriver.firefox.firefox_profile import FirefoxProfile
> import random
> from selenium import webdriver
> from selenium.webdriver.common.keys import Keys

Ok, that gives us a clue what you're working with, which will probably help
with something. Since your code runs, I'm guessing your use of selenium is
probably ok. I'd be looking for structural issues in the HTML for reasons
why you're not getting what you want to get.
Re: Visit All URLs with selenium python
Here you can see now

from selenium.webdriver.firefox.firefox_profile import FirefoxProfile
import random
from selenium import webdriver
from selenium.webdriver.common.keys import Keys

browser.get('https://www.google.co.uk/search?q=Rashmi=Rashmi=chrome..69i57j69i60l3.6857j0j1=chrome=UTF-8#q=Mobiles+in+london')
time.sleep(5)

try:
    p_links = browser.find_elements_by_css_selector('div > h3 > a')
    url_list = []
    for urls in p_links:
        if "London" in urls.text:
            urls.get_attribute("href")
            url_list.append(urls)
    for links in url_list:
        browser.get(links)
        time.sleep(4)
except:
    browser.close()
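[Worth noting about the version above: it appends the WebElement objects themselves to url_list (`url_list.append(urls)`) rather than the href strings, so `browser.get(links)` is later handed an element instead of a URL, and the bare except then silently closes the browser. A minimal sketch of the string-collecting pattern, with invented stub classes (StubBrowser, StubElement) standing in for live Selenium objects:]

```python
# Stub standing in for a Selenium WebElement (invented for illustration).
class StubElement:
    def __init__(self, text, href):
        self.text = text
        self.href = href

    def get_attribute(self, name):
        return self.href if name == "href" else None

# Stub standing in for the Selenium browser: get() only accepts URL strings,
# just as the real driver effectively requires.
class StubBrowser:
    def __init__(self):
        self.visited = []

    def get(self, url):
        if not isinstance(url, str):
            raise TypeError("browser.get() needs a URL string, got %r" % url)
        self.visited.append(url)

browser = StubBrowser()
p_links = [StubElement("Mobiles in London", "http://example.com/%d" % i)
           for i in range(3)]

url_list = []
for element in p_links:
    if "London" in element.text:
        # Append the href *string*, not the element itself.
        url_list.append(element.get_attribute("href"))

for url in url_list:
    browser.get(url)

print(browser.visited)
```

Collecting plain strings first also sidesteps stale-element errors: once the real browser navigates away from the results page, WebElements collected from it are no longer valid.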
Re: Visit All URLs with selenium python
from selenium.webdriver.firefox.firefox_profile import FirefoxProfile
import random
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
Re: Visit All URLs with selenium python
Hi Deborah,
I checked again: selecting that css it found 11 URLs, and when I print them
it prints all the urls, but it still visits only the first url, not all of
them.
RE: Visit All URLs with selenium python
Nicole wrote, on Wednesday, April 12, 2017 9:49 PM
>
> browser.get('https://www.google.co.uk/search?q=Rashmi=Rashmi=chrome..69i57j69i60l3.6857j0j1=chrome=UTF-8#q=Rashmi+Custom+Tailors')
> time.sleep(5)
>
> try:
>     p_links = browser.find_elements_by_css_selector('div > h3 > a')
>     url_list = []
>     for urls in p_links:
>         if "Rashmi Custom Tailors" in urls.text:
>             url = urls.get_attribute("href")
>             url_list.append(url)
>     for url in url_list:
>         browser.get(url)
>         time.sleep(4)
>
> it just visit first url not all .. Can anybody help how to fix that..

You don't say what module you're using, and it would help to see the
"import ..." statement. But there are a couple of things I can think of
that could be causing the problem:

There's only one result with the exact phrase "Rashmi Custom Tailors" on
the page.

or

The css_selector('div > h3 > a') only occurs for the first result and the
selectors for subsequent results are different. I've seen that before. If
the div extends all the way down the list until after the last result, the
results after the first one might have css_selector('h3 > a'), but I'm just
guessing about how they might be different.

Deborah
Visit All URLs with selenium python
browser.get('https://www.google.co.uk/search?q=Rashmi=Rashmi=chrome..69i57j69i60l3.6857j0j1=chrome=UTF-8#q=Rashmi+Custom+Tailors')
time.sleep(5)

try:
    p_links = browser.find_elements_by_css_selector('div > h3 > a')
    url_list = []
    for urls in p_links:
        if "Rashmi Custom Tailors" in urls.text:
            url = urls.get_attribute("href")
            url_list.append(url)
    for url in url_list:
        browser.get(url)
        time.sleep(4)
except:
    pass

it just visit first url not all .. Can anybody help how to fix that..
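[A side note on the script's error handling: a bare `except` swallows whatever actually went wrong inside the try block, so the symptom looks like "stops after the first url" with no message at all. A small self-contained illustration, where `visit_all` and `flaky_get` are invented names for this example only:]

```python
def visit_all(urls, get):
    """Visit urls with the given get() callable, swallowing all errors
    the way the thread's bare 'except: pass' does."""
    visited = []
    try:
        for url in urls:
            get(url)
            visited.append(url)
    except:            # swallows everything, including the real error
        pass
    return visited

def flaky_get(url):
    # Simulates a failure partway through the loop, e.g. a stale element
    # or a bad argument to browser.get().
    if url.endswith("/1"):
        raise RuntimeError("stale element reference")

result = visit_all(["http://example.com/0",
                    "http://example.com/1",
                    "http://example.com/2"], flaky_get)
print(result)
```

Only the urls before the hidden error get visited. Removing the bare `except` (or at least logging the exception) would have shown the real cause immediately.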