My first landscape project has reached the point where only the trees and flowers are left, and I feel that calls for a large volume of reference images to study and get a feel from; text explanations alone are too limited and slow to show results. The method I came up with is to collect images on Pinterest and hand them to Midjourney as references to generate from. There are two routes: one passes the image URL directly as a reference, the other runs /describe to get keywords and then generates from those keywords, which allows more freedom. The script I have designed so far takes, for each image, the first group of keywords before the first --ar to generate with, and does not yet combine the image URL as a reference. I could also consider using the content between the first --ar and the second --ar as a second pass, so in theory one link, together with its URL, could yield three sets of images. When the URL is attached, I think combining it with keywords once is enough, since the image itself already steers the result quite strongly.
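To make that splitting idea concrete, here is a minimal sketch of how a scraped /describe result could be cut at the --ar markers. The sample text and function name are made up for illustration and are not part of the scripts below.

```python
import re

# A made-up example of a /describe result joined into one string (two of the four prompts shown).
scraped_text = (
    "1- a tranquil courtyard garden with layered planting and flowering shrubs --ar 4:3 "
    "2- a modern landscape path lined with ornamental grasses and small trees --ar 4:3"
)

def split_describe_prompts(text):
    """Return the keyword segments that precede each --ar flag."""
    # Split on every "--ar <ratio>"; what remains are the keyword groups.
    segments = re.split(r'--ar\s*\S+', text)
    # Strip the leading "1-", "2-" numbering and surrounding whitespace.
    return [re.sub(r'^\s*\d+-\s*', '', seg).strip() for seg in segments if seg.strip()]

groups = split_describe_prompts(scraped_text)
first_prompt = groups[0]                                  # what the current script sends to /imagine
second_prompt = groups[1] if len(groups) > 1 else None    # a possible second /imagine pass
print(first_prompt)
print(second_prompt)
```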
When writing these scripts, the thing to do is still to lean on ChatGPT and go the XPath route, combining specific attributes and text() to locate elements. DrissionPage's own simple locators effectively only cover class and id; don't bother studying compound multi-attribute locators yourself, just leave it to ChatGPT to produce an XPath. Class attributes are in fact quite unstable: elements frequently share the same classes, so the match often lands on the wrong thing.
Without XPath, my attempt to locate a child element through element.ele() failed, and going the other way to find a parent element I ended up trial-and-erroring my way up one level at a time; DrissionPage apparently can't traverse multiple levels on its own. With tab.ele() plus an XPath, none of this is a problem.
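A minimal sketch of what that looks like in practice, in the same tuple-locator style as the scripts below; the URL, the XPath and the parent level here are placeholders rather than real Pinterest selectors.

```python
from DrissionPage import ChromiumPage, ChromiumOptions
from DrissionPage.common import By

# Assumes the same debugging-port Chrome setup as the scripts below.
do1 = ChromiumOptions().set_paths(local_port=9111, user_data_path=r'C:/Users/A/AppData/Local/Google/Chrome/User Data')
tab = ChromiumPage(addr_or_opts=do1)
tab.get('https://www.pinterest.com/')

# An XPath combining an attribute and text(), the kind I ask ChatGPT to write
# (placeholder selector, not a real Pinterest one).
button = tab.ele((By.XPATH, "//div[@role='button' and text()='Save']"))

# Walking up a known number of levels beats guessing unstable class names.
card = button.parent(3)  # 3 is an illustrative level count
```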
Pinterest has no way to set a default board, so I opened a separate account just for landscape and architecture references and created no boards at all; saved pins can then only pile up under the default profile page. Once I've collected a batch, I simply run a bulk delete of all the pins under that URL. Pinterest likewise has no convenient select-all feature for deleting pins in bulk. This script cost me most of a day: partly I was stuck on XPath, and partly, after the first loop round, it kept reporting that the previously fetched elements were no longer on the page and had no size or position, at which point the program crashed out and seemed completely unsolvable. The fix I eventually landed on was to just re-run the routine repeatedly and not let thrown exceptions interrupt it, and that solved it in the end. The first few test runs still had problems, though, and the cause was the sleep durations: Pinterest responds sluggishly and pages load slowly, and since it doesn't cost that much time anyway, I just added more waiting to each deletion round.
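The "re-run it and swallow the exception" idea boils down to a wrapper like the sketch below. The function and parameter names are made up for illustration; the real implementation is the while True / try-except loop in the deletion script further down.

```python
import time

def keep_deleting(delete_one_round, max_rounds=50, wait_between=8):
    """Re-run a deletion round instead of letting a stale-element error crash the run.

    delete_one_round is assumed to be a callable that deletes whatever pins it can
    and raises once a previously fetched element has dropped off the page.
    """
    for round_no in range(1, max_rounds + 1):
        try:
            delete_one_round()
        except Exception as e:
            # Element gone or page re-rendered: log it and simply go around again.
            print(f"round {round_no}: {e}, retrying after a longer wait")
        # Pinterest re-renders slowly, so wait generously before the next pass.
        time.sleep(wait_between)
```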
Batch-copy Pinterest image URLs

```python
from DrissionPage import ChromiumPage, ChromiumOptions
from DrissionPage.common import By
from DrissionPage.common import Keys
import time
import os
import sys
import re

# Attach to a logged-in Chrome profile on the debugging port.
do1 = ChromiumOptions().set_paths(local_port=9111, user_data_path=r'C:/Users/A/AppData/Local/Google/Chrome/User Data')
p = ChromiumPage(addr_or_opts=do1)
tab = p.new_tab()
tab.get('https://www.pinterest.com/tomyu2717/_pins/')
time.sleep(6)

# The pin grid container and the individual pin cards on the profile page.
container_div = tab.ele('.:vbI')
img_divs = container_div.eles('.:XiG zI7 iyn Hsu')

with open('pinterest图片链接.txt', 'w') as file:
    for img in img_divs:
        img = img.ele('tag:img')
        src_link = img.attr('src')
        # Swap whatever size segment follows i.pinimg.com/ for "originals"
        # to point at the full-size image.
        modified_link = re.sub(r'(https://i\.pinimg\.com/)[^/]+', r'originals', src_link)
        modified_link = "https://i.pinimg.com/" + modified_link
        file.write(modified_link + '\n')

print("图片链接已成功提取并保存到 pinterest图片链接.txt。")
```
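For reference, the re.sub step rewrites the thumbnail size segment after i.pinimg.com/ to originals, so a grid src such as https://i.pinimg.com/236x/ab/cd/example.jpg (a made-up example) becomes https://i.pinimg.com/originals/ab/cd/example.jpg, which as far as I can tell is enough to fetch the full-size file.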
Batch-submit Pinterest image URLs to Midjourney /describe and generate images from the text

```python
from DrissionPage import ChromiumPage, ChromiumOptions
from DrissionPage.common import By
from DrissionPage.common import Keys
import time
import os
from bs4 import BeautifulSoup
import re

# The slash commands are typed in two chunks so Discord's command picker has time to appear.
input_text1 = "/des"
input_text2 = "cribe"
input_imagine1 = "/im"
input_imagine2 = "agine"

# Attach to a logged-in Chrome profile and open the Midjourney channel.
do1 = ChromiumOptions().set_paths(local_port=9111, user_data_path=r'C:/Users/A/AppData/Local/Google/Chrome/User Data')
p = ChromiumPage(addr_or_opts=do1)
tab = p.new_tab()
tab.get('https://discord-d-com-s-mj3.aiwentu.net/channels/1296492298355478559/1297027207134445630')
time.sleep(6)

with open('pinterest图片链接.txt', 'r') as file:
    lines = file.readlines()

max_retries = 25

for line in lines:
    imageurl = line.strip()
    if imageurl:
        retries = 0
        while retries < max_retries:
            try:
                time.sleep(2)
                # Discord message box (Slate editor).
                shurukuang = (By.XPATH, "//div[@contenteditable='true' and @data-slate-editor='true' and @data-slate-node='value']")
                input1 = tab.ele(shurukuang)
                input1.click()
                input1.clear()
                # Type /describe in two parts, wait for the command picker, then confirm.
                input1.input(input_text1)
                time.sleep(1)
                input1.input(input_text2)
                time.sleep(16)
                input1.input(Keys.ENTER)
                time.sleep(5)

                # Choose the "link" variant of /describe.
                menu1 = (By.XPATH, "//div[@class='base_bcc24e']//div[@class='text-md/normal_dc00ef autocompleteRowHeading_bcc24e' and text()='link']")
                linkmenu = tab.ele(menu1)
                linkmenu.click()
                time.sleep(5)

                # Paste the Pinterest image URL into the command's option pill and send.
                shurukuang2 = (By.XPATH, "//span[@class='optionPillValue_d4df8b']")
                input2 = tab.ele(shurukuang2)
                time.sleep(1)
                input2.click()
                time.sleep(1)
                input2.click()
                input2.input(imageurl)
                time.sleep(3)
                input2.input(Keys.ENTER)
                time.sleep(8)

                tab.refresh()
                time.sleep(8)

                try:
                    # Find the /describe reply that embeds this exact URL.
                    for original_link in tab.eles('@class=originalLink_d4597d'):
                        if original_link.attr('href') == imageurl:
                            description_div = original_link.parent(4)
                            embed_description_html = description_div.ele('.:embedDescription_b0068a').html
                            soup = BeautifulSoup(embed_description_html, 'html.parser')
                            description_parts = [span.get_text(strip=True) for span in soup.find_all('span') if span.get_text(strip=True)]
                            full_description = ' '.join(description_parts)
                            # Collapse whitespace around hyphens so flags like --ar read contiguously,
                            # then keep only the keywords before the first --ar.
                            full_description = re.sub(r'\s+-\s*', '-', full_description)
                            ar_index = full_description.find('--ar')
                            if ar_index != -1:
                                full_description = full_description[:ar_index].strip()
                            print(full_description)
                            time.sleep(5)

                            # Type /imagine in two parts and submit the extracted keywords.
                            shurukuang = (By.XPATH, "//div[@contenteditable='true' and @data-slate-editor='true' and @data-slate-node='value']")
                            input1 = tab.ele(shurukuang)
                            input1.click()
                            input1.clear()
                            input1.input(input_imagine1)
                            time.sleep(1)
                            input1.input(input_imagine2)
                            time.sleep(8)
                            input1.input(Keys.ENTER)

                            shurukuang2 = (By.XPATH, "//span[@class='optionPillValue_d4df8b']")
                            input2 = tab.ele(shurukuang2)
                            time.sleep(1)
                            input2.click()
                            time.sleep(1)
                            input2.click()
                            input2.input(full_description)
                            time.sleep(3)
                            input2.input(Keys.ENTER)
                            # Give Midjourney time to render before moving on.
                            time.sleep(250)
                            tab.refresh()
                            time.sleep(55)
                            break
                except Exception as e:
                    print(f"错误: {e}")
                break
            except Exception as e:
                retries += 1
                print(f"错误: {e}. 重试 ({retries}/{max_retries})...")
                time.sleep(5)

tab.driver.quit()
```
Batch-delete Pinterest pins

```python
from DrissionPage import ChromiumPage, ChromiumOptions
from DrissionPage.common import By
from DrissionPage.common import Keys
import time
import os
import sys

# Attach to a logged-in Chrome profile and open the profile's pin list.
do1 = ChromiumOptions().set_paths(local_port=9111, user_data_path=r'C:/Users/A/AppData/Local/Google/Chrome/User Data')
tab = ChromiumPage(addr_or_opts=do1)
tab.get('https://www.pinterest.com/tomyu2717/_pins/')
time.sleep(5)

while True:
    # Re-fetch the pin grid on every round; elements from the last round go stale.
    container_div = tab.ele('.:vbI')
    items = container_div.eles('.:Yl- MIw Hb7')
    if not items:
        print("没有找到要删除的元素,程序结束。")
        break
    for item in items:
        try:
            # Open the pin.
            tab.actions.move_to(item)
            time.sleep(1)
            item.click()
            time.sleep(12)

            # "More options" -> "Edit Pin".
            button1 = (By.XPATH, '//button[@aria-label="更多选项"]')
            more_button = tab.ele(button1)
            more_button.click()
            time.sleep(6)

            button2 = (By.XPATH, "//span[contains(@class, 'X8m') and text()='编辑 Pin 图']")
            edit_button = tab.ele(button2)
            edit_button.click()
            time.sleep(6)

            # "Delete" in the edit dialog, then confirm in the follow-up dialog.
            tanchuang1 = tab.ele('.:ZHw XiG XbT _O1 ho- rDA jar CCY')
            button3 = (By.XPATH, "//div[contains(@class, 'RCK') and .//div[text()='删除']]")
            confirm_button1 = tanchuang1.ele(button3)
            tab.actions.move_to(confirm_button1)
            confirm_button1.click()
            time.sleep(2)

            tanchuang2 = tab.ele('.:ZHw XiG XbT _O1 ho- rDA jar CCY')
            confirm_button2 = tanchuang2.ele('.:B1n tg7 tBJ dyH iFc sAJ H2s')
            tab.actions.move_to(confirm_button2)
            confirm_button2.click()
            time.sleep(6)

            # Go back to the profile's "Pin 图" tab and reload before the next pin.
            pinmenu = (By.XPATH, "//div[contains(@class, 'DUt') and contains(@class, 'XiG')]//div[contains(@class, 'X8m') and text()='Pin 图']")
            pinmenu_button = tab.ele(pinmenu)
            pinmenu_button.click()
            time.sleep(5)

            tab.refresh()
            time.sleep(5)
        except Exception as e:
            # Pinterest is slow and elements go stale; log and move on to the next pin.
            print(f"错误: {e}")
            continue
    print("一次批量删除已完成。")
    time.sleep(1)

print("批量删除已完成。")
```