Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package python-fanficfare for 
openSUSE:Factory checked in at 2021-04-06 17:30:16
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-fanficfare (Old)
 and      /work/SRC/openSUSE:Factory/.python-fanficfare.new.2401 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-fanficfare"

Tue Apr  6 17:30:16 2021 rev:31 rq:882800 version:4.1.0

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-fanficfare/python-fanficfare.changes      
2021-02-19 23:46:15.515421820 +0100
+++ 
/work/SRC/openSUSE:Factory/.python-fanficfare.new.2401/python-fanficfare.changes
    2021-04-06 17:31:41.999253951 +0200
@@ -1,0 +2,39 @@
+Sat Mar 27 12:16:53 UTC 2021 - Matej Cepl <[email protected]>
+
+- Update to 4.1.0:
+  - adapter_literotica - Update for recent site change and fix
+    first chapter - Thanks, davidfor!
+  - adapter_fictionlive - Fix off-by-one error in
+    most_recent_chunk / add_chapter_url interaction, closes #672
+    Thanks, HazelSh!
+  - Fixes for literotica sites changes. Issue #671
+  - Fix for include_dice_rolls when multiple fieldsets.
+  - Check for img 'failedtoload' *before* trying to fetch on
+    updates.
+  - Issue with fiction.live setting in defaults[fiction.live]
+    overriding personal[www.fiction.live]. Could use a more general
+    solution if I can think of one.
+  - minor changes to track fictionlive website updates - Thanks,
+    HazelSh #668
+  - Fix show_timestamps option in adapter_fictionlive
+  - Add include_dice_rolls option
+  - Include error for continue_on_chapter_error in log
+  - Put 'Change theme to Classic' back in
+    adapter_storiesonlinenet
+  - Remove some dup imports/code, thanks akshgpt7. Closes #663
+  - use_ssl_unverified_context:true ignored when
+    use_cloudscraper:true
+  - Fixes for ancient 'import *' getting broken by removing
+    unused imports in base_writer -- Fixes "name 're' is not
+    defined" with HTML output.
+
+-------------------------------------------------------------------
+Mon Feb 22 12:53:15 UTC 2021 - Matej Cepl <[email protected]>
+
+- Update to 4.0.2 (simple bugfix release):
+  - Fix for BG job race conditions.
+  - Fix writer_txt import removeAllEntities
+  - Update plugin about.html
+  - Fix reduce_zalgo not imported. 
+
+-------------------------------------------------------------------

Old:
----
  FanFicFare-4.0.0.tar.gz

New:
----
  FanFicFare-4.1.0.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-fanficfare.spec ++++++
--- /var/tmp/diff_new_pack.NUoZBs/_old  2021-04-06 17:31:42.763254815 +0200
+++ /var/tmp/diff_new_pack.NUoZBs/_new  2021-04-06 17:31:42.767254819 +0200
@@ -21,7 +21,7 @@
 %define skip_python2 1
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-fanficfare
-Version:        4.0.0
+Version:        4.1.0
 Release:        0
 Summary:        Tool for making eBooks from stories on fanfiction and other 
web sites
 License:        GPL-3.0-only

++++++ FanFicFare-4.0.0.tar.gz -> FanFicFare-4.1.0.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/calibre-plugin/__init__.py 
new/FanFicFare-4.1.0/calibre-plugin/__init__.py
--- old/FanFicFare-4.0.0/calibre-plugin/__init__.py     2021-02-19 
02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/calibre-plugin/__init__.py     2021-03-26 
18:13:57.000000000 +0100
@@ -33,7 +33,7 @@
 from calibre.customize import InterfaceActionBase
 
 # pulled out from FanFicFareBase for saving in prefs.py
-__version__ = (4, 0, 0)
+__version__ = (4, 1, 0)
 
 ## Apparently the name for this class doesn't matter--it was still
 ## 'demo' for the first few versions.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/calibre-plugin/about.html 
new/FanFicFare-4.1.0/calibre-plugin/about.html
--- old/FanFicFare-4.0.0/calibre-plugin/about.html      2021-02-19 
02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/calibre-plugin/about.html      2021-03-26 
18:13:57.000000000 +0100
@@ -1,6 +1,6 @@
 <hr />
 
-<p>Plugin created by Jim Miller, borrowing heavily from Grant Drake's
+<p>Plugin created by Jim Miller, originally borrowing heavily from Grant 
Drake's
  '<a href="http://www.mobileread.com/forums/showthread.php?t=134856">Reading 
List</a>',
  '<a href="http://www.mobileread.com/forums/showthread.php?t=126727">Extract 
ISBN</a>' and
  '<a href="http://www.mobileread.com/forums/showthread.php?t=134000">Count 
Pages</a>'
@@ -8,12 +8,12 @@
 
 <p>
   Calibre officially distributes plugins from the mobileread.com forum site.
-  The official distro channel for this plugin is there: <a 
href="http://www.mobileread.com/forums/showthread.php?t=259221">FanFicFare</a>
+  The official distro channel and discussion thread for this plugin is there: 
<a 
href="http://www.mobileread.com/forums/showthread.php?t=259221">FanFicFare</a>
 </p>
 
 <p> I also monitor the
    <a href="http://groups.google.com/group/fanfic-downloader">general users
-    group</a> for the downloader.  That covers the web application and CLI, 
too.
+    group</a> for the downloader CLI, too.
 </p>
 
 <p>
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/calibre-plugin/config.py 
new/FanFicFare-4.1.0/calibre-plugin/config.py
--- old/FanFicFare-4.0.0/calibre-plugin/config.py       2021-02-19 
02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/calibre-plugin/config.py       2021-03-26 
18:13:57.000000000 +0100
@@ -18,13 +18,13 @@
 try:
     from PyQt5 import QtWidgets as QtGui
     from PyQt5.Qt import (QWidget, QVBoxLayout, QHBoxLayout, QGridLayout, 
QLabel,
-                          QLineEdit, QWidget, QComboBox, QCheckBox, 
QPushButton, QTabWidget,
+                          QLineEdit, QComboBox, QCheckBox, QPushButton, 
QTabWidget,
                           QScrollArea, QGroupBox, QButtonGroup, QRadioButton,
                           Qt)
 except ImportError as e:
     from PyQt4 import QtGui
     from PyQt4.Qt import (QWidget, QVBoxLayout, QHBoxLayout, QGridLayout, 
QLabel,
-                          QLineEdit, QWidget, QComboBox, QCheckBox, 
QPushButton, QTabWidget,
+                          QLineEdit, QComboBox, QCheckBox, QPushButton, 
QTabWidget,
                           QScrollArea, QGroupBox, QButtonGroup, QRadioButton,
                           Qt)
 try:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/calibre-plugin/jobs.py 
new/FanFicFare-4.1.0/calibre-plugin/jobs.py
--- old/FanFicFare-4.0.0/calibre-plugin/jobs.py 2021-02-19 02:23:50.000000000 
+0100
+++ new/FanFicFare-4.1.0/calibre-plugin/jobs.py 2021-03-26 18:13:57.000000000 
+0100
@@ -67,7 +67,7 @@
     # logger.debug(sites_lists.keys())
 
     # Queue all the jobs
-    job_count = 0
+    jobs_running = 0
     for site in sites_lists.keys():
         site_list = sites_lists[site]
         logger.info(_("Launch background process for site %s:")%site + "\n" +
@@ -83,7 +83,7 @@
         job._site_list = site_list
         job._processed = False
         server.add_job(job)
-        job_count += len(site_list)
+        jobs_running += 1
 
     # This server is an arbitrary_n job, so there is a notifier available.
     # Set the % complete to a small number to avoid the 'unavailable' indicator
@@ -127,20 +127,22 @@
         ## only process each job once.  We can get more than one loop
         ## after job.is_finished.
         if not job._processed:
-            # sleep(10)
+            # sleep(1)
             # A job really finished. Get the information.
 
             ## This is where bg proc details end up in GUI log.
             ## job.details is the whole debug log for each proc.
             logger.info("\n\n" + ("="*80) + " " + job.details.replace('\r',''))
-            # logger.debug("Finished background process for site %s:\n%s"%
-            #              (job._site_list[0]['site'],"\n".join([ x['url'] for 
x in job._site_list ])))
+            # logger.debug("Finished background process for site 
%s:\n%s"%(job._site_list[0]['site'],"\n".join([ x['url'] for x in 
job._site_list ])))
             for b in job._site_list:
                 book_list.remove(b)
             book_list.extend(job.result)
             job._processed = True
+            jobs_running -= 1
 
-        if job_count == count:
+        ## Can't use individual count--I've seen stories all reported
+        ## finished before results of all jobs processed.
+        if jobs_running == 0:
             book_list = sorted(book_list,key=lambda x : x['listorder'])
             logger.info("\n"+_("Download Results:")+"\n%s\n"%("\n".join([ 
"%(status)s %(url)s %(comment)s" % book for book in book_list])))
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/calibre-plugin/plugin-defaults.ini 
new/FanFicFare-4.1.0/calibre-plugin/plugin-defaults.ini
--- old/FanFicFare-4.0.0/calibre-plugin/plugin-defaults.ini     2021-02-19 
02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/calibre-plugin/plugin-defaults.ini     2021-03-26 
18:13:57.000000000 +0100
@@ -177,6 +177,8 @@
 
 ## number of seconds to sleep between calls to the story site.  May be
 ## useful if pulling large numbers of stories or if the site is slow.
+## The actual sleep time used on each request is a random number
+## between 0.5 and 1.5 times slow_down_sleep_time.
 #slow_down_sleep_time:0.5
 
 ## How long to wait for each HTTP connection to finish.  Longer times
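The new comment above says the delay actually applied per request is randomized around the configured value. A minimal sketch of that behaviour; the formula comes from the comment itself, not from reading FanFicFare's fetcher code:

```python
import random

def effective_sleep(slow_down_sleep_time):
    # Per the plugin-defaults.ini comment: the sleep used on each
    # request is a random value between 0.5x and 1.5x the setting.
    return random.uniform(0.5, 1.5) * slow_down_sleep_time
```

So the default `slow_down_sleep_time:0.5` yields per-request delays anywhere from 0.25 to 0.75 seconds.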
@@ -626,8 +628,7 @@
 
 ##  We've been requested by the site(s) admin to rein in hits.  If you
 ## download fewer stories less often you can likely get by with
-## reducing this sleep.  There's also a hard-coded 2sec sleep in
-## addition to whatever slow_down_sleep_time is.
+## reducing this sleep.
 slow_down_sleep_time:6
 
 ## sites are sensitive to too many hits.  Users are sensitive to long
@@ -902,7 +903,7 @@
 
 [base_xenforo2forum]
 ## [base_xenforoforum] also applied, but [base_xenforo2forum] takes
-## precedence.  SV & SB only as of Oct 2019.
+## precedence.
 
 ## Some additional 'thread' metadata entries.
 
add_to_extra_valid_entries:,threadmarks_title,threadmarks_description,threadmarks_status
@@ -939,6 +940,19 @@
 ## this was implemented.
 skip_sticky_first_posts:true
 
+## SV/SB sites include a dice roller that can attach dice roll results
+## to a post.  These are outside the actual post text.  Setting
+## include_dice_rolls:true will include a text version of those rolls
+## in the FFF chapter that should be usable for all ebook readers.
+## Setting include_dice_rolls:svg will keep the inline <svg> images of
+## the rolls. It is the user's responsibility to also add
+## add_to_keep_html_attrs and add_to_output_css settings to make them
+## appear correctly.  (include_dice_rolls:svg did *not* work in most
+## ebook readers I tested with, even with appropriate attributes and
+## css.)
+## NOTE: SV requires login (always_login:true) to see dice rolls.
+#include_dice_rolls:false
+
 [epub]
 
 ## Each output format has a section that overrides [defaults]
@@ -1820,28 +1834,26 @@
 ## fiction.live spoilers display as a (blank) block until clicked, then they 
can become inline text.
 ## with true, adds them to an outlined block marked as a spoiler.
 ## with false, the text of the spoiler is unmarked and present in the work, as 
though already clicked
-legend_spoilers:true
+#legend_spoilers:true
 ## display 'spoiler' tags in the tag list, which can contain plot details
-show_spoiler_tags:false
+#show_spoiler_tags:false
 ## don't fetch covers marked as nsfw. covers for fiction.live can't be 
pornographic, but can get very close.
-show_nsfw_cover_images:false
+#show_nsfw_cover_images:false
 ## displays the timestamps on the story chunks, showing when each part went 
live.
-show_timestamps:false
+#show_timestamps:false
 
 ## site has more original than fan fiction
 extratags:
 
-extra_valid_entries:key_tags, tags, likes, live, reader_input
-extra_titlepage_entries:key_tags, tags, likes, live, reader_input
-extra_subject_tags:tags, key_tags
-key_tags_label:Key Tags
+extra_valid_entries:tags, likes, live, reader_input
+extra_titlepage_entries:tags, likes, live, reader_input
+extra_subject_tags:tags
 tags_label:Tags
 live_label:Next Live Session
 live_format:%%Y-%%m-%%d at %%H:%%M %%p
 likes_label:Likes
 reader_input_label:Reader Input
 keep_in_order_tags:true
-keep_in_order_key_tags:true
 
 add_to_output_css:
  table.voteblock { border-collapse: collapse; }
@@ -2933,7 +2945,7 @@
 ## reducing this sleep.
 slow_down_sleep_time:12
 
-## ffnet is sensitive to too many hits.  Users are sensitive to long
+## sites are sensitive to too many hits.  Users are sensitive to long
 ## waits during the initial metadata collection in the foreground.
 ## When used, these settings will speed up metadata downloads in the
 ## foreground linearly.
@@ -3042,15 +3054,15 @@
 ## for examples of how to use them.
 extra_valid_entries:reviews,favs,follows
 
-## fanfiction.net shows the user's
+## fictionpress.com shows the user's
 cover_exclusion_regexp:(/imageu/|d_60_90\.jpg)
 
-## fanfiction.net is blocking people more aggressively.  If you
+## fictionpress.com is blocking people more aggressively.  If you
 ## download fewer stories less often you can likely get by with
 ## reducing this sleep.
 slow_down_sleep_time:8
 
-## ffnet is sensitive to too many hits.  Users are sensitive to long
+## sites are sensitive to too many hits.  Users are sensitive to long
 ## waits during the initial metadata collection in the foreground.
 ## When used, these settings will speed up metadata downloads in the
 ## foreground linearly.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/calibre-plugin/translations/et.po 
new/FanFicFare-4.1.0/calibre-plugin/translations/et.po
--- old/FanFicFare-4.0.0/calibre-plugin/translations/et.po      2021-02-19 
02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/calibre-plugin/translations/et.po      2021-03-26 
18:13:57.000000000 +0100
@@ -2,13 +2,13 @@
 # Copyright (C) YEAR ORGANIZATION
 # 
 # Translators:
-# Maidur, 2016-2020
+# Maidur, 2016-2021
 msgid ""
 msgstr ""
 "Project-Id-Version: calibre-plugins\n"
 "POT-Creation-Date: 2021-02-12 13:32-0600\n"
-"PO-Revision-Date: 2021-02-12 22:21+0000\n"
-"Last-Translator: Transifex Bot <>\n"
+"PO-Revision-Date: 2021-03-02 09:19+0000\n"
+"Last-Translator: Maidur\n"
 "Language-Team: Estonian 
(http://www.transifex.com/calibre/calibre-plugins/language/et/)\n"
 "MIME-Version: 1.0\n"
 "Content-Type: text/plain; charset=UTF-8\n"
@@ -297,19 +297,19 @@
 
 #: config.py:571
 msgid "Success"
-msgstr ""
+msgstr "Edukas"
 
 #: config.py:572
 msgid "Mark successfully downloaded or updated books."
-msgstr ""
+msgstr "M??rgista edukalt alla laaditud v??i uuendatud raamatud."
 
 #: config.py:577
 msgid "Failed"
-msgstr ""
+msgstr "Eba??nnestunud"
 
 #: config.py:578
 msgid "Mark failed downloaded or updated books."
-msgstr ""
+msgstr "M??rgista eba??nnestunult alla laaditud v??i uuendatud raamatud."
 
 #: config.py:583
 msgid "Chapter Error"
@@ -1647,7 +1647,7 @@
 
 #: dialogs.py:1629
 msgid "Show this confirmation again"
-msgstr ""
+msgstr "N??ita seda kinnitust uuesti"
 
 #: fff_plugin.py:152 fff_plugin.py:183
 msgid "FanFicFare"
@@ -1683,7 +1683,7 @@
 
 #: fff_plugin.py:316
 msgid "Anthology Options"
-msgstr ""
+msgstr "Antoloogia valikud"
 
 #: fff_plugin.py:317
 msgid "Make Anthology Epub from Web Page"
@@ -1747,15 +1747,15 @@
 
 #: fff_plugin.py:414
 msgid "Update Existing FanFiction Books"
-msgstr ""
+msgstr "Uuenda olemasolevaid f??nnikirjanduse raamatuid"
 
 #: fff_plugin.py:422
 msgid "Get FanFiction Story URLs from Email"
-msgstr ""
+msgstr "Hangi f??nnikirjanduse juttude URLid e-kirjast"
 
 #: fff_plugin.py:430
 msgid "Get FanFiction Story URLs from Web Page"
-msgstr ""
+msgstr "Hangi f??nnikirjanduse juttude URLid veebilehelt"
 
 #: fff_plugin.py:443
 msgid "Remove \"New\" Chapter Marks from Selected books"
@@ -1814,7 +1814,7 @@
 #: fff_plugin.py:618 fff_plugin.py:1904 fff_plugin.py:2521 fff_plugin.py:2533
 #: fff_plugin.py:2544 fff_plugin.py:2550 fff_plugin.py:2563
 msgid "Warning"
-msgstr ""
+msgstr "Hoiatus"
 
 #: fff_plugin.py:626
 msgid "(%d Story URLs Skipped, on Rejected URL List)"
@@ -2119,7 +2119,7 @@
 msgid ""
 "<b>%(title)s</b> by <b>%(author)s</b> is already in your library with a "
 "different source URL:"
-msgstr ""
+msgstr "<b>%(title)s</b>, autoriga <b>%(author)s</b> on erineva l??hte-URLiga 
sinu kogus juba olemas:"
 
 #: fff_plugin.py:1488
 msgid "In library: <a href=\"%(liburl)s\">%(liburl)s</a>"
@@ -2145,7 +2145,7 @@
 msgid ""
 "<b>%(title)s</b> by <b>%(author)s</b> is already in your library with a "
 "different source URL."
-msgstr ""
+msgstr "<b>%(title)s</b>, autoriga <b>%(author)s</b> on erineva l??hte-URLiga 
sinu kogus juba olemas."
 
 #: fff_plugin.py:1504
 msgid ""
@@ -2233,7 +2233,7 @@
 msgid ""
 "Some of the stories downloaded have chapters errors.  Click View Log in the "
 "next dialog to see which."
-msgstr ""
+msgstr "Osal allalaaditud juttudel on peat??kivigu. Et n??ha, millistel, 
kl??psa j??rgmises dialoogiaknas Vaata Logi."
 
 #: fff_plugin.py:1905
 msgid "<b>%s</b> good stories contain chapter errors."
@@ -2241,7 +2241,7 @@
 
 #: fff_plugin.py:1908
 msgid "FanFicFare: "
-msgstr ""
+msgstr "FanFicFare: "
 
 #: fff_plugin.py:1908
 msgid "No Good Stories for Anthology"
@@ -2359,7 +2359,7 @@
 
 #: fff_plugin.py:2870
 msgid "%(title)s by %(author)s"
-msgstr ""
+msgstr "%(title)s, autoriga %(author)s"
 
 #: fff_plugin.py:2934
 msgid " Anthology"
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/FanFicFare-4.0.0/fanficfare/adapters/adapter_fictionlive.py 
new/FanFicFare-4.1.0/fanficfare/adapters/adapter_fictionlive.py
--- old/FanFicFare-4.0.0/fanficfare/adapters/adapter_fictionlive.py     
2021-02-19 02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/fanficfare/adapters/adapter_fictionlive.py     
2021-03-26 18:13:57.000000000 +0100
@@ -38,7 +38,7 @@
 from .base_adapter import BaseSiteAdapter
 from ..htmlcleanup import stripHTML
 from .. import exceptions as exceptions
-
+from ..six import ensure_text
 
 def getClass():
     return FictionLiveAdapter
@@ -156,9 +156,7 @@
 
         show_spoiler_tags = self.getConfig('show_spoiler_tags')
         spoiler_tags = data['spoilerTags'] if 'spoilerTags' in data else []
-        for tag in tags[:5]:
-            self.story.addToList('key_tags', tag)
-        for tag in tags[5:]:
+        for tag in tags:
             if show_spoiler_tags or not tag in spoiler_tags:
                 self.story.addToList('tags', tag)
 
@@ -246,7 +244,7 @@
         titles = ["Home"] + titles
 
         times = [c['ct'] for c in maintext]
-        times = [data['ct']] + times + [self.most_recent_chunk + 1]
+        times = [data['ct']] + times + [self.most_recent_chunk + 2] # need to 
be 1 over, and add_url etc does -1
 
         # doesn't actually run without the call to list.
         list(map(add_chapter_url, titles, pair(times)))
@@ -297,7 +295,7 @@
             show_timestamps = self.getConfig('show_timestamps')
             if show_timestamps and 'ct' in chunk:
                 #logger.debug("Adding timestamp for chunk...")
-                timestamp = 
six.ensure_text(self.parse_timestamp(chunk['ct']).strftime("%x -- %X"))
+                timestamp = 
ensure_text(self.parse_timestamp(chunk['ct']).strftime("%x -- %X"))
                 text += '<div class="ut">' + timestamp + '</div>'
 
             text += "</div><br />\n"
@@ -313,7 +311,7 @@
 
         soup = self.make_soup(chunk['b'] if 'b' in chunk else "")
 
-        if self.getConfig('legend_spoilers'):
+        if self.getConfig('legend_spoilers',True):
             soup = self.add_spoiler_legends(soup)
 
         if self.achievements:
@@ -439,9 +437,11 @@
 
         num_voters = len(chunk['votes']) if 'votes' in chunk else 0
 
+        vote_title = chunk['b'] if 'b' in chunk else "Choices"
+
         output = ""
         # start with the header
-        output += u"<h4><span>Choices ??? <small>Voting " + closed
+        output += u"<h4><span>" + vote_title + " ??? <small>Voting " + closed
         output += u" ??? " + str(num_voters) + " voters</small></span></h4>\n"
 
         # we've got everything needed to build the html for our vote table.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/FanFicFare-4.0.0/fanficfare/adapters/adapter_literotica.py 
new/FanFicFare-4.1.0/fanficfare/adapters/adapter_literotica.py
--- old/FanFicFare-4.0.0/fanficfare/adapters/adapter_literotica.py      
2021-02-19 02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/fanficfare/adapters/adapter_literotica.py      
2021-03-26 18:13:57.000000000 +0100
@@ -108,7 +108,7 @@
 
     def getCategories(self, soup):
         if self.getConfig("use_meta_keywords"):
-            categories = soup.find("meta", 
{"name":"keywords"})['content'].split(', ')
+            categories = soup.find("meta", 
{"name":"keywords"})['content'].split(',')
             categories = [c for c in categories if not 
self.story.getMetadata('title') in c]
             if self.story.getMetadata('author') in categories:
                 categories.remove(self.story.getMetadata('author'))
@@ -154,13 +154,15 @@
             raise exceptions.StoryDoesNotExist("This submission is awaiting 
moderator's approval. %s"%self.url)
 
         # author
-        a = soup1.find("span", "b-story-user-y")
-        self.story.setMetadata('authorId', 
urlparse.parse_qs(a.a['href'].split('?')[1])['uid'][0])
-        authorurl = a.a['href']
+        authora = soup1.find("a", class_="y_eU")
+        authorurl = authora['href']
+        # logger.debug(authora)
+        # logger.debug(authorurl)
+        self.story.setMetadata('authorId', 
urlparse.parse_qs(authorurl.split('?')[1])['uid'][0])
         if authorurl.startswith('//'):
             authorurl = self.parsedUrl.scheme+':'+authorurl
         self.story.setMetadata('authorUrl', authorurl)
-        self.story.setMetadata('author', a.text)
+        self.story.setMetadata('author', authora.text)
 
         # get the author page
         dataAuth = self.get_request(authorurl)
@@ -226,26 +228,30 @@
             descriptions = []
             ratings = []
             chapters = []
+            chapter_name_type = None
             while chapterTr is not None and 'sl' in chapterTr['class']:
                 description = "%d. %s" % 
(len(descriptions)+1,stripHTML(chapterTr.findAll("td")[1]))
                 description = stripHTML(chapterTr.findAll("td")[1])
                 chapterLink = chapterTr.find("td", "fc").find("a")
                 if self.getConfig('chapter_categories_use_all'):
                     self.story.addToList('category', 
chapterTr.findAll("td")[2].text)
-                self.story.addToList('eroticatags', 
chapterTr.findAll("td")[2].text)
+                # self.story.addToList('eroticatags', 
chapterTr.findAll("td")[2].text)
                 pub_date = makeDate(chapterTr.findAll('td')[-1].text, 
self.dateformat)
                 dates.append(pub_date)
                 chapterTr = chapterTr.nextSibling
 
                 chapter_title = chapterLink.text
                 if self.getConfig("clean_chapter_titles"):
-                    # logger.debug('\tChapter Name: "%s"' % chapterLink.string)
                     # logger.debug('\tChapter Name: "%s"' % chapterLink.text)
                     if 
chapterLink.text.lower().startswith(seriesTitle.lower()):
                         chapter = chapterLink.text[len(seriesTitle):].strip()
                         # logger.debug('\tChapter: "%s"' % chapter)
                         if chapter == '':
                             chapter_title = 'Chapter %d' % 
(self.num_chapters() + 1)
+                            # Sometimes the first chapter does not have type 
of chapter 
+                            if self.num_chapters() == 0:
+                                logger.debug('\tChapter: first chapter without 
chapter type')
+                                chapter_name_type = None
                         else:
                             separater_char = chapter[0]
                             # logger.debug('\tseparater_char: "%s"' % 
separater_char)
@@ -257,14 +263,19 @@
                                     chapter_title = 'Chapter %d' % int(chapter)
                                 except:
                                     chapter_title = 'Chapter %s' % chapter
+                                chapter_name_type = 'Chapter' if 
chapter_name_type is None else chapter_name_type
+                                logger.debug('\tChapter: 
chapter_name_type="%s"' % chapter_name_type)
                             elif chapter.lower().startswith('pt.'):
                                 chapter = chapter[len('pt.'):]
                                 try:
                                     chapter_title = 'Part %d' % int(chapter)
                                 except:
                                     chapter_title = 'Part %s' % chapter
+                                chapter_name_type = 'Part' if 
chapter_name_type is None else chapter_name_type
+                                logger.debug('\tChapter: 
chapter_name_type="%s"' % chapter_name_type)
                             elif separater_char in [":", "-"]:
                                 chapter_title = chapter
+                                logger.debug('\tChapter: taking chapter text 
as whole')
 
                 # pages include full URLs.
                 chapurl = chapterLink['href']
@@ -283,6 +294,18 @@
                 except:
                     pass
 
+            if self.getConfig("clean_chapter_titles") \
+                and chapter_name_type is not None \
+                and not chapters[0][0].startswith(chapter_name_type):
+                logger.debug('\tChapter: chapter_name_type="%s"' % 
chapter_name_type)
+                logger.debug('\tChapter: first chapter="%s"' % chapters[0][0])
+                logger.debug('\tChapter: first chapter number="%s"' % 
chapters[0][0][len('Chapter'):])
+                chapters[0] = ("%s %s" % (chapter_name_type, 
chapters[0][0][len('Chapter'):].strip()),
+                               chapters[0][1],
+                               chapters[0][2],
+                               chapters[0][3]
+                               )
+
             chapters = sorted(chapters, key=lambda chapter: chapter[3])
             for i, chapter in enumerate(chapters):
                 self.add_chapter(chapter[0], chapter[1])
@@ -307,7 +330,7 @@
 
 
         # Add the category from the breadcumb. This might duplicate a category 
already added.
-        self.story.addToList('category', soup1.find('div', 
'b-breadcrumbs').findAll('a')[1].string)
+        self.story.addToList('category', soup1.find('div', 
id='BreadCrumbComponent').findAll('a')[1].string)
         self.getCategories(soup1)
 
         return
@@ -320,7 +343,7 @@
 #         logger.debug("\tChapter text: %s" % raw_page)
         page_soup = self.make_soup(raw_page)
         [comment.extract() for comment in page_soup.findAll(text=lambda 
text:isinstance(text, Comment))]
-        story2 = page_soup.find('div', 'b-story-body-x').div
+        story2 = page_soup.find('div', 'aa_ht').div
 #         logger.debug('getPageText - story2: %s' % story2)
 
         fullhtml = unicode(story2)
@@ -338,8 +361,7 @@
 
         raw_page = self.get_request(url)
         page_soup = self.make_soup(raw_page)
-        pages = page_soup.find('select', {'name' : 'page'})
-        page_nums = [page.text for page in pages.findAll('option')] if pages 
else 0
+        pages = page_soup.find('div',class_='l_bH')
 
         fullhtml = ""
         self.getCategories(page_soup)
@@ -350,7 +372,13 @@
             chapter_description = '<p><b>Description:</b> %s</p><hr />' % 
chapter_description
         fullhtml += self.getPageText(raw_page, url)
         if pages:
-            for page_no in range(2, len(page_nums) + 1):
+            ## look for highest numbered page, they're not all listed
+            ## when there are many.
+
+            last_page_link = pages.find_all('a', class_='l_bJ')[-1]
+            last_page_no = 
int(urlparse.parse_qs(last_page_link['href'].split('?')[1])['page'][0])
+            # logger.debug(last_page_no)
+            for page_no in range(2, last_page_no+1):
                 page_url = url +  "?page=%s" % page_no
                 # logger.debug("page_url= %s" % page_url)
                 raw_page = self.get_request(page_url)
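The pagination fix above stops counting `<option>` elements and instead reads the highest page number out of the last pagination link, since the site no longer lists every page. A stdlib-only sketch of that parse (the adapter itself splits on `'?'` and calls `urlparse.parse_qs`; the hrefs below are invented):

```python
from urllib.parse import parse_qs, urlsplit

def last_page_number(pagination_hrefs):
    # The last pagination link (class 'l_bJ' in the adapter) carries the
    # highest page number, even when intermediate pages are elided.
    last_href = pagination_hrefs[-1]
    return int(parse_qs(urlsplit(last_href).query)['page'][0])

pages = ['/s/example-story?page=2',
         '/s/example-story?page=3',
         '/s/example-story?page=17']
# the adapter would then fetch pages 2 .. last_page_number(pages) inclusive
```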
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/FanFicFare-4.0.0/fanficfare/adapters/adapter_storiesonlinenet.py 
new/FanFicFare-4.1.0/fanficfare/adapters/adapter_storiesonlinenet.py
--- old/FanFicFare-4.0.0/fanficfare/adapters/adapter_storiesonlinenet.py        
2021-02-19 02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/fanficfare/adapters/adapter_storiesonlinenet.py        
2021-03-26 18:13:57.000000000 +0100
@@ -284,7 +284,11 @@
         story_found = False
         while not story_found:
             page = page + 1
-            data = self.get_request(self.story.getList('authorUrl')[0] + "/" + 
unicode(page))
+            try:
+                data = self.get_request(self.story.getList('authorUrl')[0] + 
"/" + unicode(page))
+            except exceptions.HTTPErrorFFF as e:
+                if e.status_code == 404:
+                    raise exceptions.FailedToDownload("Story not found in 
Author's list--Set Access Level to Full Access and change Listings Theme back 
to "+self.getTheme())
             asoup = self.make_soup(data)
 
             story_row = asoup.find(row_class, {'id' : 'sr' + 
self.story.getMetadata('storyId')})
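The storiesonline hunk above wraps the author-list fetch so a 404 becomes an actionable message instead of a raw HTTP error. A reduced sketch; `HTTPErrorFFF` and `FailedToDownload` are stand-ins for FanFicFare's exception classes, and this sketch also re-raises non-404 errors:

```python
class HTTPErrorFFF(Exception):          # stand-in for FanFicFare's class
    def __init__(self, status_code):
        self.status_code = status_code

class FailedToDownload(Exception):      # stand-in
    pass

def fetch_author_list_page(get_request, url):
    # Translate a 404 on the author's listing into a domain error with
    # a hint about the site settings, as the patch does.
    try:
        return get_request(url)
    except HTTPErrorFFF as e:
        if e.status_code == 404:
            raise FailedToDownload(
                "Story not found in Author's list--Set Access Level to "
                "Full Access and change Listings Theme back")
        raise
```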
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/fanficfare/adapters/base_adapter.py 
new/FanFicFare-4.1.0/fanficfare/adapters/base_adapter.py
--- old/FanFicFare-4.0.0/fanficfare/adapters/base_adapter.py    2021-02-19 
02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/fanficfare/adapters/base_adapter.py    2021-03-26 
18:13:57.000000000 +0100
@@ -33,7 +33,6 @@
 from bs4 import BeautifulSoup, Tag
 
 
-from ..htmlcleanup import stripHTML
 from ..htmlheuristics import replace_br_with_p
 
 logger = logging.getLogger(__name__)
@@ -255,6 +254,8 @@
 <p>Error:<br><pre>%s</pre></p>
 
</div>"""%(url,traceback.format_exc().replace("&","&amp;").replace(">","&gt;").replace("<","&lt;")))
                             title = 
title+self.getConfig("chapter_title_error_mark","(CHAPTER ERROR)")
+                            logger.info("continue_on_chapter_error: (%s) 
%s"%(url,e))
+                            logger.debug(traceback.format_exc())
                             url="chapter url removed due to failure"
                             self.story.chapter_error_count += 1
                         else:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/FanFicFare-4.0.0/fanficfare/adapters/base_xenforo2forum_adapter.py 
new/FanFicFare-4.1.0/fanficfare/adapters/base_xenforo2forum_adapter.py
--- old/FanFicFare-4.0.0/fanficfare/adapters/base_xenforo2forum_adapter.py      
2021-02-19 02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/fanficfare/adapters/base_xenforo2forum_adapter.py      
2021-03-26 18:13:57.000000000 +0100
@@ -177,7 +177,21 @@
         return self.get_post_body(self.get_first_post(topsoup))
 
     def get_post_body(self,souptag):
-        return 
souptag.find('article',{'class':'message-body'}).find('div',{'class':'bbWrapper'})
+        body = 
souptag.find('article',{'class':'message-body'}).find('div',{'class':'bbWrapper'})
+        if self.getConfig('include_dice_rolls',False):
+            # logger.debug("body:%s"%body)
+            for fieldset in 
body.find_next_siblings('fieldset',class_='dice_container'):
+                logger.debug("fieldset:%s"%fieldset)
+                # body.append(fieldset.extract())
+                ## If include_dice_rolls:svg, keep the <svg>
+                ## up to the user to include
+                ## 
add_to_keep_html_attrs:,style,xmlns,height,width,d,x,y,transform,text-anchor,cx,cy,r
+                if self.getConfig('include_dice_rolls') != 'svg':
+                    for d in fieldset.find_all('svg'):
+                        result = d.select_one('title').extract()
+                        result.name='span'
+                        d.replace_with(result)
+        return body
 
     def get_post_created_date(self,souptag):
         return self.make_date(souptag.find('div', {'class':'message-date'}))
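The `include_dice_rolls` hunk above swaps each dice-roll `<svg>` for its accessible `<title>` text, retagged as a `<span>`. A standalone sketch of that BeautifulSoup idiom, using hypothetical markup (the real XenForo fieldset is more elaborate):

```python
from bs4 import BeautifulSoup

# Hypothetical markup mimicking a XenForo dice-roll fieldset: each <svg>
# die image carries a <title> holding the text result of the roll.
html = """
<fieldset class="dice_container">
  <svg><title>d20: 17</title><circle r="5"/></svg>
</fieldset>
"""
soup = BeautifulSoup(html, "html.parser")
for svg in soup.find_all("svg"):
    # Pull the accessible <title> out of the graphic...
    title = svg.select_one("title").extract()
    # ...retag it as a plain <span> so any ebook reader can show it...
    title.name = "span"
    # ...and swap it in for the <svg> most readers can't render.
    svg.replace_with(title)

print(soup.find("span").get_text())  # d20: 17
```

This is why plain `include_dice_rolls:true` works everywhere, while the `svg` variant needs extra `keep_html_attrs`/CSS help from the user.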
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/FanFicFare-4.0.0/fanficfare/browsercache/basebrowsercache.py 
new/FanFicFare-4.1.0/fanficfare/browsercache/basebrowsercache.py
--- old/FanFicFare-4.0.0/fanficfare/browsercache/basebrowsercache.py    
2021-02-19 02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/fanficfare/browsercache/basebrowsercache.py    
2021-03-26 18:13:57.000000000 +0100
@@ -59,8 +59,6 @@
 class BrowserCacheException(Exception):
     pass
 
-from ..six import ensure_binary, ensure_text
-
 ## difference in seconds between Jan 1 1601 and Jan 1 1970.  Chrome
 ## caches (so far) have kept time stamps as microseconds since
 ## 1-1-1601 a Windows/Cobol thing.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/fanficfare/cli.py 
new/FanFicFare-4.1.0/fanficfare/cli.py
--- old/FanFicFare-4.0.0/fanficfare/cli.py      2021-02-19 02:23:50.000000000 
+0100
+++ new/FanFicFare-4.1.0/fanficfare/cli.py      2021-03-26 18:13:57.000000000 
+0100
@@ -27,7 +27,7 @@
 import string
 import os, sys
 
-version="4.0.0"
+version="4.1.0"
 os.environ['CURRENT_VERSION_ID']=version
 
 global_cache = 'global_cache'
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/fanficfare/configurable.py 
new/FanFicFare-4.1.0/fanficfare/configurable.py
--- old/FanFicFare-4.0.0/fanficfare/configurable.py     2021-02-19 
02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/fanficfare/configurable.py     2021-03-26 
18:13:57.000000000 +0100
@@ -67,7 +67,6 @@
 try:
     from . import adapters
 except ImportError:
-    import sys
     if "fanficfare.adapters" in sys.modules:
         adapters = sys.modules["fanficfare.adapters"]
     elif "calibre_plugins.fanficfare_plugin.fanficfare.adapters" in 
sys.modules:
@@ -155,11 +154,11 @@
 boollist=['true','false']
 base_xenforo2_list=['base_xenforo2forum',
                    'forums.sufficientvelocity.com',
+                   'forums.spacebattles.com',
+                   'www.alternatehistory.com',
                    ]
 base_xenforo_list=base_xenforo2_list+['base_xenforoforum',
-                   'forums.spacebattles.com',
                    'forum.questionablequesting.com',
-                   'www.alternatehistory.com',
                    ]
 def get_valid_set_options():
     '''
@@ -283,12 +282,12 @@
                'use_threadmarks_status':(base_xenforo2_list,None,boollist),
                'use_threadmarks_cover':(base_xenforo2_list,None,boollist),
                'skip_sticky_first_posts':(base_xenforo2_list,None,boollist),
+               'include_dice_rolls':(base_xenforo2_list,None,boollist+['svg']),
                'fix_pseudo_html': (['webnovel.com'], None, boollist),
                'fix_excess_space': (['novelonlinefull.com', 'novelall.com'], 
['epub', 'html'], boollist),
                'dedup_order_chapter_list': (['wuxiaworld.co', 
'novelupdates.cc'], None, boollist),
                'show_nsfw_cover_images': (['fiction.live'], None, boollist),
                'show_timestamps': (['fiction.live'], None, boollist),
-               'show_nsfw_cover_images': (['fiction.live'], None, boollist)
                }
 
     return dict(valdict)
@@ -516,6 +515,7 @@
                  'use_threadmarks_status',
                  'use_threadmarks_cover',
                  'skip_sticky_first_posts',
+                 'include_dice_rolls',
                  'datethreadmark_format',
                  'fix_pseudo_html',
                  'fix_excess_space',
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/fanficfare/defaults.ini 
new/FanFicFare-4.1.0/fanficfare/defaults.ini
--- old/FanFicFare-4.0.0/fanficfare/defaults.ini        2021-02-19 
02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/fanficfare/defaults.ini        2021-03-26 
18:13:57.000000000 +0100
@@ -207,6 +207,8 @@
 
 ## number of seconds to sleep between calls to the story site.  May be
 ## useful if pulling large numbers of stories or if the site is slow.
+## The actual sleep time used on each request is a random number
+## between 0.5 and 1.5 times slow_down_sleep_time.
 #slow_down_sleep_time:0.5
 
 ## How long to wait for each HTTP connection to finish.  Longer times
@@ -224,7 +226,7 @@
 ## each metadata item passed to post_process_cmd before it's called.
 #post_process_safepattern:(^\.|/\.|[^a-zA-Z0-9_\. \[\]\(\)&'-]+)
 
-## For use only with CLI version--run a command 3*before* the output
+## For use only with CLI version--run a command *before* the output
 ## file is written.  All of the titlepage_entries values are
 ## available, (but not output_filename).  Can be used to generate
 ## cover images that are then included in the output ebook using
@@ -653,8 +655,7 @@
 
 ##  We've been requested by the site(s) admin to rein in hits.  If you
 ## download fewer stories less often you can likely get by with
-## reducing this sleep.  There's also a hard-coded 2sec sleep in
-## addition to whatever slow_down_sleep_time is.
+## reducing this sleep.
 slow_down_sleep_time:6
 
 ## exclude emoji and default avatars.
@@ -920,7 +921,7 @@
 
 [base_xenforo2forum]
 ## [base_xenforoforum] also applied, but [base_xenforo2forum] takes
-## precedence.  SV & SB only as of Oct 2019.
+## precedence.
 
 ## Some additional 'thread' metadata entries.
 
add_to_extra_valid_entries:,threadmarks_title,threadmarks_description,threadmarks_status
@@ -957,6 +958,19 @@
 ## this was implemented.
 skip_sticky_first_posts:true
 
+## SV/SB sites include a dice roller that can attach dice roll results
+## to a post.  These are outside the actual post text.  Setting
+## include_dice_rolls:true will include a text version of those rolls
+## in the FFF chapter that should be usable for all ebook readers.
+## Setting include_dice_rolls:svg will keep the inline <svg> images of
+## the rolls. It is the user's responsibility to also add
+## add_to_keep_html_attrs and add_to_output_css settings to make them
+## appear correctly.  (include_dice_rolls:svg did *not* work in most
+## ebook readers I tested with, even with appropriate attributes and
+## css.)
+## NOTE: SV requires login (always_login:true) to see dice rolls.
+#include_dice_rolls:false
+
 [epub]
 
 ## Each output format has a section that overrides [defaults]
@@ -1052,6 +1066,11 @@
 ## include_images is *not* available in the web service in any format.
 #include_images:false
 
+## If set, the first image found will be made the cover image.  If
+## keep_summary_html is true, any images in summary will be before any
+## in chapters.
+#make_firstimage_cover: false
+
 ## When include_images:true, you can also specify a list of images to
 ## include in the EPUB or HTML, such as for use in customized CSS.
 ## Specified images will be included as "images/<basename>".  So
@@ -1065,11 +1084,6 @@
 ## local copy.
 
#additional_images:file:///C:/Users/user/Desktop/nook/background.jpeg,http://www.somesite.com/someimage.gif
 
-## If set, the first image found will be made the cover image.  If
-## keep_summary_html is true, any images in summary will be before any
-## in chapters.
-#make_firstimage_cover: false
-
 ## If set, the epub will never have a cover, even if include_images is
 ## on and the site has specific cover images.
 #never_make_cover: false
@@ -1842,28 +1856,26 @@
 ## fiction.live spoilers display as a (blank) block until clicked, then they 
can become inline text.
 ## with true, adds them to an outlined block marked as a spoiler.
 ## with false, the text of the spoiler is unmarked and present in the work, as 
though alreday clicked
-legend_spoilers:true
+#legend_spoilers:true
 ## display 'spoiler' tags in the tag list, which can contain plot details
-show_spoiler_tags:false
+#show_spoiler_tags:false
 ## don't fetch covers marked as nsfw. covers for fiction.live can't be 
pornographic, but can get very close.
-show_nsfw_cover_images:false
+#show_nsfw_cover_images:false
 ## displays the timestamps on the story chunks, showing when each part went 
live.
-show_timestamps:false
+#show_timestamps:false
 
 ## site has more original than fan fiction
 extratags:
 
-extra_valid_entries:key_tags, tags, likes, live, reader_input
-extra_titlepage_entries:key_tags, tags, likes, live, reader_input
-extra_subject_tags:tags, key_tags
-key_tags_label:Key Tags
+extra_valid_entries:tags, likes, live, reader_input
+extra_titlepage_entries:tags, likes, live, reader_input
+extra_subject_tags:tags
 tags_label:Tags
 live_label:Next Live Session
 live_format:%%Y-%%m-%%d at %%H:%%M %%p
 likes_label:Likes
 reader_input_label:Reader Input
 keep_in_order_tags:true
-keep_in_order_key_tags:true
 
 add_to_output_css:
  table.voteblock { border-collapse: collapse; }
@@ -3046,10 +3058,10 @@
 ## for examples of how to use them.
 extra_valid_entries:reviews,favs,follows
 
-## fanfiction.net shows the user's
+## fictionpress.com shows the user's
 cover_exclusion_regexp:(/imageu/|d_60_90\.jpg)
 
-## fanfiction.net is blocking people more aggressively.  If you
+## fictionpress.com is blocking people more aggressively.  If you
 ## download fewer stories less often you can likely get by with
 ## reducing this sleep.
 slow_down_sleep_time:8
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/fanficfare/fetcher.py 
new/FanFicFare-4.1.0/fanficfare/fetcher.py
--- old/FanFicFare-4.0.0/fanficfare/fetcher.py  2021-02-19 02:23:50.000000000 
+0100
+++ new/FanFicFare-4.1.0/fanficfare/fetcher.py  2021-03-26 18:13:57.000000000 
+0100
@@ -205,6 +205,7 @@
     def set_to_cache(self,cachekey,data,redirectedurl):
         with self.cache_lock:
             self.basic_cache[cachekey] = (data,ensure_text(redirectedurl))
+            # logger.debug("set_to_cache 
%s->%s"%(cachekey,ensure_text(redirectedurl)))
             if self.autosave and self.filename:
                 self.save_cache()
 
@@ -233,6 +234,7 @@
         logger.debug(make_log('BasicCache',method,url,hit=hit))
         if hit:
             data,redirecturl = self.cache.get_from_cache(cachekey)
+            # logger.debug("from_cache %s->%s"%(cachekey,redirecturl))
             return FetcherResponse(data,redirecturl=redirecturl,fromcache=True)
 
         fetchresp = chainfn(
@@ -250,9 +252,6 @@
         ## up.
         if not fetchresp.fromcache:
             self.cache.set_to_cache(cachekey,data,fetchresp.redirecturl)
-            if url != fetchresp.redirecturl: # cache both?
-                # logger.debug("url != 
fetchresp.redirecturl:\n%s\n%s"%(url,fetchresp.redirecturl))
-                self.cache.set_to_cache(cachekey,data,url)
         return fetchresp
 
 class BrowserCacheDecorator(FetcherDecorator):
@@ -431,6 +430,9 @@
                 self.requests_session.cookies = self.cookiejar
         return self.requests_session
 
+    def use_verify(self):
+        return not self.getConfig('use_ssl_unverified_context',False)
+
     def request(self,method,url,headers=None,parameters=None):
         '''Returns a FetcherResponse regardless of mechanism'''
         if method not in ('GET','POST'):
@@ -438,11 +440,10 @@
         try:
             
logger.debug(make_log('RequestsFetcher',method,url,hit='REQ',bar='-'))
             ## resp = requests Response object
-            verify = not self.getConfig('use_ssl_unverified_context',False)
             resp = self.get_requests_session().request(method, url,
                                                        headers=headers,
                                                        data=parameters,
-                                                       verify=verify)
+                                                       
verify=self.use_verify())
             logger.debug("response code:%s"%resp.status_code)
             resp.raise_for_status() # raises RequestsHTTPError if error code.
             # consider 'cached' if from file.
@@ -497,6 +498,14 @@
             del headers['User-Agent']
         return headers
 
+    def use_verify(self):
+        ## cloudscraper doesn't work with verify=False, throws an
+        ## error about "Cannot set verify_mode to CERT_NONE when
+        ## check_hostname is enabled."
+        if self.getConfig('use_ssl_unverified_context',False):
+            logger.warning("use_ssl_unverified_context:true ignored when 
use_clouadscraper:true")
+        return True
+
     def request(self,method,url,headers=None,parameters=None):
         try:
             return 
super(CloudScraperFetcher,self).request(method,url,headers,parameters)
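The fetcher.py change above moves the `verify=` decision into a `use_verify()` hook so the cloudscraper subclass can pin it. A minimal sketch of that template-method pattern (class and attribute names are illustrative, not FanFicFare's real classes):

```python
class RequestsFetcher:
    def __init__(self, use_ssl_unverified_context=False):
        self.use_ssl_unverified_context = use_ssl_unverified_context

    def use_verify(self):
        # Verify TLS certificates unless the user explicitly opted out.
        return not self.use_ssl_unverified_context

class CloudScraperFetcher(RequestsFetcher):
    def use_verify(self):
        # cloudscraper fails with verify=False ("Cannot set verify_mode
        # to CERT_NONE when check_hostname is enabled"), so the opt-out
        # is ignored here and only logged as a warning upstream.
        return True

plain = RequestsFetcher(use_ssl_unverified_context=True)
scraper = CloudScraperFetcher(use_ssl_unverified_context=True)
print(plain.use_verify(), scraper.use_verify())  # False True
```

Each `request()` call then passes `verify=self.use_verify()`, letting subclasses override the policy without duplicating the request code.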
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/fanficfare/geturls.py 
new/FanFicFare-4.1.0/fanficfare/geturls.py
--- old/FanFicFare-4.0.0/fanficfare/geturls.py  2021-02-19 02:23:50.000000000 
+0100
+++ new/FanFicFare-4.1.0/fanficfare/geturls.py  2021-03-26 18:13:57.000000000 
+0100
@@ -46,6 +46,8 @@
     except UnknownSite:
         # no adapter with anyurl=True, must be a random site.
         opener = build_opener(HTTPCookieProcessor(),GZipProcessor())
+        opener.addheaders = [('User-Agent',
+                              configuration.getConfig('user_agent'))]
         data = opener.open(url).read()
         return {'urllist':get_urls_from_html(data,url,configuration,normalize)}
     return {}
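The geturls.py change above sets a User-Agent on the fallback urllib opener, since the bare default (`Python-urllib/x.y`) is often blocked. A sketch with an illustrative UA string:

```python
from urllib.request import build_opener

# build_opener() ships a default "Python-urllib" User-Agent; assigning
# addheaders replaces it for every request made through this opener.
opener = build_opener()
opener.addheaders = [("User-Agent", "FanFicFare/4.1.0 (example UA)")]
ua = dict(opener.addheaders)["User-Agent"]
print(ua)  # FanFicFare/4.1.0 (example UA)
```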
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/fanficfare/requestable.py 
new/FanFicFare-4.1.0/fanficfare/requestable.py
--- old/FanFicFare-4.0.0/fanficfare/requestable.py      2021-02-19 
02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/fanficfare/requestable.py      2021-03-26 
18:13:57.000000000 +0100
@@ -21,6 +21,7 @@
 logger = logging.getLogger(__name__)
 
 from .configurable import Configurable
+from .htmlcleanup import reduce_zalgo
 
 class Requestable(Configurable):
     def __init__(self, configuration):
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/fanficfare/story.py 
new/FanFicFare-4.1.0/fanficfare/story.py
--- old/FanFicFare-4.0.0/fanficfare/story.py    2021-02-19 02:23:50.000000000 
+0100
+++ new/FanFicFare-4.1.0/fanficfare/story.py    2021-03-26 18:13:57.000000000 
+0100
@@ -1260,11 +1260,12 @@
         if imgurl not in self.imgurls:
 
             try:
+                if imgurl.endswith('failedtoload'):
+                    return ("failedtoload","failedtoload")
+
                 if not imgdata:
                     # might already have from data:image in-line
                     imgdata = fetch(imgurl,referer=parenturl)
-                if imgurl.endswith('failedtoload'):
-                    return ("failedtoload","failedtoload")
 
                 if self.getConfig('no_image_processing'):
                     (data,ext,mime) = no_convert_image(imgurl,
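The story.py hunk above is purely an ordering fix: the `failedtoload` sentinel must short-circuit *before* the fetch, or updates re-download images already known to be broken. A toy sketch of the fixed flow (`fetch` here is a stand-in that records calls, not FanFicFare's real fetcher):

```python
calls = []

def fetch(url):
    # Stand-in fetcher: records each network request it would make.
    calls.append(url)
    return b"imagedata"

def add_img(imgurl, imgdata=None):
    # Check the known-bad sentinel first, before any network traffic.
    if imgurl.endswith("failedtoload"):
        return ("failedtoload", "failedtoload")
    if not imgdata:
        imgdata = fetch(imgurl)
    return ("img1.jpg", imgdata)

add_img("http://example.com/broken-failedtoload")
assert calls == []  # no wasted request for the known-bad image
add_img("http://example.com/cover.jpg")
assert calls == ["http://example.com/cover.jpg"]
```

With the pre-4.1.0 ordering, the first call would have hit the network before the sentinel check ever ran.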
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/fanficfare/writers/writer_epub.py 
new/FanFicFare-4.1.0/fanficfare/writers/writer_epub.py
--- old/FanFicFare-4.0.0/fanficfare/writers/writer_epub.py      2021-02-19 
02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/fanficfare/writers/writer_epub.py      2021-03-26 
18:13:57.000000000 +0100
@@ -33,7 +33,7 @@
 
 import bs4
 
-from .base_writer import *
+from .base_writer import BaseStoryWriter
 from ..htmlcleanup import stripHTML,removeEntities
 from ..story import commaGroups
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/fanficfare/writers/writer_html.py 
new/FanFicFare-4.1.0/fanficfare/writers/writer_html.py
--- old/FanFicFare-4.0.0/fanficfare/writers/writer_html.py      2021-02-19 
02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/fanficfare/writers/writer_html.py      2021-03-26 
18:13:57.000000000 +0100
@@ -18,13 +18,14 @@
 from __future__ import absolute_import
 import logging
 import string
+import re
 
 # py2 vs py3 transition
 from ..six import text_type as unicode
 
 import bs4
 
-from .base_writer import *
+from .base_writer import BaseStoryWriter
 class HTMLWriter(BaseStoryWriter):
 
     @staticmethod
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/fanficfare/writers/writer_mobi.py 
new/FanFicFare-4.1.0/fanficfare/writers/writer_mobi.py
--- old/FanFicFare-4.0.0/fanficfare/writers/writer_mobi.py      2021-02-19 
02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/fanficfare/writers/writer_mobi.py      2021-03-26 
18:13:57.000000000 +0100
@@ -20,7 +20,7 @@
 import logging
 import string
 
-from .base_writer import *
+from .base_writer import BaseStoryWriter
 from ..mobi import Converter
 from ..exceptions import FailedToWriteOutput
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/fanficfare/writers/writer_txt.py 
new/FanFicFare-4.1.0/fanficfare/writers/writer_txt.py
--- old/FanFicFare-4.0.0/fanficfare/writers/writer_txt.py       2021-02-19 
02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/fanficfare/writers/writer_txt.py       2021-03-26 
18:13:57.000000000 +0100
@@ -20,7 +20,9 @@
 import string
 from textwrap import wrap
 
-from .base_writer import *
+from .base_writer import BaseStoryWriter
+from ..htmlcleanup import removeAllEntities
+logger = logging.getLogger(__name__)
 
 from html2text import html2text
 
@@ -157,7 +159,7 @@
         
         for index, chap in enumerate(self.story.getChapters()):
             if chap['html']:
-                logging.debug('Writing chapter text for: %s' % chap['title'])
+                # logger.debug('Writing chapter text for: %s' % chap['title'])
                 
self._write(out,self.lineends(self.wraplines(removeAllEntities(CHAPTER_START.substitute(chap)))))
                 
self._write(out,self.lineends(html2text(chap['html'],bodywidth=self.wrap_width)))
                 
self._write(out,self.lineends(self.wraplines(removeAllEntities(CHAPTER_END.substitute(chap)))))
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/FanFicFare-4.0.0/setup.py 
new/FanFicFare-4.1.0/setup.py
--- old/FanFicFare-4.0.0/setup.py       2021-02-19 02:23:50.000000000 +0100
+++ new/FanFicFare-4.1.0/setup.py       2021-03-26 18:13:57.000000000 +0100
@@ -26,7 +26,7 @@
     name=package_name,
 
     # Versions should comply with PEP440.
-    version="4.0.0",
+    version="4.1.0",
 
     description='A tool for downloading fanfiction to eBook formats',
     long_description=long_description,