Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package youtube-dl for openSUSE:Factory 
checked in at 2021-04-17 23:24:49
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/youtube-dl (Old)
 and      /work/SRC/openSUSE:Factory/.youtube-dl.new.12324 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "youtube-dl"

Sat Apr 17 23:24:49 2021 rev:165 rq:886125 version:2021.04.17

Changes:
--------
--- /work/SRC/openSUSE:Factory/youtube-dl/python-youtube-dl.changes     2021-04-08 22:13:08.745537140 +0200
+++ /work/SRC/openSUSE:Factory/.youtube-dl.new.12324/python-youtube-dl.changes  2021-04-17 23:24:53.237586092 +0200
@@ -1,0 +2,18 @@
+Fri Apr 16 20:53:44 UTC 2021 - Jan Engelhardt <[email protected]>
+
+- Update to release 2021.04.17
+  * [curiositystream] fix format extraction
+  * [cbssports] fix extraction
+  * [mtv] Fix Viacom A/B Testing Video Player extraction
+  * [youtube:tab] Pass innertube context and x-goog-visitor-id
+    header along with continuation requests
+  * [youtube] Improve URL to extractor routing
+  * [youtube] Add more invidious instances
+  * [youtube:tab] Detect series playlist on playlists page
+  * [youtube:tab] Improve grid extraction
+  * [youtube] Improve stretch extraction and fix stretched ratio
+    calculation
+  * [utils] Add support for experimental HTTP
+    response status code 308 Permanent Redirect
+
+-------------------------------------------------------------------
youtube-dl.changes: same change

Old:
----
  youtube-dl-2021.04.07.tar.gz
  youtube-dl-2021.04.07.tar.gz.sig

New:
----
  youtube-dl-2021.04.17.tar.gz
  youtube-dl-2021.04.17.tar.gz.sig

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.igc0Zn/_old  2021-04-17 23:24:53.937587285 +0200
+++ /var/tmp/diff_new_pack.igc0Zn/_new  2021-04-17 23:24:53.937587285 +0200
@@ -19,7 +19,7 @@
 %define modname youtube-dl
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-youtube-dl
-Version:        2021.04.07
+Version:        2021.04.17
 Release:        0
 Summary:        A Python module for downloading from video sites for offline watching
 License:        CC-BY-SA-3.0 AND SUSE-Public-Domain

++++++ youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.igc0Zn/_old  2021-04-17 23:24:53.957587318 +0200
+++ /var/tmp/diff_new_pack.igc0Zn/_new  2021-04-17 23:24:53.961587326 +0200
@@ -17,7 +17,7 @@
 
 
 Name:           youtube-dl
-Version:        2021.04.07
+Version:        2021.04.17
 Release:        0
 Summary:        A tool for downloading from video sites for offline watching
 License:        CC-BY-SA-3.0 AND SUSE-Public-Domain

++++++ youtube-dl-2021.04.07.tar.gz -> youtube-dl-2021.04.17.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/ChangeLog new/youtube-dl/ChangeLog
--- old/youtube-dl/ChangeLog    2021-04-06 22:42:21.000000000 +0200
+++ new/youtube-dl/ChangeLog    2021-04-16 22:50:06.000000000 +0200
@@ -1,3 +1,28 @@
+version 2021.04.17
+
+Core
++ [utils] Add support for experimental HTTP response status code
+  308 Permanent Redirect (#27877, #28768)
+
+Extractors
++ [lbry] Add support for HLS videos (#27877, #28768)
+* [youtube] Fix stretched ratio calculation
+* [youtube] Improve stretch extraction (#28769)
+* [youtube:tab] Improve grid extraction (#28725)
++ [youtube:tab] Detect series playlist on playlists page (#28723)
++ [youtube] Add more invidious instances (#28706)
+* [pluralsight] Extend anti-throttling timeout (#28712)
+* [youtube] Improve URL to extractor routing (#27572, #28335, #28742)
++ [maoritv] Add support for maoritelevision.com (#24552)
++ [youtube:tab] Pass innertube context and x-goog-visitor-id header along with
+  continuation requests (#28702)
+* [mtv] Fix Viacom A/B Testing Video Player extraction (#28703)
++ [pornhub] Extract DASH and HLS formats from get_media end point (#28698)
+* [cbssports] Fix extraction (#28682)
+* [jamendo] Fix track extraction (#28686)
+* [curiositystream] Fix format extraction (#26845, #28668)
+
+
 version 2021.04.07
 
 Core
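The 308 Permanent Redirect support noted in the Core section above can be sketched as a urllib redirect handler. This is a hedged illustration of the general technique, not youtube-dl's actual implementation; the class name `PermanentRedirectHandler` is hypothetical.

```python
import urllib.request


class PermanentRedirectHandler(urllib.request.HTTPRedirectHandler):
    """Follow 308 Permanent Redirect, which older Python versions'
    HTTPRedirectHandler does not handle (it only knows 301/302/303/307)."""

    def http_error_308(self, req, fp, code, msg, headers):
        # 308 preserves the request method and body, the same contract as
        # 307, so delegate to the stock 302/307 machinery under that code.
        return self.http_error_302(req, fp, 307, msg, headers)
```

An opener built with `urllib.request.build_opener(PermanentRedirectHandler)` would then transparently follow 308 responses for GET/HEAD requests.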
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/docs/supportedsites.md new/youtube-dl/docs/supportedsites.md
--- old/youtube-dl/docs/supportedsites.md       2021-04-06 22:42:24.000000000 +0200
+++ new/youtube-dl/docs/supportedsites.md       2021-04-16 22:50:09.000000000 +0200
@@ -3,6 +3,7 @@
  - **20min**
  - **220.ro**
  - **23video**
+ - **247sports**
  - **24video**
  - **3qsdn**: 3Q SDN
  - **3sat**
@@ -160,7 +161,8 @@
  - **cbsnews**: CBS News
  - **cbsnews:embed**
  - **cbsnews:livevideo**: CBS News Live Videos
- - **CBSSports**
+ - **cbssports**
+ - **cbssports:embed**
  - **CCMA**
 - **CCTV**: 央视网
  - **CDA**
@@ -490,6 +492,7 @@
  - **mangomolo:live**
  - **mangomolo:video**
  - **ManyVids**
+ - **MaoriTV**
  - **Markiza**
  - **MarkizaPage**
  - **massengeschmack.tv**
Binary files old/youtube-dl/youtube-dl and new/youtube-dl/youtube-dl differ
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/cbssports.py new/youtube-dl/youtube_dl/extractor/cbssports.py
--- old/youtube-dl/youtube_dl/extractor/cbssports.py    2021-03-31 23:51:32.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/cbssports.py    2021-04-16 22:49:44.000000000 +0200
@@ -1,38 +1,113 @@
 from __future__ import unicode_literals
 
-from .cbs import CBSBaseIE
-
-
-class CBSSportsIE(CBSBaseIE):
-    _VALID_URL = r'https?://(?:www\.)?cbssports\.com/[^/]+/(?:video|news)/(?P<id>[^/?#&]+)'
+import re
 
+# from .cbs import CBSBaseIE
+from .common import InfoExtractor
+from ..utils import (
+    int_or_none,
+    try_get,
+)
+
+
+# class CBSSportsEmbedIE(CBSBaseIE):
+class CBSSportsEmbedIE(InfoExtractor):
+    IE_NAME = 'cbssports:embed'
+    _VALID_URL = r'''(?ix)https?://(?:(?:www\.)?cbs|embed\.247)sports\.com/player/embed.+?
+        (?:
+            ids%3D(?P<id>[\da-f]{8}-(?:[\da-f]{4}-){3}[\da-f]{12})|
+            pcid%3D(?P<pcid>\d+)
+        )'''
     _TESTS = [{
-        'url': 'https://www.cbssports.com/nba/video/donovan-mitchell-flashes-star-potential-in-game-2-victory-over-thunder/',
-        'info_dict': {
-            'id': '1214315075735',
-            'ext': 'mp4',
-            'title': 'Donovan Mitchell flashes star potential in Game 2 victory over Thunder',
-            'description': 'md5:df6f48622612c2d6bd2e295ddef58def',
-            'timestamp': 1524111457,
-            'upload_date': '20180419',
-            'uploader': 'CBSI-NEW',
-        },
-        'params': {
-            # m3u8 download
-            'skip_download': True,
-        }
+        'url': 'https://www.cbssports.com/player/embed/?args=player_id%3Db56c03a6-231a-4bbe-9c55-af3c8a8e9636%26ids%3Db56c03a6-231a-4bbe-9c55-af3c8a8e9636%26resizable%3D1%26autoplay%3Dtrue%26domain%3Dcbssports.com%26comp_ads_enabled%3Dfalse%26watchAndRead%3D0%26startTime%3D0%26env%3Dprod',
+        'only_matching': True,
     }, {
-        'url': 'https://www.cbssports.com/nba/news/nba-playoffs-2018-watch-76ers-vs-heat-game-3-series-schedule-tv-channel-online-stream/',
+        'url': 'https://embed.247sports.com/player/embed/?args=%3fplayer_id%3d1827823171591%26channel%3dcollege-football-recruiting%26pcid%3d1827823171591%26width%3d640%26height%3d360%26autoplay%3dTrue%26comp_ads_enabled%3dFalse%26uvpc%3dhttps%253a%252f%252fwww.cbssports.com%252fapi%252fcontent%252fvideo%252fconfig%252f%253fcfg%253duvp_247sports_v4%2526partner%253d247%26uvpc_m%3dhttps%253a%252f%252fwww.cbssports.com%252fapi%252fcontent%252fvideo%252fconfig%252f%253fcfg%253duvp_247sports_m_v4%2526partner_m%253d247_mobile%26utag%3d247sportssite%26resizable%3dTrue',
         'only_matching': True,
     }]
 
-    def _extract_video_info(self, filter_query, video_id):
-        return self._extract_feed_info('dJ5BDC', 'VxxJg8Ymh8sE', filter_query, video_id)
+    # def _extract_video_info(self, filter_query, video_id):
+    #     return self._extract_feed_info('dJ5BDC', 'VxxJg8Ymh8sE', filter_query, video_id)
 
     def _real_extract(self, url):
+        uuid, pcid = re.match(self._VALID_URL, url).groups()
+        query = {'id': uuid} if uuid else {'pcid': pcid}
+        video = self._download_json(
+            'https://www.cbssports.com/api/content/video/',
+            uuid or pcid, query=query)[0]
+        video_id = video['id']
+        title = video['title']
+        metadata = video.get('metaData') or {}
+        # return self._extract_video_info('byId=%d' % metadata['mpxOutletId'], video_id)
+        # return self._extract_video_info('byGuid=' + metadata['mpxRefId'], video_id)
+
+        formats = self._extract_m3u8_formats(
+            metadata['files'][0]['url'], video_id, 'mp4',
+            'm3u8_native', m3u8_id='hls', fatal=False)
+        self._sort_formats(formats)
+
+        image = video.get('image')
+        thumbnails = None
+        if image:
+            image_path = image.get('path')
+            if image_path:
+                thumbnails = [{
+                    'url': image_path,
+                    'width': int_or_none(image.get('width')),
+                    'height': int_or_none(image.get('height')),
+                    'filesize': int_or_none(image.get('size')),
+                }]
+
+        return {
+            'id': video_id,
+            'title': title,
+            'formats': formats,
+            'thumbnails': thumbnails,
+            'description': video.get('description'),
+            'timestamp': int_or_none(try_get(video, lambda x: x['dateCreated']['epoch'])),
+            'duration': int_or_none(metadata.get('duration')),
+        }
+
+
+class CBSSportsBaseIE(InfoExtractor):
+    def _real_extract(self, url):
         display_id = self._match_id(url)
         webpage = self._download_webpage(url, display_id)
-        video_id = self._search_regex(
-            [r'(?:=|%26)pcid%3D(\d+)', r'embedVideo(?:Container)?_(\d+)'],
-            webpage, 'video id')
-        return self._extract_video_info('byId=%s' % video_id, video_id)
+        iframe_url = self._search_regex(
+            r'<iframe[^>]+(?:data-)?src="(https?://[^/]+/player/embed[^"]+)"',
+            webpage, 'embed url')
+        return self.url_result(iframe_url, CBSSportsEmbedIE.ie_key())
+
+
+class CBSSportsIE(CBSSportsBaseIE):
+    IE_NAME = 'cbssports'
+    _VALID_URL = r'https?://(?:www\.)?cbssports\.com/[^/]+/video/(?P<id>[^/?#&]+)'
+    _TESTS = [{
+        'url': 'https://www.cbssports.com/college-football/video/cover-3-stanford-spring-gleaning/',
+        'info_dict': {
+            'id': 'b56c03a6-231a-4bbe-9c55-af3c8a8e9636',
+            'ext': 'mp4',
+            'title': 'Cover 3: Stanford Spring Gleaning',
+            'description': 'The Cover 3 crew break down everything you need to know about the Stanford Cardinal this spring.',
+            'timestamp': 1617218398,
+            'upload_date': '20210331',
+            'duration': 502,
+        },
+    }]
+
+
+class TwentyFourSevenSportsIE(CBSSportsBaseIE):
+    IE_NAME = '247sports'
+    _VALID_URL = r'https?://(?:www\.)?247sports\.com/Video/(?:[^/?#&]+-)?(?P<id>\d+)'
+    _TESTS = [{
+        'url': 'https://247sports.com/Video/2021-QB-Jake-Garcia-senior-highlights-through-five-games-10084854/',
+        'info_dict': {
+            'id': '4f1265cb-c3b5-44a8-bb1d-1914119a0ccc',
+            'ext': 'mp4',
+            'title': '2021 QB Jake Garcia senior highlights through five games',
+            'description': 'md5:8cb67ebed48e2e6adac1701e0ff6e45b',
+            'timestamp': 1607114223,
+            'upload_date': '20201204',
+            'duration': 208,
+        },
+    }]
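As a standalone illustration of the id/pcid routing in the new `CBSSportsEmbedIE._real_extract` above: the regex is copied from the diff, while `build_api_query` is a hypothetical helper name showing how whichever group matched selects the API query.

```python
import re

# _VALID_URL from the new CBSSportsEmbedIE: the embed URL carries either a
# percent-encoded `ids` UUID or a numeric `pcid`.
EMBED_URL_RE = re.compile(
    r'''(?ix)https?://(?:(?:www\.)?cbs|embed\.247)sports\.com/player/embed.+?
        (?:
            ids%3D(?P<id>[\da-f]{8}-(?:[\da-f]{4}-){3}[\da-f]{12})|
            pcid%3D(?P<pcid>\d+)
        )''')


def build_api_query(url):
    # Whichever group matched decides the query sent to the
    # cbssports.com content/video API endpoint.
    uuid, pcid = EMBED_URL_RE.match(url).groups()
    return {'id': uuid} if uuid else {'pcid': pcid}
```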
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/curiositystream.py new/youtube-dl/youtube_dl/extractor/curiositystream.py
--- old/youtube-dl/youtube_dl/extractor/curiositystream.py      2021-03-31 23:51:32.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/curiositystream.py      2021-04-16 22:49:44.000000000 +0200
@@ -25,12 +25,12 @@
             raise ExtractorError(
                 '%s said: %s' % (self.IE_NAME, error), expected=True)
 
-    def _call_api(self, path, video_id):
+    def _call_api(self, path, video_id, query=None):
         headers = {}
         if self._auth_token:
             headers['X-Auth-Token'] = self._auth_token
         result = self._download_json(
-            self._API_BASE_URL + path, video_id, headers=headers)
+            self._API_BASE_URL + path, video_id, headers=headers, query=query)
         self._handle_errors(result)
         return result['data']
 
@@ -52,62 +52,75 @@
     _VALID_URL = r'https?://(?:app\.)?curiositystream\.com/video/(?P<id>\d+)'
     _TEST = {
         'url': 'https://app.curiositystream.com/video/2',
-        'md5': '262bb2f257ff301115f1973540de8983',
         'info_dict': {
             'id': '2',
             'ext': 'mp4',
             'title': 'How Did You Develop The Internet?',
             'description': 'Vint Cerf, Google\'s Chief Internet Evangelist, describes how he and Bob Kahn created the internet.',
-        }
+        },
+        'params': {
+            'format': 'bestvideo',
+            # m3u8 download
+            'skip_download': True,
+        },
     }
 
     def _real_extract(self, url):
         video_id = self._match_id(url)
-        media = self._call_api('media/' + video_id, video_id)
-        title = media['title']
 
         formats = []
-        for encoding in media.get('encodings', []):
-            m3u8_url = encoding.get('master_playlist_url')
-            if m3u8_url:
-                formats.extend(self._extract_m3u8_formats(
-                    m3u8_url, video_id, 'mp4', 'm3u8_native',
-                    m3u8_id='hls', fatal=False))
-            encoding_url = encoding.get('url')
-            file_url = encoding.get('file_url')
-            if not encoding_url and not file_url:
-                continue
-            f = {
-                'width': int_or_none(encoding.get('width')),
-                'height': int_or_none(encoding.get('height')),
-                'vbr': int_or_none(encoding.get('video_bitrate')),
-                'abr': int_or_none(encoding.get('audio_bitrate')),
-                'filesize': int_or_none(encoding.get('size_in_bytes')),
-                'vcodec': encoding.get('video_codec'),
-                'acodec': encoding.get('audio_codec'),
-                'container': encoding.get('container_type'),
-            }
-            for f_url in (encoding_url, file_url):
-                if not f_url:
+        for encoding_format in ('m3u8', 'mpd'):
+            media = self._call_api('media/' + video_id, video_id, query={
+                'encodingsNew': 'true',
+                'encodingsFormat': encoding_format,
+            })
+            for encoding in media.get('encodings', []):
+                playlist_url = encoding.get('master_playlist_url')
+                if encoding_format == 'm3u8':
+                    # use `m3u8` entry_protocol until EXT-X-MAP is properly supported by `m3u8_native` entry_protocol
+                    formats.extend(self._extract_m3u8_formats(
+                        playlist_url, video_id, 'mp4',
+                        m3u8_id='hls', fatal=False))
+                elif encoding_format == 'mpd':
+                    formats.extend(self._extract_mpd_formats(
+                        playlist_url, video_id, mpd_id='dash', fatal=False))
+                encoding_url = encoding.get('url')
+                file_url = encoding.get('file_url')
+                if not encoding_url and not file_url:
                     continue
-                fmt = f.copy()
-                rtmp = re.search(r'^(?P<url>rtmpe?://(?P<host>[^/]+)/(?P<app>.+))/(?P<playpath>mp[34]:.+)$', f_url)
-                if rtmp:
-                    fmt.update({
-                        'url': rtmp.group('url'),
-                        'play_path': rtmp.group('playpath'),
-                        'app': rtmp.group('app'),
-                        'ext': 'flv',
-                        'format_id': 'rtmp',
-                    })
-                else:
-                    fmt.update({
-                        'url': f_url,
-                        'format_id': 'http',
-                    })
-                formats.append(fmt)
+                f = {
+                    'width': int_or_none(encoding.get('width')),
+                    'height': int_or_none(encoding.get('height')),
+                    'vbr': int_or_none(encoding.get('video_bitrate')),
+                    'abr': int_or_none(encoding.get('audio_bitrate')),
+                    'filesize': int_or_none(encoding.get('size_in_bytes')),
+                    'vcodec': encoding.get('video_codec'),
+                    'acodec': encoding.get('audio_codec'),
+                    'container': encoding.get('container_type'),
+                }
+                for f_url in (encoding_url, file_url):
+                    if not f_url:
+                        continue
+                    fmt = f.copy()
+                    rtmp = re.search(r'^(?P<url>rtmpe?://(?P<host>[^/]+)/(?P<app>.+))/(?P<playpath>mp[34]:.+)$', f_url)
+                    if rtmp:
+                        fmt.update({
+                            'url': rtmp.group('url'),
+                            'play_path': rtmp.group('playpath'),
+                            'app': rtmp.group('app'),
+                            'ext': 'flv',
+                            'format_id': 'rtmp',
+                        })
+                    else:
+                        fmt.update({
+                            'url': f_url,
+                            'format_id': 'http',
+                        })
+                    formats.append(fmt)
         self._sort_formats(formats)
 
+        title = media['title']
+
         subtitles = {}
         for closed_caption in media.get('closed_captions', []):
             sub_url = closed_caption.get('file')
@@ -140,7 +153,7 @@
             'title': 'Curious Minds: The Internet',
             'description': 'How is the internet shaping our lives in the 21st Century?',
         },
-        'playlist_mincount': 17,
+        'playlist_mincount': 16,
     }, {
         'url': 'https://curiositystream.com/series/2',
         'only_matching': True,
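The curiositystream change above queries the media endpoint once per encoding format and dispatches each response's encodings to the matching playlist parser. A minimal sketch of that two-pass loop, with `call_api`, `extract_hls`, and `extract_dash` as stand-ins for the extractor methods in the diff:

```python
def collect_formats(call_api, extract_hls, extract_dash, video_id):
    # Request the media description twice, once per encoding format,
    # then hand each master playlist to the matching parser.
    formats = []
    for encoding_format in ('m3u8', 'mpd'):
        media = call_api('media/' + video_id, video_id, query={
            'encodingsNew': 'true',
            'encodingsFormat': encoding_format,
        })
        for encoding in media.get('encodings') or []:
            playlist_url = encoding.get('master_playlist_url')
            if not playlist_url:
                continue
            if encoding_format == 'm3u8':
                formats.extend(extract_hls(playlist_url, video_id))
            else:
                formats.extend(extract_dash(playlist_url, video_id))
    return formats
```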
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/extractors.py new/youtube-dl/youtube_dl/extractor/extractors.py
--- old/youtube-dl/youtube_dl/extractor/extractors.py   2021-03-31 23:51:37.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/extractors.py   2021-04-16 22:49:44.000000000 +0200
@@ -191,7 +191,11 @@
     CBSNewsIE,
     CBSNewsLiveVideoIE,
 )
-from .cbssports import CBSSportsIE
+from .cbssports import (
+    CBSSportsEmbedIE,
+    CBSSportsIE,
+    TwentyFourSevenSportsIE,
+)
 from .ccc import (
     CCCIE,
     CCCPlaylistIE,
@@ -636,6 +640,7 @@
     MangomoloLiveIE,
 )
 from .manyvids import ManyVidsIE
+from .maoritv import MaoriTVIE
 from .markiza import (
     MarkizaIE,
     MarkizaPageIE,
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/jamendo.py new/youtube-dl/youtube_dl/extractor/jamendo.py
--- old/youtube-dl/youtube_dl/extractor/jamendo.py      2021-03-31 23:51:32.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/jamendo.py      2021-04-16 22:49:44.000000000 +0200
@@ -29,34 +29,51 @@
             'id': '196219',
             'display_id': 'stories-from-emona-i',
             'ext': 'flac',
-            'title': 'Maya Filipič - Stories from Emona I',
-            'artist': 'Maya Filipič',
+            # 'title': 'Maya Filipič - Stories from Emona I',
+            'title': 'Stories from Emona I',
+            # 'artist': 'Maya Filipič',
             'track': 'Stories from Emona I',
             'duration': 210,
             'thumbnail': r're:^https?://.*\.jpg',
             'timestamp': 1217438117,
             'upload_date': '20080730',
+            'license': 'by-nc-nd',
+            'view_count': int,
+            'like_count': int,
+            'average_rating': int,
+            'tags': ['piano', 'peaceful', 'newage', 'strings', 'upbeat'],
         }
     }, {
         'url': 'https://licensing.jamendo.com/en/track/1496667/energetic-rock',
         'only_matching': True,
     }]
 
+    def _call_api(self, resource, resource_id):
+        path = '/api/%ss' % resource
+        rand = compat_str(random.random())
+        return self._download_json(
+            'https://www.jamendo.com' + path, resource_id, query={
+                'id[]': resource_id,
+            }, headers={
+                'X-Jam-Call': '$%s*%s~' % (hashlib.sha1((path + rand).encode()).hexdigest(), rand)
+            })[0]
+
     def _real_extract(self, url):
         track_id, display_id = self._VALID_URL_RE.match(url).groups()
-        webpage = self._download_webpage(
-            'https://www.jamendo.com/track/' + track_id, track_id)
-        models = self._parse_json(self._html_search_regex(
-            r"data-bundled-models='([^']+)",
-            webpage, 'bundled models'), track_id)
-        track = models['track']['models'][0]
+        # webpage = self._download_webpage(
+        #     'https://www.jamendo.com/track/' + track_id, track_id)
+        # models = self._parse_json(self._html_search_regex(
+        #     r"data-bundled-models='([^']+)",
+        #     webpage, 'bundled models'), track_id)
+        # track = models['track']['models'][0]
+        track = self._call_api('track', track_id)
         title = track_name = track['name']
-        get_model = lambda x: try_get(models, lambda y: y[x]['models'][0], dict) or {}
-        artist = get_model('artist')
-        artist_name = artist.get('name')
-        if artist_name:
-            title = '%s - %s' % (artist_name, title)
-        album = get_model('album')
+        # get_model = lambda x: try_get(models, lambda y: y[x]['models'][0], dict) or {}
+        # artist = get_model('artist')
+        # artist_name = artist.get('name')
+        # if artist_name:
+        #     title = '%s - %s' % (artist_name, title)
+        # album = get_model('album')
 
         formats = [{
             'url': 'https://%s.jamendo.com/?trackid=%s&format=%s&from=app-97dab294'
@@ -74,7 +91,7 @@
 
         urls = []
         thumbnails = []
-        for _, covers in track.get('cover', {}).items():
+        for covers in (track.get('cover') or {}).values():
             for cover_id, cover_url in covers.items():
                 if not cover_url or cover_url in urls:
                     continue
@@ -88,13 +105,14 @@
                 })
 
         tags = []
-        for tag in track.get('tags', []):
+        for tag in (track.get('tags') or []):
             tag_name = tag.get('name')
             if not tag_name:
                 continue
             tags.append(tag_name)
 
         stats = track.get('stats') or {}
+        license = track.get('licenseCC') or []
 
         return {
             'id': track_id,
@@ -103,11 +121,11 @@
             'title': title,
             'description': track.get('description'),
             'duration': int_or_none(track.get('duration')),
-            'artist': artist_name,
+            # 'artist': artist_name,
             'track': track_name,
-            'album': album.get('name'),
+            # 'album': album.get('name'),
             'formats': formats,
-            'license': '-'.join(track.get('licenseCC', [])) or None,
+            'license': '-'.join(license) if license else None,
             'timestamp': int_or_none(track.get('dateCreated')),
             'view_count': int_or_none(stats.get('listenedAll')),
             'like_count': int_or_none(stats.get('favorited')),
@@ -116,9 +134,9 @@
         }
 
 
-class JamendoAlbumIE(InfoExtractor):
+class JamendoAlbumIE(JamendoIE):
     _VALID_URL = r'https?://(?:www\.)?jamendo\.com/album/(?P<id>[0-9]+)'
-    _TEST = {
+    _TESTS = [{
         'url': 'https://www.jamendo.com/album/121486/duck-on-cover',
         'info_dict': {
             'id': '121486',
@@ -151,17 +169,7 @@
         'params': {
             'playlistend': 2
         }
-    }
-
-    def _call_api(self, resource, resource_id):
-        path = '/api/%ss' % resource
-        rand = compat_str(random.random())
-        return self._download_json(
-            'https://www.jamendo.com' + path, resource_id, query={
-                'id[]': resource_id,
-            }, headers={
-                'X-Jam-Call': '$%s*%s~' % (hashlib.sha1((path + rand).encode()).hexdigest(), rand)
-            })[0]
+    }]
 
     def _real_extract(self, url):
         album_id = self._match_id(url)
@@ -169,7 +177,7 @@
         album_name = album.get('name')
 
         entries = []
-        for track in album.get('tracks', []):
+        for track in (album.get('tracks') or []):
             track_id = track.get('id')
             if not track_id:
                 continue
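The `X-Jam-Call` header computed in the `_call_api` helper above (now shared with `JamendoAlbumIE` by moving it to `JamendoIE`) is a SHA-1 over the API path concatenated with a random decimal salt, wrapped as `$<hexdigest>*<salt>~`. A standalone version, with the hypothetical name `jam_call_header`:

```python
import hashlib
import random


def jam_call_header(path, rand=None):
    # The salt defaults to str(random.random()), as in the diff;
    # it is also embedded verbatim in the header so the server can verify.
    if rand is None:
        rand = str(random.random())
    digest = hashlib.sha1((path + rand).encode()).hexdigest()
    return '$%s*%s~' % (digest, rand)
```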
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/lbry.py new/youtube-dl/youtube_dl/extractor/lbry.py
--- old/youtube-dl/youtube_dl/extractor/lbry.py 2021-03-31 23:51:32.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/lbry.py 2021-04-16 22:49:44.000000000 +0200
@@ -121,6 +121,26 @@
             'vcodec': 'none',
         }
     }, {
+        # HLS
+        'url': 'https://odysee.com/@gardeningincanada:b/plants-i-will-never-grow-again.-the:e',
+        'md5': 'fc82f45ea54915b1495dd7cb5cc1289f',
+        'info_dict': {
+            'id': 'e51671357333fe22ae88aad320bde2f6f96b1410',
+            'ext': 'mp4',
+            'title': 'PLANTS I WILL NEVER GROW AGAIN. THE BLACK LIST PLANTS FOR A CANADIAN GARDEN | Gardening in Canada 🍁',
+            'description': 'md5:9c539c6a03fb843956de61a4d5288d5e',
+            'timestamp': 1618254123,
+            'upload_date': '20210412',
+            'release_timestamp': 1618254002,
+            'release_date': '20210412',
+            'tags': list,
+            'duration': 554,
+            'channel': 'Gardening In Canada',
+            'channel_id': 'b8be0e93b423dad221abe29545fbe8ec36e806bc',
+            'channel_url': 'https://odysee.com/@gardeningincanada:b8be0e93b423dad221abe29545fbe8ec36e806bc',
+            'formats': 'mincount:3',
+        }
+    }, {
        'url': 'https://odysee.com/@BrodieRobertson:5/apple-is-tracking-everything-you-do-on:e',
         'only_matching': True,
     }, {
@@ -163,10 +183,18 @@
         streaming_url = self._call_api_proxy(
             'get', claim_id, {'uri': uri}, 'streaming url')['streaming_url']
         info = self._parse_stream(result, url)
+        urlh = self._request_webpage(
+            streaming_url, display_id, note='Downloading streaming redirect url info')
+        if determine_ext(urlh.geturl()) == 'm3u8':
+            info['formats'] = self._extract_m3u8_formats(
+                urlh.geturl(), display_id, 'mp4', entry_protocol='m3u8_native',
+                m3u8_id='hls')
+            self._sort_formats(info['formats'])
+        else:
+            info['url'] = streaming_url
         info.update({
             'id': claim_id,
             'title': title,
-            'url': streaming_url,
         })
         return info
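The lbry hunk above follows the streaming URL's redirect and inspects the final URL's extension to decide between HLS extraction and a direct download. A hedged sketch of that decision, approximating `determine_ext` from `youtube_dl.utils` with a plain path-suffix check (`is_hls_url` is a hypothetical name):

```python
import posixpath
from urllib.parse import urlparse


def is_hls_url(final_url):
    # Only the URL path is examined, so query strings and fragments
    # do not confuse the extension check.
    path = urlparse(final_url).path
    return posixpath.splitext(path)[1].lower() == '.m3u8'
```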
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/maoritv.py new/youtube-dl/youtube_dl/extractor/maoritv.py
--- old/youtube-dl/youtube_dl/extractor/maoritv.py      1970-01-01 01:00:00.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/maoritv.py      2021-04-16 22:49:44.000000000 +0200
@@ -0,0 +1,31 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+
+
+class MaoriTVIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?maoritelevision\.com/shows/(?:[^/]+/)+(?P<id>[^/?&#]+)'
+    _TEST = {
+        'url': 'https://www.maoritelevision.com/shows/korero-mai/S01E054/korero-mai-series-1-episode-54',
+        'md5': '5ade8ef53851b6a132c051b1cd858899',
+        'info_dict': {
+            'id': '4774724855001',
+            'ext': 'mp4',
+            'title': 'Kōrero Mai, Series 1 Episode 54',
+            'upload_date': '20160226',
+            'timestamp': 1456455018,
+            'description': 'md5:59bde32fd066d637a1a55794c56d8dcb',
+            'uploader_id': '1614493167001',
+        },
+    }
+    BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/1614493167001/HJlhIQhQf_default/index.html?videoId=%s'
+
+    def _real_extract(self, url):
+        display_id = self._match_id(url)
+        webpage = self._download_webpage(url, display_id)
+        brightcove_id = self._search_regex(
+            r'data-main-video-id=["\'](\d+)', webpage, 'brightcove id')
+        return self.url_result(
+            self.BRIGHTCOVE_URL_TEMPLATE % brightcove_id,
+            'BrightcoveNew', brightcove_id)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/mtv.py new/youtube-dl/youtube_dl/extractor/mtv.py
--- old/youtube-dl/youtube_dl/extractor/mtv.py  2021-03-31 23:51:32.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/mtv.py  2021-04-16 22:49:44.000000000 +0200
@@ -255,7 +255,9 @@
 
     @staticmethod
     def _extract_child_with_type(parent, t):
-        return next(c for c in parent['children'] if c.get('type') == t)
+        for c in parent['children']:
+            if c.get('type') == t:
+                return c
 
     def _extract_mgid(self, webpage):
         try:
@@ -286,7 +288,8 @@
             data = self._parse_json(self._search_regex(
                 r'__DATA__\s*=\s*({.+?});', webpage, 'data'), None)
             main_container = self._extract_child_with_type(data, 'MainContainer')
-            video_player = self._extract_child_with_type(main_container, 'VideoPlayer')
+            ab_testing = self._extract_child_with_type(main_container, 'ABTesting')
+            video_player = self._extract_child_with_type(ab_testing or main_container, 'VideoPlayer')
             mgid = video_player['props']['media']['video']['config']['uri']
 
         return mgid
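The `_extract_child_with_type` rewrite in the mtv hunk above matters because `next()` over a generator with no default raises `StopIteration` when nothing matches, while the loop version returns `None`, letting the caller fall back from the optional ABTesting container to MainContainer. A standalone version of the rewritten helper:

```python
def extract_child_with_type(parent, t):
    # Return the first child whose 'type' matches, or None when absent
    # (next(...) without a default would raise StopIteration instead).
    for c in parent['children']:
        if c.get('type') == t:
            return c
```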
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/pluralsight.py new/youtube-dl/youtube_dl/extractor/pluralsight.py
--- old/youtube-dl/youtube_dl/extractor/pluralsight.py  2021-03-31 23:51:32.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/pluralsight.py  2021-04-16 22:49:44.000000000 +0200
@@ -393,7 +393,7 @@
                 # To somewhat reduce the probability of these consequences
                 # we will sleep random amount of time before each call to ViewClip.
                 self._sleep(
-                    random.randint(2, 5), display_id,
+                    random.randint(5, 10), display_id,
                     '%(video_id)s: Waiting for %(timeout)s seconds to avoid throttling')
 
                 if not viewclip:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/pornhub.py new/youtube-dl/youtube_dl/extractor/pornhub.py
--- old/youtube-dl/youtube_dl/extractor/pornhub.py      2021-03-31 23:51:32.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/pornhub.py      2021-04-16 22:49:44.000000000 +0200
@@ -398,6 +398,16 @@
         formats = []
 
         def add_format(format_url, height=None):
+            ext = determine_ext(format_url)
+            if ext == 'mpd':
+                formats.extend(self._extract_mpd_formats(
+                    format_url, video_id, mpd_id='dash', fatal=False))
+                return
+            if ext == 'm3u8':
+                formats.extend(self._extract_m3u8_formats(
+                    format_url, video_id, 'mp4', entry_protocol='m3u8_native',
+                    m3u8_id='hls', fatal=False))
+                return
             tbr = None
             mobj = re.search(r'(?P<height>\d+)[pP]?_(?P<tbr>\d+)[kK]', format_url)
             if mobj:
@@ -417,16 +427,6 @@
                     r'/(\d{6}/\d{2})/', video_url, 'upload data', default=None)
                 if upload_date:
                     upload_date = upload_date.replace('/', '')
-            ext = determine_ext(video_url)
-            if ext == 'mpd':
-                formats.extend(self._extract_mpd_formats(
-                    video_url, video_id, mpd_id='dash', fatal=False))
-                continue
-            elif ext == 'm3u8':
-                formats.extend(self._extract_m3u8_formats(
-                    video_url, video_id, 'mp4', entry_protocol='m3u8_native',
-                    m3u8_id='hls', fatal=False))
-                continue
             if '/video/get_media' in video_url:
                 medias = self._download_json(video_url, video_id, fatal=False)
                 if isinstance(medias, list):
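The pornhub hunk moves the DASH/HLS manifest checks out of the video-URL loop and into `add_format()`, so every candidate URL is dispatched on extension before the bitrate regex runs. A hedged sketch of just that dispatch; `determine_ext` here is a simplified stand-in for `youtube_dl.utils.determine_ext`, and the URLs are invented:

```python
import re

def determine_ext(url):
    # Simplified stand-in: take the text after the last dot, drop any query.
    return url.rpartition('.')[2].partition('?')[0]

def classify_format(format_url):
    # Manifest URLs are dispatched first, mirroring the refactored add_format():
    ext = determine_ext(format_url)
    if ext == 'mpd':
        return 'dash'   # would call _extract_mpd_formats(..., fatal=False)
    if ext == 'm3u8':
        return 'hls'    # would call _extract_m3u8_formats(..., m3u8_id='hls')
    # Only progressive URLs reach the height/bitrate regex:
    mobj = re.search(r'(?P<height>\d+)[pP]?_(?P<tbr>\d+)[kK]', format_url)
    if mobj:
        return 'http-%sp' % mobj.group('height')
    return 'http'

print(classify_format('https://e.example/v/master.m3u8'))     # hls
print(classify_format('https://e.example/v/720P_4000K.mp4'))  # http-720p
```

Doing the dispatch inside `add_format()` means manifest URLs discovered through any code path (including `/video/get_media` responses) are expanded, not just those seen in the main loop.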
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/youtube.py new/youtube-dl/youtube_dl/extractor/youtube.py
--- old/youtube-dl/youtube_dl/extractor/youtube.py      2021-03-31 23:51:37.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/youtube.py      2021-04-16 22:49:44.000000000 +0200
@@ -46,6 +46,10 @@
 )
 
 
+def parse_qs(url):
+    return compat_urlparse.parse_qs(compat_urlparse.urlparse(url).query)
+
+
 class YoutubeBaseInfoExtractor(InfoExtractor):
     """Provide base functions for Youtube extractors"""
     _LOGIN_URL = 'https://accounts.google.com/ServiceLogin'
@@ -306,7 +310,7 @@
         return self._parse_json(
             self._search_regex(
                 r'ytcfg\.set\s*\(\s*({.+?})\s*\)\s*;', webpage, 'ytcfg',
-                default='{}'), video_id, fatal=False)
+                default='{}'), video_id, fatal=False) or {}
 
     def _extract_video(self, renderer):
         video_id = renderer['videoId']
@@ -355,21 +359,28 @@
         r'(?:www\.)?invidious\.mastodon\.host',
         r'(?:www\.)?invidious\.zapashcanon\.fr',
         r'(?:www\.)?invidious\.kavin\.rocks',
+        r'(?:www\.)?invidious\.tinfoil-hat\.net',
+        r'(?:www\.)?invidious\.himiko\.cloud',
+        r'(?:www\.)?invidious\.reallyancient\.tech',
         r'(?:www\.)?invidious\.tube',
         r'(?:www\.)?invidiou\.site',
         r'(?:www\.)?invidious\.site',
         r'(?:www\.)?invidious\.xyz',
         r'(?:www\.)?invidious\.nixnet\.xyz',
+        r'(?:www\.)?invidious\.048596\.xyz',
         r'(?:www\.)?invidious\.drycat\.fr',
+        r'(?:www\.)?inv\.skyn3t\.in',
         r'(?:www\.)?tube\.poal\.co',
         r'(?:www\.)?tube\.connect\.cafe',
         r'(?:www\.)?vid\.wxzm\.sx',
         r'(?:www\.)?vid\.mint\.lgbt',
+        r'(?:www\.)?vid\.puffyan\.us',
         r'(?:www\.)?yewtu\.be',
         r'(?:www\.)?yt\.elukerio\.org',
         r'(?:www\.)?yt\.lelux\.fi',
         r'(?:www\.)?invidious\.ggc-project\.de',
         r'(?:www\.)?yt\.maisputain\.ovh',
+        r'(?:www\.)?ytprivate\.com',
         r'(?:www\.)?invidious\.13ad\.de',
         r'(?:www\.)?invidious\.toot\.koeln',
         r'(?:www\.)?invidious\.fdn\.fr',
@@ -413,16 +424,9 @@
                          |(?:www\.)?cleanvideosearch\.com/media/action/yt/watch\?videoId=
                          )
                      )?                                                       # all until now is optional -> you can pass the naked ID
-                     (?P<id>[0-9A-Za-z_-]{11})                                      # here is it! the YouTube video ID
-                     (?!.*?\blist=
-                        (?:
-                            %(playlist_id)s|                                   # combined list/video URLs are handled by the playlist IE
-                            WL                                                 # WL are handled by the watch later IE
-                        )
-                     )
+                     (?P<id>[0-9A-Za-z_-]{11})                                 # here is it! the YouTube video ID
                      (?(1).+)?                                                 # if we found the ID, everything can follow
                      $""" % {
-        'playlist_id': YoutubeBaseInfoExtractor._PLAYLIST_ID_RE,
         'invidious': '|'.join(_INVIDIOUS_SITES),
     }
     _PLAYER_INFO_RE = (
@@ -809,6 +813,11 @@
             'skip': 'This video does not exist.',
         },
         {
+            # Video with incomplete 'yt:stretch=16:'
+            'url': 'https://www.youtube.com/watch?v=FRhJzUSJbGI',
+            'only_matching': True,
+        },
+        {
             # Video licensed under Creative Commons
             'url': 'https://www.youtube.com/watch?v=M4gD1WSo5mA',
             'info_dict': {
@@ -1208,6 +1217,13 @@
         '397': {'acodec': 'none', 'vcodec': 'av01.0.05M.08'},
     }
 
+    @classmethod
+    def suitable(cls, url):
+        qs = parse_qs(url)
+        if qs.get('list', [None])[0]:
+            return False
+        return super(YoutubeIE, cls).suitable(url)
+
     def __init__(self, *args, **kwargs):
         super(YoutubeIE, self).__init__(*args, **kwargs)
         self._code_cache = {}
@@ -1706,13 +1722,16 @@
                 for m in re.finditer(self._meta_regex('og:video:tag'), webpage)]
         for keyword in keywords:
             if keyword.startswith('yt:stretch='):
-                w, h = keyword.split('=')[1].split(':')
-                w, h = int(w), int(h)
-                if w > 0 and h > 0:
-                    ratio = w / h
-                    for f in formats:
-                        if f.get('vcodec') != 'none':
-                            f['stretched_ratio'] = ratio
+                mobj = re.search(r'(\d+)\s*:\s*(\d+)', keyword)
+                if mobj:
+                    # NB: float is intentional for forcing float division
+                    w, h = (float(v) for v in mobj.groups())
+                    if w > 0 and h > 0:
+                        ratio = w / h
+                        for f in formats:
+                            if f.get('vcodec') != 'none':
+                                f['stretched_ratio'] = ratio
+                        break
 
         thumbnails = []
         for container in (video_details, microformat):
@@ -2009,6 +2028,15 @@
             'description': 'md5:be97ee0f14ee314f1f002cf187166ee2',
         },
     }, {
+        # playlists, series
+        'url': 'https://www.youtube.com/c/3blue1brown/playlists?view=50&sort=dd&shelf_id=3',
+        'playlist_mincount': 5,
+        'info_dict': {
+            'id': 'UCYO_jab_esuFRV4b17AJtAw',
+            'title': '3Blue1Brown - Playlists',
+            'description': 'md5:e1384e8a133307dd10edee76e875d62f',
+        },
+    }, {
         # playlists, singlepage
         'url': 'https://www.youtube.com/user/ThirstForScience/playlists',
         'playlist_mincount': 4,
@@ -2275,6 +2303,9 @@
             'title': '#cctv9',
         },
         'playlist_mincount': 350,
+    }, {
+        'url': 'https://www.youtube.com/watch?list=PLW4dVinRY435CBE_JD3t-0SRXKfnZHS1P&feature=youtu.be&v=M9cJMXmQ_ZU',
+        'only_matching': True,
     }]
 
     @classmethod
@@ -2297,10 +2328,13 @@
 
     @staticmethod
     def _extract_grid_item_renderer(item):
-        for item_kind in ('Playlist', 'Video', 'Channel'):
-            renderer = item.get('grid%sRenderer' % item_kind)
-            if renderer:
-                return renderer
+        assert isinstance(item, dict)
+        for key, renderer in item.items():
+            if not key.startswith('grid') or not key.endswith('Renderer'):
+                continue
+            if not isinstance(renderer, dict):
+                continue
+            return renderer
 
     def _grid_entries(self, grid_renderer):
         for item in grid_renderer['items']:
@@ -2310,7 +2344,8 @@
             if not isinstance(renderer, dict):
                 continue
             title = try_get(
-                renderer, lambda x: x['title']['runs'][0]['text'], compat_str)
+                renderer, (lambda x: x['title']['runs'][0]['text'],
+                           lambda x: x['title']['simpleText']), compat_str)
             # playlist
             playlist_id = renderer.get('playlistId')
             if playlist_id:
@@ -2318,10 +2353,12 @@
                     'https://www.youtube.com/playlist?list=%s' % playlist_id,
                     ie=YoutubeTabIE.ie_key(), video_id=playlist_id,
                     video_title=title)
+                continue
             # video
             video_id = renderer.get('videoId')
             if video_id:
                 yield self._extract_video(renderer)
+                continue
             # channel
             channel_id = renderer.get('channelId')
             if channel_id:
@@ -2330,6 +2367,17 @@
                 yield self.url_result(
                     'https://www.youtube.com/channel/%s' % channel_id,
                     ie=YoutubeTabIE.ie_key(), video_title=title)
+                continue
+            # generic endpoint URL support
+            ep_url = urljoin('https://www.youtube.com/', try_get(
+                renderer, lambda x: x['navigationEndpoint']['commandMetadata']['webCommandMetadata']['url'],
+                compat_str))
+            if ep_url:
+                for ie in (YoutubeTabIE, YoutubePlaylistIE, YoutubeIE):
+                    if ie.suitable(ep_url):
+                        yield self.url_result(
+                            ep_url, ie=ie.ie_key(), video_id=ie._match_id(ep_url), video_title=title)
+                        break
 
     def _shelf_entries_from_content(self, shelf_renderer):
         content = shelf_renderer.get('content')
@@ -2475,7 +2523,7 @@
             ctp = continuation_ep.get('clickTrackingParams')
             return YoutubeTabIE._build_continuation_query(continuation, ctp)
 
-    def _entries(self, tab, identity_token):
+    def _entries(self, tab, item_id, webpage):
         tab_content = try_get(tab, lambda x: x['content'], dict)
         if not tab_content:
             return
@@ -2535,26 +2583,37 @@
                 yield entry
             continuation = self._extract_continuation(rich_grid_renderer)
 
+        ytcfg = self._extract_ytcfg(item_id, webpage)
+        client_version = try_get(
+            ytcfg, lambda x: x['INNERTUBE_CLIENT_VERSION'], compat_str) or '2.20210407.08.00'
+
         headers = {
             'x-youtube-client-name': '1',
-            'x-youtube-client-version': '2.20201112.04.01',
+            'x-youtube-client-version': client_version,
             'content-type': 'application/json',
         }
+
+        context = try_get(ytcfg, lambda x: x['INNERTUBE_CONTEXT'], dict) or {
+            'client': {
+                'clientName': 'WEB',
+                'clientVersion': client_version,
+            }
+        }
+        visitor_data = try_get(context, lambda x: x['client']['visitorData'], compat_str)
+
+        identity_token = self._extract_identity_token(ytcfg, webpage)
         if identity_token:
             headers['x-youtube-identity-token'] = identity_token
 
         data = {
-            'context': {
-                'client': {
-                    'clientName': 'WEB',
-                    'clientVersion': '2.20201021.03.00',
-                }
-            },
+            'context': context,
         }
 
         for page_num in itertools.count(1):
             if not continuation:
                 break
+            if visitor_data:
+                headers['x-goog-visitor-id'] = visitor_data
             data['continuation'] = continuation['continuation']
             data['clickTracking'] = {
                 'clickTrackingParams': continuation['itct']
@@ -2579,6 +2638,9 @@
             if not response:
                 break
 
+            visitor_data = try_get(
+                response, lambda x: x['responseContext']['visitorData'], compat_str) or visitor_data
+
             continuation_contents = try_get(
                 response, lambda x: x['continuationContents'], dict)
             if continuation_contents:
@@ -2687,7 +2749,7 @@
                 alerts.append(text)
         return '\n'.join(alerts)
 
-    def _extract_from_tabs(self, item_id, webpage, data, tabs, identity_token):
+    def _extract_from_tabs(self, item_id, webpage, data, tabs):
         selected_tab = self._extract_selected_tab(tabs)
         renderer = try_get(
             data, lambda x: x['metadata']['channelMetadataRenderer'], dict)
@@ -2712,7 +2774,7 @@
                 if renderer:
                     title = try_get(renderer, lambda x: x['hashtag']['simpleText'])
         playlist = self.playlist_result(
-            self._entries(selected_tab, identity_token),
+            self._entries(selected_tab, item_id, webpage),
             playlist_id=playlist_id, playlist_title=title,
             playlist_description=description)
         playlist.update(self._extract_uploader(data))
@@ -2736,8 +2798,7 @@
             self._playlist_entries(playlist), playlist_id=playlist_id,
             playlist_title=title)
 
-    def _extract_identity_token(self, webpage, item_id):
-        ytcfg = self._extract_ytcfg(item_id, webpage)
+    def _extract_identity_token(self, ytcfg, webpage):
         if ytcfg:
             token = try_get(ytcfg, lambda x: x['ID_TOKEN'], compat_str)
             if token:
@@ -2751,7 +2812,7 @@
         url = compat_urlparse.urlunparse(
             compat_urlparse.urlparse(url)._replace(netloc='www.youtube.com'))
         # Handle both video/playlist URLs
-        qs = compat_urlparse.parse_qs(compat_urlparse.urlparse(url).query)
+        qs = parse_qs(url)
         video_id = qs.get('v', [None])[0]
         playlist_id = qs.get('list', [None])[0]
         if video_id and playlist_id:
@@ -2760,12 +2821,11 @@
                 return self.url_result(video_id, ie=YoutubeIE.ie_key(), video_id=video_id)
             self.to_screen('Downloading playlist %s - add --no-playlist to just download video %s' % (playlist_id, video_id))
         webpage = self._download_webpage(url, item_id)
-        identity_token = self._extract_identity_token(webpage, item_id)
         data = self._extract_yt_initial_data(item_id, webpage)
         tabs = try_get(
             data, lambda x: x['contents']['twoColumnBrowseResultsRenderer']['tabs'], list)
         if tabs:
-            return self._extract_from_tabs(item_id, webpage, data, tabs, identity_token)
+            return self._extract_from_tabs(item_id, webpage, data, tabs)
         playlist = try_get(
             data, lambda x: x['contents']['twoColumnWatchNextResults']['playlist']['playlist'], dict)
         if playlist:
@@ -2848,12 +2908,16 @@
 
     @classmethod
     def suitable(cls, url):
-        return False if YoutubeTabIE.suitable(url) else super(
-            YoutubePlaylistIE, cls).suitable(url)
+        if YoutubeTabIE.suitable(url):
+            return False
+        qs = parse_qs(url)
+        if qs.get('v', [None])[0]:
+            return False
+        return super(YoutubePlaylistIE, cls).suitable(url)
 
     def _real_extract(self, url):
         playlist_id = self._match_id(url)
-        qs = compat_urlparse.parse_qs(compat_urlparse.urlparse(url).query)
+        qs = parse_qs(url)
         if not qs:
             qs = {'list': playlist_id}
         return self.url_result(
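The youtube.py changes above hinge on the new module-level `parse_qs` helper: `YoutubeIE.suitable()` now refuses URLs carrying a `list` query parameter, and `YoutubePlaylistIE.suitable()` refuses those carrying `v`, so combined watch+list URLs land in the tab extractor. A minimal sketch of that routing decision; the stdlib `urllib.parse` replaces `compat_urlparse`, the `route()` function is purely illustrative, and the IDs are fabricated:

```python
from urllib.parse import urlparse, parse_qs as _std_parse_qs

def parse_qs(url):
    # Same shape as the helper the diff adds (the real code goes
    # through compat_urlparse so it also runs on Python 2).
    return _std_parse_qs(urlparse(url).query)

def route(url):
    qs = parse_qs(url)
    video_id = qs.get('v', [None])[0]
    playlist_id = qs.get('list', [None])[0]
    if playlist_id:
        # YoutubeIE.suitable() returns False whenever 'list' is set,
        # so combined video+playlist URLs go to the tab extractor.
        return 'YoutubeTabIE'
    if video_id:
        return 'YoutubeIE'
    return 'unknown'

print(route('https://www.youtube.com/watch?v=aaaaaaaaaaa&list=PLtest123'))  # YoutubeTabIE
print(route('https://www.youtube.com/watch?v=aaaaaaaaaaa'))                 # YoutubeIE
```

This is also why the `(?!.*?\blist=...)` lookahead could be dropped from the video regex: the exclusion now lives in `suitable()` instead of the URL pattern.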
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/utils.py new/youtube-dl/youtube_dl/utils.py
--- old/youtube-dl/youtube_dl/utils.py  2021-03-31 23:51:32.000000000 +0200
+++ new/youtube-dl/youtube_dl/utils.py  2021-04-16 22:49:44.000000000 +0200
@@ -39,6 +39,7 @@
 from .compat import (
     compat_HTMLParseError,
     compat_HTMLParser,
+    compat_HTTPError,
     compat_basestring,
     compat_chr,
     compat_cookiejar,
@@ -2879,12 +2880,60 @@
 
 
 class YoutubeDLRedirectHandler(compat_urllib_request.HTTPRedirectHandler):
-    if sys.version_info[0] < 3:
-        def redirect_request(self, req, fp, code, msg, headers, newurl):
-            # On python 2 urlh.geturl() may sometimes return redirect URL
-            # as byte string instead of unicode. This workaround allows
-            # to force it always return unicode.
-            return compat_urllib_request.HTTPRedirectHandler.redirect_request(self, req, fp, code, msg, headers, compat_str(newurl))
+    """YoutubeDL redirect handler
+
+    The code is based on HTTPRedirectHandler implementation from CPython [1].
+
+    This redirect handler solves two issues:
+     - ensures redirect URL is always unicode under python 2
+     - introduces support for experimental HTTP response status code
+       308 Permanent Redirect [2] used by some sites [3]
+
+    1. https://github.com/python/cpython/blob/master/Lib/urllib/request.py
+    2. https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/308
+    3. https://github.com/ytdl-org/youtube-dl/issues/28768
+    """
+
+    http_error_301 = http_error_303 = http_error_307 = http_error_308 = compat_urllib_request.HTTPRedirectHandler.http_error_302
+
+    def redirect_request(self, req, fp, code, msg, headers, newurl):
+        """Return a Request or None in response to a redirect.
+
+        This is called by the http_error_30x methods when a
+        redirection response is received.  If a redirection should
+        take place, return a new Request to allow http_error_30x to
+        perform the redirect.  Otherwise, raise HTTPError if no-one
+        else should try to handle this url.  Return None if you can't
+        but another Handler might.
+        """
+        m = req.get_method()
+        if (not (code in (301, 302, 303, 307, 308) and m in ("GET", "HEAD")
+                 or code in (301, 302, 303) and m == "POST")):
+            raise compat_HTTPError(req.full_url, code, msg, headers, fp)
+        # Strictly (according to RFC 2616), 301 or 302 in response to
+        # a POST MUST NOT cause a redirection without confirmation
+        # from the user (of urllib.request, in this case).  In practice,
+        # essentially all clients do redirect in this case, so we do
+        # the same.
+
+        # On python 2 urlh.geturl() may sometimes return redirect URL
+        # as byte string instead of unicode. This workaround allows
+        # to force it always return unicode.
+        if sys.version_info[0] < 3:
+            newurl = compat_str(newurl)
+
+        # Be conciliant with URIs containing a space.  This is mainly
+        # redundant with the more complete encoding done in http_error_302(),
+        # but it is kept for compatibility with other callers.
+        newurl = newurl.replace(' ', '%20')
+
+        CONTENT_HEADERS = ("content-length", "content-type")
+        # NB: don't use dict comprehension for python 2.6 compatibility
+        newheaders = dict((k, v) for k, v in req.headers.items()
+                          if k.lower() not in CONTENT_HEADERS)
+        return compat_urllib_request.Request(
+            newurl, headers=newheaders, origin_req_host=req.origin_req_host,
+            unverifiable=True)
 
 
 def extract_timezone(date_str):
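The rewritten `YoutubeDLRedirectHandler` above aliases `http_error_308` to the stock `http_error_302` and gates `redirect_request()` on method and status code. The decision table alone can be reproduced as follows (a sketch only; the real handler additionally rebuilds the Request, strips content headers, and keeps the Python 2 unicode workaround):

```python
# Sketch of the redirect gate in the new redirect_request():
# GET/HEAD may follow 301/302/303/307/308, POST only 301/302/303;
# any other combination makes the handler raise HTTPError.

def may_redirect(code, method):
    return bool(code in (301, 302, 303, 307, 308) and method in ('GET', 'HEAD')
                or code in (301, 302, 303) and method == 'POST')

print(may_redirect(308, 'GET'))   # True: the newly supported 308 Permanent Redirect
print(may_redirect(308, 'POST'))  # False: 308 POSTs are not silently re-sent
```

Defining `http_error_308` is what makes urllib invoke this gate at all for 308 responses; without it, a 308 surfaces as a plain HTTPError, which is the breakage reported in issue 28768.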
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/version.py new/youtube-dl/youtube_dl/version.py
--- old/youtube-dl/youtube_dl/version.py        2021-04-06 22:42:21.000000000 +0200
+++ new/youtube-dl/youtube_dl/version.py        2021-04-16 22:50:06.000000000 +0200
@@ -1,3 +1,3 @@
 from __future__ import unicode_literals
 
-__version__ = '2021.04.07'
+__version__ = '2021.04.17'
