Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package youtube-dl for openSUSE:Factory 
checked in at 2021-04-08 22:13:04
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/youtube-dl (Old)
 and      /work/SRC/openSUSE:Factory/.youtube-dl.new.2401 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "youtube-dl"

Thu Apr  8 22:13:04 2021 rev:164 rq:883461 version:2021.04.07

Changes:
--------
--- /work/SRC/openSUSE:Factory/youtube-dl/python-youtube-dl.changes     2021-04-01 14:19:27.236167290 +0200
+++ /work/SRC/openSUSE:Factory/.youtube-dl.new.2401/python-youtube-dl.changes   2021-04-08 22:13:08.745537140 +0200
@@ -1,0 +2,6 @@
+Tue Apr  6 23:01:41 UTC 2021 - Jan Engelhardt <jeng...@inai.de>
+
+- Update to release 2021.04.07
+  * youtube: Add support for hashtag videos extraction
+
+-------------------------------------------------------------------
youtube-dl.changes: same change

Old:
----
  youtube-dl-2021.04.01.tar.gz
  youtube-dl-2021.04.01.tar.gz.sig

New:
----
  youtube-dl-2021.04.07.tar.gz
  youtube-dl-2021.04.07.tar.gz.sig

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.0gMvJv/_old  2021-04-08 22:13:10.161538533 +0200
+++ /var/tmp/diff_new_pack.0gMvJv/_new  2021-04-08 22:13:10.165538537 +0200
@@ -19,7 +19,7 @@
 %define modname youtube-dl
 %{?!python_module:%define python_module() python-%{**} python3-%{**}}
 Name:           python-youtube-dl
-Version:        2021.04.01
+Version:        2021.04.07
 Release:        0
 Summary:        A Python module for downloading from video sites for offline watching
 License:        CC-BY-SA-3.0 AND SUSE-Public-Domain

++++++ youtube-dl.spec ++++++
--- /var/tmp/diff_new_pack.0gMvJv/_old  2021-04-08 22:13:10.193538565 +0200
+++ /var/tmp/diff_new_pack.0gMvJv/_new  2021-04-08 22:13:10.197538568 +0200
@@ -17,7 +17,7 @@
 
 
 Name:           youtube-dl
-Version:        2021.04.01
+Version:        2021.04.07
 Release:        0
 Summary:        A tool for downloading from video sites for offline watching
 License:        CC-BY-SA-3.0 AND SUSE-Public-Domain

++++++ youtube-dl-2021.04.01.tar.gz -> youtube-dl-2021.04.07.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/ChangeLog new/youtube-dl/ChangeLog
--- old/youtube-dl/ChangeLog    2021-03-31 23:47:08.000000000 +0200
+++ new/youtube-dl/ChangeLog    2021-04-06 22:42:21.000000000 +0200
@@ -1,3 +1,25 @@
+version 2021.04.07
+
+Core
+* [extractor/common] Use compat_cookies_SimpleCookie for _get_cookies
++ [compat] Introduce compat_cookies_SimpleCookie
+* [extractor/common] Improve JSON-LD author extraction
+* [extractor/common] Fix _get_cookies on python 2 (#20673, #23256, #20326,
+  #28640)
+
+Extractors
+* [youtube] Fix extraction of videos with restricted location (#28685)
++ [line] Add support for live.line.me (#17205, #28658)
+* [vimeo] Improve extraction (#28591)
+* [youku] Update ccode (#17852, #28447, #28460, #28648)
+* [youtube] Prefer direct entry metadata over entry metadata from playlist
+  (#28619, #28636)
+* [screencastomatic] Fix extraction (#11976, #24489)
++ [palcomp3] Add support for palcomp3.com (#13120)
++ [arnes] Add support for video.arnes.si (#28483)
++ [youtube:tab] Add support for hashtags (#28308)
+
+
 version 2021.04.01
 
 Extractors
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/docs/supportedsites.md new/youtube-dl/docs/supportedsites.md
--- old/youtube-dl/docs/supportedsites.md       2021-03-31 23:47:11.000000000 +0200
+++ new/youtube-dl/docs/supportedsites.md       2021-04-06 22:42:24.000000000 +0200
@@ -463,6 +463,8 @@
  - **limelight**
  - **limelight:channel**
  - **limelight:channel_list**
+ - **LineLive**
+ - **LineLiveChannel**
  - **LineTV**
  - **linkedin:learning**
  - **linkedin:learning:course**
@@ -679,6 +681,9 @@
  - **OutsideTV**
  - **PacktPub**
  - **PacktPubCourse**
+ - **PalcoMP3:artist**
+ - **PalcoMP3:song**
+ - **PalcoMP3:video**
 - **pandora.tv**: 판도라TV
  - **ParamountNetwork**
  - **parliamentlive.tv**: UK parliament videos
@@ -1059,6 +1064,7 @@
  - **Vidbit**
  - **Viddler**
  - **Videa**
+ - **video.arnes.si**: Arnes Video
  - **video.google:search**: Google Video search
  - **video.sky.it**
  - **video.sky.it:live**
Binary files old/youtube-dl/youtube-dl and new/youtube-dl/youtube-dl differ
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/compat.py new/youtube-dl/youtube_dl/compat.py
--- old/youtube-dl/youtube_dl/compat.py 2021-03-30 22:01:34.000000000 +0200
+++ new/youtube-dl/youtube_dl/compat.py 2021-03-31 23:51:37.000000000 +0200
@@ -73,6 +73,15 @@
 except ImportError:  # Python 2
     import Cookie as compat_cookies
 
+if sys.version_info[0] == 2:
+    class compat_cookies_SimpleCookie(compat_cookies.SimpleCookie):
+        def load(self, rawdata):
+            if isinstance(rawdata, compat_str):
+                rawdata = str(rawdata)
+            return super(compat_cookies_SimpleCookie, self).load(rawdata)
+else:
+    compat_cookies_SimpleCookie = compat_cookies.SimpleCookie
+
 try:
     import html.entities as compat_html_entities
 except ImportError:  # Python 2
@@ -3000,6 +3009,7 @@
     'compat_cookiejar',
     'compat_cookiejar_Cookie',
     'compat_cookies',
+    'compat_cookies_SimpleCookie',
     'compat_ctypes_WINFUNCTYPE',
     'compat_etree_Element',
     'compat_etree_fromstring',
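The compat.py hunk above exists because `Cookie.SimpleCookie.load()` on Python 2 rejects unicode input, so the new subclass coerces it to `str` before delegating; on Python 3 the alias is simply the stdlib class. A minimal sketch (not part of the patch) of the Python 3 behavior the shim normalizes to:

```python
# Sketch: on Python 3, http.cookies.SimpleCookie is what
# compat_cookies_SimpleCookie resolves to, and load() accepts str
# directly -- the py2 subclass only exists to coerce unicode to str.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie.load('sessionid=abc123; csrftoken=xyz')
print(cookie['sessionid'].value)  # -> abc123
```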
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/arnes.py new/youtube-dl/youtube_dl/extractor/arnes.py
--- old/youtube-dl/youtube_dl/extractor/arnes.py        1970-01-01 01:00:00.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/arnes.py        2021-03-31 23:51:37.000000000 +0200
@@ -0,0 +1,101 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+from .common import InfoExtractor
+from ..compat import (
+    compat_parse_qs,
+    compat_urllib_parse_urlparse,
+)
+from ..utils import (
+    float_or_none,
+    int_or_none,
+    parse_iso8601,
+    remove_start,
+)
+
+
+class ArnesIE(InfoExtractor):
+    IE_NAME = 'video.arnes.si'
+    IE_DESC = 'Arnes Video'
+    _VALID_URL = r'https?://video\.arnes\.si/(?:[a-z]{2}/)?(?:watch|embed|api/(?:asset|public/video))/(?P<id>[0-9a-zA-Z]{12})'
+    _TESTS = [{
+        'url': 'https://video.arnes.si/watch/a1qrWTOQfVoU?t=10',
+        'md5': '4d0f4d0a03571b33e1efac25fd4a065d',
+        'info_dict': {
+            'id': 'a1qrWTOQfVoU',
+            'ext': 'mp4',
+            'title': 'Linearna neodvisnost, definicija',
+            'description': 'Linearna neodvisnost, definicija',
+            'license': 'PRIVATE',
+            'creator': 'Polona Oblak',
+            'timestamp': 1585063725,
+            'upload_date': '20200324',
+            'channel': 'Polona Oblak',
+            'channel_id': 'q6pc04hw24cj',
+            'channel_url': 'https://video.arnes.si/?channel=q6pc04hw24cj',
+            'duration': 596.75,
+            'view_count': int,
+            'tags': ['linearna_algebra'],
+            'start_time': 10,
+        }
+    }, {
+        'url': 'https://video.arnes.si/api/asset/s1YjnV7hadlC/play.mp4',
+        'only_matching': True,
+    }, {
+        'url': 'https://video.arnes.si/embed/s1YjnV7hadlC',
+        'only_matching': True,
+    }, {
+        'url': 'https://video.arnes.si/en/watch/s1YjnV7hadlC',
+        'only_matching': True,
+    }, {
+        'url': 'https://video.arnes.si/embed/s1YjnV7hadlC?t=123&hideRelated=1',
+        'only_matching': True,
+    }, {
+        'url': 'https://video.arnes.si/api/public/video/s1YjnV7hadlC',
+        'only_matching': True,
+    }]
+    _BASE_URL = 'https://video.arnes.si'
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+
+        video = self._download_json(
+            self._BASE_URL + '/api/public/video/' + video_id, video_id)['data']
+        title = video['title']
+
+        formats = []
+        for media in (video.get('media') or []):
+            media_url = media.get('url')
+            if not media_url:
+                continue
+            formats.append({
+                'url': self._BASE_URL + media_url,
+                'format_id': remove_start(media.get('format'), 'FORMAT_'),
+                'format_note': media.get('formatTranslation'),
+                'width': int_or_none(media.get('width')),
+                'height': int_or_none(media.get('height')),
+            })
+        self._sort_formats(formats)
+
+        channel = video.get('channel') or {}
+        channel_id = channel.get('url')
+        thumbnail = video.get('thumbnailUrl')
+
+        return {
+            'id': video_id,
+            'title': title,
+            'formats': formats,
+            'thumbnail': self._BASE_URL + thumbnail,
+            'description': video.get('description'),
+            'license': video.get('license'),
+            'creator': video.get('author'),
+            'timestamp': parse_iso8601(video.get('creationTime')),
+            'channel': channel.get('name'),
+            'channel_id': channel_id,
+            'channel_url': self._BASE_URL + '/?channel=' + channel_id if channel_id else None,
+            'duration': float_or_none(video.get('duration'), 1000),
+            'view_count': int_or_none(video.get('views')),
+            'tags': video.get('hashtags'),
+            'start_time': int_or_none(compat_parse_qs(
+                compat_urllib_parse_urlparse(url).query).get('t', [None])[0]),
+        }
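The `start_time` field in the new ArnesIE is read from the optional `?t=` query parameter of the watch URL. A self-contained sketch of that lookup, using the stdlib equivalents of `compat_parse_qs`/`compat_urllib_parse_urlparse` (the `start_time` helper name is illustrative, not from the patch):

```python
# Sketch of ArnesIE's start_time extraction: read the optional ?t=
# parameter from the URL's query string; a missing or non-numeric
# value yields None, mirroring int_or_none's behavior.
from urllib.parse import parse_qs, urlparse

def start_time(url):
    t = parse_qs(urlparse(url).query).get('t', [None])[0]
    return int(t) if t is not None and t.isdigit() else None

print(start_time('https://video.arnes.si/watch/a1qrWTOQfVoU?t=10'))  # -> 10
```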
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/common.py new/youtube-dl/youtube_dl/extractor/common.py
--- old/youtube-dl/youtube_dl/extractor/common.py       2021-03-30 22:01:34.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/common.py       2021-03-31 23:51:37.000000000 +0200
@@ -17,7 +17,7 @@
 
 from ..compat import (
     compat_cookiejar_Cookie,
-    compat_cookies,
+    compat_cookies_SimpleCookie,
     compat_etree_Element,
     compat_etree_fromstring,
     compat_getpass,
@@ -1275,6 +1275,7 @@
 
         def extract_video_object(e):
             assert e['@type'] == 'VideoObject'
+            author = e.get('author')
             info.update({
                 'url': url_or_none(e.get('contentUrl')),
                 'title': unescapeHTML(e.get('name')),
@@ -1282,7 +1283,11 @@
                 'thumbnail': url_or_none(e.get('thumbnailUrl') or e.get('thumbnailURL')),
                 'duration': parse_duration(e.get('duration')),
                 'timestamp': unified_timestamp(e.get('uploadDate')),
-                'uploader': str_or_none(e.get('author')),
+                # author can be an instance of 'Organization' or 'Person' types.
+                # both types can have 'name' property(inherited from 'Thing' type). [1]
+                # however some websites are using 'Text' type instead.
+                # 1. https://schema.org/VideoObject
+                'uploader': author.get('name') if isinstance(author, dict) else author if isinstance(author, compat_str) else None,
                 'filesize': float_or_none(e.get('contentSize')),
                 'tbr': int_or_none(e.get('bitrate')),
                 'width': int_or_none(e.get('width')),
@@ -2896,10 +2901,10 @@
         self._downloader.cookiejar.set_cookie(cookie)
 
     def _get_cookies(self, url):
-        """ Return a compat_cookies.SimpleCookie with the cookies for the url 
"""
+        """ Return a compat_cookies_SimpleCookie with the cookies for the url 
"""
         req = sanitized_Request(url)
         self._downloader.cookiejar.add_cookie_header(req)
-        return compat_cookies.SimpleCookie(req.get_header('Cookie'))
+        return compat_cookies_SimpleCookie(req.get_header('Cookie'))
 
     def _apply_first_set_cookie_header(self, url_handle, cookie):
         """
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/extractors.py new/youtube-dl/youtube_dl/extractor/extractors.py
--- old/youtube-dl/youtube_dl/extractor/extractors.py   2021-03-30 22:01:34.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/extractors.py   2021-03-31 23:51:37.000000000 +0200
@@ -72,6 +72,7 @@
     ArteTVEmbedIE,
     ArteTVPlaylistIE,
 )
+from .arnes import ArnesIE
 from .asiancrush import (
     AsianCrushIE,
     AsianCrushPlaylistIE,
@@ -594,7 +595,11 @@
     LimelightChannelIE,
     LimelightChannelListIE,
 )
-from .line import LineTVIE
+from .line import (
+    LineTVIE,
+    LineLiveIE,
+    LineLiveChannelIE,
+)
 from .linkedin import (
     LinkedInLearningIE,
     LinkedInLearningCourseIE,
@@ -878,6 +883,11 @@
     PacktPubIE,
     PacktPubCourseIE,
 )
+from .palcomp3 import (
+    PalcoMP3IE,
+    PalcoMP3ArtistIE,
+    PalcoMP3VideoIE,
+)
 from .pandoratv import PandoraTVIE
 from .parliamentliveuk import ParliamentLiveUKIE
 from .patreon import PatreonIE
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/line.py new/youtube-dl/youtube_dl/extractor/line.py
--- old/youtube-dl/youtube_dl/extractor/line.py 2021-03-30 22:01:34.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/line.py 2021-03-31 23:51:37.000000000 +0200
@@ -4,7 +4,13 @@
 import re
 
 from .common import InfoExtractor
-from ..utils import js_to_json
+from ..compat import compat_str
+from ..utils import (
+    ExtractorError,
+    int_or_none,
+    js_to_json,
+    str_or_none,
+)
 
 
 class LineTVIE(InfoExtractor):
@@ -88,3 +94,137 @@
                            for thumbnail in video_info.get('thumbnails', {}).get('list', [])],
             'view_count': video_info.get('meta', {}).get('count'),
         }
+
+
+class LineLiveBaseIE(InfoExtractor):
+    _API_BASE_URL = 'https://live-api.line-apps.com/web/v4.0/channel/'
+
+    def _parse_broadcast_item(self, item):
+        broadcast_id = compat_str(item['id'])
+        title = item['title']
+        is_live = item.get('isBroadcastingNow')
+
+        thumbnails = []
+        for thumbnail_id, thumbnail_url in (item.get('thumbnailURLs') or {}).items():
+            if not thumbnail_url:
+                continue
+            thumbnails.append({
+                'id': thumbnail_id,
+                'url': thumbnail_url,
+            })
+
+        channel = item.get('channel') or {}
+        channel_id = str_or_none(channel.get('id'))
+
+        return {
+            'id': broadcast_id,
+            'title': self._live_title(title) if is_live else title,
+            'thumbnails': thumbnails,
+            'timestamp': int_or_none(item.get('createdAt')),
+            'channel': channel.get('name'),
+            'channel_id': channel_id,
+            'channel_url': 'https://live.line.me/channels/' + channel_id if channel_id else None,
+            'duration': int_or_none(item.get('archiveDuration')),
+            'view_count': int_or_none(item.get('viewerCount')),
+            'comment_count': int_or_none(item.get('chatCount')),
+            'is_live': is_live,
+        }
+
+
+class LineLiveIE(LineLiveBaseIE):
+    _VALID_URL = r'https?://live\.line\.me/channels/(?P<channel_id>\d+)/broadcast/(?P<id>\d+)'
+    _TESTS = [{
+        'url': 'https://live.line.me/channels/4867368/broadcast/16331360',
+        'md5': 'bc931f26bf1d4f971e3b0982b3fab4a3',
+        'info_dict': {
+            'id': '16331360',
+            'title': '??????????????????????????????',
+            'ext': 'mp4',
+            'timestamp': 1617095132,
+            'upload_date': '20210330',
+            'channel': '???????????????',
+            'channel_id': '4867368',
+            'view_count': int,
+            'comment_count': int,
+            'is_live': False,
+        }
+    }, {
+        # archiveStatus == 'DELETED'
+        'url': 'https://live.line.me/channels/4778159/broadcast/16378488',
+        'only_matching': True,
+    }]
+
+    def _real_extract(self, url):
+        channel_id, broadcast_id = re.match(self._VALID_URL, url).groups()
+        broadcast = self._download_json(
+            self._API_BASE_URL + '%s/broadcast/%s' % (channel_id, broadcast_id),
+            broadcast_id)
+        item = broadcast['item']
+        info = self._parse_broadcast_item(item)
+        protocol = 'm3u8' if info['is_live'] else 'm3u8_native'
+        formats = []
+        for k, v in (broadcast.get(('live' if info['is_live'] else 'archived') + 'HLSURLs') or {}).items():
+            if not v:
+                continue
+            if k == 'abr':
+                formats.extend(self._extract_m3u8_formats(
+                    v, broadcast_id, 'mp4', protocol,
+                    m3u8_id='hls', fatal=False))
+                continue
+            f = {
+                'ext': 'mp4',
+                'format_id': 'hls-' + k,
+                'protocol': protocol,
+                'url': v,
+            }
+            if not k.isdigit():
+                f['vcodec'] = 'none'
+            formats.append(f)
+        if not formats:
+            archive_status = item.get('archiveStatus')
+            if archive_status != 'ARCHIVED':
+                raise ExtractorError('this video has been ' + archive_status.lower(), expected=True)
+        self._sort_formats(formats)
+        info['formats'] = formats
+        return info
+
+
+class LineLiveChannelIE(LineLiveBaseIE):
+    _VALID_URL = r'https?://live\.line\.me/channels/(?P<id>\d+)(?!/broadcast/\d+)(?:[/?&#]|$)'
+    _TEST = {
+        'url': 'https://live.line.me/channels/5893542',
+        'info_dict': {
+            'id': '5893542',
+            'title': '??????????????????',
+            'description': 'md5:c3a4af801f43b2fac0b02294976580be',
+        },
+        'playlist_mincount': 29
+    }
+
+    def _archived_broadcasts_entries(self, archived_broadcasts, channel_id):
+        while True:
+            for row in (archived_broadcasts.get('rows') or []):
+                share_url = str_or_none(row.get('shareURL'))
+                if not share_url:
+                    continue
+                info = self._parse_broadcast_item(row)
+                info.update({
+                    '_type': 'url',
+                    'url': share_url,
+                    'ie_key': LineLiveIE.ie_key(),
+                })
+                yield info
+            if not archived_broadcasts.get('hasNextPage'):
+                return
+            archived_broadcasts = self._download_json(
+                self._API_BASE_URL + channel_id + '/archived_broadcasts',
+                channel_id, query={
+                    'lastId': info['id'],
+                })
+
+    def _real_extract(self, url):
+        channel_id = self._match_id(url)
+        channel = self._download_json(self._API_BASE_URL + channel_id, channel_id)
+        return self.playlist_result(
+            self._archived_broadcasts_entries(channel.get('archivedBroadcasts') or {}, channel_id),
+            channel_id, channel.get('title'), channel.get('information'))
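LineLiveChannelIE pages through a channel's archive with a cursor: each response carries `rows` plus a `hasNextPage` flag, and the id of the last yielded row is sent back as `lastId`. A minimal sketch of that loop with a stand-in `fetch_page` in place of the JSON API call:

```python
# Sketch of the cursor pagination in _archived_broadcasts_entries:
# yield rows page by page, passing the last seen id as the cursor
# until the API reports no further pages.
def iter_rows(fetch_page):
    page = fetch_page(last_id=None)
    while True:
        last_id = None
        for row in page.get('rows') or []:
            last_id = row['id']
            yield row
        if not page.get('hasNextPage'):
            return
        page = fetch_page(last_id=last_id)

# Two fake pages keyed by cursor, standing in for API responses.
pages = {
    None: {'rows': [{'id': 1}, {'id': 2}], 'hasNextPage': True},
    2: {'rows': [{'id': 3}], 'hasNextPage': False},
}
print([r['id'] for r in iter_rows(lambda last_id: pages[last_id])])  # -> [1, 2, 3]
```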
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/palcomp3.py new/youtube-dl/youtube_dl/extractor/palcomp3.py
--- old/youtube-dl/youtube_dl/extractor/palcomp3.py     1970-01-01 01:00:00.000000000 +0100
+++ new/youtube-dl/youtube_dl/extractor/palcomp3.py     2021-03-31 23:51:37.000000000 +0200
@@ -0,0 +1,148 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import re
+
+from .common import InfoExtractor
+from ..compat import compat_str
+from ..utils import (
+    int_or_none,
+    str_or_none,
+    try_get,
+)
+
+
+class PalcoMP3BaseIE(InfoExtractor):
+    _GQL_QUERY_TMPL = '''{
+  artist(slug: "%s") {
+    %s
+  }
+}'''
+    _ARTIST_FIELDS_TMPL = '''music(slug: "%%s") {
+      %s
+    }'''
+    _MUSIC_FIELDS = '''duration
+      hls
+      mp3File
+      musicID
+      plays
+      title'''
+
+    def _call_api(self, artist_slug, artist_fields):
+        return self._download_json(
+            'https://www.palcomp3.com.br/graphql/', artist_slug, query={
+                'query': self._GQL_QUERY_TMPL % (artist_slug, artist_fields),
+            })['data']
+
+    def _parse_music(self, music):
+        music_id = compat_str(music['musicID'])
+        title = music['title']
+
+        formats = []
+        hls_url = music.get('hls')
+        if hls_url:
+            formats.append({
+                'url': hls_url,
+                'protocol': 'm3u8_native',
+                'ext': 'mp4',
+            })
+        mp3_file = music.get('mp3File')
+        if mp3_file:
+            formats.append({
+                'url': mp3_file,
+            })
+
+        return {
+            'id': music_id,
+            'title': title,
+            'formats': formats,
+            'duration': int_or_none(music.get('duration')),
+            'view_count': int_or_none(music.get('plays')),
+        }
+
+    def _real_initialize(self):
+        self._ARTIST_FIELDS_TMPL = self._ARTIST_FIELDS_TMPL % self._MUSIC_FIELDS
+
+    def _real_extract(self, url):
+        artist_slug, music_slug = re.match(self._VALID_URL, url).groups()
+        artist_fields = self._ARTIST_FIELDS_TMPL % music_slug
+        music = self._call_api(artist_slug, artist_fields)['artist']['music']
+        return self._parse_music(music)
+
+
+class PalcoMP3IE(PalcoMP3BaseIE):
+    IE_NAME = 'PalcoMP3:song'
+    _VALID_URL = r'https?://(?:www\.)?palcomp3\.com(?:\.br)?/(?P<artist>[^/]+)/(?P<id>[^/?&#]+)'
+    _TESTS = [{
+        'url': 'https://www.palcomp3.com/maiaraemaraisaoficial/nossas-composicoes-cuida-bem-dela/',
+        'md5': '99fd6405b2d8fd589670f6db1ba3b358',
+        'info_dict': {
+            'id': '3162927',
+            'ext': 'mp3',
+            'title': 'Nossas Composições - CUIDA BEM DELA',
+            'duration': 210,
+            'view_count': int,
+        }
+    }]
+
+    @classmethod
+    def suitable(cls, url):
+        return False if PalcoMP3VideoIE.suitable(url) else super(PalcoMP3IE, cls).suitable(url)
+
+
+class PalcoMP3ArtistIE(PalcoMP3BaseIE):
+    IE_NAME = 'PalcoMP3:artist'
+    _VALID_URL = r'https?://(?:www\.)?palcomp3\.com(?:\.br)?/(?P<id>[^/?&#]+)'
+    _TESTS = [{
+        'url': 'https://www.palcomp3.com.br/condedoforro/',
+        'info_dict': {
+            'id': '358396',
+            'title': 'Conde do Forró',
+        },
+        'playlist_mincount': 188,
+    }]
+    _ARTIST_FIELDS_TMPL = '''artistID
+    musics {
+      nodes {
+        %s
+      }
+    }
+    name'''
+
+    @classmethod
+    def suitable(cls, url):
+        return False if re.match(PalcoMP3IE._VALID_URL, url) else super(PalcoMP3ArtistIE, cls).suitable(url)
+
+    def _real_extract(self, url):
+        artist_slug = self._match_id(url)
+        artist = self._call_api(artist_slug, self._ARTIST_FIELDS_TMPL)['artist']
+
+        def entries():
+            for music in (try_get(artist, lambda x: x['musics']['nodes'], list) or []):
+                yield self._parse_music(music)
+
+        return self.playlist_result(
+            entries(), str_or_none(artist.get('artistID')), artist.get('name'))
+
+
+class PalcoMP3VideoIE(PalcoMP3BaseIE):
+    IE_NAME = 'PalcoMP3:video'
+    _VALID_URL = r'https?://(?:www\.)?palcomp3\.com(?:\.br)?/(?P<artist>[^/]+)/(?P<id>[^/?&#]+)/?#clipe'
+    _TESTS = [{
+        'url': 'https://www.palcomp3.com/maiaraemaraisaoficial/maiara-e-maraisa-voce-faz-falta-aqui-ao-vivo-em-vicosa-mg/#clipe',
+        'add_ie': ['Youtube'],
+        'info_dict': {
+            'id': '_pD1nR2qqPg',
+            'ext': 'mp4',
+            'title': 'Maiara e Maraisa - Você Faz Falta Aqui - DVD Ao Vivo Em Campo Grande',
+            'description': 'md5:7043342c09a224598e93546e98e49282',
+            'upload_date': '20161107',
+            'uploader_id': 'maiaramaraisaoficial',
+            'uploader': 'Maiara e Maraisa',
+        }
+    }]
+    _MUSIC_FIELDS = 'youtubeID'
+
+    def _parse_music(self, music):
+        youtube_id = music['youtubeID']
+        return self.url_result(youtube_id, 'Youtube', youtube_id)
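The PalcoMP3 extractors build their GraphQL query by splicing templates: `_real_initialize` fills the music fields into `_ARTIST_FIELDS_TMPL` (the escaped `%%s` survives the first substitution as a `%s` placeholder for the song slug), and `_call_api` fills the artist slug and fields into the outer query. A sketch of that composition with a minimal field set:

```python
# Sketch of PalcoMP3's GraphQL template composition. The double %% in
# the artist template escapes the song-slug placeholder so the first
# substitution leaves it intact for the second.
GQL_QUERY_TMPL = '''{
  artist(slug: "%s") {
    %s
  }
}'''
ARTIST_FIELDS_TMPL = '''music(slug: "%%s") {
      %s
    }'''
MUSIC_FIELDS = 'title'

artist_fields = ARTIST_FIELDS_TMPL % MUSIC_FIELDS  # fills fields, keeps %s
query = GQL_QUERY_TMPL % ('some-artist', artist_fields % 'some-song')
print('music(slug: "some-song")' in query)  # -> True
```

The slugs above ('some-artist', 'some-song') are made-up illustrations, not values from the patch.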
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/screencastomatic.py new/youtube-dl/youtube_dl/extractor/screencastomatic.py
--- old/youtube-dl/youtube_dl/extractor/screencastomatic.py     2021-03-30 22:01:34.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/screencastomatic.py     2021-03-31 23:51:37.000000000 +0200
@@ -2,12 +2,18 @@
 from __future__ import unicode_literals
 
 from .common import InfoExtractor
-from ..utils import js_to_json
+from ..utils import (
+    get_element_by_class,
+    int_or_none,
+    remove_start,
+    strip_or_none,
+    unified_strdate,
+)
 
 
 class ScreencastOMaticIE(InfoExtractor):
-    _VALID_URL = r'https?://screencast-o-matic\.com/watch/(?P<id>[0-9a-zA-Z]+)'
-    _TEST = {
+    _VALID_URL = r'https?://screencast-o-matic\.com/(?:(?:watch|player)/|embed\?.*?\bsc=)(?P<id>[0-9a-zA-Z]+)'
+    _TESTS = [{
         'url': 'http://screencast-o-matic.com/watch/c2lD3BeOPl',
         'md5': '483583cb80d92588f15ccbedd90f0c18',
         'info_dict': {
@@ -16,22 +22,30 @@
             'title': 'Welcome to 3-4 Philosophy @ DECV!',
             'thumbnail': r're:^https?://.*\.jpg$',
             'description': 'as the title says! also: some general info re 1) VCE philosophy and 2) distance learning.',
-            'duration': 369.163,
+            'duration': 369,
+            'upload_date': '20141216',
         }
-    }
+    }, {
+        'url': 'http://screencast-o-matic.com/player/c2lD3BeOPl',
+        'only_matching': True,
+    }, {
+        'url': 'http://screencast-o-matic.com/embed?ff=true&sc=cbV2r4Q5TL&fromPH=true&a=1',
+        'only_matching': True,
+    }]
 
     def _real_extract(self, url):
         video_id = self._match_id(url)
-        webpage = self._download_webpage(url, video_id)
-
-        jwplayer_data = self._parse_json(
-            self._search_regex(
-                r"(?s)jwplayer\('mp4Player'\).setup\((\{.*?\})\);", webpage, 
'setup code'),
-            video_id, transform_source=js_to_json)
-
-        info_dict = self._parse_jwplayer_data(jwplayer_data, video_id, require_title=False)
-        info_dict.update({
-            'title': self._og_search_title(webpage),
-            'description': self._og_search_description(webpage),
+        webpage = self._download_webpage(
+            'https://screencast-o-matic.com/player/' + video_id, video_id)
+        info = self._parse_html5_media_entries(url, webpage, video_id)[0]
+        info.update({
+            'id': video_id,
+            'title': get_element_by_class('overlayTitle', webpage),
+            'description': strip_or_none(get_element_by_class('overlayDescription', webpage)) or None,
+            'duration': int_or_none(self._search_regex(
+                r'player\.duration\s*=\s*function\(\)\s*{\s*return\s+(\d+);\s*};',
+                webpage, 'duration', default=None)),
+            'upload_date': unified_strdate(remove_start(
+                get_element_by_class('overlayPublished', webpage), 'Published: ')),
         })
-        return info_dict
+        return info
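The rewritten ScreencastOMaticIE no longer parses a jwplayer setup blob; it loads the `/player/` page, takes the HTML5 media entry, and scrapes the remaining metadata, including the duration from an inline JS function. A sketch of that duration regex against a made-up page snippet:

```python
# Sketch of the duration scrape in the new extractor: the player page
# defines the duration in inline JavaScript, captured with a regex.
import re

snippet = 'player.duration = function() { return 369; };'  # fabricated example
m = re.search(
    r'player\.duration\s*=\s*function\(\)\s*{\s*return\s+(\d+);\s*};',
    snippet)
print(int(m.group(1)))  # -> 369
```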
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/vimeo.py new/youtube-dl/youtube_dl/extractor/vimeo.py
--- old/youtube-dl/youtube_dl/extractor/vimeo.py        2021-03-30 22:01:39.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/vimeo.py        2021-03-31 23:51:37.000000000 +0200
@@ -3,7 +3,6 @@
 
 import base64
 import functools
-import json
 import re
 import itertools
 
@@ -17,15 +16,14 @@
 from ..utils import (
     clean_html,
     determine_ext,
-    dict_get,
     ExtractorError,
+    get_element_by_class,
     js_to_json,
     int_or_none,
     merge_dicts,
     OnDemandPagedList,
     parse_filesize,
     parse_iso8601,
-    RegexNotFoundError,
     sanitized_Request,
     smuggle_url,
     std_headers,
@@ -127,10 +125,11 @@
         video_title = video_data['title']
         live_event = video_data.get('live_event') or {}
         is_live = live_event.get('status') == 'started'
+        request = config.get('request') or {}
 
         formats = []
-        config_files = video_data.get('files') or config['request'].get('files', {})
-        for f in config_files.get('progressive', []):
+        config_files = video_data.get('files') or request.get('files') or {}
+        for f in (config_files.get('progressive') or []):
             video_url = f.get('url')
             if not video_url:
                 continue
@@ -146,7 +145,7 @@
         # TODO: fix handling of 308 status code returned for live archive manifest requests
         sep_pattern = r'/sep/video/'
         for files_type in ('hls', 'dash'):
-            for cdn_name, cdn_data in config_files.get(files_type, {}).get('cdns', {}).items():
+            for cdn_name, cdn_data in (try_get(config_files, lambda x: x[files_type]['cdns']) or {}).items():
                 manifest_url = cdn_data.get('url')
                 if not manifest_url:
                     continue
@@ -192,17 +191,15 @@
                 f['preference'] = -40
 
         subtitles = {}
-        text_tracks = config['request'].get('text_tracks')
-        if text_tracks:
-            for tt in text_tracks:
-                subtitles[tt['lang']] = [{
-                    'ext': 'vtt',
-                    'url': urljoin('https://vimeo.com', tt['url']),
-                }]
+        for tt in (request.get('text_tracks') or []):
+            subtitles[tt['lang']] = [{
+                'ext': 'vtt',
+                'url': urljoin('https://vimeo.com', tt['url']),
+            }]
 
         thumbnails = []
         if not is_live:
-            for key, thumb in video_data.get('thumbs', {}).items():
+            for key, thumb in (video_data.get('thumbs') or {}).items():
                 thumbnails.append({
                     'id': key,
                     'width': int_or_none(key),
@@ -322,6 +319,7 @@
                 'duration': 1595,
                 'upload_date': '20130610',
                 'timestamp': 1370893156,
+                'license': 'by',
             },
             'params': {
                 'format': 'best[protocol=https]',
@@ -400,6 +398,12 @@
                 'uploader_id': 'staff',
                 'uploader': 'Vimeo Staff',
                 'duration': 62,
+                'subtitles': {
+                    'de': [{'ext': 'vtt'}],
+                    'en': [{'ext': 'vtt'}],
+                    'es': [{'ext': 'vtt'}],
+                    'fr': [{'ext': 'vtt'}],
+                },
             }
         },
         {
@@ -572,6 +576,37 @@
     def _real_initialize(self):
         self._login()
 
+    def _extract_from_api(self, video_id, unlisted_hash=None):
+        token = self._download_json(
+            'https://vimeo.com/_rv/jwt', video_id, headers={
+                'X-Requested-With': 'XMLHttpRequest'
+            })['token']
+        api_url = 'https://api.vimeo.com/videos/' + video_id
+        if unlisted_hash:
+            api_url += ':' + unlisted_hash
+        video = self._download_json(
+            api_url, video_id, headers={
+                'Authorization': 'jwt ' + token,
+            }, query={
+                'fields': 'config_url,created_time,description,license,metadata.connections.comments.total,metadata.connections.likes.total,release_time,stats.plays',
+            })
+        info = self._parse_config(self._download_json(
+            video['config_url'], video_id), video_id)
+        self._vimeo_sort_formats(info['formats'])
+        get_timestamp = lambda x: parse_iso8601(video.get(x + '_time'))
+        info.update({
+            'description': video.get('description'),
+            'license': video.get('license'),
+            'release_timestamp': get_timestamp('release'),
+            'timestamp': get_timestamp('created'),
+            'view_count': int_or_none(try_get(video, lambda x: x['stats']['plays'])),
+        })
+        connections = try_get(
+            video, lambda x: x['metadata']['connections'], dict) or {}
+        for k in ('comment', 'like'):
+            info[k + '_count'] = int_or_none(try_get(connections, lambda x: x[k + 's']['total']))
+        return info
+
     def _real_extract(self, url):
         url, data = unsmuggle_url(url, {})
         headers = std_headers.copy()
@@ -580,48 +615,19 @@
         if 'Referer' not in headers:
             headers['Referer'] = url
 
-        # Extract ID from URL
-        video_id, unlisted_hash = re.match(self._VALID_URL, url).groups()
+        mobj = re.match(self._VALID_URL, url).groupdict()
+        video_id, unlisted_hash = mobj['id'], mobj.get('unlisted_hash')
         if unlisted_hash:
-            token = self._download_json(
-                'https://vimeo.com/_rv/jwt', video_id, headers={
-                    'X-Requested-With': 'XMLHttpRequest'
-                })['token']
-            video = self._download_json(
-                'https://api.vimeo.com/videos/%s:%s' % (video_id, unlisted_hash),
-                video_id, headers={
-                    'Authorization': 'jwt ' + token,
-                }, query={
-                    'fields': 'config_url,created_time,description,license,metadata.connections.comments.total,metadata.connections.likes.total,release_time,stats.plays',
-                })
-            info = self._parse_config(self._download_json(
-                video['config_url'], video_id), video_id)
-            self._vimeo_sort_formats(info['formats'])
-            get_timestamp = lambda x: parse_iso8601(video.get(x + '_time'))
-            info.update({
-                'description': video.get('description'),
-                'license': video.get('license'),
-                'release_timestamp': get_timestamp('release'),
-                'timestamp': get_timestamp('created'),
-                'view_count': int_or_none(try_get(video, lambda x: x['stats']['plays'])),
-            })
-            connections = try_get(
-                video, lambda x: x['metadata']['connections'], dict) or {}
-            for k in ('comment', 'like'):
-                info[k + '_count'] = int_or_none(try_get(connections, lambda x: x[k + 's']['total']))
-            return info
+            return self._extract_from_api(video_id, unlisted_hash)
 
         orig_url = url
         is_pro = 'vimeopro.com/' in url
-        is_player = '://player.vimeo.com/video/' in url
         if is_pro:
             # some videos require portfolio_id to be present in player url
             # https://github.com/ytdl-org/youtube-dl/issues/20070
             url = self._extract_url(url, self._download_webpage(url, video_id))
             if not url:
                 url = 'https://vimeo.com/' + video_id
-        elif is_player:
-            url = 'https://player.vimeo.com/video/' + video_id
         elif any(p in url for p in ('play_redirect_hls', 'moogaloop.swf')):
             url = 'https://vimeo.com/' + video_id
 
@@ -641,14 +647,25 @@
                         expected=True)
             raise
 
-        # Now we begin extracting as much information as we can from what we
-        # retrieved. First we extract the information common to all extractors,
-        # and latter we extract those that are Vimeo specific.
-        self.report_extraction(video_id)
+        if '://player.vimeo.com/video/' in url:
+            config = self._parse_json(self._search_regex(
+                r'\bconfig\s*=\s*({.+?})\s*;', webpage, 'info section'), video_id)
+            if config.get('view') == 4:
+                config = self._verify_player_video_password(
+                    redirect_url, video_id, headers)
+            info = self._parse_config(config, video_id)
+            self._vimeo_sort_formats(info['formats'])
+            return info
+
+        if re.search(r'<form[^>]+?id="pw_form"', webpage):
+            video_password = self._get_video_password()
+            token, vuid = self._extract_xsrft_and_vuid(webpage)
+            webpage = self._verify_video_password(
+                redirect_url, video_id, video_password, token, vuid)
 
         vimeo_config = self._extract_vimeo_config(webpage, video_id, default=None)
         if vimeo_config:
-            seed_status = vimeo_config.get('seed_status', {})
+            seed_status = vimeo_config.get('seed_status') or {}
             if seed_status.get('state') == 'failed':
                 raise ExtractorError(
                     '%s said: %s' % (self.IE_NAME, seed_status['title']),
@@ -657,70 +674,40 @@
         cc_license = None
         timestamp = None
         video_description = None
+        info_dict = {}
 
-        # Extract the config JSON
-        try:
-            try:
-                config_url = self._html_search_regex(
-                    r' data-config-url="(.+?)"', webpage,
-                    'config URL', default=None)
-                if not config_url:
-                    # Sometimes new react-based page is served instead of old one that require
-                    # different config URL extraction approach (see
-                    # https://github.com/ytdl-org/youtube-dl/pull/7209)
-                    page_config = self._parse_json(self._search_regex(
-                        r'vimeo\.(?:clip|vod_title)_page_config\s*=\s*({.+?});',
-                        webpage, 'page config'), video_id)
-                    config_url = page_config['player']['config_url']
-                    cc_license = page_config.get('cc_license')
-                    timestamp = try_get(
-                        page_config, lambda x: x['clip']['uploaded_on'],
-                        compat_str)
-                    video_description = clean_html(dict_get(
-                        page_config, ('description', 'description_html_escaped')))
-                config = self._download_json(config_url, video_id)
-            except RegexNotFoundError:
-                # For pro videos or player.vimeo.com urls
-                # We try to find out to which variable is assigned the config dic
-                m_variable_name = re.search(r'(\w)\.video\.id', webpage)
-                if m_variable_name is not None:
-                    config_re = [r'%s=({[^}].+?});' % re.escape(m_variable_name.group(1))]
-                else:
-                    config_re = [r' = {config:({.+?}),assets:', r'(?:[abc])=({.+?});']
-                config_re.append(r'\bvar\s+r\s*=\s*({.+?})\s*;')
-                config_re.append(r'\bconfig\s*=\s*({.+?})\s*;')
-                config = self._search_regex(config_re, webpage, 'info section',
-                                            flags=re.DOTALL)
-                config = json.loads(config)
-        except Exception as e:
-            if re.search('The creator of this video has not given you permission to embed it on this domain.', webpage):
-                raise ExtractorError('The author has restricted the access to this video, try with the "--referer" option')
-
-            if re.search(r'<form[^>]+?id="pw_form"', webpage) is not None:
-                if '_video_password_verified' in data:
-                    raise ExtractorError('video password verification failed!')
-                video_password = self._get_video_password()
-                token, vuid = self._extract_xsrft_and_vuid(webpage)
-                self._verify_video_password(
-                    redirect_url, video_id, video_password, token, vuid)
-                return self._real_extract(
-                    smuggle_url(redirect_url, {'_video_password_verified': 'verified'}))
-            else:
-                raise ExtractorError('Unable to extract info section',
-                                     cause=e)
+        channel_id = self._search_regex(
+            r'vimeo\.com/channels/([^/]+)', url, 'channel id', default=None)
+        if channel_id:
+            config_url = self._html_search_regex(
+                r'\bdata-config-url="([^"]+)"', webpage, 'config URL')
+            video_description = clean_html(get_element_by_class('description', webpage))
+            info_dict.update({
+                'channel_id': channel_id,
+                'channel_url': 'https://vimeo.com/channels/' + channel_id,
+            })
         else:
-            if config.get('view') == 4:
-                config = self._verify_player_video_password(redirect_url, video_id, headers)
-
+            page_config = self._parse_json(self._search_regex(
+                r'vimeo\.(?:clip|vod_title)_page_config\s*=\s*({.+?});',
+                webpage, 'page config', default='{}'), video_id, fatal=False)
+            if not page_config:
+                return self._extract_from_api(video_id)
+            config_url = page_config['player']['config_url']
+            cc_license = page_config.get('cc_license')
+            clip = page_config.get('clip') or {}
+            timestamp = clip.get('uploaded_on')
+            video_description = clean_html(
+                clip.get('description') or page_config.get('description_html_escaped'))
+        config = self._download_json(config_url, video_id)
         video = config.get('video') or {}
         vod = video.get('vod') or {}
 
         def is_rented():
             if '>You rented this title.<' in webpage:
                 return True
-            if config.get('user', {}).get('purchased'):
+            if try_get(config, lambda x: x['user']['purchased']):
                 return True
-            for purchase_option in vod.get('purchase_options', []):
+            for purchase_option in (vod.get('purchase_options') or []):
                 if purchase_option.get('purchased'):
                     return True
                 label = purchase_option.get('label_string')
@@ -735,14 +722,10 @@
                     'https://player.vimeo.com/player/%s' % feature_id,
                     {'force_feature_id': True}), 'Vimeo')
 
-        # Extract video description
-        if not video_description:
-            video_description = self._html_search_regex(
-                r'(?s)<div\s+class="[^"]*description[^"]*"[^>]*>(.*?)</div>',
-                webpage, 'description', default=None)
         if not video_description:
             video_description = self._html_search_meta(
-                'description', webpage, default=None)
+                ['description', 'og:description', 'twitter:description'],
+                webpage, default=None)
         if not video_description and is_pro:
             orig_webpage = self._download_webpage(
                 orig_url, video_id,
@@ -751,25 +734,14 @@
             if orig_webpage:
                 video_description = self._html_search_meta(
                     'description', orig_webpage, default=None)
-        if not video_description and not is_player:
+        if not video_description:
             self._downloader.report_warning('Cannot find video description')
 
-        # Extract upload date
         if not timestamp:
             timestamp = self._search_regex(
                 r'<time[^>]+datetime="([^"]+)"', webpage,
                 'timestamp', default=None)
 
-        try:
-            view_count = int(self._search_regex(r'UserPlays:(\d+)', webpage, 'view count'))
-            like_count = int(self._search_regex(r'UserLikes:(\d+)', webpage, 'like count'))
-            comment_count = int(self._search_regex(r'UserComments:(\d+)', webpage, 'comment count'))
-        except RegexNotFoundError:
-            # This info is only available in vimeo.com/{id} urls
-            view_count = None
-            like_count = None
-            comment_count = None
-
         formats = []
 
         source_format = self._extract_original_format(
@@ -788,31 +760,20 @@
                 r'<link[^>]+rel=["\']license["\'][^>]+href=(["\'])(?P<license>(?:(?!\1).)+)\1',
                 webpage, 'license', default=None, group='license')
 
-        channel_id = self._search_regex(
-            r'vimeo\.com/channels/([^/]+)', url, 'channel id', default=None)
-        channel_url = 'https://vimeo.com/channels/%s' % channel_id if channel_id else None
-
-        info_dict = {
+        info_dict.update({
             'formats': formats,
             'timestamp': unified_timestamp(timestamp),
             'description': video_description,
             'webpage_url': url,
-            'view_count': view_count,
-            'like_count': like_count,
-            'comment_count': comment_count,
             'license': cc_license,
-            'channel_id': channel_id,
-            'channel_url': channel_url,
-        }
-
-        info_dict = merge_dicts(info_dict, info_dict_config, json_ld)
+        })
 
-        return info_dict
+        return merge_dicts(info_dict, info_dict_config, json_ld)
 
 
 class VimeoOndemandIE(VimeoIE):
     IE_NAME = 'vimeo:ondemand'
-    _VALID_URL = r'https?://(?:www\.)?vimeo\.com/ondemand/([^/]+/)?(?P<id>[^/?#&]+)'
+    _VALID_URL = r'https?://(?:www\.)?vimeo\.com/ondemand/(?:[^/]+/)?(?P<id>[^/?#&]+)'
     _TESTS = [{
         # ondemand video not available via https://vimeo.com/id
         'url': 'https://vimeo.com/ondemand/20704',
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/youku.py new/youtube-dl/youtube_dl/extractor/youku.py
--- old/youtube-dl/youtube_dl/extractor/youku.py        2021-03-30 22:01:34.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/youku.py        2021-03-31 23:51:37.000000000 +0200
@@ -154,7 +154,7 @@
         # request basic data
         basic_data_params = {
             'vid': video_id,
-            'ccode': '0590',
+            'ccode': '0532',
             'client_ip': '192.168.1.1',
             'utid': cna,
             'client_ts': time.time() / 1000,
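Aside: several vimeo.py hunks above swap `x.get(key, {})` for `(x.get(key) or {})` (thumbs, seed_status, purchase_options). A minimal plain-Python sketch of why, using a hypothetical `video_data` dict (no youtube-dl imports needed):

```python
# `.get(key, default)` falls back only when the key is ABSENT;
# a key stored as None slips through, and `.items()` on None raises.
video_data = {'thumbs': None}

missing_only = video_data.get('thumbs', {})    # None, not {}
falsy_guard = video_data.get('thumbs') or {}   # {}

# Safe iteration, mirroring the patched thumbnail loop:
thumbnails = []
for key, thumb in (video_data.get('thumbs') or {}).items():
    thumbnails.append({'id': key})             # never reached here, no TypeError
```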
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/extractor/youtube.py new/youtube-dl/youtube_dl/extractor/youtube.py
--- old/youtube-dl/youtube_dl/extractor/youtube.py      2021-03-30 22:01:39.000000000 +0200
+++ new/youtube-dl/youtube_dl/extractor/youtube.py      2021-03-31 23:51:37.000000000 +0200
@@ -329,7 +329,7 @@
             (lambda x: x['ownerText']['runs'][0]['text'],
              lambda x: x['shortBylineText']['runs'][0]['text']), compat_str)
         return {
-            '_type': 'url_transparent',
+            '_type': 'url',
             'ie_key': YoutubeIE.ie_key(),
             'id': video_id,
             'url': video_id,
@@ -1084,6 +1084,23 @@
             'url': 'https://www.youtube.com/watch?v=nGC3D_FkCmg',
             'only_matching': True,
         },
+        {
+            # restricted location, https://github.com/ytdl-org/youtube-dl/issues/28685
+            'url': 'cBvYw8_A0vQ',
+            'info_dict': {
+                'id': 'cBvYw8_A0vQ',
+                'ext': 'mp4',
+                'title': '4K Ueno Okachimachi  Street  Scenes  ?????????????????????',
+                'description': 'md5:ea770e474b7cd6722b4c95b833c03630',
+                'upload_date': '20201120',
+                'uploader': 'Walk around Japan',
+                'uploader_id': 'UC3o_t8PzBmXf5S9b7GLx1Mw',
+                'uploader_url': r're:https?://(?:www\.)?youtube\.com/channel/UC3o_t8PzBmXf5S9b7GLx1Mw',
+            },
+            'params': {
+                'skip_download': True,
+            },
+        },
     ]
     _formats = {
         '5': {'ext': 'flv', 'width': 400, 'height': 240, 'acodec': 'mp3', 'abr': 64, 'vcodec': 'h263'},
@@ -1485,7 +1502,13 @@
         def get_text(x):
             if not x:
                 return
-            return x.get('simpleText') or ''.join([r['text'] for r in x['runs']])
+            text = x.get('simpleText')
+            if text and isinstance(text, compat_str):
+                return text
+            runs = x.get('runs')
+            if not isinstance(runs, list):
+                return
+            return ''.join([r['text'] for r in runs if isinstance(r.get('text'), compat_str)])
 
         search_meta = (
             lambda x: self._html_search_meta(x, webpage, default=None)) \
@@ -1959,7 +1982,7 @@
                             invidio\.us
                         )/
                         (?:
-                            (?:channel|c|user|feed)/|
+                            (?:channel|c|user|feed|hashtag)/|
                             (?:playlist|watch)\?.*?\blist=|
                             (?!(?:watch|embed|v|e)\b)
                         )
@@ -2245,6 +2268,13 @@
     }, {
         'url': 'https://www.youtube.com/TheYoungTurks/live',
         'only_matching': True,
+    }, {
+        'url': 'https://www.youtube.com/hashtag/cctv9',
+        'info_dict': {
+            'id': 'cctv9',
+            'title': '#cctv9',
+        },
+        'playlist_mincount': 350,
     }]
 
     @classmethod
@@ -2392,6 +2422,14 @@
             for entry in self._post_thread_entries(renderer):
                 yield entry
 
+    def _rich_grid_entries(self, contents):
+        for content in contents:
+            video_renderer = try_get(content, lambda x: x['richItemRenderer']['content']['videoRenderer'], dict)
+            if video_renderer:
+                entry = self._video_entry(video_renderer)
+                if entry:
+                    yield entry
+
     @staticmethod
     def _build_continuation_query(continuation, ctp=None):
         query = {
@@ -2442,55 +2480,60 @@
         if not tab_content:
             return
         slr_renderer = try_get(tab_content, lambda x: x['sectionListRenderer'], dict)
-        if not slr_renderer:
-            return
-        is_channels_tab = tab.get('title') == 'Channels'
-        continuation = None
-        slr_contents = try_get(slr_renderer, lambda x: x['contents'], list) or []
-        for slr_content in slr_contents:
-            if not isinstance(slr_content, dict):
-                continue
-            is_renderer = try_get(slr_content, lambda x: x['itemSectionRenderer'], dict)
-            if not is_renderer:
-                continue
-            isr_contents = try_get(is_renderer, lambda x: x['contents'], list) or []
-            for isr_content in isr_contents:
-                if not isinstance(isr_content, dict):
-                    continue
-                renderer = isr_content.get('playlistVideoListRenderer')
-                if renderer:
-                    for entry in self._playlist_entries(renderer):
-                        yield entry
-                    continuation = self._extract_continuation(renderer)
-                    continue
-                renderer = isr_content.get('gridRenderer')
-                if renderer:
-                    for entry in self._grid_entries(renderer):
-                        yield entry
-                    continuation = self._extract_continuation(renderer)
-                    continue
-                renderer = isr_content.get('shelfRenderer')
-                if renderer:
-                    for entry in self._shelf_entries(renderer, not is_channels_tab):
-                        yield entry
-                    continue
-                renderer = isr_content.get('backstagePostThreadRenderer')
-                if renderer:
-                    for entry in self._post_thread_entries(renderer):
-                        yield entry
-                    continuation = self._extract_continuation(renderer)
-                    continue
-                renderer = isr_content.get('videoRenderer')
-                if renderer:
-                    entry = self._video_entry(renderer)
-                    if entry:
-                        yield entry
+        if slr_renderer:
+            is_channels_tab = tab.get('title') == 'Channels'
+            continuation = None
+            slr_contents = try_get(slr_renderer, lambda x: x['contents'], list) or []
+            for slr_content in slr_contents:
+                if not isinstance(slr_content, dict):
+                    continue
+                is_renderer = try_get(slr_content, lambda x: x['itemSectionRenderer'], dict)
+                if not is_renderer:
+                    continue
+                isr_contents = try_get(is_renderer, lambda x: x['contents'], list) or []
+                for isr_content in isr_contents:
+                    if not isinstance(isr_content, dict):
+                        continue
+                    renderer = isr_content.get('playlistVideoListRenderer')
+                    if renderer:
+                        for entry in self._playlist_entries(renderer):
+                            yield entry
+                        continuation = self._extract_continuation(renderer)
+                        continue
+                    renderer = isr_content.get('gridRenderer')
+                    if renderer:
+                        for entry in self._grid_entries(renderer):
+                            yield entry
+                        continuation = self._extract_continuation(renderer)
+                        continue
+                    renderer = isr_content.get('shelfRenderer')
+                    if renderer:
+                        for entry in self._shelf_entries(renderer, not is_channels_tab):
+                            yield entry
+                        continue
+                    renderer = isr_content.get('backstagePostThreadRenderer')
+                    if renderer:
+                        for entry in self._post_thread_entries(renderer):
+                            yield entry
+                        continuation = self._extract_continuation(renderer)
+                        continue
+                    renderer = isr_content.get('videoRenderer')
+                    if renderer:
+                        entry = self._video_entry(renderer)
+                        if entry:
+                            yield entry
 
+                if not continuation:
+                    continuation = self._extract_continuation(is_renderer)
             if not continuation:
-                continuation = self._extract_continuation(is_renderer)
-
-        if not continuation:
-            continuation = self._extract_continuation(slr_renderer)
+                continuation = self._extract_continuation(slr_renderer)
+        else:
+            rich_grid_renderer = tab_content.get('richGridRenderer')
+            if not rich_grid_renderer:
+                return
+            for entry in self._rich_grid_entries(rich_grid_renderer.get('contents') or []):
+                yield entry
+            continuation = self._extract_continuation(rich_grid_renderer)
 
         headers = {
             'x-youtube-client-name': '1',
@@ -2586,6 +2629,12 @@
                         yield entry
                     continuation = self._extract_continuation(continuation_renderer)
                     continue
+                renderer = continuation_item.get('richItemRenderer')
+                if renderer:
+                    for entry in self._rich_grid_entries(continuation_items):
+                        yield entry
+                    continuation = self._extract_continuation({'contents': continuation_items})
+                    continue
 
             break
 
@@ -2642,7 +2691,8 @@
         selected_tab = self._extract_selected_tab(tabs)
         renderer = try_get(
             data, lambda x: x['metadata']['channelMetadataRenderer'], dict)
-        playlist_id = title = description = None
+        playlist_id = item_id
+        title = description = None
         if renderer:
             channel_title = renderer.get('title') or item_id
             tab_title = selected_tab.get('title')
@@ -2651,12 +2701,16 @@
                 title += ' - %s' % tab_title
             description = renderer.get('description')
             playlist_id = renderer.get('externalId')
-        renderer = try_get(
-            data, lambda x: x['metadata']['playlistMetadataRenderer'], dict)
-        if renderer:
-            title = renderer.get('title')
-            description = None
-            playlist_id = item_id
+        else:
+            renderer = try_get(
+                data, lambda x: x['metadata']['playlistMetadataRenderer'], dict)
+            if renderer:
+                title = renderer.get('title')
+            else:
+                renderer = try_get(
+                    data, lambda x: x['header']['hashtagHeaderRenderer'], dict)
+                if renderer:
+                    title = try_get(renderer, lambda x: x['hashtag']['simpleText'])
         playlist = self.playlist_result(
             self._entries(selected_tab, identity_token),
             playlist_id=playlist_id, playlist_title=title,
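Aside: the replacement code above leans heavily on `try_get(...)`. A standalone sketch of that helper's behavior, reimplemented here for illustration (the real one lives in youtube_dl/utils.py):

```python
def try_get(src, getter, expected_type=None):
    # Apply one getter (or a list of them) to a nested JSON-ish object,
    # swallowing the usual shape-mismatch errors instead of raising.
    if not isinstance(getter, (list, tuple)):
        getter = [getter]
    for get in getter:
        try:
            v = get(src)
        except (AttributeError, KeyError, TypeError, IndexError):
            pass
        else:
            # Optionally insist on a type, as in try_get(..., dict) above.
            if expected_type is None or isinstance(v, expected_type):
                return v

video = {'stats': {'plays': 7}, 'metadata': None}
plays = try_get(video, lambda x: x['stats']['plays'])                       # 7
connections = try_get(video, lambda x: x['metadata']['connections'], dict)  # None
```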
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/youtube-dl/youtube_dl/version.py new/youtube-dl/youtube_dl/version.py
--- old/youtube-dl/youtube_dl/version.py        2021-03-31 23:47:08.000000000 +0200
+++ new/youtube-dl/youtube_dl/version.py        2021-04-06 22:42:21.000000000 +0200
@@ -1,3 +1,3 @@
 from __future__ import unicode_literals
 
-__version__ = '2021.04.01'
+__version__ = '2021.04.07'
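Aside: the hardened `get_text` in the youtube.py hunk above guards against malformed `simpleText`/`runs` structures. A self-contained sketch (assuming `compat_str` is plain `str` on Python 3):

```python
compat_str = str  # youtube-dl's py2/py3 alias; plain str on Python 3

def get_text(x):
    # Prefer 'simpleText'; otherwise join the 'runs' list, skipping
    # entries whose 'text' is missing or not a string, instead of
    # raising KeyError/TypeError on unexpected JSON shapes.
    if not x:
        return None
    text = x.get('simpleText')
    if text and isinstance(text, compat_str):
        return text
    runs = x.get('runs')
    if not isinstance(runs, list):
        return None
    return ''.join([r['text'] for r in runs
                    if isinstance(r.get('text'), compat_str)])

title = get_text({'runs': [{'text': '#'}, {'text': 'cctv9'}]})  # '#cctv9'
```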
