On 03.12.2024 22:20, Andrew Randrianasulu wrote:


Tue, 3 Dec 2024, 23:59 Terje J. Hanssen <[email protected]>:


    From a previous thread:
    Re: [Cin] another set of test profiles

    On 18.10.2024 02:08, Andrew Randrianasulu wrote:

    Thu, 17 Oct 2024, 15:06 Terje J. Hanssen
    <[email protected]>:


        On 17.10.2024 13:51, Andrew Randrianasulu wrote:

        Thu, 17 Oct 2024, 13:40 Terje J. Hanssen
        <[email protected]>:


            On 14.10.2024 00:38, Andrew Randrianasulu wrote:

            Mon, 14 Oct 2024, 01:36 Phyllis Smith
            <[email protected]>:

                Andrew, so it seems prudent to check the
                av1_vaapi.mp4 render format into GIT (after it has
                been successfully tested, of course); but what about
                the QSV encoders?



            Wait for Terje's testing OR try to build oneVPL-cpu (it
            sort of circles back to a different branch of ffmpeg, so
            ffmpeg will think it uses QSV but in fact it will use
            another ffmpeg .... well, in theory! It does not work
            for me on 32-bit!)



            I wonder if HW-accelerated encoding support via VAAPI
            and QSV is to be embedded in future CinGG AppImages
            and/or packages, if possible?
            What about a list of supported dGPUs/iGPUs?


        The problem is that QSV/VAAPI basically search for a driver
        component, and this one might be in a different location on
        different distros; the interface between the two is also not
        set in stone.

        For the AppImage you can just unpack it and remove libva.so,
        so on startup CinGG will link against the system's libva.
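
        As a rough sketch (--appimage-extract is standard AppImage
        runtime behaviour, but the exact library file names inside
        the bundle are an assumption on my part):

        # unpack the AppImage into ./squashfs-root
        ./CinGG-20241031-x86_64.AppImage --appimage-extract
        # drop the bundled libva so the system copy is picked up
        rm squashfs-root/usr/lib/libva*.so*
        # repack with appimagetool
        appimagetool-x86_64.AppImage squashfs-root CinGG-system-libva.AppImage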

        QSV, as we learned, is another layer with its own runtime
        path for yet another set of driver components. So, while
        building libvpl itself is relatively easy, making sure it
        finds its drivers is not (at least for me).
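
        For what it is worth, the VAAPI side can usually be pointed
        at its driver explicitly; the two LIBVA variables below are
        the standard libva ones, while the oneVPL one is from memory,
        so treat this only as a sketch:

        # tell libva which driver to load and where it lives
        export LIBVA_DRIVERS_PATH=/usr/lib64/dri
        export LIBVA_DRIVER_NAME=iHD
        # (if I remember right) tell the oneVPL dispatcher where to
        # search for its runtime implementations
        export ONEVPL_SEARCH_PATH=/usr/lib64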

        Speaking about the GPU list, I think it will be fairly short;
        you, Phyllis and Andrea are probably the only ones who use it
        and report back. Stephan noticed some troubles and reverted
        back to software encoding. I can test nvdec/nvenc on a live
        CD, but this is not my everyday setup (the Nvidia proprietary
        drivers enforce a 64-bit system).

        But well, feel free to post a short summary of what works on
        your GPUs in CinGG as another thread; hopefully others will
        chime in!

        If a packaged CinGG test build becomes available (rpm/Leap for
        me), it would be more useful to do this test. I then have
        three generations of Intel hardware available: legacy
        Skylake/Kaby Lake iGPUs and a current DG2/Arc GPU. I also
        have/had an Nvidia GPU on the Skylake machine, but it looks
        like it has passed away.


    I think you can build the rpm yourself, but for this we need to
    update the spec file so it points at the new source, and add
    oneVPL as a requirement.

    In the meantime you can just make your own AppImage from the
    freshly built cingg-with-system-ffmpeg, so it hopefully will not
    be lost after a few system updates.



    Andrew,
    I don't know how busy you currently are with other tasks, but in
    case you have time, I would be interested in carrying out this
    rpm (and possibly AppImage) exercise.
    That is, from my current build with third-party (internal) ffmpeg 7.0.



For the rpm you need to edit blds/cinelerra.spec; at the very top there is a date. I think the latest tar version is

https://cinelerra-gg.org/download/src/cin_5.1.20241031-src.tgz


so replace the 2020-something date with 20241031.

But then it needs to be patched up, and I do not have a tested procedure for doing this. Probably the rpm should wait until a new tagged release .... You can search for the rpmbuild command on your system, read its manpage/help, and maybe test-run it on some other (faster to rebuild) .spec file in the meantime.
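
In case it helps, a rough sketch of what such a test run could look like; the sed pattern and the source layout are assumptions, and I have not verified this against the actual spec file:

# bump the date at the top of the spec (assumes it appears as a bare 20YYMMDD string)
sed -i 's/20[0-9]\{6\}/20241031/' blds/cinelerra.spec
# fetch the matching source tarball
wget https://cinelerra-gg.org/download/src/cin_5.1.20241031-src.tgz
# build source+binary rpms, telling rpmbuild to look for sources in the current directory
rpmbuild -ba blds/cinelerra.spec --define "_sourcedir $PWD"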


The AppImage should be simpler to make from the existing source directory.


just run

bld_appimage.sh

but be sure to get the additional file and put it where it belongs, as described in the comment:
=====

# Get the appropriate appimagetool from https://github.com/AppImage/AppImageKit/releases
# and put it in your path. Only install the version for your platform
# and mark it executable. The file name must start with "appimagetool".

=====

Probably /usr/local/bin will be the simplest place to put it, as root?
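
Something like this should do it (the release URL follows GitHub's usual pattern for the AppImageKit continuous build; double-check it against the releases page):

# fetch the x86_64 appimagetool, make it executable and put it on PATH
wget https://github.com/AppImage/AppImageKit/releases/download/continuous/appimagetool-x86_64.AppImage
chmod +x appimagetool-x86_64.AppImage
sudo mv appimagetool-x86_64.AppImage /usr/local/bin/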

/Cin # sh ./bld_appimage.sh
.....snip
-- Copying files into AppDir --
Copying file image/cin.desktop to AppDir/usr/share/applications/cin.desktop
Copying file image/cin.svg to AppDir/usr/share/icons/hicolor/scalable/apps/cin.svg

-- Deploying files into AppDir root directory --
Deploying files to AppDir root using desktop file: AppDir/usr/share/applications/cin.desktop
Deploying desktop file to AppDir root: AppDir/usr/share/applications/cin.desktop
Creating symlink for file AppDir/usr/share/applications/cin.desktop in/as AppDir
Deploying icon to AppDir root: AppDir/usr/share/icons/hicolor/scalable/apps/cin.svg
Creating symlink for file AppDir/usr/share/icons/hicolor/scalable/apps/cin.svg in/as AppDir
Deploying AppRun symlink for executable in AppDir root: AppDir/usr/bin/cin
Creating symlink for file AppDir/usr/bin/cin in/as AppDir/AppRun
Found appimagetool: /usr/local/bin/appimagetool-x86_64.AppImage
Running command: /usr/local/bin/appimagetool-x86_64.AppImage "appimagetool" "AppDir" "


Thanks, I think I got the AppImage(?) built and it seemingly runs OK.

That is, once I found the CinGG executable file, because I expected a file somewhere named "CinGG*.AppImage".

/Cin # file -sh AppDir/*
AppDir/AppRun:      symbolic link to usr/bin/cin
AppDir/cin.desktop: symbolic link to usr/share/applications/cin.desktop
AppDir/cin.svg:     symbolic link to usr/share/icons/hicolor/scalable/apps/cin.svg
AppDir/usr:         directory


Cin # du -sh AppDir
216M    AppDir

/Cin # du -sh AppDir/*/*
198M    AppDir/usr/bin
19M    AppDir/usr/lib
100K    AppDir/usr/share


/Cin # AppDir/usr/bin/cin
Cinelerra Infinity - built: Nov 20 2024 22:06:05
.......
BC_DisplayInfo::gl_fb_config failed
build plugin index for: /home/cinelerra/cinelerra-5.1/AppDir/usr/bin/plugins
PluginFFilter::new_ffilter(overlay_qsv)
  err: Input/output error
PluginFFilter::new_ffilter(hstack_qsv)
  err: Operation not permitted
PluginFFilter::new_ffilter(vstack_qsv)
  err: Operation not permitted
PluginFFilter::new_ffilter(xstack_qsv)
  err: Operation not permitted
build lv2 index for: $CIN_PATH/lv2
build ladspa plugin index for: /home/cinelerra/cinelerra-5.1/AppDir/usr/bin/ladspa

Loaded hdv09_04.m2t (tff interlaced).
Tested rendering using the preset hevc_qsv_10b420, which worked fine:

libva info: VA-API version 1.22.0
libva info: Trying to open /usr/lib64/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_22
libva info: va_openDriver() returns 0
libva info: VA-API version 1.22.0
libva info: Trying to open /usr/lib64/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_22
libva info: va_openDriver() returns 0
Render::render_single: Session finished.
** rendered 5972 frames in 19.320 secs, 309.110 fps

---------------------------

So, some questions when comparing the above AppDir result with the prebuilt AppImage file that I downloaded and run from ~/Applications:


du -sh ~/Applications/Cin*
171M    CinGG-20241031-x86_64.AppImage

./CinGG-20241031-x86_64.AppImage

I notice the prebuilt one has no symlinks like those in the AppDir above.

My own build does not show startup errors like this one:

(AppImageLauncher:127697): GdkPixbuf-CRITICAL **: 23:56:28.831: gdk_pixbuf_get_rowstride: assertion 'GDK_IS_PIXBUF (pixbuf)' failed


I wonder if the larger total size, 216M vs 171M, is due to oneVPL and maybe some other additional libs?

How could I build an equivalent single AppImage file directly?









-- 
Cin mailing list
[email protected]
https://lists.cinelerra-gg.org/mailman/listinfo/cin
