Script 'mail_helper' called by obssrc

Hello community,

here is the log from the commit of package python-gpt4all for openSUSE:Factory
checked in at 2024-05-24 19:53:02

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-gpt4all (Old)
 and      /work/SRC/openSUSE:Factory/.python-gpt4all.new.24587 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "python-gpt4all"

Fri May 24 19:53:02 2024 rev:2 rq:1176683 version:2.7.3

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-gpt4all/python-gpt4all.changes	2024-05-23 15:36:32.002387458 +0200
+++ /work/SRC/openSUSE:Factory/.python-gpt4all.new.24587/python-gpt4all.changes	2024-05-24 19:53:24.702493427 +0200
@@ -1,0 +2,7 @@
+Thu May 23 17:56:11 UTC 2024 - Christian Goll <[email protected]>
+
+- added gpt4all-chat, which is a Qt6 GUI, and updated to latest
+  llama.cpp a3f03b7
+- renamed gpt4all.rpmlintrc to python-gpt4all.rpmlintrc
+
+-------------------------------------------------------------------

Old:
----
  b2245.tar.gz
  gpt4all.rpmlintrc

New:
----
  a3f03b7.tar.gz
  python-gpt4all.rpmlintrc

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-gpt4all.spec ++++++
--- /var/tmp/diff_new_pack.bYPCDV/_old	2024-05-24 19:53:25.534523864 +0200
+++ /var/tmp/diff_new_pack.bYPCDV/_new	2024-05-24 19:53:25.542524156 +0200
@@ -16,7 +16,7 @@
 #
 
-%define llamavers b2245
+%define llamavers a3f03b7
 %define komputevers c339310
 
 %{?sle15_python_module_pythons}
@@ -25,12 +25,15 @@
 Version:        2.7.3
 Release:        0
 Summary:        open source llms for all
-License:        MIT
+License:        Apache-2.0 AND MIT
 URL:            https://github.com/nomic-ai/gpt4all
+#MIT
 Source0:        https://github.com/nomic-ai/gpt4all/archive/refs/tags/v%{version}.tar.gz#/%{name}-v%{version}.tar.gz
-Source1:        https://github.com/nomic-ai/llama.cpp/archive/refs/tags/%{llamavers}.tar.gz
+#MIT
+Source1:        https://github.com/nomic-ai/llama.cpp/archive/%{llamavers}.tar.gz
+# Apache-2.0
 Source2:        https://github.com/nomic-ai/kompute/archive/c339310.tar.gz
-Source3:        gpt4all.rpmlintrc
+Source3:        %{name}.rpmlintrc
 BuildRequires:  %{python_module setuptools}
 BuildRequires:  cmake
 BuildRequires:  fdupes
@@ -39,26 +42,51 @@
 %else
 BuildRequires:  gcc-c++
 %endif
+BuildRequires:  fmt-devel
 BuildRequires:  python-rpm-macros
-BuildRequires:  vulkan-devel
-BuildRequires:  vulkan-utility-libraries-devel
+BuildRequires:  qt6-httpserver-devel
+BuildRequires:  qt6-pdf-devel
+BuildRequires:  qt6-quickdialogs2-devel
+BuildRequires:  qt6-sql-devel
+BuildRequires:  qt6-svg-devel
+BuildRequires:  qt6-wayland-devel
 BuildRequires:  shaderc
 BuildRequires:  shaderc-devel
-BuildRequires:  fmt-devel
+BuildRequires:  update-desktop-files
+BuildRequires:  vulkan-devel
+BuildRequires:  vulkan-utility-libraries-devel
+Requires:       %{python_module importlib-metadata}
 Requires:       %{python_module requests}
 Requires:       %{python_module tqdm}
+Requires:       %{python_module typer}
+Requires:       %{python_module typing_extensions}
 %python_subpackages
 
 %description
 GPT4All is an ecosystem to run powerful and customized large language models
 that work locally on consumer grade CPUs and any GPU.
 
+%package -n gpt4all-chat
+Summary:        Qt6 GUI for GPT4All
+Requires:       qt6-qt5compat-imports
+Requires:       qt6-sql-sqlite
+Requires:       qt6ct
+
+%description -n gpt4all-chat
+Qt based GUI for GPT4All versions with GPT-J as the base model.
+
+%package -n libllmodel0
+Summary:        gpt4all libllmodel0
+
+%description -n libllmodel0
+Library for accessing the models
+
 %prep
 %setup -n gpt4all-%{version}
 cd gpt4all-backend
 rmdir llama.cpp-mainline
 tar xzf %{S:1}
-mv llama.cpp-%{llamavers} llama.cpp-mainline
+mv llama.cpp-%{llamavers}* llama.cpp-mainline
 cd llama.cpp-mainline
 rmdir kompute
 tar xzf %{S:2}
@@ -78,6 +106,16 @@
     -DKOMPUTE_OPT_USE_BUILT_IN_FMT=OFF \
     -DCMAKE_BUILD_TYPE=RelWithDebInfo
 %cmake_build
+cd ../../gpt4all-chat
+%cmake -DLLAMA_KOMPUTE=ON \
+    -DLLMODEL_CUDA=OFF \
+    -DLLMODEL_VULKAN=ON \
+    -DKOMPUTE_OPT_USE_BUILT_IN_VULKAN_HEADER=OFF \
+    -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON \
+    -DKOMPUTE_OPT_USE_BUILT_IN_FMT=OFF \
+    -DCMAKE_BUILD_TYPE=RelWithDebInfo
+%cmake_build
+
 cd ../../gpt4all-bindings/python
 %python_build
 
@@ -85,7 +123,38 @@
 cd gpt4all-bindings/python
 %python_install
 %python_expand %fdupes %{buildroot}%{$python_sitearch}
+install -D -m 0755 ../cli/app.py %{buildroot}/%{_bindir}/gpt4all-app
+%{python_expand # fix shebang
+sed -i 's|%{_bindir}/env python.*$|%{_bindir}/$python|' %{buildroot}/%{_bindir}/gpt4all-app
+}
+%python_clone -a %{buildroot}/%{_bindir}/gpt4all-app
+cd ../../gpt4all-chat
+%cmake_install
+
+%suse_update_desktop_file -c gpt4all-chat chat "Open-source assistant-style large language models that run locally on your CPU" gpt4all-chat gpt4all-chat.svg
+
+mv %{buildroot}%{_bindir}/chat %{buildroot}%{_bindir}/gpt4all-chat
+rm -v %{buildroot}%{_prefix}/lib/*.a
+mkdir -p %{buildroot}%{_libdir}
+mv -v %{buildroot}%{_prefix}/lib/libllmodel.so* %{buildroot}%{_libdir}
+
+%post
+%python_install_alternative gpt4all-app
+
+%postun
+%python_uninstall_alternative gpt4all-app
 
 %files %{python_files}
 %{python_sitelib}/*
+%python_alternative %{_bindir}/gpt4all-app
+
+%files -n gpt4all-chat
+%{_bindir}/gpt4all-chat
+%{_prefix}/lib/libgptj*
+%{_prefix}/lib/libllamamodel*
+%{_prefix}/lib/libbert*
+%{_datadir}/applications/gpt4all-chat.desktop
+
+%files -n libllmodel0
+%{_libdir}/libllmodel.so*

++++++ b2245.tar.gz -> a3f03b7.tar.gz ++++++
/work/SRC/openSUSE:Factory/python-gpt4all/b2245.tar.gz /work/SRC/openSUSE:Factory/.python-gpt4all.new.24587/a3f03b7.tar.gz differ: char 12, line 1

++++++ python-gpt4all.rpmlintrc ++++++
addFilter('devel-file-in-non-devel-package')
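The %install section above rewrites the installed script's shebang with sed before registering it via update-alternatives, so each Python flavor gets a concrete interpreter instead of "env python". A minimal standalone sketch of that sed idiom (the temp file and the hardcoded python3 target are illustrative, not files from this package):

```shell
# Sketch of the spec's shebang fix: replace an "env python" shebang
# with a fixed interpreter path, as the %python_expand block does
# per Python flavor. The file and interpreter here are stand-ins.
app="$(mktemp)"
printf '#!/usr/bin/env python3\nprint("ok")\n' > "$app"
sed -i 's|/usr/bin/env python.*$|/usr/bin/python3|' "$app"
head -n 1 "$app"   # prints: #!/usr/bin/python3
```

In the spec the replacement target is `%{_bindir}/$python`, which %python_expand substitutes once per built flavor; the sketch simply hardcodes one interpreter to show the substitution itself.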
