Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package tensorflow-lite for openSUSE:Factory checked in at 2022-09-21 14:42:53
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/tensorflow-lite (Old)
 and      /work/SRC/openSUSE:Factory/.tensorflow-lite.new.2083 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "tensorflow-lite"

Wed Sep 21 14:42:53 2022 rev:2 rq:1005092 version:2.10.0

Changes:
--------
--- /work/SRC/openSUSE:Factory/tensorflow-lite/tensorflow-lite.changes    2022-05-31 15:48:26.180030208 +0200
+++ /work/SRC/openSUSE:Factory/.tensorflow-lite.new.2083/tensorflow-lite.changes    2022-09-21 14:43:55.357995006 +0200
@@ -1,0 +2,292 @@
+Tue Sep 20 00:13:13 UTC 2022 - Ben Greiner <c...@bnavigator.de>
+
+- Update to 2.10.0
+  * boo#1203507 (CVE-2022-35934)
+- Breaking Changes
+  * Causal attention in keras.layers.Attention and
+    keras.layers.AdditiveAttention is now specified in the call()
+    method via the use_causal_mask argument (rather than in the
+    constructor), for consistency with other layers.
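+    A minimal sketch of the new call-time argument (assuming the
+    TF 2.10 API; shapes are arbitrary):
+      import tensorflow as tf
+      query = tf.random.normal((2, 8, 16))  # (batch, time, dim)
+      value = tf.random.normal((2, 8, 16))
+      attn = tf.keras.layers.Attention()
+      # Causality is now requested per call, not in the constructor
+      out = attn([query, value], use_causal_mask=True)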
+  * Some files in tensorflow/python/training have been moved to
+    tensorflow/python/tracking and tensorflow/python/checkpoint.
+    Please update your imports accordingly; the old files will be
+    removed in Release 2.11.
+  * tf.keras.optimizers.experimental.Optimizer will graduate in
+    Release 2.11, at which point tf.keras.optimizers.Optimizer
+    will become an alias of
+    tf.keras.optimizers.experimental.Optimizer. The current
+    tf.keras.optimizers.Optimizer will continue to be supported
+    as tf.keras.optimizers.legacy.Optimizer, e.g.
+    tf.keras.optimizers.legacy.Adam. Most users won't be affected
+    by this change, but please check the API doc for any API used
+    in your workflow that is changed or deprecated, and adapt
+    accordingly. If you decide to keep using the old optimizer,
+    explicitly switch to tf.keras.optimizers.legacy.Optimizer, as
+    in the sketch below.
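+    A minimal sketch of opting in to the legacy optimizer
+    explicitly (assuming the legacy namespace described above;
+    the model and loss are placeholders):
+      import tensorflow as tf
+      model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
+      # Keep pre-2.11 behavior by naming the legacy class directly
+      model.compile(
+          optimizer=tf.keras.optimizers.legacy.Adam(learning_rate=1e-3),
+          loss="mse")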
+  * RNG behavior change for tf.keras.initializers. Keras
+    initializers will now use stateless random ops to generate
+    random numbers.
+    - Both seeded and unseeded initializers will always generate
+      the same values every time they are called (for a given
+      variable shape). For unseeded initializers (seed=None), a
+      random seed will be created and assigned at initializer
+      creation (different initializer instances get different
+      seeds).
+    - An unseeded initializer will raise a warning if it is reused
+      (called) multiple times. This is because it would produce
+      the same values each time, which may not be intended.
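+    A short sketch of the new stateless behavior (assuming TF
+    2.10):
+      import tensorflow as tf
+      init = tf.keras.initializers.GlorotUniform()  # unseeded
+      a = init(shape=(2, 2))
+      b = init(shape=(2, 2))  # same values; reuse logs a warning
+      print(bool(tf.reduce_all(a == b)))  # True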
+- Deprecations
+  * The C++ tensorflow::Code and tensorflow::Status will become
+    aliases of absl::StatusCode and absl::Status, respectively,
+    in some future release.
+    - Use tensorflow::OkStatus() instead of
+      tensorflow::Status::OK().
+    - Stop constructing Status objects from
+      tensorflow::error::Code.
+    - One MUST NOT access tensorflow::errors::Code fields.
+      Accessing tensorflow::error::Code fields is fine.
+      + Use the constructors such as
+        tensorflow::errors::InvalidArgument to create a status
+        with an error code without accessing the code directly.
+      + Use the free functions such as
+        tensorflow::errors::IsInvalidArgument if needed.
+      + As a last resort, use e.g.
+        static_cast<tensorflow::errors::Code>(error::Code::INVALID_ARGUMENT)
+        or static_cast<int>(code) for comparisons.
+  * tensorflow::StatusOr will also become an alias of
+    absl::StatusOr in a future release, so use StatusOr::value
+    instead of StatusOr::ConsumeValueOrDie.
+- Major Features and Improvements
+  * tf.lite:
+    - New operations supported:
+      + tflite SelectV2 now supports 5D.
+      + tf.einsum is supported with multiple unknown shapes.
+      + tf.unsortedsegmentprod op is supported.
+      + tf.unsortedsegmentmax op is supported.
+      + tf.unsortedsegmentsum op is supported.
+    - Updates to existing operations:
+      + tfl.scatter_nd now supports I1 for the update arg.
+    - Upgraded Flatbuffers from v1.12.0 to v2.0.5.
+  * tf.keras:
+    - EinsumDense layer is moved from experimental to core. Its
+      import path is moved from
+      tf.keras.layers.experimental.EinsumDense to
+      tf.keras.layers.EinsumDense.
+    - Added tf.keras.utils.audio_dataset_from_directory utility to
+      easily generate audio classification datasets from
+      directories of .wav files.
+    - Added subset="both" support in
+      tf.keras.utils.image_dataset_from_directory,
+      tf.keras.utils.text_dataset_from_directory, and
+      audio_dataset_from_directory, to be used with the
+      validation_split argument, for returning both dataset
+      splits at once, as a tuple (see the sketch below).
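+      A minimal sketch; "data/" (one sub-folder per class) is a
+      placeholder path:
+        import tensorflow as tf
+        train_ds, val_ds = tf.keras.utils.image_dataset_from_directory(
+            "data",
+            validation_split=0.2,
+            subset="both",  # returns (train, validation) as a tuple
+            seed=123,
+            image_size=(64, 64))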
+    - Added tf.keras.utils.split_dataset utility to split a
+      Dataset object or a list/tuple of arrays into two Dataset
+      objects (e.g. train/test).
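+      For example (a sketch against the documented signature):
+        import tensorflow as tf
+        ds = tf.data.Dataset.range(100)
+        # 80/20 train/test split of a single Dataset
+        train, test = tf.keras.utils.split_dataset(ds, left_size=0.8)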
+    - Added step granularity to BackupAndRestore callback for
+      handling distributed training failures & restarts. The
+      training state can now be restored at the exact epoch and
+      step at which it was previously saved before failing.
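+      A sketch of the step-level granularity ("/tmp/backup" is a
+      placeholder directory):
+        import tensorflow as tf
+        # save_freq=100 checkpoints every 100 steps instead of
+        # once per epoch
+        backup = tf.keras.callbacks.BackupAndRestore(
+            backup_dir="/tmp/backup", save_freq=100)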
+    - Added tf.keras.dtensor.experimental.optimizers.AdamW. This
+      optimizer is similar to the existing
+      keras.optimizers.experimental.AdamW, and works in the
+      DTensor training use case.
+    - Improved masking support for
+      tf.keras.layers.MultiHeadAttention.
+      + Implicit masks for query, key and value inputs will
+        automatically be used to compute a correct attention mask
+        for the layer. These padding masks will be combined with
+        any attention_mask passed in directly when calling the
+        layer. This can be used with tf.keras.layers.Embedding
+        with mask_zero=True to automatically infer a correct
+        padding mask.
+      + Added a use_causal_mask call-time argument to the layer.
+        Passing use_causal_mask=True will compute a causal
+        attention mask, and optionally combine it with any
+        attention_mask passed in directly when calling the layer.
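+      A minimal sketch combining both mask sources (assuming the
+      TF 2.10 API; sizes are arbitrary):
+        import tensorflow as tf
+        tokens = tf.keras.Input(shape=(None,), dtype="int32")
+        # mask_zero=True emits an implicit padding mask
+        x = tf.keras.layers.Embedding(1000, 32, mask_zero=True)(tokens)
+        mha = tf.keras.layers.MultiHeadAttention(num_heads=2, key_dim=32)
+        # Padding mask and causal mask are combined automatically
+        y = mha(query=x, value=x, use_causal_mask=True)
+        model = tf.keras.Model(tokens, y)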
+    - Added ignore_class argument in the loss
+      SparseCategoricalCrossentropy and metrics IoU and MeanIoU,
+      to specify a class index to be ignored during loss/metric
+      computation (e.g. a background/void class).
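+      For example (class 0 as a hypothetical background class):
+        import tensorflow as tf
+        loss = tf.keras.losses.SparseCategoricalCrossentropy(
+            ignore_class=0)
+        miou = tf.keras.metrics.MeanIoU(num_classes=3, ignore_class=0)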
+    - Added
+      tf.keras.models.experimental.SharpnessAwareMinimization.
+      This class implements the sharpness-aware minimization
+      technique, which boosts model performance on various tasks,
+      e.g., ResNet on image classification.
+  * tf.data:
+    - Added support for cross-trainer data caching in tf.data
+      service. This saves computation resources when concurrent
+      training jobs train from the same dataset. See
+      https://www.tensorflow.org/api_docs/python/tf/data/experimental/service#sharing_tfdata_service_with_concurrent_trainers
+      for more details.
+    - Added dataset_id to
+      tf.data.experimental.service.register_dataset. If provided,
+      tf.data service will use the provided ID for the dataset. If
+      the dataset ID already exists, no new dataset will be
+      registered. This is useful if multiple training jobs need to
+      use the same dataset for training. In this case, users
+      should call register_dataset with the same dataset_id.
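+      A sketch of sharing one registration across jobs; the
+      dispatcher address and ID are placeholders, and a running
+      tf.data service is assumed:
+        import tensorflow as tf
+        ds = tf.data.Dataset.range(10)
+        dataset_id = tf.data.experimental.service.register_dataset(
+            service="grpc://dispatcher:5000",
+            dataset=ds,
+            dataset_id="shared-training-data")  # same ID in every job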
+    - Added a new field, inject_prefetch, to
+      tf.data.experimental.OptimizationOptions. If it is set to
+      True, tf.data will automatically add a prefetch
+      transformation to datasets that end in synchronous
+      transformations, so that data generation overlaps with data
+      consumption. This may cause a small increase in memory
+      usage due to buffering. See the sketch below for enabling
+      it.
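+      A sketch of enabling it via tf.data.Options (assuming the
+      TF 2.10 API):
+        import tensorflow as tf
+        options = tf.data.Options()
+        options.experimental_optimization.inject_prefetch = True
+        ds = (tf.data.Dataset.range(1000)
+              .map(lambda x: x * 2)
+              .with_options(options))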
+    - Added a new value to
+      tf.data.Options.autotune.autotune_algorithm: STAGE_BASED.
+      If the autotune algorithm is set to STAGE_BASED, it runs a
+      new algorithm that can achieve the same performance with
+      lower CPU/memory usage (see the sketch below).
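+      A sketch of selecting the new algorithm (assuming the
+      AutotuneAlgorithm enum is exposed as in TF 2.10):
+        import tensorflow as tf
+        options = tf.data.Options()
+        options.autotune.autotune_algorithm = (
+            tf.data.experimental.AutotuneAlgorithm.STAGE_BASED)
+        ds = tf.data.Dataset.range(1000).with_options(options)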
+    - Added tf.data.experimental.from_list, a new API for creating
+      Datasets from lists of elements.
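+      For example:
+        import tensorflow as tf
+        ds = tf.data.experimental.from_list([(1, "a"), (2, "b")])
+        for x, y in ds:
+            print(x.numpy(), y.numpy())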
+  * tf.distribute:
+    - Added tf.distribute.experimental.PreemptionCheckpointHandler
+      to handle worker preemption/maintenance and cluster-wise
+      consistent error reporting for
+      tf.distribute.MultiWorkerMirroredStrategy. Specifically, for
+      the type of interruption with advance notice, it
+      automatically saves a checkpoint, exits the program without
+      raising an unrecoverable error, and restores the progress
+      when training restarts.
+  * tf.math:
+    - Added tf.math.approx_max_k and tf.math.approx_min_k, which
+      are optimized alternatives to tf.math.top_k on TPU. The
+      performance difference ranges from 8 to 100 times depending
+      on the size of k. When running on CPU and GPU, a
+      non-optimized XLA kernel is used.
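+      A minimal sketch (the optimized path applies on TPU;
+      CPU/GPU fall back to the non-optimized kernel):
+        import tensorflow as tf
+        x = tf.random.normal([1000])
+        values, indices = tf.math.approx_max_k(x, k=10)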
+  * tf.train:
+    - Added tf.train.TrackableView which allows users to inspect
+      the TensorFlow Trackable object (e.g. tf.Module, Keras
+      Layers and models).
+  * tf.vectorized_map:
+    - Added an optional parameter: warn. This parameter controls
+      whether or not warnings will be printed when operations in
+      the provided fn fall back to a while loop.
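+      For example, silencing the fallback warnings:
+        import tensorflow as tf
+        xs = tf.random.normal([8, 4])
+        ys = tf.vectorized_map(lambda x: tf.reduce_sum(x), xs,
+                               warn=False)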
+  * XLA:
+    - MWMS (tf.distribute.MultiWorkerMirroredStrategy) is now
+      compilable with XLA.
+    - Compute Library for the Arm® Architecture (ACL) is
+      supported for the aarch64 CPU XLA runtime.
+  * CPU performance optimizations:
+    - x86 CPUs: oneDNN bfloat16 auto-mixed precision grappler
+      graph optimization pass has been renamed from
+      auto_mixed_precision_mkl to
+      auto_mixed_precision_onednn_bfloat16. See example usage
+      here.
+    - aarch64 CPUs: Experimental performance optimizations from
+      Compute Library for the Arm® Architecture (ACL) are
+      available through oneDNN in the default Linux aarch64
+      package (pip install tensorflow).
+      + The optimizations are disabled by default.
+      + Set the environment variable TF_ENABLE_ONEDNN_OPTS=1 to
+        enable the optimizations. Setting the variable to 0 or
+        unsetting it will disable the optimizations.
+      + These optimizations can yield slightly different numerical
+        results from when they are off due to floating-point
+        round-off errors from different computation approaches and
+        orders.
+      + To verify that the optimizations are on, look for a
+        message with "oneDNN custom operations are on" in the log.
+        If the exact phrase is not there, it means they are off.
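+      A sketch of enabling the optimizations from Python (the
+      variable must be set before TensorFlow is imported):
+        import os
+        # Must be set before importing TensorFlow
+        os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"
+        import tensorflow as tf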
+- Bug Fixes and Other Changes
+  * New argument experimental_device_ordinal in
+    LogicalDeviceConfiguration to control the order of logical
+    devices. (GPU only)
+  * tf.keras:
+    - Changed the TensorBoard tag names produced by the
++++ 95 more lines (skipped)
++++ between /work/SRC/openSUSE:Factory/tensorflow-lite/tensorflow-lite.changes
++++ and 
/work/SRC/openSUSE:Factory/.tensorflow-lite.new.2083/tensorflow-lite.changes

Old:
----
  clog.tar.gz
  cpuinfo.zip
  tensorflow-2.9.1.tar.gz

New:
----
  cpuinfo.tar.gz
  tensorflow-2.10.0.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ tensorflow-lite.spec ++++++
--- /var/tmp/diff_new_pack.V3jMrl/_old  2022-09-21 14:43:57.229999895 +0200
+++ /var/tmp/diff_new_pack.V3jMrl/_new  2022-09-21 14:43:57.237999916 +0200
@@ -1,5 +1,5 @@
 #
-# spec file
+# spec file for package tensorflow-lite
 #
 # Copyright (c) 2022 SUSE LLC
 #
@@ -21,7 +21,7 @@
 %define pythons python3
 
 Name:           tensorflow-lite
-Version:        2.9.1
+Version:        2.10.0
 Release:        0
 Summary:        A framework used for deep learning for mobile and embedded devices
 License:        Apache-2.0 AND BSD-2-Clause AND BSD-3-Clause AND MIT AND MPL-2.0
@@ -44,18 +44,16 @@
 Source14:       https://github.com/intel/ARM_NEON_2_x86_SSE/archive/1200fe90bb174a6224a525ee60148671a786a71f.tar.gz#/arm_neon_2_x86_sse.tar.gz
 # Deps sources for Tensorflow-Lite (use same eigen, gemmlowp and abseil_cpp packages as non lite version)
 # License15: Apache-2.0
-Source15:       https://github.com/google/flatbuffers/archive/v1.12.0.tar.gz#/flatbuffers.tar.gz
+Source15:       https://github.com/google/flatbuffers/archive/v2.0.6.tar.gz#/flatbuffers.tar.gz
 # License16: BSD like
 Source16:       https://storage.googleapis.com/mirror.tensorflow.org/github.com/petewarden/OouraFFT/archive/v1.0.tar.gz#/fft2d.tgz
 # Source17: Apache-2.0
-Source17:       https://github.com/google/ruy/archive/e6c1b8dc8a8b00ee74e7268aac8b18d7260ab1ce.zip#/ruy.zip
+Source17:       https://github.com/google/ruy/archive/841ea4172ba904fe3536789497f9565f2ef64129.zip#/ruy.zip
 # License18: BSD-3-Clause
-Source18:       https://github.com/google/XNNPACK/archive/11b2812d64e49bab9b6c489f79067fc94e69db9f.tar.gz#/xnnpack.zip
+Source18:       https://github.com/google/XNNPACK/archive/6b409ac0a3090ebe74d0cdfb494c4cd91485ad39.zip#/xnnpack.zip
 # transitive tensorflow-lite dependencies for xnnpack
-# License20: BSD 2-Clause
-Source20:       https://github.com/pytorch/cpuinfo/archive/d5e37adf1406cf899d7d9ec1d317c47506ccb970.tar.gz#/clog.tar.gz
 # License21: BSD 2-Clause
-Source21:       https://storage.googleapis.com/mirror.tensorflow.org/github.com/pytorch/cpuinfo/archive/5916273f79a21551890fd3d56fc5375a78d1598d.zip#/cpuinfo.zip
+Source21:       https://github.com/pytorch/cpuinfo/archive/5e63739504f0f8e18e941bd63b2d6d42536c7d90.tar.gz#/cpuinfo.tar.gz
 # NOTE: the github url is non-deterministic for the following zipfile archives (!?) Content is the same, but the hashes of the zipfiles differ.
 # License30: MIT
 # Source30:       https://github.com/Maratyszcza/FP16/archive/4dfe081cf6bcd15db339cf2680b9281b8451eeb3.zip#/FP16.zip
@@ -79,9 +77,9 @@
 # We use some macros here but not singlespec
 BuildRequires:  python-rpm-macros
 BuildRequires:  python3-devel >= 3.7
-BuildRequires:  python3-setuptools
 BuildRequires:  python3-numpy-devel >= 1.19.2
 BuildRequires:  python3-pybind11-devel
+BuildRequires:  python3-setuptools
 BuildRequires:  unzip
 Provides:       python3-tflite-runtime = %{version}-%{release}
 Provides:       python3-tflite_runtime = %{version}-%{release}
@@ -129,9 +127,6 @@
 sed -i '1{/env python/d}' tensorflow/lite/tools/visualize.py
 
 # prepare third-party sources, transitive dependencies of FP16
-# cpuinfo and clog required both as overridable fetch content as well as FP16 transitive sources
-tar -x -f %{SOURCE20} -C third_party/clog
-unzip %{SOURCE21} -d third_party/cpuinfo
 unzip %{SOURCE30} -d third_party/FP16
 unzip %{SOURCE31} -d third_party/FP16
 unzip %{SOURCE32} -d third_party/FP16
@@ -156,6 +151,8 @@
 
 %build
 # --- Build tensorflow-lite as part of the minimal executable ---
+# -Werror=return-type fails in xnnpack
+%global optflags %(echo %{optflags} | sed s/-Werror=return-type//)
 pushd tflite-build
 %cmake ../../tensorflow/lite/examples/minimal \
   -DBUILD_STATIC_LIBS:BOOL=ON \
@@ -170,10 +167,8 @@
   -DOVERRIDABLE_FETCH_CONTENT_fft2d_URL=%{SOURCE16} \
   -DOVERRIDABLE_FETCH_CONTENT_ruy_URL=%{SOURCE17} \
   -DOVERRIDABLE_FETCH_CONTENT_xnnpack_URL=%{SOURCE18} \
-  -DOVERRIDABLE_FETCH_CONTENT_clog_URL=%{SOURCE20} \
+  -DOVERRIDABLE_FETCH_CONTENT_clog_URL=%{SOURCE21} \
   -DOVERRIDABLE_FETCH_CONTENT_cpuinfo_URL=%{SOURCE21} \
-  -DCLOG_SOURCE_DIR:PATH=$(realpath ../../third_party/clog/cpuinfo-*) \
-  -DCPUINFO_SOURCE_DIR:PATH=$(realpath ../../third_party/cpuinfo/cpuinfo-*) \
   -DFP16_SOURCE_DIR:PATH=$(realpath ../../third_party/FP16/FP16-*) \
   -DFXDIV_SOURCE_DIR:PATH=$(realpath ../../third_party/FP16/FXdiv-*) \
  -DPTHREADPOOL_SOURCE_DIR:PATH=$(realpath ../../third_party/FP16/pthreadpool-*) \
@@ -231,7 +226,7 @@
 export PROJECT_NAME=tflite_runtime
 export PACKAGE_VERSION=%{version}
 %python3_install --install-lib=%{python3_sitearch}
-%fdupes %{buildroot}%{python3_siterarch}
+%fdupes %{buildroot}%{python3_sitearch}
 popd
 
 %check



++++++ flatbuffers.tar.gz ++++++
++++ 180502 lines of diff (skipped)




++++++ ruy.zip ++++++
Binary files /var/tmp/diff_new_pack.V3jMrl/_old and /var/tmp/diff_new_pack.V3jMrl/_new differ

++++++ tensorflow-2.9.1.tar.gz -> tensorflow-2.10.0.tar.gz ++++++
/work/SRC/openSUSE:Factory/tensorflow-lite/tensorflow-2.9.1.tar.gz /work/SRC/openSUSE:Factory/.tensorflow-lite.new.2083/tensorflow-2.10.0.tar.gz differ: char 12, line 1

++++++ xnnpack.zip ++++++
Binary files /var/tmp/diff_new_pack.V3jMrl/_old and /var/tmp/diff_new_pack.V3jMrl/_new differ
