Deep Learning

TensorFlow 2.6.0 Released

August 14, 2021
20 min read

TensorFlow 2.6.0 Now Available

TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.

The newest version of TensorFlow is a stability release and brings a number of major features, improvements, bug fixes and other changes.


Interested in a deep learning solution?
Learn more about Exxact AI workstations starting at $3,700


Major Features and Improvements

tf.keras

  • Keras has been split into a separate PIP package (keras), and its code has been moved to the GitHub repository keras-team/keras.
    The API endpoints for tf.keras stay unchanged, but are now backed by the keras PIP package. All Keras-related PRs and issues should now be directed to the GitHub repository keras-team/keras.
  • tf.keras.utils.experimental.DatasetCreator now takes an optional tf.distribute.InputOptions for specific options when used with distribution.
  • tf.keras.experimental.SidecarEvaluator is now available for a program intended to be run on an evaluator task, which is commonly used to supplement a training cluster running with ParameterServerStrategy. It can also be used with single-worker training or other strategies. See docstring for more info.
  • Preprocessing layers moved from experimental to core.
    • Import paths moved from tf.keras.layers.preprocessing.experimental to tf.keras.layers.
  • Updates to Preprocessing layers API for consistency and clarity:
    • StringLookup and IntegerLookup default for mask_token changed to None. This matches the default masking behavior of Hashing and Embedding layers. To keep existing behavior, pass mask_token="" during layer creation.
    • Renamed "binary" output mode to "multi_hot" for CategoryEncoding, StringLookup, IntegerLookup, and TextVectorization. Multi-hot encoding will no longer automatically uprank rank 1 inputs, so these layers can now multi-hot encode unbatched multi-dimensional samples.
    • Added a new output mode "one_hot" for CategoryEncoding, StringLookup, IntegerLookup, which will encode each element in an input batch individually, and automatically append a new output dimension if necessary. Use this mode on rank 1 inputs for the old "binary" behavior of one-hot encoding a batch of scalars (see the sketch after this list).
    • Normalization will no longer automatically uprank rank 1 inputs, allowing normalization of unbatched multi-dimensional samples.
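
For illustration, here is a minimal sketch of the updated preprocessing-layer behavior; the vocabulary and inputs are toy data made up for this example:

    import tensorflow as tf

    vocab = ["cat", "dog", "fish"]  # toy vocabulary for illustration only

    # mask_token now defaults to None; pass mask_token="" to keep the old masking behavior.
    lookup_int = tf.keras.layers.StringLookup(vocabulary=vocab)
    print(lookup_int(tf.constant(["dog", "cat"])))  # integer indices

    # "one_hot" encodes each element of a rank 1 batch of scalars individually
    # (the old "binary" behavior for a batch of scalars).
    lookup_one_hot = tf.keras.layers.StringLookup(vocabulary=vocab, output_mode="one_hot")
    print(lookup_one_hot(tf.constant(["dog", "cat"])))

    # "multi_hot" (the renamed "binary" mode) no longer upranks rank 1 inputs,
    # so an unbatched multi-dimensional sample is encoded as a single vector.
    lookup_multi_hot = tf.keras.layers.StringLookup(vocabulary=vocab, output_mode="multi_hot")
    print(lookup_multi_hot(tf.constant(["dog", "cat", "dog"])))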

tf.lite

  • The recommended Android NDK version for building TensorFlow Lite has been changed from r18b to r19c.
  • Supports int64 for mul.
  • Supports native variable builtin ops - ReadVariable, AssignVariable.
  • Converter:
    • Experimental support for variables in TFLite. To enable through conversion, users need to set experimental_enable_resource_variables on tf.lite.TFLiteConverter to True.
      Note: mutable variables are only available when converting with from_saved_model in this release; support for other conversion methods is coming soon (a short conversion sketch follows this list).
    • The old converter (TOCO) will be removed in the next release. It has been deprecated for a few releases already.
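
As a rough sketch of the conversion flow described above (the SavedModel path and output file name are placeholders, not from the release notes):

    import tensorflow as tf

    saved_model_dir = "/tmp/my_saved_model"  # placeholder path to a SavedModel that uses tf.Variable

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    # Opt in to the experimental TFLite variable support (from_saved_model only in this release).
    converter.experimental_enable_resource_variables = True
    tflite_model = converter.convert()

    with open("model_with_variables.tflite", "wb") as f:
        f.write(tflite_model)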

tf.saved_model:

  • SavedModels can now save custom gradients (a minimal sketch follows this list). The documentation in Advanced autodiff has been updated.
  • Object metadata has now been deprecated and is no longer saved to the SavedModel.
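
A minimal sketch of defining a custom gradient and saving it with a SavedModel; the module, function name, and save path are illustrative, and the Advanced autodiff guide remains the authoritative reference:

    import tensorflow as tf

    @tf.custom_gradient
    def clipped_square(x):
        def grad(upstream):
            # Illustrative custom gradient: the true gradient 2*x, clipped to [-1, 1].
            return upstream * tf.clip_by_value(2.0 * x, -1.0, 1.0)
        return x * x, grad

    class Squarer(tf.Module):
        @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
        def __call__(self, x):
            return clipped_square(x)

    tf.saved_model.save(Squarer(), "/tmp/squarer_with_custom_grad")  # placeholder path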

TF Core:

  • Added tf.config.experimental.reset_memory_stats to reset the tracked peak memory returned by tf.config.experimental.get_memory_info.
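
A small usage sketch pairing the two memory APIs (assumes at least one visible GPU):

    import tensorflow as tf

    device = "GPU:0"  # assumes a GPU is available

    _ = tf.random.normal([4096, 4096])  # allocate something so the stats are non-trivial
    print(tf.config.experimental.get_memory_info(device))  # {'current': ..., 'peak': ...}

    # Reset the tracked peak so the next measurement starts fresh.
    tf.config.experimental.reset_memory_stats(device)
    print(tf.config.experimental.get_memory_info(device))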

tf.data:

  • Added target_workers param to data_service_ops.from_dataset_id and data_service_ops.distribute. Users can specify "AUTO", "ANY", or "LOCAL" (case insensitive). If "AUTO", tf.data service runtime decides which workers to read from. If "ANY", TF workers read from any tf.data service workers. If "LOCAL", TF workers will only read from local in-process tf.data service workers. "AUTO" works well for most cases, while users can specify other targets. For example, "LOCAL" would help avoid RPCs and data copy if every TF worker colocates with a tf.data service worker. Currently, "AUTO" reads from any tf.data service workers to preserve existing behavior. The default value is "AUTO".
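
A hedged sketch of where target_workers would be passed through the public tf.data service API (the dispatcher address below is a placeholder):

    import tensorflow as tf

    dispatcher_address = "grpc://dispatcher.example.com:5000"  # placeholder address

    dataset = tf.data.Dataset.range(100)
    dataset = dataset.apply(
        tf.data.experimental.service.distribute(
            processing_mode="parallel_epochs",
            service=dispatcher_address,
            target_workers="LOCAL",  # only read from co-located tf.data service workers
        ))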

Bug Fixes and Other Changes

TF Core:

  • Added tf.lookup.experimental.MutableHashTable, which provides a generic mutable hash table implementation (see the sketch after this list).
    • Compared to tf.lookup.experimental.DenseHashTable, this offers lower overall memory usage and a cleaner API. It does not require specifying a delete_key and empty_key that cannot be inserted into the table.
  • Added support for specifying number of subdivisions in all reduce host collective. This parallelizes work on CPU and speeds up the collective performance. Default behavior is unchanged.
  • Add an option perturb_singular to tf.linalg.tridiagonal_solve that allows solving linear systems with a numerically singular tridiagonal matrix, e.g. for use in inverse iteration.
  • Added tf.linalg.eigh_tridiagonal that computes the eigenvalues of a Hermitian tridiagonal matrix.
  • tf.constant now places its output on the current default device.
  • SavedModel
    • Added TrackableResource, which allows the creation of custom wrapper objects for resource tensors.
    • Added a SavedModel load option to allow restoring partial checkpoints into the SavedModel. See tf.saved_model.LoadOptions for details.
  • Added a new op SparseSegmentSumGrad to match the other sparse segment gradient ops and avoid an extra gather operation that was in the previous gradient implementation.
  • Added a new session config setting internal_fragmentation_fraction, which controls when the BFC Allocator needs to split an oversized chunk to satisfy an allocation request.
  • Added tf.get_current_name_scope() which returns the current full name scope string that will be prepended to op names.
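
A brief sketch of the new mutable hash table mentioned at the top of this list; the keys and values are toy data:

    import tensorflow as tf

    table = tf.lookup.experimental.MutableHashTable(
        key_dtype=tf.string, value_dtype=tf.int64, default_value=-1)

    # Unlike DenseHashTable, no empty_key/deleted_key sentinels are required.
    table.insert(tf.constant(["apple", "banana"]),
                 tf.constant([5, 7], dtype=tf.int64))
    print(table.lookup(tf.constant(["apple", "cherry"])))  # [5, -1]
    table.remove(tf.constant(["apple"]))
    print(table.lookup(tf.constant(["apple"])))            # [-1]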

tf.data:

  • Promoting tf.data.Dataset.bucket_by_sequence_length from experimental (a short sketch of a few of these promoted methods follows this list).
  • Promoting tf.data.Dataset.get_single_element from experimental.
  • Promoting tf.data.Dataset.group_by_window from experimental.
  • Promoting tf.data.Dataset.random from experimental.
  • Promoting tf.data.Dataset.scan from experimental.
  • Promoting tf.data.Dataset.snapshot from experimental.
  • Promoting tf.data.Dataset.take_while from experimental.
  • Promoting tf.data.ThreadingOptions from experimental.
  • Promoting tf.data.Dataset.unique from experimental.
  • Added stop_on_empty_dataset parameter to sample_from_datasets and choose_from_datasets. Setting stop_on_empty_dataset=True will stop sampling if it encounters an empty dataset. This preserves the sampling ratio throughout training. The prior behavior was to continue sampling, skipping over exhausted datasets, until all datasets are exhausted. By default, the original behavior (stop_on_empty_dataset=False) is preserved.
  • Removed previously deprecated tf.data statistics-related APIs.
  • Removed the following experimental tf.data optimization APIs:
    • MapVectorizationOptions.*
    • .filter_with_random_uniform_fusion
    • .hoist_random_uniform
    • .map_vectorization
    • .reorder_data_discarding_ops
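
To illustrate a few of the promoted methods listed above, here is a minimal sketch on a toy dataset:

    import tensorflow as tf

    ds = tf.data.Dataset.from_tensor_slices([1, 1, 2, 3, 3, 3])

    # unique() and scan() are now regular Dataset methods.
    unique_ds = ds.unique()
    running_sum = unique_ds.scan(
        initial_state=0,
        scan_func=lambda state, x: (state + x, state + x))
    print(list(running_sum.as_numpy_iterator()))  # [1, 3, 6]

    # get_single_element() is also available directly on Dataset
    # (the dataset must contain exactly one element).
    all_values = ds.batch(6).get_single_element()
    print(all_values.numpy())  # [1 1 2 3 3 3]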

tf.keras

  • Fix usage of __getitem__ slicing in Keras Functional APIs when the inputs are RaggedTensor objects.
  • Add keepdims argument to all GlobalPooling layers (see the sketch after this list).
  • Add include_preprocessing argument to MobileNetV3 architectures to control the inclusion of Rescaling layer in the model.
  • Add optional argument (force) to make_(train|test|predict)_function methods to skip the cached function and generate a new one. This is useful for regenerating, in a single call, the compiled training function when the .trainable attribute of any of the model's layers has changed.
  • Models now have a save_spec property which contains the TensorSpec specs for calling the model. This spec is automatically saved when the model is called for the first time.
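
A short sketch of the new keepdims argument on the global pooling layers:

    import tensorflow as tf

    x = tf.random.normal([2, 32, 32, 3])

    # keepdims=True keeps the pooled spatial dimensions as size 1 instead of squeezing them.
    pool_keepdims = tf.keras.layers.GlobalAveragePooling2D(keepdims=True)
    print(pool_keepdims(x).shape)  # (2, 1, 1, 3)

    pool_squeezed = tf.keras.layers.GlobalAveragePooling2D()  # default keepdims=False
    print(pool_squeezed(x).shape)  # (2, 3)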

tf.linalg

  • Add CompositeTensor as a base class to LinearOperator.

tf.lite

  • Fix mean op reference quantization rounding issue.
  • Added framework_stable BUILD target, which links in only the non-experimental TF Lite APIs.
  • Remove deprecated Java Interpreter methods:
    • modifyGraphWithDelegate - Use Interpreter.Options.addDelegate
    • setNumThreads - Use Interpreter.Options.setNumThreads
  • Add Conv3DTranspose as a builtin op.

tf.summary

  • tf.summary.should_record_summaries() now correctly reflects when summaries will be written, even when tf.summary.record_if() is not in effect, by returning a True tensor if a default writer is present.
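
A hedged sketch of checking this behavior (the log directory is a placeholder):

    import tensorflow as tf

    writer = tf.summary.create_file_writer("/tmp/logs")  # placeholder log directory

    with writer.as_default():
        # With a default writer present and no record_if() in effect,
        # this now returns a True tensor.
        print(tf.summary.should_record_summaries())

        with tf.summary.record_if(False):
            print(tf.summary.should_record_summaries())  # False while recording is disabled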

Grappler

Disable default Grappler optimization timeout to make the optimization pipeline deterministic. This may lead to increased model loading time, because time spent in graph optimizations is now unbounded (was 20 minutes).

Deterministic Op Functionality (enabled by setting TF_DETERMINISTIC_OPS to "true" or "1"; a brief sketch follows the list below):

  • Add a deterministic GPU implementation of tf.nn.softmax_cross_entropy_with_logits. See PR 49178.
  • Add a deterministic CPU implementation of tf.image.crop_and_resize. See PR 48905.
  • Add determinism-unimplemented exception-throwing to the following ops. When op-determinism is expected, an attempt to use the specified paths through the following ops on a GPU will cause tf.errors.UnimplementedError (with an understandable message) to be thrown.
    • tf.nn.sparse_softmax_cross_entropy_with_logits forwards and/or backwards. See PR 47925.
    • tf.image.crop_and_resize gradient w.r.t. either image or boxes. See PR 48905.
    • tf.sparse.sparse_dense_matmul forwards. See PR 50355.
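
A minimal sketch of opting in to deterministic ops; the environment variable is set before TensorFlow is imported so it takes effect for all subsequent GPU work:

    import os

    os.environ["TF_DETERMINISTIC_OPS"] = "1"  # or "true"

    import tensorflow as tf

    labels = tf.constant([[0.0, 1.0], [1.0, 0.0]])
    logits = tf.constant([[0.2, 0.8], [0.6, 0.4]])
    # On GPU, this op now has a deterministic implementation.
    loss = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
    print(loss)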

Security

  • Fixes a heap out of bounds access in sparse reduction operations (CVE-2021-37635)
  • Fixes a floating point exception in SparseDenseCwiseDiv (CVE-2021-37636)
  • Fixes a null pointer dereference in CompressElement (CVE-2021-37637)
  • Fixes a null pointer dereference in RaggedTensorToTensor (CVE-2021-37638)
  • Fixes a null pointer dereference and a heap OOB read arising from operations restoring tensors (CVE-2021-37639)
  • Fixes an integer division by 0 in sparse reshaping (CVE-2021-37640)
  • Fixes a division by 0 in ResourceScatterDiv (CVE-2021-37642)
  • Fixes a heap OOB in RaggedGather (CVE-2021-37641)
  • Fixes a std::abort raised from TensorListReserve (CVE-2021-37644)
  • Fixes a null pointer dereference in MatrixDiagPartOp (CVE-2021-37643)
  • Fixes an integer overflow due to conversion to unsigned (CVE-2021-37645)
  • Fixes a bad allocation error in StringNGrams caused by integer conversion (CVE-2021-37646)
  • Fixes a null pointer dereference in SparseTensorSliceDataset (CVE-2021-37647)
  • Fixes an incorrect validation of SaveV2 inputs (CVE-2021-37648)
  • Fixes a null pointer dereference in UncompressElement (CVE-2021-37649)
  • Fixes a segfault and a heap buffer overflow in {Experimental,}DatasetToTFRecord (CVE-2021-37650)
  • Fixes a heap buffer overflow in FractionalAvgPoolGrad (CVE-2021-37651)
  • Fixes a use after free in boosted trees creation (CVE-2021-37652)
  • Fixes a division by 0 in ResourceGather (CVE-2021-37653)
  • Fixes a heap OOB and a CHECK fail in ResourceGather (CVE-2021-37654)
  • Fixes a heap OOB in ResourceScatterUpdate (CVE-2021-37655)
  • Fixes an undefined behavior arising from reference binding to nullptr in RaggedTensorToSparse (CVE-2021-37656)
  • Fixes an undefined behavior arising from reference binding to nullptr in MatrixDiagV* ops (CVE-2021-37657)
  • Fixes an undefined behavior arising from reference binding to nullptr in MatrixSetDiagV* ops (CVE-2021-37658)
  • Fixes an undefined behavior arising from reference binding to nullptr and heap OOB in binary cwise ops (CVE-2021-37659)
  • Fixes a division by 0 in inplace operations (CVE-2021-37660)
  • Fixes a crash caused by integer conversion to unsigned (CVE-2021-37661)
  • Fixes an undefined behavior arising from reference binding to nullptr in boosted trees (CVE-2021-37662)
  • Fixes a heap OOB in boosted trees (CVE-2021-37664)
  • Fixes vulnerabilities arising from incomplete validation in QuantizeV2 (CVE-2021-37663)
  • Fixes vulnerabilities arising from incomplete validation in MKL requantization (CVE-2021-37665)
  • Fixes an undefined behavior arising from reference binding to nullptr in RaggedTensorToVariant (CVE-2021-37666)
  • Fixes an undefined behavior arising from reference binding to nullptr in unicode encoding (CVE-2021-37667)
  • Fixes an FPE in tf.raw_ops.UnravelIndex (CVE-2021-37668)
  • Fixes a crash in NMS ops caused by integer conversion to unsigned (CVE-2021-37669)
  • Fixes a heap OOB in UpperBound and LowerBound (CVE-2021-37670)
  • Fixes an undefined behavior arising from reference binding to nullptr in map operations (CVE-2021-37671)
  • Fixes a heap OOB in SdcaOptimizerV2 (CVE-2021-37672)
  • Fixes a CHECK-fail in MapStage (CVE-2021-37673)
  • Fixes a vulnerability arising from incomplete validation in MaxPoolGrad (CVE-2021-37674)
  • Fixes an undefined behavior arising from reference binding to nullptr in shape inference (CVE-2021-37676)
  • Fixes a division by 0 in most convolution operators (CVE-2021-37675)
  • Fixes vulnerabilities arising from missing validation in shape inference for Dequantize (CVE-2021-37677)
  • Fixes an arbitrary code execution due to YAML deserialization (CVE-2021-37678)
  • Fixes a heap OOB in nested tf.map_fn with RaggedTensors (CVE-2021-37679)
  • Fixes a division by zero in TFLite (CVE-2021-37680)
  • Fixes an NPE in TFLite (CVE-2021-37681)
  • Fixes a vulnerability arising from use of uninitialized value in TFLite (CVE-2021-37682)
  • Fixes an FPE in TFLite division operations (CVE-2021-37683)
  • Fixes an FPE in TFLite pooling operations (CVE-2021-37684)
  • Fixes an infinite loop in TFLite (CVE-2021-37686)
  • Fixes a heap OOB in TFLite (CVE-2021-37685)
  • Fixes a heap OOB in TFLite's Gather* implementations (CVE-2021-37687)
  • Fixes an undefined behavior arising from null pointer dereference in TFLite (CVE-2021-37688)
  • Fixes an undefined behavior arising from null pointer dereference in TFLite MLIR optimizations (CVE-2021-37689)
  • Fixes a FPE in LSH in TFLite (CVE-2021-37691)
  • Fixes a segfault on strings tensors with mismatched dimensions, arising in Go code (CVE-2021-37692)
  • Fixes a use after free and a potential segfault in shape inference functions (CVE-2021-37690)
  • Updates curl to 7.77.0 to handle CVE-2021-22876, CVE-2021-22897, CVE-2021-22898, and CVE-2021-22901.

Breaking Changes

tf.train

  • Experimental mixed precision graph rewrite is removed, as the API only works in graph mode and is not customizable. The function is still accessible but it is recommended to use the Keras mixed precision API instead.
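
A short sketch of the recommended Keras mixed precision API as a replacement:

    import tensorflow as tf

    # Set a global mixed precision policy instead of using the removed graph rewrite.
    tf.keras.mixed_precision.set_global_policy("mixed_float16")

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
        # Keep the final layer in float32 for numerically stable outputs.
        tf.keras.layers.Dense(10, dtype="float32"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")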

tf.lite

  • Remove experimental.nn.dynamic_rnn, experimental.nn.TfLiteRNNCell and experimental.nn.TfLiteLSTMCell since they're no longer supported. It's recommended to use the Keras LSTM layer instead.

tf.keras

  • Keras has been split into a separate PIP package (keras), and its code has been moved to the GitHub repository keras-team/keras. The API endpoints for tf.keras stay unchanged, but are now backed by the keras PIP package. The existing code in tensorflow/python/keras is a stale copy and will be removed in a future release (2.7). Please remove any imports of tensorflow.python.keras and replace them with the public tf.keras API instead.
  • The methods Model.to_yaml() and keras.models.model_from_yaml have been replaced to raise a RuntimeError, as they can be abused to cause arbitrary code execution. It is recommended to use JSON serialization instead of YAML or, as a better alternative, to serialize to H5 (see the sketch below).
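
A sketch of the recommended JSON and H5 alternatives to the removed YAML methods (the toy model and file name are illustrative):

    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])

    # Serialize the architecture to JSON instead of YAML.
    config_json = model.to_json()
    restored = tf.keras.models.model_from_json(config_json)

    # Or save the full model (architecture + weights) to H5.
    model.save("model.h5")
    reloaded = tf.keras.models.load_model("model.h5")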

Known Caveats

TF Core

  • A longstanding bug in tf.while_loop, which caused it to execute sequentially, even when parallel_iterations>1, has now been fixed. However, the increased parallelism may result in increased memory use. Users who experience unwanted regressions should reset their while_loop's parallel_iterations value to 1, which is consistent with prior behavior.
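
For users who do hit memory regressions, a minimal sketch of pinning a loop back to the prior sequential behavior:

    import tensorflow as tf

    def cond(i, acc):
        return i < 10

    def body(i, acc):
        return i + 1, acc + tf.cast(i, tf.float32)

    # parallel_iterations=1 restores the previous sequential execution.
    i, acc = tf.while_loop(cond, body,
                           loop_vars=(tf.constant(0), tf.constant(0.0)),
                           parallel_iterations=1)
    print(int(i), float(acc))  # 10 45.0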

Click here to install TensorFlow 2


Download TensorFlow 2.6.0 on the GitHub page.

