Script 'mail_helper' called by obssrc
Hello community,

here is the log from the commit of package python-python-multipart for 
openSUSE:Factory checked in at 2026-04-21 12:41:57
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/python-python-multipart (Old)
 and      /work/SRC/openSUSE:Factory/.python-python-multipart.new.11940 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "python-python-multipart"

Tue Apr 21 12:41:57 2026 rev:14 rq:1348345 version:0.0.26

Changes:
--------
--- /work/SRC/openSUSE:Factory/python-python-multipart/python-python-multipart.changes	2026-01-27 16:07:00.444738264 +0100
+++ /work/SRC/openSUSE:Factory/.python-python-multipart.new.11940/python-python-multipart.changes	2026-04-21 12:42:16.648087570 +0200
@@ -1,0 +2,19 @@
+Tue Apr 21 02:20:57 UTC 2026 - Steve Kowalik <[email protected]>
+
+- Update to 0.0.26:
+  * Skip preamble before first multipart boundary
+    (CVE-2026-40347, bsc#1262403)
+  * Silently discard epilogue data after the closing boundary
+  * Apply Apache-2.0 properly
+  * Handle multipart headers case-insensitively
+  * Emit field_end for trailing bare field names on finalize
+  * Add UPLOAD_DELETE_TMP to FormParser config
+  * Remove custom FormParser classes
+  * Handle CTE values case-insensitively
+  * Add MIME content type info to File
+  * Validate chunk_size in parse_form()
+  * Remove unused trust_x_headers parameter and X-File-Name fallback
+  * Return processed length from QuerystringParser._internal_write
+  * Cleanup metadata dunders from __init__.py
+
+-------------------------------------------------------------------
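The headline fixes in 0.0.26 (CVE-2026-40347, bsc#1262403) concern RFC 2046 framing: data before the first boundary is preamble and data after the closing boundary is epilogue, and neither belongs to any part. A small sketch using the stdlib email parser (not python-multipart's own API) shows the expected behavior:

```python
# RFC 2046 says a multipart body may carry a preamble before the first
# boundary and an epilogue after the closing boundary; both are outside
# the parts.  Illustrated with the stdlib email parser.
from email.parser import BytesParser
from email.policy import default

raw = (
    b"Content-Type: multipart/form-data; boundary=abc\r\n\r\n"
    b"preamble to skip\r\n"
    b"--abc\r\n"
    b'Content-Disposition: form-data; name="field"\r\n\r\n'
    b"value\r\n"
    b"--abc--\r\n"
    b"epilogue to discard\r\n"
)

msg = BytesParser(policy=default).parsebytes(raw)
assert "preamble" in (msg.preamble or "")   # kept aside, not in a part
assert "epilogue" in (msg.epilogue or "")   # likewise discarded from parts
parts = list(msg.iter_parts())
assert len(parts) == 1 and parts[0].get_content().strip() == "value"
```

python-multipart 0.0.26 adopts the same stance: skip the preamble and silently discard the epilogue, rather than treating either as payload.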

Old:
----
  python_multipart-0.0.22.tar.gz

New:
----
  python_multipart-0.0.26.tar.gz

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ python-python-multipart.spec ++++++
--- /var/tmp/diff_new_pack.2mX5YL/_old  2026-04-21 12:42:17.452120976 +0200
+++ /var/tmp/diff_new_pack.2mX5YL/_new  2026-04-21 12:42:17.452120976 +0200
@@ -18,7 +18,7 @@
 
 %{?sle15_python_module_pythons}
 Name:           python-python-multipart
-Version:        0.0.22
+Version:        0.0.26
 Release:        0
 License:        Apache-2.0
 Summary:        Python streaming multipart parser
@@ -26,7 +26,6 @@
 Source:         
https://files.pythonhosted.org/packages/source/p/python-multipart/python_multipart-%{version}.tar.gz
 BuildRequires:  %{python_module hatchling}
 BuildRequires:  %{python_module pip}
-BuildRequires:  %{python_module wheel}
 BuildRequires:  python-rpm-macros
 # SECTION test requirements
 BuildRequires:  %{python_module PyYAML}

++++++ python_multipart-0.0.22.tar.gz -> python_multipart-0.0.26.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/python_multipart-0.0.22/.gitignore 
new/python_multipart-0.0.26/.gitignore
--- old/python_multipart-0.0.22/.gitignore      2020-02-02 01:00:00.000000000 
+0100
+++ new/python_multipart-0.0.26/.gitignore      2020-02-02 01:00:00.000000000 
+0100
@@ -198,4 +198,8 @@
 #  be found at 
https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
 #  and can be added to the global gitignore or merged into this file.  For a 
more nuclear
 #  option (not recommended) you can uncomment the following to ignore the 
entire idea folder.
-#.idea/
\ No newline at end of file
+#.idea/
+
+# Local coding agents
+.claude/settings.local.json
+.claude/worktrees/
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/python_multipart-0.0.22/CHANGELOG.md 
new/python_multipart-0.0.26/CHANGELOG.md
--- old/python_multipart-0.0.22/CHANGELOG.md    2020-02-02 01:00:00.000000000 
+0100
+++ new/python_multipart-0.0.26/CHANGELOG.md    2020-02-02 01:00:00.000000000 
+0100
@@ -1,5 +1,30 @@
 # Changelog
 
+## 0.0.26 (2026-04-10)
+
+* Skip preamble before the first multipart boundary more efficiently 
[#262](https://github.com/Kludex/python-multipart/pull/262).
+* Silently discard epilogue data after the closing multipart boundary 
[#259](https://github.com/Kludex/python-multipart/pull/259).
+
+## 0.0.25 (2026-04-10)
+
+* Add MIME content type info to `File` 
[#143](https://github.com/Kludex/python-multipart/pull/143).
+* Handle CTE values case-insensitively 
[#258](https://github.com/Kludex/python-multipart/pull/258).
+* Remove custom `FormParser` classes 
[#257](https://github.com/Kludex/python-multipart/pull/257).
+* Add `UPLOAD_DELETE_TMP` to `FormParser` config 
[#254](https://github.com/Kludex/python-multipart/pull/254).
+* Emit `field_end` for trailing bare field names on finalize 
[#230](https://github.com/Kludex/python-multipart/pull/230).
+* Handle multipart headers case-insensitively 
[#252](https://github.com/Kludex/python-multipart/pull/252).
+* Apply Apache-2.0 properly 
[#247](https://github.com/Kludex/python-multipart/pull/247).
+
+## 0.0.24 (2026-04-05)
+
+* Validate `chunk_size` in `parse_form()` 
[#244](https://github.com/Kludex/python-multipart/pull/244).
+
+## 0.0.23 (2026-04-05)
+
+* Remove unused `trust_x_headers` parameter and `X-File-Name` fallback 
[#196](https://github.com/Kludex/python-multipart/pull/196).
+* Return processed length from `QuerystringParser._internal_write` 
[#229](https://github.com/Kludex/python-multipart/pull/229).
+* Cleanup metadata dunders from `__init__.py` 
[#227](https://github.com/Kludex/python-multipart/pull/227).
+
 ## 0.0.22 (2026-01-25)
 
 * Drop directory path from filename in `File` 
[9433f4b](https://github.com/Kludex/python-multipart/commit/9433f4bbc9652bdde82bbe380984e32f8cfc89c4).
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/python_multipart-0.0.22/LICENSE.txt 
new/python_multipart-0.0.26/LICENSE.txt
--- old/python_multipart-0.0.22/LICENSE.txt     2020-02-02 01:00:00.000000000 
+0100
+++ new/python_multipart-0.0.26/LICENSE.txt     2020-02-02 01:00:00.000000000 
+0100
@@ -1,14 +1,202 @@
-Copyright 2012, Andrew Dunham
 
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
-   https://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
 
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/python_multipart-0.0.22/PKG-INFO 
new/python_multipart-0.0.26/PKG-INFO
--- old/python_multipart-0.0.22/PKG-INFO        2020-02-02 01:00:00.000000000 
+0100
+++ new/python_multipart-0.0.26/PKG-INFO        2020-02-02 01:00:00.000000000 
+0100
@@ -1,12 +1,13 @@
 Metadata-Version: 2.4
 Name: python-multipart
-Version: 0.0.22
+Version: 0.0.26
 Summary: A streaming multipart parser for Python
 Project-URL: Homepage, https://github.com/Kludex/python-multipart
 Project-URL: Documentation, https://kludex.github.io/python-multipart/
 Project-URL: Changelog, 
https://github.com/Kludex/python-multipart/blob/master/CHANGELOG.md
 Project-URL: Source, https://github.com/Kludex/python-multipart
-Author-email: Andrew Dunham <[email protected]>, Marcelo Trylesinski 
<[email protected]>
+Author-email: Andrew Dunham <[email protected]>
+Maintainer-email: Marcelo Trylesinski <[email protected]>
 License-Expression: Apache-2.0
 License-File: LICENSE.txt
 Classifier: Development Status :: 5 - Production/Stable
@@ -27,8 +28,10 @@
 
 # [Python-Multipart](https://kludex.github.io/python-multipart/)
 
+[![Build 
Status](https://github.com/Kludex/python-multipart/workflows/CI/badge.svg)](https://github.com/Kludex/python-multipart/actions)
 [![Package 
version](https://badge.fury.io/py/python-multipart.svg)](https://pypi.python.org/pypi/python-multipart)
 [![Supported Python 
Version](https://img.shields.io/pypi/pyversions/python-multipart.svg?color=%2334D058)](https://pypi.org/project/python-multipart)
+[![Discord](https://img.shields.io/discord/1051468649518616576?logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2)](https://discord.gg/RxKUF5JuHs)
 
 ---
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/python_multipart-0.0.22/README.md 
new/python_multipart-0.0.26/README.md
--- old/python_multipart-0.0.22/README.md       2020-02-02 01:00:00.000000000 
+0100
+++ new/python_multipart-0.0.26/README.md       2020-02-02 01:00:00.000000000 
+0100
@@ -1,7 +1,9 @@
 # [Python-Multipart](https://kludex.github.io/python-multipart/)
 
+[![Build 
Status](https://github.com/Kludex/python-multipart/workflows/CI/badge.svg)](https://github.com/Kludex/python-multipart/actions)
 [![Package 
version](https://badge.fury.io/py/python-multipart.svg)](https://pypi.python.org/pypi/python-multipart)
 [![Supported Python 
Version](https://img.shields.io/pypi/pyversions/python-multipart.svg?color=%2334D058)](https://pypi.org/project/python-multipart)
+[![Discord](https://img.shields.io/discord/1051468649518616576?logo=discord&logoColor=ffffff&color=7389D8&labelColor=6A7EC2)](https://discord.gg/RxKUF5JuHs)
 
 ---
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/python_multipart-0.0.22/multipart/__init__.py 
new/python_multipart-0.0.26/multipart/__init__.py
--- old/python_multipart-0.0.22/multipart/__init__.py   2020-02-02 
01:00:00.000000000 +0100
+++ new/python_multipart-0.0.26/multipart/__init__.py   2020-02-02 
01:00:00.000000000 +0100
@@ -1,3 +1,17 @@
+# Copyright 2012, Andrew Dunham
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 # This only works if using a file system, other loaders not implemented.
 
 import importlib.util
@@ -21,4 +35,4 @@
 else:
     warnings.warn("Please use `import python_multipart` instead.", 
PendingDeprecationWarning, stacklevel=2)
     from python_multipart import *
-    from python_multipart import __all__, __author__, __copyright__, 
__license__, __version__
+    from python_multipart import __all__, __version__
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/python_multipart-0.0.22/pyproject.toml 
new/python_multipart-0.0.26/pyproject.toml
--- old/python_multipart-0.0.22/pyproject.toml  2020-02-02 01:00:00.000000000 
+0100
+++ new/python_multipart-0.0.26/pyproject.toml  2020-02-02 01:00:00.000000000 
+0100
@@ -11,6 +11,8 @@
 requires-python = ">=3.10"
 authors = [
     { name = "Andrew Dunham", email = "[email protected]" },
+]
+maintainers = [
     { name = "Marcelo Trylesinski", email = "[email protected]" },
 ]
 classifiers = [
@@ -33,18 +35,18 @@
 [dependency-groups]
 dev = [
     "atomicwrites==1.4.1",
-    "attrs==23.2.0",
-    "coverage==7.4.4",
-    "more-itertools==10.2.0",
-    "pbr==6.0.0",
-    "pluggy==1.4.0",
+    "attrs==26.1.0",
+    "coverage==7.13.5",
+    "more-itertools==11.0.1",
+    "pbr==7.0.3",
+    "pluggy==1.6.0",
     "py==1.11.0",
-    "pytest==8.1.1",
-    "pytest-cov==5.0.0",
-    "PyYAML==6.0.1",
-    "invoke==2.2.0",
-    "pytest-timeout==2.3.1",
-    "ruff==0.11.7",
+    "pytest==9.0.2",
+    "pytest-cov==7.1.0",
+    "PyYAML==6.0.3",
+    "invoke==2.2.1",
+    "pytest-timeout==2.4.0",
+    "ruff==0.15.9",
     "mypy",
     "types-PyYAML",
     "atheris==2.3.0; python_version <= '3.11'",
@@ -53,6 +55,7 @@
     "mkdocs-material",
     "mkdocstrings-python",
     "mkdocs-autorefs",
+    "pymdown-extensions>=10.21.2",
 ]
 
 [tool.uv.pip]
@@ -108,17 +111,9 @@
 fail_under = 100
 skip_covered = true
 show_missing = true
-exclude_lines = [
-    "pragma: no cover",
-    "raise NotImplementedError",
-    "def __str__",
+exclude_also = [
     "def __repr__",
-    "if 0:",
-    "if False:",
     "if __name__ == .__main__.:",
-    "if self\\.config\\['DEBUG'\\]:",
-    "if self\\.debug:",
-    "except ImportError:",
 ]
 
 [tool.check-sdist]
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/python_multipart-0.0.22/python_multipart/__init__.py 
new/python_multipart-0.0.26/python_multipart/__init__.py
--- old/python_multipart-0.0.22/python_multipart/__init__.py    2020-02-02 
01:00:00.000000000 +0100
+++ new/python_multipart-0.0.26/python_multipart/__init__.py    2020-02-02 
01:00:00.000000000 +0100
@@ -1,8 +1,18 @@
-# This is the canonical package information.
-__author__ = "Andrew Dunham"
-__license__ = "Apache"
-__copyright__ = "Copyright (c) 2012-2013, Andrew Dunham"
-__version__ = "0.0.22"
+# Copyright 2012, Andrew Dunham
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#    https://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+__version__ = "0.0.26"
 
 from .multipart import (
     BaseParser,
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/python_multipart/multipart.py 
new/python_multipart-0.0.26/python_multipart/multipart.py
--- old/python_multipart-0.0.22/python_multipart/multipart.py   2020-02-02 
01:00:00.000000000 +0100
+++ new/python_multipart-0.0.26/python_multipart/multipart.py   2020-02-02 
01:00:00.000000000 +0100
@@ -14,7 +14,7 @@
 from .decoders import Base64Decoder, QuotedPrintableDecoder
 from .exceptions import FileError, FormParserError, MultipartParseError, 
QuerystringParseError
 
-if TYPE_CHECKING:  # pragma: no cover
+if TYPE_CHECKING:
     from collections.abc import Callable
     from typing import Any, Literal, Protocol, TypeAlias, TypedDict
 
@@ -44,14 +44,6 @@
         on_headers_finished: Callable[[], None]
         on_end: Callable[[], None]
 
-    class FormParserConfig(TypedDict):
-        UPLOAD_DIR: str | None
-        UPLOAD_KEEP_FILENAME: bool
-        UPLOAD_KEEP_EXTENSIONS: bool
-        UPLOAD_ERROR_ON_BAD_CTE: bool
-        MAX_MEMORY_FILE_SIZE: int
-        MAX_BODY_SIZE: float
-
     class FileConfig(TypedDict, total=False):
         UPLOAD_DIR: str | bytes | None
         UPLOAD_DELETE_TMP: bool
@@ -59,23 +51,9 @@
         UPLOAD_KEEP_EXTENSIONS: bool
         MAX_MEMORY_FILE_SIZE: int
 
-    class _FormProtocol(Protocol):
-        def write(self, data: bytes) -> int: ...
-
-        def finalize(self) -> None: ...
-
-        def close(self) -> None: ...
-
-    class FieldProtocol(_FormProtocol, Protocol):
-        def __init__(self, name: bytes | None) -> None: ...
-
-        def set_none(self) -> None: ...
-
-    class FileProtocol(_FormProtocol, Protocol):
-        def __init__(self, file_name: bytes | None, field_name: bytes | None, 
config: FileConfig) -> None: ...
-
-    OnFieldCallback = Callable[[FieldProtocol], None]
-    OnFileCallback = Callable[[FileProtocol], None]
+    class FormParserConfig(FileConfig):
+        UPLOAD_ERROR_ON_BAD_CTE: bool
+        MAX_BODY_SIZE: float
 
     CallbackName: TypeAlias = Literal[
         "start",
@@ -221,11 +199,13 @@
 
     Args:
         name: The name of the form field.
+        content_type: The value of the Content-Type header for this field.
     """
 
-    def __init__(self, name: bytes | None) -> None:
+    def __init__(self, name: bytes | None, *, content_type: str | None = None) 
-> None:
         self._name = name
         self._value: list[bytes] = []
+        self._content_type = content_type
 
         # We cache the joined version of _value for speed.
         self._cache = _missing
@@ -317,6 +297,11 @@
         assert isinstance(self._cache, bytes) or self._cache is None
         return self._cache
 
+    @property
+    def content_type(self) -> str | None:
+        """This property returns the content_type value of the field."""
+        return self._content_type
+
     def __eq__(self, other: object) -> bool:
         if isinstance(other, Field):
             return self.field_name == other.field_name and self.value == 
other.value
@@ -355,9 +340,17 @@
         field_name: The name of the form field that this file was uploaded 
with.  This can be None, if, for example,
             the file was uploaded with Content-Type application/octet-stream.
         config: The configuration for this File.  See above for valid 
configuration keys and their corresponding values.
+        content_type: The value of the Content-Type header.
     """  # noqa: E501
 
-    def __init__(self, file_name: bytes | None, field_name: bytes | None = 
None, config: FileConfig = {}) -> None:
+    def __init__(
+        self,
+        file_name: bytes | None,
+        field_name: bytes | None = None,
+        config: FileConfig = {},
+        *,
+        content_type: str | None = None,
+    ) -> None:
         # Save configuration, set other variables default.
         self.logger = logging.getLogger(__name__)
         self._config = config
@@ -365,9 +358,10 @@
         self._bytes_written = 0
         self._fileobj: BytesIO | BufferedRandom = BytesIO()
 
-        # Save the provided field/file name.
+        # Save the provided field/file name and content type.
         self._field_name = field_name
         self._file_name = file_name
+        self._content_type = content_type
 
         # Our actual file name is None by default, since, depending on our
         # config, we may not actually use the provided name.
@@ -422,6 +416,11 @@
         """
         return self._in_memory
 
+    @property
+    def content_type(self) -> str | None:
+        """The Content-Type value for this part, if it was set."""
+        return self._content_type
+
     def flush_to_disk(self) -> None:
         """If the file is already on-disk, do nothing.  Otherwise, copy from
         the in-memory buffer to a disk file, and then reassign our internal
@@ -936,7 +935,7 @@
 
         self.state = state
         self._found_sep = found_sep
-        return len(data)
+        return length
 
     def finalize(self) -> None:
         """Finalize this parser, which signals to that we are finished parsing,
@@ -944,7 +943,7 @@
         then the on_end callback.
         """
         # If we're currently in the middle of a field, we finish it.
-        if self.state == QuerystringState.FIELD_DATA:
+        if self.state in (QuerystringState.FIELD_DATA, 
QuerystringState.FIELD_NAME):
             self.callback("field_end")
         self.callback("end")
 
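The hunk above makes `finalize()` also emit `field_end` when input ends mid-name: a query string can stop while the parser is still reading a field name, with no `=` and no value. The stdlib shows the expected pairing for that shape:

```python
# A trailing bare field name ("a=1&b") leaves the parser in FIELD_NAME
# state at end of input; the field still needs to be closed out.  The
# stdlib's parse_qsl demonstrates the expected result.
from urllib.parse import parse_qsl

assert parse_qsl("a=1&b", keep_blank_values=True) == [("a", "1"), ("b", "")]
```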
@@ -1106,7 +1105,11 @@
             if state == MultipartState.START:
                 # Skip leading newlines
                 if c == CR or c == LF:
-                    i += 1
+                    i = data.find(b"-", i)
+                    if i == -1:
+                        # No boundary candidate in this chunk, so ignore the 
content after the leading CR/LF.
+                        i = length
+                        break
                     continue
 
                 # index is used as in index into our boundary.  Set to 0.
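The preamble-skip hunk above replaces byte-at-a-time advancement with a jump: since every boundary starts with `-`, `bytes.find` can move straight to the next candidate. A simplified standalone sketch of that idea (the real parser loops across chunks):

```python
# Simplified sketch of the START-state skip: when a chunk begins with
# CR/LF, jump directly to the next "-" (the only byte a boundary can
# start with) instead of stepping one byte at a time.
CR, LF = 0x0D, 0x0A

def skip_preamble(data: bytes) -> int:
    """Return the index of the first boundary candidate, or len(data)."""
    i = 0
    if i < len(data) and data[i] in (CR, LF):
        i = data.find(b"-", i)
        if i == -1:          # no candidate in this chunk: discard it all
            return len(data)
    return i

chunk = b"\r\npreamble junk\r\n--boundary"
assert chunk[skip_preamble(chunk):].startswith(b"--")
assert skip_preamble(b"\r\nonly junk") == len(b"\r\nonly junk")
```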
@@ -1414,12 +1417,8 @@
                     state = MultipartState.END
 
             elif state == MultipartState.END:
-                # Don't do anything if chunk ends with CRLF.
-                if c == CR and i + 1 < length and data[i + 1] == LF:
-                    i += 2
-                    continue
-                # Skip data after the last boundary.
-                self.logger.warning("Skipping data after last boundary")
+                # Silently discard any epilogue data (RFC 2046 section 5.1.1 allows a CRLF and optional
+                # epilogue after the closing boundary). Django and Werkzeug do the same.
                 i = length
                 break
 
@@ -1487,19 +1486,6 @@
         file_name: If the request is of type application/octet-stream, then the body of the request will not contain any
             information about the uploaded file.  In such cases, you can provide the file name of the uploaded file
             manually.
-        FileClass: The class to use for uploaded files.  Defaults to :class:`File`, but you can provide your own class
-            if you wish to customize behaviour.  The class will be instantiated as FileClass(file_name, field_name), and
-            it must provide the following functions::
-                - file_instance.write(data)
-                - file_instance.finalize()
-                - file_instance.close()
-        FieldClass: The class to use for uploaded fields.  Defaults to :class:`Field`, but you can provide your own
-            class if you wish to customize behaviour.  The class will be instantiated as FieldClass(field_name), and it
-            must provide the following functions::
-                - field_instance.write(data)
-                - field_instance.finalize()
-                - field_instance.close()
-                - field_instance.set_none()
         config: Configuration to use for this FormParser.  The default values are taken from the DEFAULT_CONFIG value,
             and then any keys present in this dictionary will overwrite the default values.
     """
@@ -1510,6 +1496,7 @@
         "MAX_BODY_SIZE": float("inf"),
         "MAX_MEMORY_FILE_SIZE": 1 * 1024 * 1024,
         "UPLOAD_DIR": None,
+        "UPLOAD_DELETE_TMP": True,
         "UPLOAD_KEEP_FILENAME": False,
         "UPLOAD_KEEP_EXTENSIONS": False,
         # Error on invalid Content-Transfer-Encoding?
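The new UPLOAD_DELETE_TMP key takes part in the usual copy-then-update merge that FormParser.__init__ performs on its config. A small sketch of that merge against the defaults shown above (the "/tmp/uploads" path is an arbitrary example value):

```python
DEFAULT_CONFIG = {
    "MAX_BODY_SIZE": float("inf"),
    "MAX_MEMORY_FILE_SIZE": 1 * 1024 * 1024,
    "UPLOAD_DIR": None,
    "UPLOAD_DELETE_TMP": True,
    "UPLOAD_KEEP_FILENAME": False,
    "UPLOAD_KEEP_EXTENSIONS": False,
}

# Per-parser overrides win; unspecified keys keep their defaults.
config = DEFAULT_CONFIG.copy()
config.update({"UPLOAD_DELETE_TMP": False, "UPLOAD_DIR": "/tmp/uploads"})

assert config["UPLOAD_DELETE_TMP"] is False
```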
@@ -1519,13 +1506,11 @@
     def __init__(
         self,
         content_type: str,
-        on_field: OnFieldCallback | None,
-        on_file: OnFileCallback | None,
+        on_field: Callable[[Field], None] | None,
+        on_file: Callable[[File], None] | None,
         on_end: Callable[[], None] | None = None,
         boundary: bytes | str | None = None,
         file_name: bytes | None = None,
-        FileClass: type[FileProtocol] = File,
-        FieldClass: type[FieldProtocol] = Field,
         config: dict[Any, Any] = {},
     ) -> None:
         self.logger = logging.getLogger(__name__)
@@ -1541,10 +1526,6 @@
         self.on_file = on_file
         self.on_end = on_end
 
-        # Save classes.
-        self.FileClass = File
-        self.FieldClass = Field
-
         # Set configuration options.
         self.config: FormParserConfig = self.DEFAULT_CONFIG.copy()
         self.config.update(config)  # type: ignore[typeddict-item]
@@ -1553,18 +1534,20 @@
 
         # Depending on the Content-Type, we instantiate the correct parser.
         if content_type == "application/octet-stream":
-            file: FileProtocol = None  # type: ignore
+            file: File | None = None
 
             def on_start() -> None:
                 nonlocal file
-                file = FileClass(file_name, None, config=cast("FileConfig", self.config))
+                file = File(file_name, None, config=self.config)
 
             def on_data(data: bytes, start: int, end: int) -> None:
                 nonlocal file
+                assert file is not None
                 file.write(data[start:end])
 
             def _on_end() -> None:
                 nonlocal file
+                assert file is not None
                 # Finalize the file itself.
                 file.finalize()
 
@@ -1585,7 +1568,7 @@
         elif content_type == "application/x-www-form-urlencoded" or content_type == "application/x-url-encoded":
             name_buffer: list[bytes] = []
 
-            f: FieldProtocol | None = None
+            f: Field | None = None
 
             def on_field_start() -> None:
                 pass
@@ -1596,7 +1579,7 @@
             def on_field_data(data: bytes, start: int, end: int) -> None:
                 nonlocal f
                 if f is None:
-                    f = FieldClass(b"".join(name_buffer))
+                    f = Field(b"".join(name_buffer))
                     del name_buffer[:]
                 f.write(data[start:end])
 
@@ -1606,7 +1589,7 @@
                 if f is None:
                     # If we get here, it's because there was no field data.
                     # We create a field, set it to None, and then continue.
-                    f = FieldClass(b"".join(name_buffer))
+                    f = Field(b"".join(name_buffer))
                     del name_buffer[:]
                     f.set_none()
 
@@ -1640,8 +1623,8 @@
             header_value: list[bytes] = []
             headers: dict[bytes, bytes] = {}
 
-            f_multi: FileProtocol | FieldProtocol | None = None
-            writer = None
+            f_multi: File | Field | None = None
+            writer: File | Field | Base64Decoder | QuotedPrintableDecoder | None = None
             is_file = False
 
             def on_part_begin() -> None:
@@ -1661,10 +1644,12 @@
                 f_multi.finalize()
                 if is_file:
                     if on_file:
+                        assert isinstance(f_multi, File)
                         on_file(f_multi)
                 else:
                     if on_field:
-                        on_field(cast("FieldProtocol", f_multi))
+                        assert isinstance(f_multi, Field)
+                        on_field(f_multi)
 
             def on_header_field(data: bytes, start: int, end: int) -> None:
                 header_name.append(data[start:end])
@@ -1673,7 +1658,7 @@
                 header_value.append(data[start:end])
 
             def on_header_end() -> None:
-                headers[b"".join(header_name)] = b"".join(header_value)
+                headers[b"".join(header_name).lower()] = b"".join(header_value)
                 del header_name[:]
                 del header_value[:]
 
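The on_header_end change above folds header names to lowercase once at storage time, so every later lookup can use a lowercase key regardless of the sender's capitalization. In isolation (the helper name is hypothetical):

```python
headers: dict[bytes, bytes] = {}

def store_header(name: bytes, value: bytes) -> None:
    # Fold the wire-format name once at storage time, as on_header_end now
    # does, so lookups never have to guess the sender's capitalization.
    headers[name.lower()] = value

store_header(b"CONTENT-DISPOSITION", b'form-data; name="file1"')
store_header(b"cOnTenT-TypE", b"text/plain")

assert headers[b"content-disposition"] == b'form-data; name="file1"'
```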
@@ -1683,26 +1668,31 @@
                 is_file = False
 
                 # Parse the content-disposition header.
-                # TODO: handle mixed case
-                content_disp = headers.get(b"Content-Disposition")
+                content_disp = headers.get(b"content-disposition")
                 disp, options = parse_options_header(content_disp)
 
                 # Get the field and filename.
                 field_name = options.get(b"name")
                 file_name = options.get(b"filename")
-                # TODO: check for errors
+                # RFC 7578 §4.2: each part MUST have a Content-Disposition header with a "name" parameter.
+                if field_name is None:
+                    raise FormParserError(f'Field name not found in Content-Disposition: "{content_disp!r}"')
 
                 # Create the proper class.
+                content_type_b = headers.get(b"content-type")
+                content_type = content_type_b.decode("latin-1") if content_type_b is not None else None
                 if file_name is None:
-                    f_multi = FieldClass(field_name)
+                    f_multi = Field(field_name, content_type=content_type)
                 else:
-                    f_multi = FileClass(file_name, field_name, config=cast("FileConfig", self.config))
+                    f_multi = File(file_name, field_name, config=self.config, content_type=content_type)
                     is_file = True
 
                 # Parse the given Content-Transfer-Encoding to determine what
                 # we need to do with the incoming data.
                 # TODO: check that we properly handle 8bit / 7bit encoding.
-                transfer_encoding = headers.get(b"Content-Transfer-Encoding", b"7bit")
+                # RFC 2045 section 6.1: Content-Transfer-Encoding values are case-insensitive.
+                # https://www.rfc-editor.org/rfc/rfc2045#section-6.1
+                transfer_encoding = headers.get(b"content-transfer-encoding", b"7bit").lower()
 
                 if transfer_encoding in (b"binary", b"8bit", b"7bit"):
                     writer = f_multi
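The lowered lookup above is what makes the subsequent membership test against b"binary" / b"8bit" / b"7bit" work for any capitalization on the wire. A standalone restatement (hypothetical helper name):

```python
def effective_cte(headers: dict[bytes, bytes]) -> bytes:
    # RFC 2045 section 6.1 makes Content-Transfer-Encoding values
    # case-insensitive, so fold before comparing against known encodings.
    return headers.get(b"content-transfer-encoding", b"7bit").lower()

assert effective_cte({b"content-transfer-encoding": b"BASE64"}) == b"base64"
assert effective_cte({}) == b"7bit"
```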
@@ -1782,9 +1772,8 @@
 
 def create_form_parser(
     headers: dict[str, bytes],
-    on_field: OnFieldCallback | None,
-    on_file: OnFileCallback | None,
-    trust_x_headers: bool = False,
+    on_field: Callable[[Field], None] | None,
+    on_file: Callable[[File], None] | None,
     config: dict[Any, Any] = {},
 ) -> FormParser:
     """This function is a helper function to aid in creating a FormParser
@@ -1797,8 +1786,6 @@
         headers: A dictionary-like object of HTTP headers.  The only required header is Content-Type.
         on_field: Callback to call with each parsed field.
         on_file: Callback to call with each parsed file.
-        trust_x_headers: Whether or not to trust information received from certain X-Headers - for example, the file
-            name from X-File-Name.
         config: Configuration variables to pass to the FormParser.
     """
     content_type: str | bytes | None = headers.get("Content-Type")
@@ -1814,11 +1801,8 @@
     # We need content_type to be a string, not a bytes object.
     content_type = content_type.decode("latin-1")
 
-    # File names are optional.
-    file_name = headers.get("X-File-Name")
-
     # Instantiate a form parser.
-    form_parser = FormParser(content_type, on_field, on_file, boundary=boundary, file_name=file_name, config=config)
+    form_parser = FormParser(content_type, on_field, on_file, boundary=boundary, config=config)
 
     # Return our parser.
     return form_parser
@@ -1827,8 +1811,8 @@
 def parse_form(
     headers: dict[str, bytes],
     input_stream: SupportsRead,
-    on_field: OnFieldCallback | None,
-    on_file: OnFileCallback | None,
+    on_field: Callable[[Field], None] | None,
+    on_file: Callable[[File], None] | None,
     chunk_size: int = 1048576,
 ) -> None:
     """This function is useful if you just want to parse a request body,
@@ -1844,6 +1828,9 @@
         chunk_size: The maximum size to read from the input stream and write to the parser at one time.
             Defaults to 1 MiB.
     """
+    if chunk_size < 1:
+        raise ValueError(f"chunk_size must be a positive number, not {chunk_size!r}")
+
     # Create our form parser.
     parser = create_form_parser(headers, on_field, on_file)
 
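The new guard in parse_form() rejects non-positive chunk sizes up front, since a zero or negative chunk_size would make the read loop spin without consuming input. Restated standalone (hypothetical helper name, same message as the patch):

```python
def check_chunk_size(chunk_size: int) -> None:
    # Mirror of the parse_form() guard: fail fast instead of looping forever.
    if chunk_size < 1:
        raise ValueError(f"chunk_size must be a positive number, not {chunk_size!r}")

check_chunk_size(1048576)  # the 1 MiB default passes

rejected = False
try:
    check_chunk_size(0)
except ValueError:
    rejected = True
assert rejected
```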
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/tests/test_data/http/almost_match_boundary.yaml 
new/python_multipart-0.0.26/tests/test_data/http/almost_match_boundary.yaml
--- old/python_multipart-0.0.22/tests/test_data/http/almost_match_boundary.yaml 
2020-02-02 01:00:00.000000000 +0100
+++ new/python_multipart-0.0.26/tests/test_data/http/almost_match_boundary.yaml 
2020-02-02 01:00:00.000000000 +0100
@@ -3,6 +3,7 @@
   - name: file
     type: file
     file_name: test.txt
+    content_type: text/plain
     data: !!binary |
         
LS1ib3VuZGFyaQ0KLS1ib3VuZGFyeXEtLWJvdW5kYXJ5DXEtLWJvdW5kYXJxDQotLWJvdW5hcnlkLS0NCi0tbm90Ym91bmQtLQ0KLS1taXNtYXRjaA0KLS1taXNtYXRjaC0tDQotLWJvdW5kYXJ5LVENCi0tYm91bmRhcnkNUS0tYm91bmRhcnlR
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/tests/test_data/http/base64_encoding.yaml 
new/python_multipart-0.0.26/tests/test_data/http/base64_encoding.yaml
--- old/python_multipart-0.0.22/tests/test_data/http/base64_encoding.yaml       
2020-02-02 01:00:00.000000000 +0100
+++ new/python_multipart-0.0.26/tests/test_data/http/base64_encoding.yaml       
2020-02-02 01:00:00.000000000 +0100
@@ -3,5 +3,6 @@
   - name: file
     type: file
     file_name: test.txt
+    content_type: text/plain
     data: !!binary |
       VGVzdCAxMjM=
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/tests/test_data/http/case_insensitive_headers.http 
new/python_multipart-0.0.26/tests/test_data/http/case_insensitive_headers.http
--- 
old/python_multipart-0.0.22/tests/test_data/http/case_insensitive_headers.http  
    1970-01-01 01:00:00.000000000 +0100
+++ 
new/python_multipart-0.0.26/tests/test_data/http/case_insensitive_headers.http  
    2020-02-02 01:00:00.000000000 +0100
@@ -0,0 +1,21 @@
+------WebKitFormBoundarygbACTUR58IyeurVf
+Content-Disposition: form-data; name="file1"; filename="test1.txt"
+Content-Type: text/plain
+
+Test file #1
+------WebKitFormBoundarygbACTUR58IyeurVf
+CONTENT-DISPOSITION: form-data; name="file2"; filename="test2.txt"
+CONTENT-Type: text/plain
+
+Test file #2
+------WebKitFormBoundarygbACTUR58IyeurVf
+content-disposition: form-data; name="file3"; filename="test3.txt"
+content-type: text/plain
+
+Test file #3
+------WebKitFormBoundarygbACTUR58IyeurVf
+cOnTenT-DiSpOsItiOn: form-data; name="file4"; filename="test4.txt"
+Content-Type: text/plain
+
+Test file #4
+------WebKitFormBoundarygbACTUR58IyeurVf--
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/tests/test_data/http/case_insensitive_headers.yaml 
new/python_multipart-0.0.26/tests/test_data/http/case_insensitive_headers.yaml
--- 
old/python_multipart-0.0.22/tests/test_data/http/case_insensitive_headers.yaml  
    1970-01-01 01:00:00.000000000 +0100
+++ 
new/python_multipart-0.0.26/tests/test_data/http/case_insensitive_headers.yaml  
    2020-02-02 01:00:00.000000000 +0100
@@ -0,0 +1,26 @@
+boundary: ----WebKitFormBoundarygbACTUR58IyeurVf
+expected:
+  - name: file1
+    type: file
+    file_name: test1.txt
+    content_type: text/plain
+    data: !!binary |
+      VGVzdCBmaWxlICMx
+  - name: file2
+    type: file
+    file_name: test2.txt
+    content_type: text/plain
+    data: !!binary |
+      VGVzdCBmaWxlICMy
+  - name: file3
+    type: file
+    file_name: test3.txt
+    content_type: text/plain
+    data: !!binary |
+      VGVzdCBmaWxlICMz
+  - name: file4
+    type: file
+    file_name: test4.txt
+    content_type: text/plain
+    data: !!binary |
+      VGVzdCBmaWxlICM0
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/tests/test_data/http/header_with_number.yaml 
new/python_multipart-0.0.26/tests/test_data/http/header_with_number.yaml
--- old/python_multipart-0.0.22/tests/test_data/http/header_with_number.yaml    
2020-02-02 01:00:00.000000000 +0100
+++ new/python_multipart-0.0.26/tests/test_data/http/header_with_number.yaml    
2020-02-02 01:00:00.000000000 +0100
@@ -3,5 +3,6 @@
   - name: files
     type: file
     file_name: secret.txt
+    content_type: "text/plain; charset=utf-8"
     data: !!binary |
       YWFhYWFh
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/tests/test_data/http/mixed_case_headers.http 
new/python_multipart-0.0.26/tests/test_data/http/mixed_case_headers.http
--- old/python_multipart-0.0.22/tests/test_data/http/mixed_case_headers.http    
1970-01-01 01:00:00.000000000 +0100
+++ new/python_multipart-0.0.26/tests/test_data/http/mixed_case_headers.http    
2020-02-02 01:00:00.000000000 +0100
@@ -0,0 +1,19 @@
+----boundary
+ConTenT-TypE: text/plain; charset="UTF-8"
+ConTenT-DisPoSitioN: form-data; name=field1
+ConTenT-TransfeR-EncoDinG: base64
+
+VGVzdCAxMjM=
+----boundary
+content-type: text/plain; charset="UTF-8"
+content-disposition: form-data; name=field2
+content-transfer-encoding: base64
+
+VGVzdCAxMjM=
+----boundary
+CONTENT-TYPE: text/plain; charset="UTF-8"
+CONTENT-DISPOSITION: form-data; name=Field3
+CONTENT-TRANSFER-ENCODING: base64
+
+VGVzdCAxMjM=
+----boundary--
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/tests/test_data/http/mixed_case_headers.yaml 
new/python_multipart-0.0.26/tests/test_data/http/mixed_case_headers.yaml
--- old/python_multipart-0.0.22/tests/test_data/http/mixed_case_headers.yaml    
1970-01-01 01:00:00.000000000 +0100
+++ new/python_multipart-0.0.26/tests/test_data/http/mixed_case_headers.yaml    
2020-02-02 01:00:00.000000000 +0100
@@ -0,0 +1,14 @@
+boundary: --boundary
+expected:
+  - name: field1
+    type: field
+    data: !!binary |
+      VGVzdCAxMjM=
+  - name: field2
+    type: field
+    data: !!binary |
+      VGVzdCAxMjM=
+  - name: Field3
+    type: field
+    data: !!binary |
+      VGVzdCAxMjM=
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/tests/test_data/http/multiple_files.yaml 
new/python_multipart-0.0.26/tests/test_data/http/multiple_files.yaml
--- old/python_multipart-0.0.22/tests/test_data/http/multiple_files.yaml        
2020-02-02 01:00:00.000000000 +0100
+++ new/python_multipart-0.0.26/tests/test_data/http/multiple_files.yaml        
2020-02-02 01:00:00.000000000 +0100
@@ -3,11 +3,13 @@
   - name: file1
     type: file
     file_name: test1.txt
+    content_type: 'text/plain'
     data: !!binary |
       VGVzdCBmaWxlICMx
   - name: file2
     type: file
     file_name: test2.txt
+    content_type: 'text/plain'
     data: !!binary |
       VGVzdCBmaWxlICMy
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/tests/test_data/http/quoted_printable_encoding.yaml 
new/python_multipart-0.0.26/tests/test_data/http/quoted_printable_encoding.yaml
--- 
old/python_multipart-0.0.22/tests/test_data/http/quoted_printable_encoding.yaml 
    2020-02-02 01:00:00.000000000 +0100
+++ 
new/python_multipart-0.0.26/tests/test_data/http/quoted_printable_encoding.yaml 
    2020-02-02 01:00:00.000000000 +0100
@@ -3,5 +3,6 @@
   - name: file
     type: file
     file_name: test.txt
+    content_type: 'text/plain'
     data: !!binary |
         Zm9vPWJhcg==
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/tests/test_data/http/single_field_single_file.yaml 
new/python_multipart-0.0.26/tests/test_data/http/single_field_single_file.yaml
--- 
old/python_multipart-0.0.22/tests/test_data/http/single_field_single_file.yaml  
    2020-02-02 01:00:00.000000000 +0100
+++ 
new/python_multipart-0.0.26/tests/test_data/http/single_field_single_file.yaml  
    2020-02-02 01:00:00.000000000 +0100
@@ -2,11 +2,13 @@
 expected:
   - name: field
     type: field
+    content_type: 'text/plain'
     data: !!binary |
       dGVzdDE=
   - name: file
     type: file
     file_name: file.txt
+    content_type: 'text/plain'
     data: !!binary |
       dGVzdDI=
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/tests/test_data/http/single_field_with_trailer.http 
new/python_multipart-0.0.26/tests/test_data/http/single_field_with_trailer.http
--- 
old/python_multipart-0.0.22/tests/test_data/http/single_field_with_trailer.http 
    1970-01-01 01:00:00.000000000 +0100
+++ 
new/python_multipart-0.0.26/tests/test_data/http/single_field_with_trailer.http 
    2020-02-02 01:00:00.000000000 +0100
@@ -0,0 +1,7 @@
+------WebKitFormBoundaryTkr3kCBQlBe1nrhc
+Content-Disposition: form-data; name="field"
+
+This is a test.
+------WebKitFormBoundaryTkr3kCBQlBe1nrhc--
+this trailer is epilogue data
+and should be silently ignored
\ No newline at end of file
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/tests/test_data/http/single_field_with_trailer.yaml 
new/python_multipart-0.0.26/tests/test_data/http/single_field_with_trailer.yaml
--- 
old/python_multipart-0.0.22/tests/test_data/http/single_field_with_trailer.yaml 
    1970-01-01 01:00:00.000000000 +0100
+++ 
new/python_multipart-0.0.26/tests/test_data/http/single_field_with_trailer.yaml 
    2020-02-02 01:00:00.000000000 +0100
@@ -0,0 +1,6 @@
+boundary: ----WebKitFormBoundaryTkr3kCBQlBe1nrhc
+expected:
+  - name: field
+    type: field
+    data: !!binary |
+      VGhpcyBpcyBhIHRlc3Qu
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/tests/test_data/http/single_file.yaml 
new/python_multipart-0.0.26/tests/test_data/http/single_file.yaml
--- old/python_multipart-0.0.22/tests/test_data/http/single_file.yaml   
2020-02-02 01:00:00.000000000 +0100
+++ new/python_multipart-0.0.26/tests/test_data/http/single_file.yaml   
2020-02-02 01:00:00.000000000 +0100
@@ -3,6 +3,7 @@
   - name: file
     type: file
     file_name: test.txt
+    content_type: 'text/plain'
     data: !!binary |
       VGhpcyBpcyBhIHRlc3QgZmlsZS4=
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' 
old/python_multipart-0.0.22/tests/test_data/http/utf8_filename.yaml 
new/python_multipart-0.0.26/tests/test_data/http/utf8_filename.yaml
--- old/python_multipart-0.0.22/tests/test_data/http/utf8_filename.yaml 
2020-02-02 01:00:00.000000000 +0100
+++ new/python_multipart-0.0.26/tests/test_data/http/utf8_filename.yaml 
2020-02-02 01:00:00.000000000 +0100
@@ -3,6 +3,7 @@
   - name: file
     type: file
     file_name: ???.txt
+    content_type: 'text/plain'
     data: !!binary |
       44GT44KM44Gv44OG44K544OI44Gn44GZ44CC
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' 
'--exclude=.svnignore' old/python_multipart-0.0.22/tests/test_multipart.py 
new/python_multipart-0.0.26/tests/test_multipart.py
--- old/python_multipart-0.0.22/tests/test_multipart.py 2020-02-02 
01:00:00.000000000 +0100
+++ new/python_multipart-0.0.26/tests/test_multipart.py 2020-02-02 
01:00:00.000000000 +0100
@@ -1,13 +1,12 @@
 from __future__ import annotations
 
-import logging
 import os
 import random
 import sys
 import tempfile
 import unittest
 from io import BytesIO
-from typing import TYPE_CHECKING, cast
+from typing import TYPE_CHECKING
 from unittest.mock import Mock
 
 import pytest
@@ -40,7 +39,7 @@
     from collections.abc import Iterator
     from typing import Any, TypedDict
 
-    from python_multipart.multipart import FieldProtocol, FileConfig, FileProtocol
+    from python_multipart.multipart import FileConfig
 
     class TestParams(TypedDict):
         name: str
@@ -376,6 +375,18 @@
 
         self.assert_fields((b"foo", b"bar"))
 
+    def test_querystring_trailing_bare_field_name(self) -> None:
+        # A trailing bare field name (no '=') must still emit field_end on
+        # finalize - otherwise the field is silently dropped.
+        self.p.write(b"foo=bar&baz")
+
+        self.assert_fields((b"foo", b"bar"), (b"baz", b""))
+
+    def test_querystring_only_bare_field_name(self) -> None:
+        self.p.write(b"foo")
+
+        self.assert_fields((b"foo", b""))
+
     def test_multiple_querystring(self) -> None:
         self.p.write(b"foo=bar&asdf=baz")
 
@@ -722,6 +733,14 @@
     "single_field_single_file",
 ]
 
+EPILOGUE_TEST_HEAD = (
+    "--boundary\r\n"
+    'Content-Disposition: form-data; name="file"; filename="filename.txt"\r\n'
+    "Content-Type: text/plain\r\n\r\n"
+    "hello\r\n"
+    "--boundary--"
+).encode("latin-1")
+
 
 def split_all(val: bytes) -> Iterator[tuple[bytes, bytes]]:
     """
@@ -734,6 +753,25 @@
         yield (val[:i], val[i:])
 
 
[email protected]("content_transfer_encoding", [b"base64", b"BASE64", 
b"Base64"])
+def 
test_content_transfer_encoding_is_case_insensitive(content_transfer_encoding: 
bytes) -> None:
+    data = (
+        b'----boundary\r\nContent-Disposition: form-data; name="file"; filename="test.txt"\r\n'
+        b"Content-Type: text/plain\r\n"
+        b"Content-Transfer-Encoding: " + content_transfer_encoding + b"\r\n\r\nVGVzdA==\r\n----boundary--\r\n"
+    )
+    files: list[File] = []
+
+    f = FormParser("multipart/form-data", None, files.append, boundary="--boundary")
+
+    f.write(data)
+    f.finalize()
+
+    file = files[0]
+    file.file_object.seek(0)
+    assert file.file_object.read() == b"Test"
+
+
 @parametrize_class
 class TestFormParser(unittest.TestCase):
     def make(self, boundary: str | bytes, config: dict[str, Any] = {}) -> None:
@@ -741,11 +779,11 @@
         self.files: list[File] = []
         self.fields: list[Field] = []
 
-        def on_field(f: FieldProtocol) -> None:
-            self.fields.append(cast(Field, f))
+        def on_field(f: Field) -> None:
+            self.fields.append(f)
 
-        def on_file(f: FileProtocol) -> None:
-            self.files.append(cast(File, f))
+        def on_file(f: File) -> None:
+            self.files.append(f)
 
         def on_end() -> None:
             self.ended = True
@@ -759,7 +797,7 @@
         file_data = o.read()
         self.assertEqual(file_data, data)
 
-    def assert_file(self, field_name: bytes, file_name: bytes, data: bytes) -> None:
+    def assert_file(self, field_name: bytes, file_name: bytes, content_type: str | None, data: bytes) -> None:
         # Find this file.
         found = None
         for f in self.files:
@@ -771,6 +809,8 @@
         self.assertIsNotNone(found)
         assert found is not None
 
+        self.assertEqual(found.content_type, content_type)
+
         try:
             # Assert about this file.
             self.assert_file_data(found, data)
@@ -840,7 +880,7 @@
                 self.assert_field(name, e["data"])
 
             elif type == "file":
-                self.assert_file(name, e["file_name"].encode("latin-1"), 
e["data"])
+                self.assert_file(name, e["file_name"].encode("latin-1"), 
e["content_type"], e["data"])
 
             else:
                 assert False
@@ -871,7 +911,32 @@
 
             # Assert that our file and field are here.
             self.assert_field(b"field", b"test1")
-            self.assert_file(b"file", b"file.txt", b"test2")
+            self.assert_file(b"file", b"file.txt", "text/plain", b"test2")
+
+    def test_upload_delete_tmp_config(self) -> None:
+        with tempfile.TemporaryDirectory() as upload_dir:
+            self.make(
+                "----WebKitFormBoundary5BZGOJCWtXGYC9HW",
+                config={"UPLOAD_DIR": upload_dir, "UPLOAD_DELETE_TMP": False, 
"MAX_MEMORY_FILE_SIZE": 1},
+            )
+
+            test_file = "single_file.http"
+            with open(os.path.join(http_tests_dir, test_file), "rb") as f:
+                test_data = f.read()
+
+            self.f.write(test_data)
+            self.f.finalize()
+
+            self.assertEqual(len(self.files), 1)
+            uploaded_file = self.files[0]
+            assert uploaded_file.actual_file_name is not None
+            actual_file_name = uploaded_file.actual_file_name.decode(sys.getfilesystemencoding())
+            uploaded_file.close()
+
+            try:
+                self.assertTrue(os.path.exists(actual_file_name))
+            finally:
+                os.unlink(actual_file_name)
 
     @parametrize("param", [t for t in http_tests if t["name"] in 
single_byte_tests])
     def test_feed_single_bytes(self, param: TestParams) -> None:
@@ -910,7 +975,7 @@
                 self.assert_field(name, e["data"])
 
             elif type == "file":
-                self.assert_file(name, e["file_name"].encode("latin-1"), 
e["data"])
+                self.assert_file(name, e["file_name"].encode("latin-1"), 
e["content_type"], e["data"])
 
             else:
                 assert False
@@ -948,6 +1013,48 @@
                 # Assert that our field is here.
                 self.assert_field(b"field", 
b"0123456789ABCDEFGHIJ0123456789ABCDEFGHIJ")
 
+    def test_file_content_type_header(self) -> None:
+        """
+        This test checks that the content-type for a file part is passed on.
+        """
+        # Load test data.
+        test_file = "header_with_number.http"
+        with open(os.path.join(http_tests_dir, test_file), "rb") as f:
+            test_data = f.read()
+
+        expected_content_type = "text/plain; charset=utf-8"
+
+        # Create form parser.
+        self.make(boundary="b8825ae386be4fdc9644d87e392caad3")
+        self.f.write(test_data)
+        self.f.finalize()
+
+        # Assert that our field is here.
+        self.assertEqual(1, len(self.files))
+        actual_content_type = self.files[0].content_type
+        self.assertEqual(actual_content_type, expected_content_type)
+
+    def test_field_content_type_header(self) -> None:
+        """
+        This test checks that the content-type for a field part is read and passed on.
+        """
+        # Load test data.
+        test_file = "single_field.http"
+        with open(os.path.join(http_tests_dir, test_file), "rb") as f:
+            test_data = f.read()
+
+        expected_content_type = None
+
+        # Create form parser.
+        self.make(boundary="----WebKitFormBoundaryTkr3kCBQlBe1nrhc")
+        self.f.write(test_data)
+        self.f.finalize()
+
+        # Assert that our field is here.
+        self.assertEqual(1, len(self.fields))
+        actual_content_type = self.fields[0].content_type
+        self.assertEqual(actual_content_type, expected_content_type)
+
     def test_request_body_fuzz(self) -> None:
         """
         This test randomly fuzzes the request body to ensure that no strange
@@ -1078,8 +1185,8 @@
     def test_octet_stream(self) -> None:
         files: list[File] = []
 
-        def on_file(f: FileProtocol) -> None:
-            files.append(cast(File, f))
+        def on_file(f: File) -> None:
+            files.append(f)
 
         on_field = Mock()
         on_end = Mock()
@@ -1100,8 +1207,8 @@
     def test_querystring(self) -> None:
         fields: list[Field] = []
 
-        def on_field(f: FieldProtocol) -> None:
-            fields.append(cast(Field, f))
+        def on_field(f: Field) -> None:
+            fields.append(f)
 
         on_file = Mock()
         on_end = Mock()
@@ -1171,8 +1278,8 @@
 
         files: list[File] = []
 
-        def on_file(f: FileProtocol) -> None:
-            files.append(cast(File, f))
+        def on_file(f: File) -> None:
+            files.append(f)
 
         on_field = Mock()
         on_end = Mock()
@@ -1193,11 +1300,31 @@
         f.finalize()
         self.assert_file_data(files[0], b"Test")
 
+    def test_bad_content_disposition(self) -> None:
+        # Field name is required per RFC 7578 §4.2.
+        data = (
+            b"----boundary\r\n"
+            b"Content-Disposition: form-data;\r\n"
+            b"Content-Type: text/plain\r\n"
+            b"\r\n"
+            b"Test\r\n"
+            b"----boundary--\r\n"
+        )
+
+        on_field = Mock()
+        on_file = Mock()
+
+        f = FormParser("multipart/form-data", on_field, on_file, 
boundary="--boundary")
+
+        with self.assertRaisesRegex(FormParserError, "Field name not found in 
Content-Disposition"):
+            f.write(data)
+            f.finalize()
+
     def test_handles_None_fields(self) -> None:
         fields: list[Field] = []
 
-        def on_field(f: FieldProtocol) -> None:
-            fields.append(cast(Field, f))
+        def on_field(f: Field) -> None:
+            fields.append(f)
 
         on_file = Mock()
         on_end = Mock()
@@ -1215,27 +1342,58 @@
         self.assertEqual(fields[2].field_name, b"baz")
         self.assertEqual(fields[2].value, b"asdf")
 
-    def test_multipart_parser_newlines_before_first_boundary(self) -> None:
-        """This test makes sure that the parser does not handle when there is 
junk data after the last boundary."""
-        num = 5_000_000
-        data = (
-            "\r\n" * num + "--boundary\r\n"
-            'Content-Disposition: form-data; name="file"; filename="filename.txt"\r\n'
-            "Content-Type: text/plain\r\n\r\n"
-            "hello\r\n"
-            "--boundary--"
-        )
+    @parametrize(
+        "chunks",
+        [
+            [
+                b"\r\nignored preamble\r\n"
+                + (
+                    b"--boundary\r\n"
+                    b'Content-Disposition: form-data; name="file"; filename="filename.txt"\r\n'
+                    b"Content-Type: text/plain\r\n\r\n"
+                    b"hello\r\n"
+                    b"--boundary--"
+                )
+            ],
+            [
+                b"\r\n" * 5_000_000
+                + (
+                    b"--boundary\r\n"
+                    b'Content-Disposition: form-data; name="file"; filename="filename.txt"\r\n'
+                    b"Content-Type: text/plain\r\n\r\n"
+                    b"hello\r\n"
+                    b"--boundary--"
+                )
+            ],
+            [
+                b"\r\n" * 5_000_000,
+                (
+                    b"--boundary\r\n"
+                    b'Content-Disposition: form-data; name="file"; filename="filename.txt"\r\n'
+                    b"Content-Type: text/plain\r\n\r\n"
+                    b"hello\r\n"
+                    b"--boundary--"
+                ),
+            ],
+        ],
+    )
+    def test_multipart_parser_preamble_before_first_boundary(self, chunks: list[bytes]) -> None:
+        """Parser must not hang or blow up on a preamble before the first boundary."""
 
         files: list[File] = []
 
-        def on_file(f: FileProtocol) -> None:
-            files.append(cast(File, f))
+        def on_file(f: File) -> None:
+            files.append(f)
 
         f = FormParser("multipart/form-data", on_field=Mock(), on_file=on_file, boundary="boundary")
-        f.write(data.encode("latin-1"))
+        for chunk in chunks:
+            f.write(chunk)
+
+        assert len(files) == 1
+        self.assert_file_data(files[0], b"hello")
 
     def test_multipart_parser_data_after_last_boundary(self) -> None:
-        """This test makes sure that the parser does not handle when there is junk data after the last boundary."""
+        """Parser must short-circuit on arbitrary epilogue data after the closing boundary (no O(N) scan)."""
         num = 50_000_000
         data = (
             "--boundary\r\n"
@@ -1247,35 +1405,38 @@
 
         files: list[File] = []
 
-        def on_file(f: FileProtocol) -> None:
-            files.append(cast(File, f))
+        def on_file(f: File) -> None:
+            files.append(f)
 
         f = FormParser("multipart/form-data", on_field=Mock(), on_file=on_file, boundary="boundary")
         f.write(data.encode("latin-1"))
 
-    @pytest.fixture(autouse=True)
-    def inject_fixtures(self, caplog: pytest.LogCaptureFixture) -> None:
-        self._caplog = caplog
-
-    def test_multipart_parser_data_end_with_crlf_without_warnings(self) -> None:
-        """This test makes sure that the parser does not handle when the data ends with a CRLF."""
-        data = (
-            "--boundary\r\n"
-            'Content-Disposition: form-data; name="file"; filename="filename.txt"\r\n'
-            "Content-Type: text/plain\r\n\r\n"
-            "hello\r\n"
-            "--boundary--\r\n"
-        )
+    @parametrize(
+        "chunks",
+        [
+            [EPILOGUE_TEST_HEAD + b"\r\n"],
+            [EPILOGUE_TEST_HEAD + b"\r", b"\n"],
+            [EPILOGUE_TEST_HEAD, b"\r\n"],
+            [EPILOGUE_TEST_HEAD + b"\r\n--boundary\r\nthis is not a valid header\r\n\r\nnot a real part"],
+        ],
+    )
+    def test_multipart_parser_ignores_epilogue(self, chunks: list[bytes]) -> None:
+        """Epilogue data after the closing boundary must be ignored.
 
+        Covers both the single-chunk case and the case where trailing CRLF is split across `write()` calls.
+        The final case asserts that epilogue bytes are not parsed or validated.
+        """
         files: list[File] = []
 
-        def on_file(f: FileProtocol) -> None:
-            files.append(cast(File, f))
+        def on_file(f: File) -> None:
+            files.append(f)
 
         f = FormParser("multipart/form-data", on_field=Mock(), on_file=on_file, boundary="boundary")
-        with self._caplog.at_level(logging.WARNING):
-            f.write(data.encode("latin-1"))
-            assert len(self._caplog.records) == 0
+        for chunk in chunks:
+            f.write(chunk)
+
+        assert len(files) == 1
+        self.assert_file_data(files[0], b"hello")
 
     def test_max_size_multipart(self) -> None:
         # Load test data.
@@ -1317,8 +1478,8 @@
     def test_octet_stream_max_size(self) -> None:
         files: list[File] = []
 
-        def on_file(f: FileProtocol) -> None:
-            files.append(cast(File, f))
+        def on_file(f: File) -> None:
+            files.append(f)
 
         on_field = Mock()
         on_end = Mock()
@@ -1371,6 +1532,7 @@
         self.assertEqual(calls, 3)
 
 
+@parametrize_class
 class TestHelperFunctions(unittest.TestCase):
     def test_create_form_parser(self) -> None:
         r = create_form_parser({"Content-Type": b"application/octet-stream"}, None, None)
@@ -1394,12 +1556,12 @@
         self.assertEqual(on_file.call_args[0][0].size, 15)
 
     def test_parse_form_content_length(self) -> None:
-        files: list[FileProtocol] = []
+        files: list[File] = []
 
-        def on_field(field: FieldProtocol) -> None:
+        def on_field(field: Field) -> None:
             pass
 
-        def on_file(file: FileProtocol) -> None:
+        def on_file(file: File) -> None:
             files.append(file)
 
         parse_form(
@@ -1410,7 +1572,17 @@
         )
 
         self.assertEqual(len(files), 1)
-        self.assertEqual(files[0].size, 10)  # type: ignore[attr-defined]
+        self.assertEqual(files[0].size, 10)
+
+    def test_parse_form_invalid_chunk_size(self) -> None:
+        with self.assertRaisesRegex(ValueError, "chunk_size must be a positive number, not 0"):
+            parse_form(
+                {"Content-Type": b"application/octet-stream"},
+                BytesIO(b"123456789012345"),
+                lambda _: None,
+                lambda _: None,
+                chunk_size=0,
+            )
 
 
 def suite() -> unittest.TestSuite:
