Compare commits

...

75 Commits

Author SHA1 Message Date
GiGaGon
7987951e24
Convert legacy string formatting to f-strings (#4685)
* the changes

* Update driver.py
2025-06-05 18:51:26 -07:00
GiGaGon
e5e5dad792
Fix await ellipses and remove async/await soft keyword/identifier support (#4676)
* Update tokenize.py

* Update driver.py

* Update test_black.py

* Update test_black.py

* Update python37.py

* Update tokenize.py

* Update CHANGES.md

* Update CHANGES.md

* Update faq.md

* Update driver.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-06-05 18:50:42 -07:00
GiGaGon
24e4cb20ab
Fix backslash cr nl bug (#4673)
* Update tokenize.py

* Update CHANGES.md

* Update test_black.py

* Update test_black.py

* Update test_black.py
2025-06-05 18:49:15 -07:00
GiGaGon
e7bf7b4619
Fix CI mypyc 1.16 failure (#4671) 2025-05-29 14:10:29 -07:00
cobalt
71e380aedf
CI: Remove now-uneeded workarounds (#4665) 2025-05-25 18:23:42 -05:00
dependabot[bot]
2630801f95
Bump pypa/cibuildwheel from 2.22.0 to 2.23.3 (#4660)
Bumps [pypa/cibuildwheel](https://github.com/pypa/cibuildwheel) from 2.22.0 to 2.23.3.
- [Release notes](https://github.com/pypa/cibuildwheel/releases)
- [Changelog](https://github.com/pypa/cibuildwheel/blob/main/docs/changelog.md)
- [Commits](https://github.com/pypa/cibuildwheel/compare/v2.22.0...v2.23.3)

---
updated-dependencies:
- dependency-name: pypa/cibuildwheel
  dependency-version: 2.23.3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-15 07:22:11 -05:00
danigm
b0f36f5b42
Update test_code_option_safe to work with click 8.2.0 (#4666) 2025-05-15 07:04:00 -05:00
cobalt
314f8cf92b
Update Prettier pre-commit configuration (#4662)
* Update Prettier configuration

Signed-off-by: cobalt <61329810+cobaltt7@users.noreply.github.com>

* Update .github/workflows/diff_shades.yml

Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>

---------

Signed-off-by: cobalt <61329810+cobaltt7@users.noreply.github.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
2025-05-11 19:21:50 -05:00
Pedro Mezacasa Muller
d0ff3bd6cb
Fix crash when a tuple is used as a ContextManager (#4646) 2025-04-08 21:42:17 -07:00
pre-commit-ci[bot]
a41dc89f1f
[pre-commit.ci] pre-commit autoupdate (#4644)
updates:
- [github.com/pycqa/isort: 5.13.2 → 6.0.1](https://github.com/pycqa/isort/compare/5.13.2...6.0.1)
- [github.com/pycqa/flake8: 7.1.1 → 7.2.0](https://github.com/pycqa/flake8/compare/7.1.1...7.2.0)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-04-07 14:45:01 -07:00
Tushar Sadhwani
950ec38c11
Disallow unwrapping tuples in an as clause (#4634) 2025-04-01 07:49:37 -07:00
Tushar Sadhwani
2c135edf37
Handle # fmt: skip followed by a comment (#4635) 2025-03-22 19:30:40 -07:00
Tushar Sadhwani
6144c46c6a
Fix parsing of walrus operator in complex with statements (#4630) 2025-03-20 14:00:11 -07:00
Tsvika Shapira
dd278cb316
update github-action to look for black version in "dependency-groups" (#4606)
"dependency-groups" is the mechanism for storing package requirements in `pyproject.toml`, recommended for formatting tools (see https://packaging.python.org/en/latest/specifications/dependency-groups/ )

this change allow the black action to look also in those locations when determining the version of black to install
2025-03-20 08:01:31 -07:00
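The `[dependency-groups]` mechanism this commit message refers to (PEP 735) can be sketched as follows — a hypothetical `pyproject.toml` fragment; the group name and version pin are illustrative, not taken from any real project:

```toml
# Hypothetical pyproject.toml fragment. With PEP 735 dependency groups,
# the psf/black GitHub Action can discover a pinned Black version here
# in addition to [project] dependencies and optional-dependencies.
[dependency-groups]
lint = [
    "black==25.1.0",  # illustrative pin, not a recommendation
]
```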
Tushar Sadhwani
dbb14eac93
Recursively unwrap tuples in del statements (#4628) 2025-03-19 15:02:40 -07:00
Tushar Sadhwani
5342d2eeda
Replace the blib2to3 tokenizer with pytokens (#4536) 2025-03-15 17:41:19 -07:00
Glyph
9f38928414
github is deprecating the ubuntu 20.04 actions runner image (#4607)
see https://github.com/actions/runner-images/issues/11101
2025-03-05 18:26:00 -08:00
Pedro Mezacasa Muller
3e9dd25dad
Fix bug where # fmt: skip is not being respected with one-liner functions (#4552) 2025-03-03 15:11:21 -08:00
dependabot[bot]
bb802cf19a
Bump sphinx from 8.2.1 to 8.2.3 in /docs (#4603)
Bumps [sphinx](https://github.com/sphinx-doc/sphinx) from 8.2.1 to 8.2.3.
- [Release notes](https://github.com/sphinx-doc/sphinx/releases)
- [Changelog](https://github.com/sphinx-doc/sphinx/blob/master/CHANGES.rst)
- [Commits](https://github.com/sphinx-doc/sphinx/compare/v8.2.1...v8.2.3)

---
updated-dependencies:
- dependency-name: sphinx
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-03 06:24:03 -08:00
Jelle Zijlstra
5ae38dd370
Fix parser for TypeVar bounds (#4602) 2025-03-03 00:20:23 -08:00
rdrll
45cbe572ee
Add regression tests for Black’s previous inconsistent quote formatting with adjacent string literals (#4580) 2025-03-02 19:23:58 -08:00
Hugo van Kemenade
fccd70cff1
Update top-pypi-packages filename (#4598)
To stay within quota, it now has just under 30 days of data, so the filename has been updated. Both will be available for a while. See https://github.com/hugovk/top-pypi-packages/pull/46.
2025-03-02 08:09:40 -08:00
🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко)
00c0d6d91a
📦 Tell git archive to include numbered tags (#4593)
The wildcard at the beginning used to match tags with arbitrary
prefixes otherwise. This patch corrects that making it more accurate.
2025-02-28 16:09:40 -08:00
🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко)
0580ecbef3
📦 Make Git archives for tags immutable (#4592)
This change will help with reproducibility in downstreams.

Ref: https://setuptools-scm.rtfd.io/en/latest/usage/#git-archives
2025-02-27 09:08:50 -08:00
Michael R. Crusoe
ed64d89faa
additional fix for click 8.2.0 (#4591) 2025-02-27 08:46:59 -08:00
dependabot[bot]
452d3b68f4
Bump sphinx from 8.1.3 to 8.2.1 in /docs (#4587)
Bumps [sphinx](https://github.com/sphinx-doc/sphinx) from 8.1.3 to 8.2.1.
- [Release notes](https://github.com/sphinx-doc/sphinx/releases)
- [Changelog](https://github.com/sphinx-doc/sphinx/blob/v8.2.1/CHANGES.rst)
- [Commits](https://github.com/sphinx-doc/sphinx/compare/v8.1.3...v8.2.1)

---
updated-dependencies:
- dependency-name: sphinx
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-24 05:19:48 -08:00
sobolevn
256f3420b1
Add --local-partial-types and --strict-bytes to mypy (#4583) 2025-02-20 15:27:23 -08:00
dependabot[bot]
00cb6d15c5
Bump myst-parser from 4.0.0 to 4.0.1 in /docs (#4578)
Bumps [myst-parser](https://github.com/executablebooks/MyST-Parser) from 4.0.0 to 4.0.1.
- [Release notes](https://github.com/executablebooks/MyST-Parser/releases)
- [Changelog](https://github.com/executablebooks/MyST-Parser/blob/master/CHANGELOG.md)
- [Commits](https://github.com/executablebooks/MyST-Parser/compare/v4.0.0...v4.0.1)

---
updated-dependencies:
- dependency-name: myst-parser
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-18 21:16:59 -08:00
MeggyCal
14e1de805a
mix_stderr parameter was removed from click 8.2.0 (#4577) 2025-02-18 07:30:11 -08:00
GiGaGon
5f23701708
Fix diff shades CI (#4576) 2025-02-06 18:59:16 -08:00
GiGaGon
9c129567e7
Re-add packaging CHANGES.md comment (#4568) 2025-01-29 14:29:55 -08:00
Michał Górny
c02ca47daa
Fix mis-synced version check in black.vim (#4567)
The message has been updated to indicate Python 3.9+, but the check
still compares to 3.8
2025-01-29 12:25:00 -08:00
Jelle Zijlstra
edaf085a18 new changelog template 2025-01-28 21:55:27 -08:00
Jelle Zijlstra
b844c8a136
unhack pyproject.toml (#4566) 2025-01-28 21:54:46 -08:00
Jelle Zijlstra
d82da0f0e9
Fix hatch build (#4565) 2025-01-28 20:52:03 -08:00
Jelle Zijlstra
8a737e727a
Prepare release 25.1.0 (#4563) 2025-01-28 18:34:41 -08:00
Jelle Zijlstra
d330deea00
docs: We're not going to use backslashes for the with statement (#4564) 2025-01-28 18:29:05 -08:00
cobalt
3d8129001f
Move wrap_long_dict_values_in_parens to the preview style (#4561) 2025-01-27 17:46:13 -08:00
Pedro Mezacasa Muller
459562c71a
Improve function declaration wrapping when it contains generic type definitions (#4553)
---------

Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: hauntsaninja <hauntsaninja@gmail.com>
Co-authored-by: Shantanu <12621235+hauntsaninja@users.noreply.github.com>
2025-01-26 00:43:22 -08:00
Shantanu
99dbf3006b
Cache executor to avoid hitting open file limits (#4560)
Fixes #4504, fixes #3251
2025-01-25 09:28:06 -08:00
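The commit title describes a general resource-reuse technique; a minimal sketch of the idea (not Black's actual implementation) looks like this:

```python
# Sketch of the general technique, assuming nothing about Black's code:
# construct the process pool once and reuse it, instead of creating a fresh
# executor (and its pipes/file descriptors) on every call.
from concurrent.futures import Executor, ProcessPoolExecutor
from functools import lru_cache


@lru_cache(maxsize=1)
def get_executor() -> Executor:
    # Worker processes are created lazily, and the single cached instance
    # is shared by all subsequent callers, keeping descriptor usage bounded.
    return ProcessPoolExecutor()


# Repeated calls return the same executor rather than opening new pipes.
assert get_executor() is get_executor()
```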
Jelle Zijlstra
c0b92f3888
Prepare the 2025 stable style (#4558) 2025-01-24 18:00:35 -08:00
GiGaGon
e58baf15b9
Add test for #1187 (#4559)
Closes #1187
2025-01-23 21:20:47 -08:00
GiGaGon
1455ae4731
Fix docs CI (#4555)
Update .readthedocs.yaml
2025-01-21 12:43:08 -08:00
cobalt
584d0331c8
fix: Don't remove parenthesis around long dictionary values (#4377) 2025-01-16 22:09:22 -08:00
Jelle Zijlstra
6e9654065c
Fix CI (#4551) 2025-01-16 21:21:08 -08:00
Cooper Lees
8dc912774e
Remove Facebook from users (#4548) 2025-01-09 19:22:59 -08:00
pre-commit-ci[bot]
40b73f2fb5
[pre-commit.ci] pre-commit autoupdate (#4547)
* [pre-commit.ci] pre-commit autoupdate

updates:
- [github.com/pre-commit/mirrors-mypy: v1.13.0 → v1.14.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.13.0...v1.14.1)

* Fix wrapper's return types to be String or Text IO

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Cooper Ry Lees <me@cooperlees.com>
2025-01-07 11:42:27 -08:00
GiGaGon
e157ba4de5
Fix sus returns in strings.py (#4546) 2025-01-06 12:02:56 -08:00
Tony Wang
fdabd424e2
Speed up blib2to3 tokenization using startswith with a tuple (#4541) 2024-12-29 17:17:50 -08:00
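The optimization named in this commit title is a common Python micro-optimization; a hedged sketch (the strings here are illustrative, not blib2to3's real code):

```python
# One startswith() call taking a tuple of prefixes replaces a chain of
# calls, saving repeated method lookups in a hot tokenizer loop.
line = 'rb"raw bytes"'

# Chained form: one call per candidate prefix.
slow = line.startswith("r") or line.startswith("b") or line.startswith("f")

# Tuple form: a single call checks all prefixes.
fast = line.startswith(("r", "b", "f"))

assert slow == fast
```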
GiGaGon
9431e98522
Fix integration between stdin filename and --force-exclude (#4539) 2024-12-27 16:48:23 -08:00
GiGaGon
3b00112ac5
Fix crash on formatting certain with statements (#4538)
Fixes #3678
2024-12-24 12:25:08 -08:00
GiGaGon
0aabac4fe0
Add regression test for #1765 (#4530) 2024-12-23 10:46:25 -08:00
cobalt
ed33205579
Fix type error (#4537) 2024-12-22 22:19:40 -08:00
Ac5000
6000d37f09
Add Clarification to Config File Location/Name (#4533) 2024-12-19 16:07:27 -08:00
GiGaGon
30759ca782
Add *.py diff=python to .gitattributes (#4531) 2024-12-11 11:35:20 -08:00
dependabot[bot]
84ac1a947d
Bump sphinxcontrib-programoutput from 0.17 to 0.18 in /docs (#4528)
Bumps [sphinxcontrib-programoutput](https://github.com/NextThought/sphinxcontrib-programoutput) from 0.17 to 0.18.
- [Changelog](https://github.com/OpenNTI/sphinxcontrib-programoutput/blob/master/CHANGES.rst)
- [Commits](https://github.com/NextThought/sphinxcontrib-programoutput/compare/0.17...0.18)

---
updated-dependencies:
- dependency-name: sphinxcontrib-programoutput
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-09 12:44:44 -08:00
mosfet80
0db1173bbc
Update libs into .pre-commit-config.yaml (#4521) 2024-12-07 19:53:22 -08:00
GiGaGon
3fab5ade71
Prevent f-string merge quote changes with nested quotes (#4498) 2024-12-03 20:44:26 -08:00
Owen Christie
e54f86bae4
Two blank lines after an import should be reduced to one (#4489)
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
2024-12-03 20:39:35 -08:00
cobalt
96ca1b6be3
fix: Remove parenthesis around sole list items (#4312) 2024-11-27 19:59:29 -08:00
Ярослав Бритов
17efac45f9
Update getting_started.md (#4518)
Update necessary python version to run black in docs.
2024-11-25 20:22:37 -08:00
dependabot[bot]
73f651f02f
Bump pypa/cibuildwheel from 2.21.2 to 2.22.0 (#4517)
Bumps [pypa/cibuildwheel](https://github.com/pypa/cibuildwheel) from 2.21.2 to 2.22.0.
- [Release notes](https://github.com/pypa/cibuildwheel/releases)
- [Changelog](https://github.com/pypa/cibuildwheel/blob/main/docs/changelog.md)
- [Commits](https://github.com/pypa/cibuildwheel/compare/v2.21.2...v2.22.0)

---
updated-dependencies:
- dependency-name: pypa/cibuildwheel
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-25 08:54:00 -08:00
Shantanu
f6c7c98f34
Fix issue with newer upload-artifact in PyPI action (#4512)
Github is breaking older upload-artifact in a few weeks
2024-11-14 07:43:59 -08:00
dependabot[bot]
d670b0439c
Bump sphinx from 7.4.7 to 8.1.3 in /docs (#4483) 2024-11-14 04:27:54 +00:00
dependabot[bot]
56896264e4
Bump docutils from 0.20.1 to 0.21.2 in /docs (#4342) 2024-11-13 20:15:30 -08:00
dependabot[bot]
efd9778873
Bump myst-parser from 3.0.1 to 4.0.0 in /docs (#4434)
Bumps [myst-parser](https://github.com/executablebooks/MyST-Parser) from 3.0.1 to 4.0.0.
- [Release notes](https://github.com/executablebooks/MyST-Parser/releases)
- [Changelog](https://github.com/executablebooks/MyST-Parser/blob/master/CHANGELOG.md)
- [Commits](https://github.com/executablebooks/MyST-Parser/compare/v3.0.1...v4.0.0)
2024-11-13 20:14:50 -08:00
GiGaGon
c472557ba8
Small improvements to the contributing basics (#4502) 2024-11-05 08:03:32 -08:00
Mattwmaster58
53a219056d
Note required python version for use_pyproject: true (#4503) 2024-10-24 18:58:24 -07:00
Matej Aleksandrov
c98fc0c128
Update deprecated type aliases (#4486) 2024-10-23 07:00:55 -07:00
Shantanu
f54f34799b
Use released mypy (#4490) 2024-10-19 18:01:05 -07:00
Matej Aleksandrov
484a669699
Replace remaining aliases to built-in types (#4485) 2024-10-14 16:37:58 -07:00
Matej Aleksandrov
fff747d61b
Fix formatting cells with magic methods and starting or trailing empty lines (#4484) 2024-10-14 06:55:59 -07:00
Marc Mueller
9995bffbe4
Store license identifier inside the License-Expression metadata field (#4479) 2024-10-11 14:40:49 -07:00
Jelle Zijlstra
7452902c77 New changelog 2024-10-11 14:21:07 -07:00
Jelle Zijlstra
32ebb93003
Clean up Python 3.8 remnants (#4473) 2024-10-08 19:11:22 -07:00
109 changed files with 2506 additions and 1807 deletions


@@ -1,4 +1,3 @@
 node: $Format:%H$
 node-date: $Format:%cI$
-describe-name: $Format:%(describe:tags=true,match=*[0-9]*)$
-ref-names: $Format:%D$
+describe-name: $Format:%(describe:tags=true,match=[0-9]*)$

.gitattributes vendored

@@ -1 +1,2 @@
 .git_archival.txt export-subst
+*.py diff=python


@@ -34,7 +34,8 @@ jobs:
       env:
         GITHUB_TOKEN: ${{ github.token }}
       run: >
-        python scripts/diff_shades_gha_helper.py config ${{ github.event_name }} ${{ matrix.mode }}
+        python scripts/diff_shades_gha_helper.py config ${{ github.event_name }}
+        ${{ matrix.mode }}

   analysis:
     name: analysis / ${{ matrix.mode }}
@@ -44,11 +45,11 @@ jobs:
       HATCH_BUILD_HOOKS_ENABLE: "1"
       # Clang is less picky with the C code it's given than gcc (and may
       # generate faster binaries too).
-      CC: clang-14
+      CC: clang-18
     strategy:
       fail-fast: false
       matrix:
-        include: ${{ fromJson(needs.configure.outputs.matrix )}}
+        include: ${{ fromJson(needs.configure.outputs.matrix) }}
     steps:
       - name: Checkout this repository (full clone)
@@ -110,19 +111,19 @@ jobs:
           ${{ matrix.baseline-analysis }} ${{ matrix.target-analysis }}
       - name: Upload diff report
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: ${{ matrix.mode }}-diff.html
           path: diff.html
       - name: Upload baseline analysis
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: ${{ matrix.baseline-analysis }}
           path: ${{ matrix.baseline-analysis }}
       - name: Upload target analysis
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: ${{ matrix.target-analysis }}
           path: ${{ matrix.target-analysis }}
@@ -130,14 +131,13 @@ jobs:
       - name: Generate summary file (PR only)
         if: github.event_name == 'pull_request' && matrix.mode == 'preview-changes'
         run: >
-          python helper.py comment-body
-          ${{ matrix.baseline-analysis }} ${{ matrix.target-analysis }}
-          ${{ matrix.baseline-sha }} ${{ matrix.target-sha }}
-          ${{ github.event.pull_request.number }}
+          python helper.py comment-body ${{ matrix.baseline-analysis }}
+          ${{ matrix.target-analysis }} ${{ matrix.baseline-sha }}
+          ${{ matrix.target-sha }} ${{ github.event.pull_request.number }}
       - name: Upload summary file (PR only)
         if: github.event_name == 'pull_request' && matrix.mode == 'preview-changes'
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: .pr-comment.json
           path: .pr-comment.json


@@ -50,8 +50,8 @@ jobs:
       # Keep cibuildwheel version in sync with below
       - name: Install cibuildwheel and pypyp
         run: |
-          pipx install cibuildwheel==2.21.2
-          pipx install pypyp==1
+          pipx install cibuildwheel==2.22.0
+          pipx install pypyp==1.3.0
       - name: generate matrix
         if: github.event_name != 'pull_request'
         run: |
@@ -92,14 +92,14 @@ jobs:
     steps:
       - uses: actions/checkout@v4
       # Keep cibuildwheel version in sync with above
-      - uses: pypa/cibuildwheel@v2.21.2
+      - uses: pypa/cibuildwheel@v2.23.3
        with:
          only: ${{ matrix.only }}
       - name: Upload wheels as workflow artifacts
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
-          name: ${{ matrix.name }}-mypyc-wheels
+          name: ${{ matrix.only }}-mypyc-wheels
           path: ./wheelhouse/*.whl
       - if: github.event_name == 'release'


@@ -13,13 +13,13 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        os: [windows-2019, ubuntu-20.04, macos-latest]
+        os: [windows-2019, ubuntu-22.04, macos-latest]
         include:
           - os: windows-2019
             pathsep: ";"
             asset_name: black_windows.exe
             executable_mime: "application/vnd.microsoft.portable-executable"
-          - os: ubuntu-20.04
+          - os: ubuntu-22.04
             pathsep: ":"
             asset_name: black_linux
             executable_mime: "application/x-executable"


@@ -24,12 +24,12 @@ repos:
         additional_dependencies: *version_check_dependencies
   - repo: https://github.com/pycqa/isort
-    rev: 5.13.2
+    rev: 6.0.1
     hooks:
       - id: isort
   - repo: https://github.com/pycqa/flake8
-    rev: 7.1.0
+    rev: 7.2.0
     hooks:
       - id: flake8
         additional_dependencies:
@@ -39,17 +39,21 @@ repos:
         exclude: ^src/blib2to3/
   - repo: https://github.com/pre-commit/mirrors-mypy
-    rev: v1.11.2
+    rev: v1.15.0
     hooks:
       - id: mypy
         exclude: ^(docs/conf.py|scripts/generate_schema.py)$
         args: []
         additional_dependencies: &mypy_deps
           - types-PyYAML
+          - types-atheris
           - tomli >= 0.2.6, < 2.0.0
-          - click >= 8.1.0, != 8.1.4, != 8.1.5
+          - click >= 8.2.0
+          # Click is intentionally out-of-sync with pyproject.toml
+          # v8.2 has breaking changes. We work around them at runtime, but we need the newer stubs.
           - packaging >= 22.0
           - platformdirs >= 2.1.0
+          - pytokens >= 0.1.10
           - pytest
           - hypothesis
           - aiohttp >= 3.7.4
@@ -62,14 +66,15 @@ repos:
         args: ["--python-version=3.10"]
         additional_dependencies: *mypy_deps
-  - repo: https://github.com/pre-commit/mirrors-prettier
-    rev: v4.0.0-alpha.8
+  - repo: https://github.com/rbubley/mirrors-prettier
+    rev: v3.5.3
     hooks:
       - id: prettier
+        types_or: [markdown, yaml, json]
         exclude: \.github/workflows/diff_shades\.yml
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v4.6.0
+    rev: v5.0.0
     hooks:
       - id: end-of-file-fixer
       - id: trailing-whitespace


@@ -16,3 +16,6 @@ python:
     path: .
     extra_requirements:
       - d
+
+sphinx:
+  configuration: docs/conf.py


@@ -1,5 +1,128 @@
 # Change Log
+
+## Unreleased
+
+### Highlights
+
+<!-- Include any especially major or disruptive changes here -->
+
+### Stable style
+
+<!-- Changes that affect Black's stable style -->
+
+- Fix crash while formatting a long `del` statement containing tuples (#4628)
+- Fix crash while formatting expressions using the walrus operator in complex `with`
+  statements (#4630)
+- Handle `# fmt: skip` followed by a comment at the end of file (#4635)
+- Fix crash when a tuple appears in the `as` clause of a `with` statement (#4634)
+- Fix crash when tuple is used as a context manager inside a `with` statement (#4646)
+- Fix crash on a `\\r\n` (#4673)
+- Fix crash on `await ...` (where `...` is a literal `Ellipsis`) (#4676)
+- Remove support for pre-python 3.7 `await/async` as soft keywords/variable names
+  (#4676)
+
+### Preview style
+
+<!-- Changes that affect Black's preview style -->
+
+- Fix a bug where one-liner functions/conditionals marked with `# fmt: skip` would still
+  be formatted (#4552)
+
+### Configuration
+
+<!-- Changes to how Black can be configured -->
+
+### Packaging
+
+<!-- Changes to how Black is packaged, such as dependency requirements -->
+
+### Parser
+
+<!-- Changes to the parser or to version autodetection -->
+
+- Rewrite tokenizer to improve performance and compliance (#4536)
+- Fix bug where certain unusual expressions (e.g., lambdas) were not accepted in type
+  parameter bounds and defaults. (#4602)
+
+### Performance
+
+<!-- Changes that improve Black's performance. -->
+
+### Output
+
+<!-- Changes to Black's terminal output and error messages -->
+
+### _Blackd_
+
+<!-- Changes to blackd -->
+
+### Integrations
+
+<!-- For example, Docker, GitHub Actions, pre-commit, editors -->
+
+- Fix the version check in the vim file to reject Python 3.8 (#4567)
+- Enhance GitHub Action `psf/black` to read Black version from an additional section in
+  pyproject.toml: `[project.dependency-groups]` (#4606)
+
+### Documentation
+
+<!-- Major changes to documentation and policies. Small docs changes
+don't need a changelog entry. -->
+
+## 25.1.0
+
+### Highlights
+
+This release introduces the new 2025 stable style (#4558), stabilizing the following
+changes:
+
+- Normalize casing of Unicode escape characters in strings to lowercase (#2916)
+- Fix inconsistencies in whether certain strings are detected as docstrings (#4095)
+- Consistently add trailing commas to typed function parameters (#4164)
+- Remove redundant parentheses in if guards for case blocks (#4214)
+- Add parentheses to if clauses in case blocks when the line is too long (#4269)
+- Whitespace before `# fmt: skip` comments is no longer normalized (#4146)
+- Fix line length computation for certain expressions that involve the power operator
+  (#4154)
+- Check if there is a newline before the terminating quotes of a docstring (#4185)
+- Fix type annotation spacing between `*` and more complex type variable tuple (#4440)
+
+The following changes were not in any previous release:
+
+- Remove parentheses around sole list items (#4312)
+- Generic function definitions are now formatted more elegantly: parameters are split
+  over multiple lines first instead of type parameter definitions (#4553)
+
+### Stable style
+
+- Fix formatting cells in IPython notebooks with magic methods and starting or trailing
+  empty lines (#4484)
+- Fix crash when formatting `with` statements containing tuple generators/unpacking
+  (#4538)
+
+### Preview style
+
+- Fix/remove string merging changing f-string quotes on f-strings with internal quotes
+  (#4498)
+- Collapse multiple empty lines after an import into one (#4489)
+- Prevent `string_processing` and `wrap_long_dict_values_in_parens` from removing
+  parentheses around long dictionary values (#4377)
+- Move `wrap_long_dict_values_in_parens` from the unstable to preview style (#4561)
+
+### Packaging
+
+- Store license identifier inside the `License-Expression` metadata field, see
+  [PEP 639](https://peps.python.org/pep-0639/). (#4479)
+
+### Performance
+
+- Speed up the `is_fstring_start` function in Black's tokenizer (#4541)
+
+### Integrations
+
+- If using stdin with `--stdin-filename` set to a force excluded path, stdin won't be
+  formatted. (#4539)
+
 ## 24.10.0

 ### Highlights


@@ -1,10 +1,13 @@
 # Contributing to _Black_

-Welcome! Happy to see you willing to make the project better. Have you read the entire
-[user documentation](https://black.readthedocs.io/en/latest/) yet?
+Welcome future contributor! We're happy to see you willing to make the project better.

-Our [contributing documentation](https://black.readthedocs.org/en/latest/contributing/)
-contains details on all you need to know about contributing to _Black_, the basics to
-the internals of _Black_.
+If you aren't familiar with _Black_, or are looking for documentation on something
+specific, the [user documentation](https://black.readthedocs.io/en/latest/) is the best
+place to look.

-We look forward to your contributions!
+For getting started on contributing, please read the
+[contributing documentation](https://black.readthedocs.org/en/latest/contributing/) for
+all you need to know.
+
+Thank you, and we look forward to your contributions!


@@ -38,7 +38,7 @@ Try it out now using the [Black Playground](https://black.vercel.app). Watch the
 ### Installation

-_Black_ can be installed by running `pip install black`. It requires Python 3.8+ to run.
+_Black_ can be installed by running `pip install black`. It requires Python 3.9+ to run.
 If you want to format Jupyter Notebooks, install with `pip install "black[jupyter]"`.

 If you can't wait for the latest _hotness_ and want to install from GitHub, use:
@@ -137,8 +137,8 @@ SQLAlchemy, Poetry, PyPA applications (Warehouse, Bandersnatch, Pipenv, virtuale
 pandas, Pillow, Twisted, LocalStack, every Datadog Agent Integration, Home Assistant,
 Zulip, Kedro, OpenOA, FLORIS, ORBIT, WOMBAT, and many more.

-The following organizations use _Black_: Facebook, Dropbox, KeepTruckin, Lyft, Mozilla,
-Quora, Duolingo, QuantumBlack, Tesla, Archer Aviation.
+The following organizations use _Black_: Dropbox, KeepTruckin, Lyft, Mozilla, Quora,
+Duolingo, QuantumBlack, Tesla, Archer Aviation.

 Are we missing anyone? Let us know.


@@ -71,6 +71,7 @@ def read_version_specifier_from_pyproject() -> str:
         return f"=={version}"

     arrays = [
+        *pyproject.get("dependency-groups", {}).values(),
         pyproject.get("project", {}).get("dependencies"),
         *pyproject.get("project", {}).get("optional-dependencies", {}).values(),
     ]


@@ -75,8 +75,8 @@ def _initialize_black_env(upgrade=False):
         return True

     pyver = sys.version_info[:3]
-    if pyver < (3, 8):
-        print("Sorry, Black requires Python 3.8+ to run.")
+    if pyver < (3, 9):
+        print("Sorry, Black requires Python 3.9+ to run.")
         return False

     from pathlib import Path


@@ -29,8 +29,8 @@ frequently than monthly nets rapidly diminishing returns.

 **You must have `write` permissions for the _Black_ repository to cut a release.**

 The 10,000 foot view of the release process is that you prepare a release PR and then
-publish a [GitHub Release]. This triggers [release automation](#release-workflows) that builds
-all release artifacts and publishes them to the various platforms we publish to.
+publish a [GitHub Release]. This triggers [release automation](#release-workflows) that
+builds all release artifacts and publishes them to the various platforms we publish to.

 We now have a `scripts/release.py` script to help with cutting the release PRs.
@@ -96,8 +96,9 @@ In the end, use your best judgement and ask other maintainers for their thoughts

 ## Release workflows

-All of _Black_'s release automation uses [GitHub Actions]. All workflows are therefore configured
-using YAML files in the `.github/workflows` directory of the _Black_ repository.
+All of _Black_'s release automation uses [GitHub Actions]. All workflows are therefore
+configured using YAML files in the `.github/workflows` directory of the _Black_
+repository.

 They are triggered by the publication of a [GitHub Release].


@ -7,7 +7,14 @@ An overview on contributing to the _Black_ project.
Development on the latest version of Python is preferred. You can use any operating Development on the latest version of Python is preferred. You can use any operating
system. system.
Install development dependencies inside a virtual environment of your choice, for First clone the _Black_ repository:
```console
$ git clone https://github.com/psf/black.git
$ cd black
```
Then install development dependencies inside a virtual environment of your choice, for
example: example:
```console ```console
@ -48,13 +55,16 @@ Further examples of invoking the tests
# Run tests on a specific python version
(.venv)$ tox -e py39

# Run an individual test
(.venv)$ pytest -k <test name>

# Pass arguments to pytest
(.venv)$ tox -e py -- --no-cov

# Print full tree diff, see documentation below
(.venv)$ tox -e py -- --print-full-tree

# Disable diff printing, see documentation below
(.venv)$ tox -e py -- --print-tree-diff=False
```
@ -99,16 +109,22 @@ default. To turn it off pass `--print-tree-diff=False`.
`Black` has CI that will check for an entry corresponding to your PR in `CHANGES.md`. If
you feel this PR does not require a changelog entry please state that in a comment and a
maintainer can add a `skip news` label to make the CI pass. Otherwise, please ensure you
have a line in the following format added below the appropriate header:

```md
- `Black` is now more awesome (#X)
```
<!---
The Next PR Number link uses HTML because of a bug in MyST-Parser that double-escapes the ampersand, causing the query parameters to not be processed.
MyST-Parser issue: https://github.com/executablebooks/MyST-Parser/issues/760
MyST-Parser stalled fix PR: https://github.com/executablebooks/MyST-Parser/pull/929
-->
Note that X should be your PR number, not issue number! To work out X, please use
<a href="https://ichard26.github.io/next-pr-number/?owner=psf&name=black">Next PR
Number</a>. This is not perfect but saves a lot of release overhead as now the releaser
does not need to go back and work out what to add to the `CHANGES.md` for each release.

### Style Changes
@ -116,7 +132,7 @@ If a change would affect the advertised code style, please modify the documentat
_Black_ code style) to reflect that change. Patches that fix unintended bugs in
formatting don't need to be mentioned separately though. If the change is implemented
with the `--preview` flag, please include the change in the future style document
instead and write the changelog entry under the dedicated "Preview style" heading.

### Docs Testing
@ -124,17 +140,17 @@ If you make changes to docs, you can test they still build locally too.
```console
(.venv)$ pip install -r docs/requirements.txt
(.venv)$ pip install -e ".[d]"
(.venv)$ sphinx-build -a -b html -W docs/ docs/_build/
```

## Hygiene

If you're fixing a bug, add a test. Run it first to confirm it fails, then fix the bug,
and run the test again to confirm it's really fixed.

If adding a new feature, add a test. In fact, always add a test. If adding a large
feature, please first open an issue to discuss it beforehand.

## Finally


@ -84,16 +84,19 @@ See [Using _Black_ with other tools](labels/why-pycodestyle-warnings).
## Which Python versions does Black support?

_Black_ generally supports all Python versions supported by CPython (see
[the Python devguide](https://devguide.python.org/versions/) for current information).
We promise to support at least all Python versions that have not reached their end of
life. This is the case for both running _Black_ and formatting code.

Support for formatting Python 2 code was removed in version 22.0. While we've made no
plans to stop supporting older Python 3 minor versions immediately, their support might
also be removed some time in the future without a deprecation period.

`await`/`async` as soft keywords/identifiers are no longer supported as of 25.2.0.

Runtime support for 3.6 was removed in version 22.10.0, for 3.7 in version 23.7.0, and
for 3.8 in version 24.10.0.

## Why does my linter or typechecker complain after I format my code?


@ -16,7 +16,7 @@ Also, you can try out _Black_ online for minimal fuss on the
## Installation

_Black_ can be installed by running `pip install black`. It requires Python 3.9+ to run.
If you want to format Jupyter Notebooks, install with `pip install "black[jupyter]"`.

If you use pipx, you can install Black with `pipx install black`.


@ -236,7 +236,7 @@ Configuration:
#### Installation

This plugin **requires Vim 7.0+ built with Python 3.9+ support**. It needs Python 3.9 to
be able to run _Black_ inside the Vim process which is much faster than calling an
external command.


@ -37,10 +37,10 @@ the `pyproject.toml` file. `version` can be any
[valid version specifier](https://packaging.python.org/en/latest/glossary/#term-Version-Specifier)
or just the version number if you want an exact version. To read the version from the
`pyproject.toml` file instead, set `use_pyproject` to `true`. This will first look into
the `tool.black.required-version` field, then the `dependency-groups` table, then the
`project.dependencies` array and finally the `project.optional-dependencies` table. The
action defaults to the latest release available on PyPI. Only versions available from
PyPI are supported, so no commit SHAs or branch names.

If you want to include Jupyter Notebooks, _Black_ must be installed with the `jupyter`
extra. Installing the extra and including Jupyter Notebook files can be configured via
@ -74,9 +74,14 @@ If you want to match versions covered by Black's
version: "~= 22.0"
```

If you want to read the version from `pyproject.toml`, set `use_pyproject` to `true`.
Note that this requires Python >= 3.11, so using the setup-python action may be
required, for example:

```yaml
- uses: actions/setup-python@v5
  with:
    python-version: "3.13"
- uses: psf/black@stable
  with:
    options: "--check --verbose"


@ -8,7 +8,7 @@ Use [pre-commit](https://pre-commit.com/). Once you
repos:
  # Using this mirror lets us use mypyc-compiled black, which is about 2x faster
  - repo: https://github.com/psf/black-pre-commit-mirror
    rev: 25.1.0
    hooks:
      - id: black
        # It is recommended to specify the latest version of Python
@ -35,7 +35,7 @@ include Jupyter Notebooks. To use this hook, simply replace the hook's `id: blac
repos:
  # Using this mirror lets us use mypyc-compiled black, which is about 2x faster
  - repo: https://github.com/psf/black-pre-commit-mirror
    rev: 25.1.0
    hooks:
      - id: black-jupyter
        # It is recommended to specify the latest version of Python


@ -1,9 +1,9 @@
# Used by ReadTheDocs; pinned requirements for stability.
myst-parser==4.0.1
Sphinx==8.2.3
# Older versions break Sphinx even though they're declared to be supported.
docutils==0.21.2
sphinxcontrib-programoutput==0.18
sphinx_copybutton==0.5.2
furo==2024.8.6


@ -250,6 +250,11 @@ exception of [capital "R" prefixes](#rstrings-and-rstrings), unicode literal mar
(`u`) are removed because they are meaningless in Python 3, and in the case of multiple
characters "r" is put first as in spoken language: "raw f-string".
Another area where Python allows multiple ways to format a string is escape sequences.
For example, `"\uabcd"` and `"\uABCD"` evaluate to the same string. _Black_ normalizes
such escape sequences to lowercase, but uses uppercase for `\N` named character escapes,
such as `"\N{MEETEI MAYEK LETTER HUK}"`.
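The equivalence that makes this normalization safe can be checked directly in Python (a small illustrative sketch, not part of _Black_ itself):

```python
import unicodedata

# Hex digits in \u escapes are case-insensitive: both spellings
# denote the same code point, U+ABCD.
lower = "\uabcd"
upper = "\uABCD"
print(lower == upper)  # True

# \N{...} escapes use the official Unicode character name,
# which is defined in uppercase.
print(unicodedata.name("\N{MEETEI MAYEK LETTER HUK}"))  # MEETEI MAYEK LETTER HUK
```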
The main reason to standardize on a single form of quotes is aesthetics. Having one kind
of quotes everywhere reduces reader distraction. It will also enable a future version of
_Black_ to merge consecutive string literals that ended up on the same line (see


@ -2,6 +2,8 @@
## Preview style

(labels/preview-style)=

Experimental, potentially disruptive style changes are gathered under the `--preview`
CLI flag. At the end of each year, these changes may be adopted into the default style,
as described in [The Black Code Style](index.md). Because the functionality is
@ -20,24 +22,13 @@ demoted from the `--preview` to the `--unstable` style, users can use the
Currently, the following features are included in the preview style:

- `always_one_newline_after_import`: Always force one blank line after import
  statements, except when the line after the import is a comment or an import statement
- `wrap_long_dict_values_in_parens`: Add parentheses around long values in dictionaries
  ([see below](labels/wrap-long-dict-values))
- `fix_fmt_skip_in_one_liners`: Fix `# fmt: skip` behaviour on one-liner declarations,
  such as `def foo(): return "mock"  # fmt: skip`, where previously the declaration
  would have been incorrectly collapsed.
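For illustration, this is the kind of one-liner declaration the `fix_fmt_skip_in_one_liners` feature is concerned with (a minimal sketch; `foo` is a hypothetical name, not from _Black_'s test suite):

```python
def foo(): return "mock"  # fmt: skip

# The trailing comment tells Black to leave the line exactly as written,
# so the one-line def must be preserved rather than reformatted.
print(foo())  # mock
```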
(labels/unstable-features)=
@ -45,13 +36,38 @@ The unstable style additionally includes the following features:
- `string_processing`: split long string literals and related changes
  ([see below](labels/string-processing))
- `multiline_string_handling`: more compact formatting of expressions involving
  multiline strings ([see below](labels/multiline-string-handling))
- `hug_parens_with_braces_and_square_brackets`: more compact formatting of nested
  brackets ([see below](labels/hug-parens))
(labels/wrap-long-dict-values)=
### Improved parentheses management in dicts
For dict literals with long values, they are now wrapped in parentheses. Unnecessary
parentheses are now removed. For example:
```python
my_dict = {
    "a key in my dict": a_very_long_variable
    * and_a_very_long_function_call()
    / 100000.0,
    "another key": (short_value),
}
```
will be changed to:
```python
my_dict = {
    "a key in my dict": (
        a_very_long_variable * and_a_very_long_function_call() / 100000.0
    ),
    "another key": short_value,
}
```
(labels/hug-parens)=

### Improved multiline dictionary and list indentation for sole function parameter
@ -132,37 +148,11 @@ foo(
_Black_ will split long string literals and merge short ones. Parentheses are used where
appropriate. When split, parts of f-strings that don't need formatting are converted to
plain strings. f-strings will not be merged if they contain internal quotes and it would
change their quotation mark style. User-made splits are respected when they do not
exceed the line length limit. Line continuation backslashes are converted into
parenthesized strings. Unnecessary parentheses are stripped. The stability and status of
this feature is tracked in [this issue](https://github.com/psf/black/issues/2188).
(labels/multiline-string-handling)=
@ -277,52 +267,3 @@ s = ( # Top comment
    # Bottom comment
)
```
## Potential future changes
This section lists changes that we may want to make in the future, but that aren't
implemented yet.
### Using backslashes for with statements
[Backslashes are bad and should never be used](labels/why-no-backslashes) however
there is one exception: `with` statements using multiple context managers. Before Python
3.9 Python's grammar does not allow organizing parentheses around the series of context
managers.
We don't want formatting like:
```py3
with make_context_manager1() as cm1, make_context_manager2() as cm2, make_context_manager3() as cm3, make_context_manager4() as cm4:
    ...  # nothing to split on - line too long
```
So _Black_ will, when we implement this, format it like this:
```py3
with \
     make_context_manager1() as cm1, \
     make_context_manager2() as cm2, \
     make_context_manager3() as cm3, \
     make_context_manager4() as cm4 \
:
    ...  # backslashes and an ugly stranded colon
```
Although when the target version is Python 3.9 or higher, _Black_ uses parentheses
instead in `--preview` mode (see below) since they're allowed in Python 3.9 and higher.
An alternative to consider if the backslashes in the above formatting are undesirable is
to use {external:py:obj}`contextlib.ExitStack` to combine context managers in the
following way:
```python
with contextlib.ExitStack() as exit_stack:
    cm1 = exit_stack.enter_context(make_context_manager1())
    cm2 = exit_stack.enter_context(make_context_manager2())
    cm3 = exit_stack.enter_context(make_context_manager3())
    cm4 = exit_stack.enter_context(make_context_manager4())
    ...
```


@ -70,17 +70,17 @@ See also [the style documentation](labels/line-length).
Python versions that should be supported by Black's output. You can run `black --help`
and look for the `--target-version` option to see the full list of supported versions.
You should include all versions that your code supports. If you support Python 3.11
through 3.13, you should write:

```console
$ black -t py311 -t py312 -t py313
```

In a [configuration file](#configuration-via-a-file), you can write:

```toml
target-version = ["py311", "py312", "py313"]
```

By default, Black will infer target versions from the project metadata in
@ -269,8 +269,8 @@ configuration file for consistent results across environments.
```console
$ black --version
black, 25.1.0 (compiled: yes)
$ black --required-version 25.1.0 -c "format = 'this'"
format = "this"
$ black --required-version 31.5b2 -c "still = 'beta?!'"
Oh no! 💥 💔 💥 The required version does not match the running version!
@ -366,7 +366,7 @@ You can check the version of _Black_ you have installed using the `--version` fl
```console
$ black --version
black, 25.1.0
```

#### `--config`
@ -478,9 +478,10 @@ operating system, this configuration file should be stored as:
`XDG_CONFIG_HOME` environment variable is not set)

Note that these are paths to the TOML file itself (meaning that they shouldn't be named
as `pyproject.toml`), not directories where you store the configuration (i.e.,
`black`/`.black` is the file to create and add your configuration options to, in the
`~/.config/` directory). Here, `~` refers to the path to your home directory. On
Windows, this will be something like `C:\\Users\UserName`.

You can also explicitly specify the path to a particular file that you want with
`--config`. In this situation _Black_ will not look for any other file.


@ -7,15 +7,16 @@
import venv
import zipfile
from argparse import ArgumentParser, Namespace
from collections.abc import Generator
from concurrent.futures import ThreadPoolExecutor
from functools import lru_cache, partial
from pathlib import Path
from typing import NamedTuple, Optional, Union, cast
from urllib.request import urlopen, urlretrieve

PYPI_INSTANCE = "https://pypi.org/pypi"
PYPI_TOP_PACKAGES = (
    "https://hugovk.github.io/top-pypi-packages/top-pypi-packages.min.json"
)

INTERNAL_BLACK_REPO = f"{tempfile.gettempdir()}/__black"
@ -54,7 +55,7 @@ def get_pypi_download_url(package: str, version: Optional[str]) -> str:
    return cast(str, source["url"])


def get_top_packages() -> list[str]:
    with urlopen(PYPI_TOP_PACKAGES) as page:
        result = json.load(page)
@ -150,7 +151,7 @@ def git_switch_branch(
    subprocess.run(args, cwd=repo)


def init_repos(options: Namespace) -> tuple[Path, ...]:
    options.output.mkdir(exist_ok=True)

    if options.top_packages:
@ -206,7 +207,7 @@ def format_repo_with_version(
    git_switch_branch(black_version.version, repo=black_repo)
    git_switch_branch(current_branch, repo=repo, new=True, from_branch=from_branch)

    format_cmd: list[Union[Path, str]] = [
        black_runner(black_version.version, black_repo),
        (black_repo / "black.py").resolve(),
        ".",
@ -222,7 +223,7 @@ def format_repo_with_version(
    return current_branch


def format_repos(repos: tuple[Path, ...], options: Namespace) -> None:
    black_versions = tuple(
        BlackVersion(*version.split(":")) for version in options.versions
    )


@ -21,7 +21,7 @@ endif
if v:version < 700 || !has('python3')
  func! __BLACK_MISSING()
    echo "The black.vim plugin requires vim7.0+ with Python 3.9 support."
  endfunc
  command! Black :call __BLACK_MISSING()
  command! BlackUpgrade :call __BLACK_MISSING()
@ -72,12 +72,11 @@ endif
function BlackComplete(ArgLead, CmdLine, CursorPos)
  return [
\ 'target_version=py39',
\ 'target_version=py310',
\ 'target_version=py311',
\ 'target_version=py312',
\ 'target_version=py313',
\ ]
endfunction


@ -33,7 +33,7 @@ build-backend = "hatchling.build"
[project]
name = "black"
description = "The uncompromising code formatter."
license = "MIT"
requires-python = ">=3.9"
authors = [
  { name = "Łukasz Langa", email = "lukasz@langa.pl" },
@ -69,6 +69,7 @@ dependencies = [
  "packaging>=22.0",
  "pathspec>=0.9.0",
  "platformdirs>=2",
  "pytokens>=0.1.10",
  "tomli>=1.1.0; python_version < '3.11'",
  "typing_extensions>=4.0.1; python_version < '3.11'",
]
@ -125,7 +126,7 @@ macos-max-compat = true
enable-by-default = false
dependencies = [
  "hatch-mypyc>=0.16.0",
  "mypy>=1.12",
  "click>=8.1.7",
]
require-runtime-dependencies = true
@ -186,16 +187,6 @@ MYPYC_DEBUG_LEVEL = "0"
# Black needs Clang to compile successfully on Linux.
CC = "clang"
[tool.cibuildwheel.macos]
build-frontend = { name = "build", args = ["--no-isolation"] }
# Unfortunately, hatch doesn't respect MACOSX_DEPLOYMENT_TARGET
# Note we don't have a good test for this sed horror, so if you futz with it
# make sure to test manually
before-build = [
"python -m pip install 'hatchling==1.20.0' hatch-vcs hatch-fancy-pypi-readme 'hatch-mypyc>=0.16.0' 'mypy @ git+https://github.com/python/mypy@bc8119150e49895f7a496ae7ae7362a2828e7e9e' 'click>=8.1.7'",
"""sed -i '' -e "600,700s/'10_16'/os.environ['MACOSX_DEPLOYMENT_TARGET'].replace('.', '_')/" $(python -c 'import hatchling.builders.wheel as h; print(h.__file__)') """,
]
[tool.isort]
atomic = true
profile = "black"
@ -234,6 +225,8 @@ branch = true
python_version = "3.9"
mypy_path = "src"
strict = true
strict_bytes = true
local_partial_types = true
# Unreachable blocks have been an issue when compiling mypyc, let's try to avoid 'em in the first place.
warn_unreachable = true
implicit_reexport = true


@ -5,14 +5,11 @@
a coverage-guided fuzzer I'm working on.
"""

import hypothesmith
from hypothesis import HealthCheck, given, settings
from hypothesis import strategies as st

import black

# This test uses the Hypothesis and Hypothesmith libraries to generate random
@ -45,23 +42,7 @@ def test_idempotent_any_syntatically_valid_python(
    compile(src_contents, "<string>", "exec")  # else the bug is in hypothesmith

    # Then format the code...
    dst_contents = black.format_str(src_contents, mode=mode)

    # And check that we got equivalent and stable output.
    black.assert_equivalent(src_contents, dst_contents)
@ -80,7 +61,7 @@ def test_idempotent_any_syntatically_valid_python(
try:
    import sys

    import atheris
except ImportError:
    pass
else:


@ -17,13 +17,13 @@
"""

import sys
from collections.abc import Iterable
from os.path import basename, dirname, join

import wcwidth  # type: ignore[import-not-found]


def make_width_table() -> Iterable[tuple[int, int, int]]:
    start_codepoint = -1
    end_codepoint = -1
    range_width = -2
@ -53,9 +53,9 @@ def main() -> None:
        f.write(f"""# Generated by {basename(__file__)}
# wcwidth {wcwidth.__version__}
# Unicode {wcwidth.list_versions()[-1]}
from typing import Final

WIDTH_TABLE: Final[list[tuple[int, int, int]]] = [
""")
        for triple in make_width_table():
            f.write(f"    {triple!r},\n")


@@ -77,7 +77,7 @@ def blackify(base_branch: str, black_command: str, logger: logging.Logger) -> in
         git("commit", "--allow-empty", "-aqC", commit)

     for commit in commits:
-        git("branch", "-qD", "%s-black" % commit)
+        git("branch", "-qD", f"{commit}-black")

     return 0
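The hunk above swaps `%`-interpolation for an f-string; the two forms are equivalent (the commit hash below is a made-up placeholder, not a real commit):

```python
commit = "7987951"  # hypothetical commit hash, for illustration only

# Legacy %-formatting, as in the old code
legacy = "%s-black" % commit

# Equivalent f-string, as in the new code
modern = f"{commit}-black"

assert legacy == modern == "7987951-black"
```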


@@ -5,24 +5,22 @@
 import sys
 import tokenize
 import traceback
+from collections.abc import (
+    Collection,
+    Generator,
+    Iterator,
+    MutableMapping,
+    Sequence,
+    Sized,
+)
 from contextlib import contextmanager
 from dataclasses import replace
 from datetime import datetime, timezone
 from enum import Enum
 from json.decoder import JSONDecodeError
 from pathlib import Path
-from typing import (
-    Any,
-    Collection,
-    Generator,
-    Iterator,
-    MutableMapping,
-    Optional,
-    Pattern,
-    Sequence,
-    Sized,
-    Union,
-)
+from re import Pattern
+from typing import Any, Optional, Union

 import click
 from click.core import ParameterSource
@@ -751,6 +749,12 @@ def get_sources(
     for s in src:
         if s == "-" and stdin_filename:
             path = Path(stdin_filename)
+            if path_is_excluded(stdin_filename, force_exclude):
+                report.path_ignored(
+                    path,
+                    "--stdin-filename matches the --force-exclude regular expression",
+                )
+                continue
             is_stdin = True
         else:
             path = Path(s)
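The added branch skips formatting when `--stdin-filename` matches `--force-exclude`. A sketch of the `path_is_excluded` check it relies on (the signature is assumed from the call site, and the regex is purely illustrative):

```python
import re
from re import Pattern
from typing import Optional

def path_is_excluded(normalized_path: str, pattern: Optional[Pattern[str]]) -> bool:
    # Assumes the path was already normalized to forward slashes, as Black
    # does before matching exclusion regexes.
    match = pattern.search(normalized_path) if pattern else None
    return bool(match and match.group(0))

# Hypothetical --force-exclude value
force_exclude = re.compile(r"/generated/")
```

With this sketch, `path_is_excluded("/src/generated/api.py", force_exclude)` is true while `path_is_excluded("/src/app.py", force_exclude)` is false.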


@@ -1,7 +1,8 @@
 """Builds on top of nodes.py to track brackets."""

+from collections.abc import Iterable, Sequence
 from dataclasses import dataclass, field
-from typing import Final, Iterable, Optional, Sequence, Union
+from typing import Final, Optional, Union

 from black.nodes import (
     BRACKET,


@@ -5,9 +5,10 @@
 import pickle
 import sys
 import tempfile
+from collections.abc import Iterable
 from dataclasses import dataclass, field
 from pathlib import Path
-from typing import Iterable, NamedTuple
+from typing import NamedTuple

 from platformdirs import user_cache_dir


@@ -1,7 +1,8 @@
 import re
+from collections.abc import Collection, Iterator
 from dataclasses import dataclass
 from functools import lru_cache
-from typing import Collection, Final, Iterator, Optional, Union
+from typing import Final, Optional, Union

 from black.mode import Mode, Preview
 from black.nodes import (
@@ -234,11 +235,7 @@ def convert_one_fmt_off_pair(
             standalone_comment_prefix += fmt_off_prefix
             hidden_value = comment.value + "\n" + hidden_value
         if is_fmt_skip:
-            hidden_value += (
-                comment.leading_whitespace
-                if Preview.no_normalize_fmt_skip_whitespace in mode
-                else " "
-            ) + comment.value
+            hidden_value += comment.leading_whitespace + comment.value
         if hidden_value.endswith("\n"):
             # That happens when one of the `ignored_nodes` ended with a NEWLINE
             # leaf (possibly followed by a DEDENT).
@@ -273,7 +270,7 @@ def generate_ignored_nodes(
     Stops at the end of the block.
     """
     if _contains_fmt_skip_comment(comment.value, mode):
-        yield from _generate_ignored_nodes_from_fmt_skip(leaf, comment)
+        yield from _generate_ignored_nodes_from_fmt_skip(leaf, comment, mode)
         return
     container: Optional[LN] = container_of(leaf)
     while container is not None and container.type != token.ENDMARKER:
@@ -312,23 +309,67 @@ def generate_ignored_nodes(

 def _generate_ignored_nodes_from_fmt_skip(
-    leaf: Leaf, comment: ProtoComment
+    leaf: Leaf, comment: ProtoComment, mode: Mode
 ) -> Iterator[LN]:
     """Generate all leaves that should be ignored by the `# fmt: skip` from `leaf`."""
     prev_sibling = leaf.prev_sibling
     parent = leaf.parent
+    ignored_nodes: list[LN] = []
     # Need to properly format the leaf prefix to compare it to comment.value,
     # which is also formatted
     comments = list_comments(leaf.prefix, is_endmarker=False)
     if not comments or comment.value != comments[0].value:
         return
     if prev_sibling is not None:
-        leaf.prefix = ""
-        siblings = [prev_sibling]
-        while "\n" not in prev_sibling.prefix and prev_sibling.prev_sibling is not None:
-            prev_sibling = prev_sibling.prev_sibling
-            siblings.insert(0, prev_sibling)
-        yield from siblings
+        leaf.prefix = leaf.prefix[comment.consumed :]
+
+        if Preview.fix_fmt_skip_in_one_liners not in mode:
+            siblings = [prev_sibling]
+            while (
+                "\n" not in prev_sibling.prefix
+                and prev_sibling.prev_sibling is not None
+            ):
+                prev_sibling = prev_sibling.prev_sibling
+                siblings.insert(0, prev_sibling)
+            yield from siblings
+            return
+
+        # Generates the nodes to be ignored by `fmt: skip`.
+        # Nodes to ignore are the ones on the same line as the
+        # `# fmt: skip` comment, excluding the `# fmt: skip`
+        # node itself.
+        # Traversal process (starting at the `# fmt: skip` node):
+        # 1. Move to the `prev_sibling` of the current node.
+        # 2. If `prev_sibling` has children, go to its rightmost leaf.
+        # 3. If there's no `prev_sibling`, move up to the parent
+        #    node and repeat.
+        # 4. Continue until:
+        #    a. You encounter an `INDENT` or `NEWLINE` node (indicates
+        #       start of the line).
+        #    b. You reach the root node.
+        # Include all visited LEAVES in the ignored list, except INDENT
+        # or NEWLINE leaves.
+        current_node = prev_sibling
+        ignored_nodes = [current_node]
+        if current_node.prev_sibling is None and current_node.parent is not None:
+            current_node = current_node.parent
+        while "\n" not in current_node.prefix and current_node.prev_sibling is not None:
+            leaf_nodes = list(current_node.prev_sibling.leaves())
+            current_node = leaf_nodes[-1] if leaf_nodes else current_node
+            if current_node.type in (token.NEWLINE, token.INDENT):
+                current_node.prefix = ""
+                break
+            ignored_nodes.insert(0, current_node)
+            if current_node.prev_sibling is None and current_node.parent is not None:
+                current_node = current_node.parent
+        yield from ignored_nodes
     elif (
         parent is not None and parent.type == syms.suite and leaf.type == token.NEWLINE
     ):
@@ -336,7 +377,6 @@ def _generate_ignored_nodes_from_fmt_skip(
         # statements. The ignored nodes should be previous siblings of the
         # parent suite node.
         leaf.prefix = ""
-        ignored_nodes: list[LN] = []
         parent_sibling = parent.prev_sibling
         while parent_sibling is not None and parent_sibling.type != syms.suite:
             ignored_nodes.insert(0, parent_sibling)
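The non-preview branch above collects every sibling on the `# fmt: skip` line by walking left until a prefix contains a newline. A toy model of that walk on a flat sibling chain (`ToyLeaf` is a simplified stand-in, not blib2to3's `Leaf`, and it skips the rightmost-leaf/parent steps the real traversal performs):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToyLeaf:
    value: str
    prefix: str = ""  # whitespace before the leaf; a "\n" marks a line start
    prev_sibling: Optional["ToyLeaf"] = None

def leaves_on_same_line(skip_leaf: ToyLeaf) -> list[ToyLeaf]:
    """Collect siblings to the left of `skip_leaf` until the line start."""
    found: list[ToyLeaf] = []
    node = skip_leaf.prev_sibling
    while node is not None:
        found.insert(0, node)
        if "\n" in node.prefix:  # reached the leaf that begins the line
            break
        node = node.prev_sibling
    return found

# Model of `x = 1  # fmt: skip` as a flat sibling chain.
x = ToyLeaf("x", prefix="\n")
eq = ToyLeaf("=", prefix=" ", prev_sibling=x)
one = ToyLeaf("1", prefix=" ", prev_sibling=eq)
skip = ToyLeaf("# fmt: skip", prefix="  ", prev_sibling=one)
```

Here `leaves_on_same_line(skip)` yields the `x`, `=`, and `1` leaves, which is exactly the set the comment block above describes as "nodes on the same line ... excluding the `# fmt: skip` node itself".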


@@ -10,10 +10,11 @@
 import signal
 import sys
 import traceback
+from collections.abc import Iterable
 from concurrent.futures import Executor, ProcessPoolExecutor, ThreadPoolExecutor
 from multiprocessing import Manager
 from pathlib import Path
-from typing import Any, Iterable, Optional
+from typing import Any, Optional

 from mypy_extensions import mypyc_attr


@@ -1,5 +1,6 @@
+from collections.abc import Iterator
 from dataclasses import dataclass, field
-from typing import Any, Iterator, TypeVar, Union
+from typing import Any, TypeVar, Union

 from black.nodes import Visitor
 from black.output import out


@@ -1,18 +1,11 @@
 import io
 import os
 import sys
+from collections.abc import Iterable, Iterator, Sequence
 from functools import lru_cache
 from pathlib import Path
-from typing import (
-    TYPE_CHECKING,
-    Any,
-    Iterable,
-    Iterator,
-    Optional,
-    Pattern,
-    Sequence,
-    Union,
-)
+from re import Pattern
+from typing import TYPE_CHECKING, Any, Optional, Union

 from mypy_extensions import mypyc_attr
 from packaging.specifiers import InvalidSpecifier, Specifier, SpecifierSet


@@ -43,7 +43,6 @@
     "time",
     "timeit",
 ))

-TOKEN_HEX = secrets.token_hex

 @dataclasses.dataclass(frozen=True)
@@ -160,7 +159,7 @@ def mask_cell(src: str) -> tuple[str, list[Replacement]]:

     becomes

-        "25716f358c32750e"
+        b"25716f358c32750"
         'foo'

     The replacements are returned, along with the transformed code.
@@ -178,18 +177,32 @@ def mask_cell(src: str) -> tuple[str, list[Replacement]]:
     from IPython.core.inputtransformer2 import TransformerManager

     transformer_manager = TransformerManager()
+    # A side effect of the following transformation is that it also removes any
+    # empty lines at the beginning of the cell.
     transformed = transformer_manager.transform_cell(src)
     transformed, cell_magic_replacements = replace_cell_magics(transformed)
     replacements += cell_magic_replacements
     transformed = transformer_manager.transform_cell(transformed)
     transformed, magic_replacements = replace_magics(transformed)
-    if len(transformed.splitlines()) != len(src.splitlines()):
+    if len(transformed.strip().splitlines()) != len(src.strip().splitlines()):
         # Multi-line magic, not supported.
         raise NothingChanged
     replacements += magic_replacements
     return transformed, replacements

+def create_token(n_chars: int) -> str:
+    """Create a randomly generated token that is n_chars characters long."""
+    assert n_chars > 0
+    n_bytes = max(n_chars // 2 - 1, 1)
+    token = secrets.token_hex(n_bytes)
+    if len(token) + 3 > n_chars:
+        token = token[:-1]
+    # We use a bytestring so that the string does not get interpreted
+    # as a docstring.
+    return f'b"{token}"'

 def get_token(src: str, magic: str) -> str:
     """Return randomly generated token to mask IPython magic with.
@@ -199,11 +212,11 @@ def get_token(src: str, magic: str) -> str:
     not already present anywhere else in the cell.
     """
     assert magic
-    nbytes = max(len(magic) // 2 - 1, 1)
-    token = TOKEN_HEX(nbytes)
+    n_chars = len(magic)
+    token = create_token(n_chars)
     counter = 0
     while token in src:
-        token = TOKEN_HEX(nbytes)
+        token = create_token(n_chars)
         counter += 1
         if counter > 100:
             raise AssertionError(
@@ -211,9 +224,7 @@ def get_token(src: str, magic: str) -> str:
                 "Please report a bug on https://github.com/psf/black/issues. "
                 f"The magic might be helpful: {magic}"
             ) from None
-    if len(token) + 2 < len(magic):
-        token = f"{token}."
-    return f'"{token}"'
+    return token
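The new `create_token` sizes a `secrets.token_hex` value so the rendered bytestring literal is exactly as wide as the magic it masks. A standalone re-creation of the logic from the hunk above (copied here for illustration, not imported from Black):

```python
import secrets

def create_token(n_chars: int) -> str:
    """Random token rendered as a bytestring literal, b"...", n_chars wide."""
    assert n_chars > 0
    n_bytes = max(n_chars // 2 - 1, 1)
    token = secrets.token_hex(n_bytes)  # two hex digits per byte
    if len(token) + 3 > n_chars:  # 3 accounts for the b prefix and two quotes
        token = token[:-1]
    # A bytestring literal cannot be mistaken for a docstring.
    return f'b"{token}"'

magic = "%matplotlib inline"
mask = create_token(len(magic))
```

For any `n_chars >= 4` the result has exactly `n_chars` characters, which is why the old `if len(token) + 2 < len(magic): token = f"{token}."` padding in `get_token` becomes unnecessary.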
 def replace_cell_magics(src: str) -> tuple[str, list[Replacement]]:
@@ -269,7 +280,7 @@ def replace_magics(src: str) -> tuple[str, list[Replacement]]:
     magic_finder = MagicFinder()
     magic_finder.visit(ast.parse(src))
     new_srcs = []
-    for i, line in enumerate(src.splitlines(), start=1):
+    for i, line in enumerate(src.split("\n"), start=1):
         if i in magic_finder.magics:
             offsets_and_magics = magic_finder.magics[i]
             if len(offsets_and_magics) != 1:  # pragma: nocover
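The switch from `splitlines()` to `split("\n")` in `replace_magics` matters because the two disagree about trailing newlines and about non-`\n` line boundaries:

```python
src = "%matplotlib inline\n"

# splitlines() drops the empty trailing segment...
assert src.splitlines() == ["%matplotlib inline"]
# ...while split("\n") keeps it, so indices stay aligned with
# enumerate(..., start=1) even when the source ends in a newline.
assert src.split("\n") == ["%matplotlib inline", ""]

# splitlines() also breaks on \x0c, \u2028, etc., which split("\n") ignores.
assert "a\x0cb".splitlines() == ["a", "b"]
assert "a\x0cb".split("\n") == ["a\x0cb"]
```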


@@ -4,10 +4,11 @@
 import re
 import sys
+from collections.abc import Collection, Iterator
 from dataclasses import replace
 from enum import Enum, auto
 from functools import partial, wraps
-from typing import Collection, Iterator, Optional, Union, cast
+from typing import Optional, Union, cast

 from black.brackets import (
     COMMA_PRIORITY,
@@ -39,11 +40,13 @@
     ensure_visible,
     fstring_to_string,
     get_annotation_type,
+    has_sibling_with_type,
     is_arith_like,
     is_async_stmt_or_funcdef,
     is_atom_with_invisible_parens,
     is_docstring,
     is_empty_tuple,
+    is_generator,
     is_lpar_token,
     is_multiline_string,
     is_name_token,
@@ -54,6 +57,8 @@
     is_rpar_token,
     is_stub_body,
     is_stub_suite,
+    is_tuple,
+    is_tuple_containing_star,
     is_tuple_containing_walrus,
     is_type_ignore_comment_string,
     is_vararg,
@@ -64,7 +69,7 @@
 )
 from black.numerics import normalize_numeric_literal
 from black.strings import (
-    fix_docstring,
+    fix_multiline_docstring,
     get_string_prefix,
     normalize_string_prefix,
     normalize_string_quotes,
@@ -411,10 +416,9 @@ def foo(a: (int), b: (float) = 7): ...
         yield from self.visit_default(node)

     def visit_STRING(self, leaf: Leaf) -> Iterator[Line]:
-        if Preview.hex_codes_in_unicode_sequences in self.mode:
-            normalize_unicode_escape_sequences(leaf)
+        normalize_unicode_escape_sequences(leaf)

-        if is_docstring(leaf, self.mode) and not re.search(r"\\\s*\n", leaf.value):
+        if is_docstring(leaf) and not re.search(r"\\\s*\n", leaf.value):
             # We're ignoring docstrings with backslash newline escapes because changing
             # indentation of those changes the AST representation of the code.
             if self.mode.string_normalization:
@@ -441,7 +445,7 @@ def visit_STRING(self, leaf: Leaf) -> Iterator[Line]:
             indent = " " * 4 * self.current_line.depth

             if is_multiline_string(leaf):
-                docstring = fix_docstring(docstring, indent)
+                docstring = fix_multiline_docstring(docstring, indent)
             else:
                 docstring = docstring.strip()
@@ -485,10 +489,7 @@ def visit_STRING(self, leaf: Leaf) -> Iterator[Line]:
                 and len(indent) + quote_len <= self.mode.line_length
                 and not has_trailing_backslash
             ):
-                if (
-                    Preview.docstring_check_for_newline in self.mode
-                    and leaf.value[-1 - quote_len] == "\n"
-                ):
+                if leaf.value[-1 - quote_len] == "\n":
                     leaf.value = prefix + quote + docstring + quote
                 else:
                     leaf.value = prefix + quote + docstring + "\n" + indent + quote
@@ -506,6 +507,19 @@ def visit_NUMBER(self, leaf: Leaf) -> Iterator[Line]:
         normalize_numeric_literal(leaf)
         yield from self.visit_default(leaf)

+    def visit_atom(self, node: Node) -> Iterator[Line]:
+        """Visit any atom"""
+        if len(node.children) == 3:
+            first = node.children[0]
+            last = node.children[-1]
+            if (first.type == token.LSQB and last.type == token.RSQB) or (
+                first.type == token.LBRACE and last.type == token.RBRACE
+            ):
+                # Lists or sets of one item
+                maybe_make_parens_invisible_in_atom(node.children[1], parent=node)
+
+        yield from self.visit_default(node)
+
     def visit_fstring(self, node: Node) -> Iterator[Line]:
         # currently we don't want to format and split f-strings at all.
         string_leaf = fstring_to_string(node)
@@ -583,8 +597,7 @@ def __post_init__(self) -> None:
         # PEP 634
         self.visit_match_stmt = self.visit_match_case
         self.visit_case_block = self.visit_match_case
-        if Preview.remove_redundant_guard_parens in self.mode:
-            self.visit_guard = partial(v, keywords=Ø, parens={"if"})
+        self.visit_guard = partial(v, keywords=Ø, parens={"if"})

 def _hugging_power_ops_line_to_string(
@@ -768,26 +781,29 @@ def left_hand_split(
     Prefer RHS otherwise. This is why this function is not symmetrical with
     :func:`right_hand_split` which also handles optional parentheses.
     """
-    tail_leaves: list[Leaf] = []
-    body_leaves: list[Leaf] = []
-    head_leaves: list[Leaf] = []
-    current_leaves = head_leaves
-    matching_bracket: Optional[Leaf] = None
-    for leaf in line.leaves:
-        if (
-            current_leaves is body_leaves
-            and leaf.type in CLOSING_BRACKETS
-            and leaf.opening_bracket is matching_bracket
-            and isinstance(matching_bracket, Leaf)
-        ):
-            ensure_visible(leaf)
-            ensure_visible(matching_bracket)
-            current_leaves = tail_leaves if body_leaves else head_leaves
-        current_leaves.append(leaf)
-        if current_leaves is head_leaves:
-            if leaf.type in OPENING_BRACKETS:
-                matching_bracket = leaf
-                current_leaves = body_leaves
+    for leaf_type in [token.LPAR, token.LSQB]:
+        tail_leaves: list[Leaf] = []
+        body_leaves: list[Leaf] = []
+        head_leaves: list[Leaf] = []
+        current_leaves = head_leaves
+        matching_bracket: Optional[Leaf] = None
+        for leaf in line.leaves:
+            if (
+                current_leaves is body_leaves
+                and leaf.type in CLOSING_BRACKETS
+                and leaf.opening_bracket is matching_bracket
+                and isinstance(matching_bracket, Leaf)
+            ):
+                ensure_visible(leaf)
+                ensure_visible(matching_bracket)
+                current_leaves = tail_leaves if body_leaves else head_leaves
+            current_leaves.append(leaf)
+            if current_leaves is head_leaves:
+                if leaf.type == leaf_type:
+                    matching_bracket = leaf
+                    current_leaves = body_leaves
+        if matching_bracket and tail_leaves:
+            break

     if not matching_bracket or not tail_leaves:
         raise CannotSplit("No brackets found")
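The rewritten loop above now scans once per bracket kind, trying `(` before `[`, and partitions the line's leaves into head/body/tail around the first matching pair. A simplified sketch of that partitioning on plain string tokens (not Black's `Leaf` objects, and without the `ensure_visible` bookkeeping):

```python
def partition(tokens: list[str]) -> tuple[list[str], list[str], list[str]]:
    """Split tokens into head / body / tail around the first bracket pair,
    preferring parentheses over square brackets."""
    for open_b, close_b in [("(", ")"), ("[", "]")]:
        head: list[str] = []
        body: list[str] = []
        tail: list[str] = []
        current = head
        depth = 0
        for tok in tokens:
            # The matching closing bracket starts the tail.
            if current is body and tok == close_b:
                depth -= 1
                if depth == 0:
                    current = tail
            current.append(tok)
            # The first opening bracket of the preferred kind ends the head.
            if current is head and tok == open_b:
                current = body
                depth = 1
            elif current is body and tok == open_b:
                depth += 1
        if tail:  # found a complete pair of this kind; stop searching
            return head, body, tail
    raise ValueError("No brackets found")
```

For example, `partition(["f", "(", "a", ")", "x"])` gives `(["f", "("], ["a"], [")", "x"])`, mirroring how the real split keeps the opening bracket in the head and the closing bracket in the tail.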
@@ -954,29 +970,7 @@ def _maybe_split_omitting_optional_parens(
         try:
             # The RHSResult Omitting Optional Parens.
             rhs_oop = _first_right_hand_split(line, omit=omit)
-            is_split_right_after_equal = (
-                len(rhs.head.leaves) >= 2 and rhs.head.leaves[-2].type == token.EQUAL
-            )
-            rhs_head_contains_brackets = any(
-                leaf.type in BRACKETS for leaf in rhs.head.leaves[:-1]
-            )
-            # the -1 is for the ending optional paren
-            rhs_head_short_enough = is_line_short_enough(
-                rhs.head, mode=replace(mode, line_length=mode.line_length - 1)
-            )
-            rhs_head_explode_blocked_by_magic_trailing_comma = (
-                rhs.head.magic_trailing_comma is None
-            )
-            if (
-                not (
-                    is_split_right_after_equal
-                    and rhs_head_contains_brackets
-                    and rhs_head_short_enough
-                    and rhs_head_explode_blocked_by_magic_trailing_comma
-                )
-                # the omit optional parens split is preferred by some other reason
-                or _prefer_split_rhs_oop_over_rhs(rhs_oop, rhs, mode)
-            ):
+            if _prefer_split_rhs_oop_over_rhs(rhs_oop, rhs, mode):
                 yield from _maybe_split_omitting_optional_parens(
                     rhs_oop, line, mode, features=features, omit=omit
                 )
@@ -987,8 +981,15 @@ def _maybe_split_omitting_optional_parens(
             if line.is_chained_assignment:
                 pass

-            elif not can_be_split(rhs.body) and not is_line_short_enough(
-                rhs.body, mode=mode
+            elif (
+                not can_be_split(rhs.body)
+                and not is_line_short_enough(rhs.body, mode=mode)
+                and not (
+                    Preview.wrap_long_dict_values_in_parens
+                    and rhs.opening_bracket.parent
+                    and rhs.opening_bracket.parent.parent
+                    and rhs.opening_bracket.parent.parent.type == syms.dictsetmaker
+                )
             ):
                 raise CannotSplit(
                     "Splitting failed, body is still too long and can't be split."
@@ -1019,6 +1020,44 @@ def _prefer_split_rhs_oop_over_rhs(
     Returns whether we should prefer the result from a split omitting optional parens
     (rhs_oop) over the original (rhs).
     """
+    # contains unsplittable type ignore
+    if (
+        rhs_oop.head.contains_unsplittable_type_ignore()
+        or rhs_oop.body.contains_unsplittable_type_ignore()
+        or rhs_oop.tail.contains_unsplittable_type_ignore()
+    ):
+        return True
+
+    # Retain optional parens around dictionary values
+    if (
+        Preview.wrap_long_dict_values_in_parens
+        and rhs.opening_bracket.parent
+        and rhs.opening_bracket.parent.parent
+        and rhs.opening_bracket.parent.parent.type == syms.dictsetmaker
+        and rhs.body.bracket_tracker.delimiters
+    ):
+        # Unless the split is inside the key
+        return any(leaf.type == token.COLON for leaf in rhs_oop.tail.leaves)
+
+    # the split is right after `=`
+    if not (len(rhs.head.leaves) >= 2 and rhs.head.leaves[-2].type == token.EQUAL):
+        return True
+    # the left side of assignment contains brackets
+    if not any(leaf.type in BRACKETS for leaf in rhs.head.leaves[:-1]):
+        return True
+    # the left side of assignment is short enough (the -1 is for the ending optional
+    # paren)
+    if not is_line_short_enough(
+        rhs.head, mode=replace(mode, line_length=mode.line_length - 1)
+    ):
+        return True
+    # the left side of assignment won't explode further because of magic trailing comma
+    if rhs.head.magic_trailing_comma is not None:
+        return True
+
     # If we have multiple targets, we prefer more `=`s on the head vs pushing them to
     # the body
     rhs_head_equal_count = [leaf.type for leaf in rhs.head.leaves].count(token.EQUAL)
@@ -1046,10 +1085,6 @@ def _prefer_split_rhs_oop_over_rhs(
             # the first line is short enough
             and is_line_short_enough(rhs_oop.head, mode=mode)
         )
-        # contains unsplittable type ignore
-        or rhs_oop.head.contains_unsplittable_type_ignore()
-        or rhs_oop.body.contains_unsplittable_type_ignore()
-        or rhs_oop.tail.contains_unsplittable_type_ignore()
     )
@@ -1094,12 +1129,7 @@ def _ensure_trailing_comma(
         return False
     # Don't add commas if we already have any commas
     if any(
-        leaf.type == token.COMMA
-        and (
-            Preview.typed_params_trailing_comma not in original.mode
-            or not is_part_of_annotation(leaf)
-        )
-        for leaf in leaves
+        leaf.type == token.COMMA and not is_part_of_annotation(leaf) for leaf in leaves
    ):
         return False
@@ -1380,11 +1410,7 @@ def normalize_invisible_parens(  # noqa: C901
     )

     # Add parentheses around if guards in case blocks
-    if (
-        isinstance(child, Node)
-        and child.type == syms.guard
-        and Preview.parens_for_long_if_clauses_in_case_block in mode
-    ):
+    if isinstance(child, Node) and child.type == syms.guard:
         normalize_invisible_parens(
             child, parens_after={"if"}, mode=mode, features=features
         )
@@ -1602,6 +1628,12 @@ def maybe_make_parens_invisible_in_atom(
         node.type not in (syms.atom, syms.expr)
         or is_empty_tuple(node)
         or is_one_tuple(node)
+        or (is_tuple(node) and parent.type == syms.asexpr_test)
+        or (
+            is_tuple(node)
+            and parent.type == syms.with_stmt
+            and has_sibling_with_type(node, token.COMMA)
+        )
         or (is_yield(node) and parent.type != syms.expr_stmt)
         or (
             # This condition tries to prevent removing non-optional brackets
@@ -1611,6 +1643,8 @@ def maybe_make_parens_invisible_in_atom(
             and max_delimiter_priority_in_atom(node) >= COMMA_PRIORITY
         )
         or is_tuple_containing_walrus(node)
+        or is_tuple_containing_star(node)
+        or is_generator(node)
     ):
         return False
@@ -1623,6 +1657,7 @@ def maybe_make_parens_invisible_in_atom(
         syms.except_clause,
         syms.funcdef,
         syms.with_stmt,
+        syms.testlist_gexp,
         syms.tname,
         # these ones aren't useful to end users, but they do please fuzzers
         syms.for_stmt,
@@ -1642,9 +1677,6 @@ def maybe_make_parens_invisible_in_atom(
         not is_type_ignore_comment_string(middle.prefix.strip())
     ):
         first.value = ""
-        if first.prefix.strip():
-            # Preserve comments before first paren
-            middle.prefix = first.prefix + middle.prefix
         last.value = ""
     maybe_make_parens_invisible_in_atom(
         middle,
@@ -1656,6 +1688,13 @@ def maybe_make_parens_invisible_in_atom(
             # Strip the invisible parens from `middle` by replacing
             # it with the child in-between the invisible parens
             middle.replace(middle.children[1])
+
+            if middle.children[0].prefix.strip():
+                # Preserve comments before first paren
+                middle.children[1].prefix = (
+                    middle.children[0].prefix + middle.children[1].prefix
+                )
+
         if middle.children[-1].prefix.strip():
             # Preserve comments before last paren
             last.prefix = middle.children[-1].prefix + last.prefix


@@ -1,7 +1,8 @@
 import itertools
 import math
+from collections.abc import Callable, Iterator, Sequence
 from dataclasses import dataclass, field
-from typing import Callable, Iterator, Optional, Sequence, TypeVar, Union, cast
+from typing import Optional, TypeVar, Union, cast

 from black.brackets import COMMA_PRIORITY, DOT_PRIORITY, BracketTracker
 from black.mode import Mode, Preview
@@ -203,9 +204,7 @@ def _is_triple_quoted_string(self) -> bool:
     @property
     def is_docstring(self) -> bool:
         """Is the line a docstring?"""
-        if Preview.unify_docstring_detection not in self.mode:
-            return self._is_triple_quoted_string
-        return bool(self) and is_docstring(self.leaves[0], self.mode)
+        return bool(self) and is_docstring(self.leaves[0])

     @property
     def is_chained_assignment(self) -> bool:
@@ -670,6 +669,15 @@ def _maybe_empty_lines(self, current_line: Line) -> tuple[int, int]:  # noqa: C9
                 current_line, before, user_had_newline
             )

+        if (
+            self.previous_line.is_import
+            and self.previous_line.depth == 0
+            and current_line.depth == 0
+            and not current_line.is_import
+            and Preview.always_one_newline_after_import in self.mode
+        ):
+            return 1, 0
+
         if (
             self.previous_line.is_import
             and not current_line.is_import


@@ -196,28 +196,19 @@ def supports_feature(target_versions: set[TargetVersion], feature: Feature) -> b

 class Preview(Enum):
     """Individual preview style features."""

-    hex_codes_in_unicode_sequences = auto()
     # NOTE: string_processing requires wrap_long_dict_values_in_parens
     # for https://github.com/psf/black/issues/3117 to be fixed.
     string_processing = auto()
     hug_parens_with_braces_and_square_brackets = auto()
-    unify_docstring_detection = auto()
-    no_normalize_fmt_skip_whitespace = auto()
     wrap_long_dict_values_in_parens = auto()
     multiline_string_handling = auto()
-    typed_params_trailing_comma = auto()
-    is_simple_lookup_for_doublestar_expression = auto()
-    docstring_check_for_newline = auto()
-    remove_redundant_guard_parens = auto()
-    parens_for_long_if_clauses_in_case_block = auto()
-    pep646_typed_star_arg_type_var_tuple = auto()
+    always_one_newline_after_import = auto()
+    fix_fmt_skip_in_one_liners = auto()

 UNSTABLE_FEATURES: set[Preview] = {
     # Many issues, see summary in https://github.com/psf/black/issues/4042
     Preview.string_processing,
-    # See issues #3452 and #4158
-    Preview.wrap_long_dict_values_in_parens,
     # See issue #4159
     Preview.multiline_string_handling,
     # See issue #4036 (crash), #4098, #4099 (proposed tweaks)
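The `Preview` members above gate preview-style behavior through membership tests like `Preview.x in mode`, which the other hunks in this changeset add and remove as features stabilize. A toy model of that pattern (deliberately simplified; Black's real `Mode.__contains__` also consults an `unstable` flag and the `UNSTABLE_FEATURES` set):

```python
from dataclasses import dataclass
from enum import Enum, auto

class Preview(Enum):
    wrap_long_dict_values_in_parens = auto()
    fix_fmt_skip_in_one_liners = auto()

@dataclass
class Mode:
    preview: bool = False

    def __contains__(self, feature: Preview) -> bool:
        # In this toy model every preview feature is on in preview mode;
        # real Black additionally carves out unstable features.
        return self.preview
```

With this sketch, `Preview.fix_fmt_skip_in_one_liners in Mode(preview=True)` is true and the same test against a default `Mode()` is false, which is the shape of the checks deleted elsewhere in this diff when a feature graduates to stable.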


@ -3,7 +3,8 @@
""" """
import sys import sys
from typing import Final, Generic, Iterator, Literal, Optional, TypeVar, Union from collections.abc import Iterator
from typing import Final, Generic, Literal, Optional, TypeVar, Union
if sys.version_info >= (3, 10): if sys.version_info >= (3, 10):
from typing import TypeGuard from typing import TypeGuard
@ -13,7 +14,7 @@
from mypy_extensions import mypyc_attr from mypy_extensions import mypyc_attr
from black.cache import CACHE_DIR from black.cache import CACHE_DIR
from black.mode import Mode, Preview from black.mode import Mode
from black.strings import get_string_prefix, has_triple_quotes from black.strings import get_string_prefix, has_triple_quotes
from blib2to3 import pygram from blib2to3 import pygram
from blib2to3.pgen2 import token from blib2to3.pgen2 import token
@@ -243,13 +244,7 @@ def whitespace(leaf: Leaf, *, complex_subscript: bool, mode: Mode) -> str:  # no
     elif (
         prevp.type == token.STAR
         and parent_type(prevp) == syms.star_expr
-        and (
-            parent_type(prevp.parent) == syms.subscriptlist
-            or (
-                Preview.pep646_typed_star_arg_type_var_tuple in mode
-                and parent_type(prevp.parent) == syms.tname_star
-            )
-        )
+        and parent_type(prevp.parent) in (syms.subscriptlist, syms.tname_star)
     ):
         # No space between typevar tuples or unpacking them.
         return NO
@@ -550,7 +545,7 @@ def is_arith_like(node: LN) -> bool:
 }

-def is_docstring(node: NL, mode: Mode) -> bool:
+def is_docstring(node: NL) -> bool:
     if isinstance(node, Leaf):
         if node.type != token.STRING:
             return False
@@ -560,8 +555,7 @@
         return False
     if (
-        Preview.unify_docstring_detection in mode
-        and node.parent
+        node.parent
         and node.parent.type == syms.simple_stmt
         and not node.parent.prev_sibling
         and node.parent.parent
@@ -609,6 +603,17 @@ def is_one_tuple(node: LN) -> bool:
     )

+def is_tuple(node: LN) -> bool:
+    """Return True if `node` holds a tuple."""
+    if node.type != syms.atom:
+        return False
+    gexp = unwrap_singleton_parenthesis(node)
+    if gexp is None or gexp.type != syms.testlist_gexp:
+        return False
+    return True
+
 def is_tuple_containing_walrus(node: LN) -> bool:
     """Return True if `node` holds a tuple that contains a walrus operator."""
     if node.type != syms.atom:
@@ -620,6 +625,28 @@ def is_tuple_containing_walrus(node: LN) -> bool:
     return any(child.type == syms.namedexpr_test for child in gexp.children)

+def is_tuple_containing_star(node: LN) -> bool:
+    """Return True if `node` holds a tuple that contains a star operator."""
+    if node.type != syms.atom:
+        return False
+    gexp = unwrap_singleton_parenthesis(node)
+    if gexp is None or gexp.type != syms.testlist_gexp:
+        return False
+    return any(child.type == syms.star_expr for child in gexp.children)
+
+def is_generator(node: LN) -> bool:
+    """Return True if `node` holds a generator."""
+    if node.type != syms.atom:
+        return False
+    gexp = unwrap_singleton_parenthesis(node)
+    if gexp is None or gexp.type != syms.testlist_gexp:
+        return False
+    return any(child.type == syms.old_comp_for for child in gexp.children)
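These helpers classify a parenthesized atom on the blib2to3 tree. As a rough, self-contained illustration, the stdlib `ast` module draws the same tuple-vs-generator distinction (this sketch is not part of the change, just a parallel):

```python
import ast


def classify_atom(src: str) -> str:
    """Classify a parenthesized expression, mirroring the intent of the
    new is_tuple/is_generator helpers, but on the stdlib AST."""
    node = ast.parse(src, mode="eval").body
    if isinstance(node, ast.GeneratorExp):
        return "generator"
    if isinstance(node, ast.Tuple):
        return "tuple"
    return "other"


print(classify_atom("(x for x in y)"))  # generator
print(classify_atom("(a, b)"))          # tuple
print(classify_atom("(a)"))             # other: just a parenthesized name
```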
 def is_one_sequence_between(
     opening: Leaf,
     closing: Leaf,
@@ -1031,3 +1058,21 @@ def furthest_ancestor_with_last_leaf(leaf: Leaf) -> LN:
     while node.parent and node.parent.children and node is node.parent.children[-1]:
         node = node.parent
     return node
+
+def has_sibling_with_type(node: LN, type: int) -> bool:
+    # Check previous siblings
+    sibling = node.prev_sibling
+    while sibling is not None:
+        if sibling.type == type:
+            return True
+        sibling = sibling.prev_sibling
+
+    # Check next siblings
+    sibling = node.next_sibling
+    while sibling is not None:
+        if sibling.type == type:
+            return True
+        sibling = sibling.next_sibling
+
+    return False
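The new `has_sibling_with_type` walks the sibling chain in both directions without ever counting the node itself. A minimal runnable sketch of that walk, using a toy node class in place of a blib2to3 node (all names here are stand-ins):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Node:
    # Minimal stand-in for a blib2to3 node: just a type and sibling links
    type: int
    prev_sibling: Optional["Node"] = None
    next_sibling: Optional["Node"] = None


def has_sibling_with_type(node: Node, type: int) -> bool:
    # Walk left, then right, from `node`; `node` itself is never visited
    sibling = node.prev_sibling
    while sibling is not None:
        if sibling.type == type:
            return True
        sibling = sibling.prev_sibling
    sibling = node.next_sibling
    while sibling is not None:
        if sibling.type == type:
            return True
        sibling = sibling.next_sibling
    return False


a, b, c = Node(1), Node(2), Node(3)
a.next_sibling, b.prev_sibling = b, a
b.next_sibling, c.prev_sibling = c, b
print(has_sibling_with_type(b, 3))  # True: c follows b
print(has_sibling_with_type(b, 2))  # False: b itself does not count
```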


@@ -5,7 +5,7 @@
 import ast
 import sys
 import warnings
-from typing import Collection, Iterator
+from collections.abc import Collection, Iterator

 from black.mode import VERSION_TO_FEATURES, Feature, TargetVersion, supports_feature
 from black.nodes import syms
@@ -213,7 +213,7 @@ def _stringify_ast(node: ast.AST, parent_stack: list[ast.AST]) -> Iterator[str]:
                     and isinstance(node, ast.Delete)
                     and isinstance(item, ast.Tuple)
                 ):
-                    for elt in item.elts:
+                    for elt in _unwrap_tuples(item):
                         yield from _stringify_ast_with_new_parent(
                             elt, parent_stack, node
                         )
@@ -250,3 +250,11 @@ def _stringify_ast(node: ast.AST, parent_stack: list[ast.AST]) -> Iterator[str]:
     )
     yield f"{' ' * len(parent_stack)}) # /{node.__class__.__name__}"
+
+def _unwrap_tuples(node: ast.Tuple) -> Iterator[ast.AST]:
+    for elt in node.elts:
+        if isinstance(elt, ast.Tuple):
+            yield from _unwrap_tuples(elt)
+        else:
+            yield elt
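`_unwrap_tuples` recursively flattens nested tuple targets so that the AST equivalence check sees the same elements regardless of grouping parentheses in a `del` statement. A self-contained demonstration of the same recursion on the stdlib AST:

```python
import ast
from collections.abc import Iterator


def _unwrap_tuples(node: ast.Tuple) -> Iterator[ast.AST]:
    # Recursively flatten nested tuple elements, as the new helper does
    for elt in node.elts:
        if isinstance(elt, ast.Tuple):
            yield from _unwrap_tuples(elt)
        else:
            yield elt


# `del (a, (b, (c, d)))` parses to nested Tuple targets
tree = ast.parse("del (a, (b, (c, d)))")
target = tree.body[0].targets[0]
names = [elt.id for elt in _unwrap_tuples(target)]
print(names)  # ['a', 'b', 'c', 'd']
```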


@@ -1,8 +1,9 @@
 """Functions related to Black's formatting by line ranges feature."""

 import difflib
+from collections.abc import Collection, Iterator, Sequence
 from dataclasses import dataclass
-from typing import Collection, Iterator, Sequence, Union
+from typing import Union

 from black.nodes import (
     LN,


@@ -79,19 +79,12 @@
           "type": "array",
           "items": {
             "enum": [
-              "hex_codes_in_unicode_sequences",
               "string_processing",
               "hug_parens_with_braces_and_square_brackets",
-              "unify_docstring_detection",
-              "no_normalize_fmt_skip_whitespace",
               "wrap_long_dict_values_in_parens",
               "multiline_string_handling",
-              "typed_params_trailing_comma",
-              "is_simple_lookup_for_doublestar_expression",
-              "docstring_check_for_newline",
-              "remove_redundant_guard_parens",
-              "parens_for_long_if_clauses_in_case_block",
-              "pep646_typed_star_arg_type_var_tuple"
+              "always_one_newline_after_import",
+              "fix_fmt_skip_in_one_liners"
             ]
           },
           "description": "Enable specific features included in the `--unstable` style. Requires `--preview`. No compatibility guarantees are provided on the behavior or existence of any unstable features."


@@ -5,7 +5,8 @@
 import re
 import sys
 from functools import lru_cache
-from typing import Final, Match, Pattern
+from re import Match, Pattern
+from typing import Final

 from black._width_table import WIDTH_TABLE
 from blib2to3.pytree import Leaf
@@ -62,10 +63,9 @@ def lines_with_leading_tabs_expanded(s: str) -> list[str]:
     return lines

-def fix_docstring(docstring: str, prefix: str) -> str:
+def fix_multiline_docstring(docstring: str, prefix: str) -> str:
     # https://www.python.org/dev/peps/pep-0257/#handling-docstring-indentation
-    if not docstring:
-        return ""
+    assert docstring, "INTERNAL ERROR: Multiline docstrings cannot be empty"
     lines = lines_with_leading_tabs_expanded(docstring)
     # Determine minimum indentation (first line doesn't count):
     indent = sys.maxsize
@@ -185,8 +185,7 @@ def normalize_string_quotes(s: str) -> str:
         orig_quote = "'"
         new_quote = '"'
     first_quote_pos = s.find(orig_quote)
-    if first_quote_pos == -1:
-        return s  # There's an internal error
+    assert first_quote_pos != -1, f"INTERNAL ERROR: Malformed string {s!r}"

     prefix = s[:first_quote_pos]
     unescaped_new_quote = _cached_compile(rf"(([^\\]|^)(\\\\)*){new_quote}")
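The renamed `fix_multiline_docstring` implements the PEP 257 common-indentation trim (the first line is ignored when computing the minimum indent). A simplified, self-contained sketch of that algorithm, with tab expansion and prefix handling omitted (function name here is hypothetical):

```python
import sys


def trim_docstring_indent(docstring: str) -> str:
    # PEP 257-style common-indent removal; a rough sketch of what
    # fix_multiline_docstring does (tab handling omitted)
    lines = docstring.splitlines()
    indent = sys.maxsize
    for line in lines[1:]:  # first line doesn't count
        stripped = line.lstrip()
        if stripped:
            indent = min(indent, len(line) - len(stripped))
    trimmed = [lines[0].strip()]
    if indent < sys.maxsize:
        for line in lines[1:]:
            trimmed.append(line[indent:].rstrip())
    return "\n".join(trimmed)


doc = """Summary line.

        Indented body line.
        Another line.
    """
print(trim_docstring_indent(doc))
```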


@@ -5,27 +5,15 @@
 import re
 from abc import ABC, abstractmethod
 from collections import defaultdict
+from collections.abc import Callable, Collection, Iterable, Iterator, Sequence
 from dataclasses import dataclass
-from typing import (
-    Any,
-    Callable,
-    ClassVar,
-    Collection,
-    Final,
-    Iterable,
-    Iterator,
-    Literal,
-    Optional,
-    Sequence,
-    TypeVar,
-    Union,
-)
+from typing import Any, ClassVar, Final, Literal, Optional, TypeVar, Union

 from mypy_extensions import trait

 from black.comments import contains_pragma_comment
 from black.lines import Line, append_leaves
-from black.mode import Feature, Mode, Preview
+from black.mode import Feature, Mode
 from black.nodes import (
     CLOSING_BRACKETS,
     OPENING_BRACKETS,
@@ -94,18 +82,12 @@ def is_simple_lookup(index: int, kind: Literal[1, -1]) -> bool:
             # Brackets and parentheses indicate calls, subscripts, etc. ...
             # basically stuff that doesn't count as "simple". Only a NAME lookup
             # or dotted lookup (eg. NAME.NAME) is OK.
-            if Preview.is_simple_lookup_for_doublestar_expression not in mode:
-                return original_is_simple_lookup_func(line, index, kind)
+            if kind == -1:
+                return handle_is_simple_look_up_prev(line, index, {token.RPAR, token.RSQB})
             else:
-                if kind == -1:
-                    return handle_is_simple_look_up_prev(
-                        line, index, {token.RPAR, token.RSQB}
-                    )
-                else:
-                    return handle_is_simple_lookup_forward(
-                        line, index, {token.LPAR, token.LSQB}
-                    )
+                return handle_is_simple_lookup_forward(
+                    line, index, {token.LPAR, token.LSQB}
+                )

         def is_simple_operand(index: int, kind: Literal[1, -1]) -> bool:
             # An operand is considered "simple" if's a NAME, a numeric CONSTANT, a simple
@@ -151,30 +133,6 @@ def is_simple_operand(index: int, kind: Literal[1, -1]) -> bool:
         yield new_line

-def original_is_simple_lookup_func(
-    line: Line, index: int, step: Literal[1, -1]
-) -> bool:
-    if step == -1:
-        disallowed = {token.RPAR, token.RSQB}
-    else:
-        disallowed = {token.LPAR, token.LSQB}
-
-    while 0 <= index < len(line.leaves):
-        current = line.leaves[index]
-        if current.type in disallowed:
-            return False
-        if current.type not in {token.NAME, token.DOT} or current.value == "for":
-            # If the current token isn't disallowed, we'll assume this is
-            # simple as only the disallowed tokens are semantically
-            # attached to this lookup expression we're checking. Also,
-            # stop early if we hit the 'for' bit of a comprehension.
-            return True
-        index += step
-
-    return True
-
 def handle_is_simple_look_up_prev(line: Line, index: int, disallowed: set[int]) -> bool:
     """
     Handling the determination of is_simple_lookup for the lines prior to the doublestar
@@ -672,10 +630,10 @@ def make_naked(string: str, string_prefix: str) -> str:
         """
         assert_is_leaf_string(string)
         if "f" in string_prefix:
-            f_expressions = (
+            f_expressions = [
                 string[span[0] + 1 : span[1] - 1]  # +-1 to get rid of curly braces
                 for span in iter_fexpr_spans(string)
-            )
+            ]
             debug_expressions_contain_visible_quotes = any(
                 re.search(r".*[\'\"].*(?<![!:=])={1}(?!=)(?![^\s:])", expression)
                 for expression in f_expressions
@@ -806,6 +764,8 @@ def _validate_msg(line: Line, string_idx: int) -> TResult[None]:
             - The set of all string prefixes in the string group is of
               length greater than one and is not equal to {"", "f"}.
             - The string group consists of raw strings.
+            - The string group would merge f-strings with different quote types
+              and internal quotes.
             - The string group is stringified type annotations. We don't want to
               process stringified type annotations since pyright doesn't support
               them spanning multiple string values. (NOTE: mypy, pytype, pyre do
@@ -832,6 +792,8 @@ def _validate_msg(line: Line, string_idx: int) -> TResult[None]:
                 i += inc

+        QUOTE = line.leaves[string_idx].value[-1]
+
         num_of_inline_string_comments = 0
         set_of_prefixes = set()
         num_of_strings = 0
@@ -854,6 +816,19 @@ def _validate_msg(line: Line, string_idx: int) -> TResult[None]:
                 set_of_prefixes.add(prefix)

+                if (
+                    "f" in prefix
+                    and leaf.value[-1] != QUOTE
+                    and (
+                        "'" in leaf.value[len(prefix) + 1 : -1]
+                        or '"' in leaf.value[len(prefix) + 1 : -1]
+                    )
+                ):
+                    return TErr(
+                        "StringMerger does NOT merge f-strings with different quote types"
+                        " and internal quotes."
+                    )
+
             if id(leaf) in line.comments:
                 num_of_inline_string_comments += 1
                 if contains_pragma_comment(line.comments[id(leaf)]):
@@ -882,6 +857,7 @@ class StringParenStripper(StringTransformer):
         The line contains a string which is surrounded by parentheses and:
         - The target string is NOT the only argument to a function call.
         - The target string is NOT a "pointless" string.
+        - The target string is NOT a dictionary value.
         - If the target string contains a PERCENT, the brackets are not
           preceded or followed by an operator with higher precedence than
           PERCENT.
@@ -929,11 +905,14 @@ def do_match(self, line: Line) -> TMatchResult:
             ):
                 continue

-            # That LPAR should NOT be preceded by a function name or a closing
-            # bracket (which could be a function which returns a function or a
-            # list/dictionary that contains a function)...
+            # That LPAR should NOT be preceded by a colon (which could be a
+            # dictionary value), function name, or a closing bracket (which
+            # could be a function returning a function or a list/dictionary
+            # containing a function)...
             if is_valid_index(idx - 2) and (
-                LL[idx - 2].type == token.NAME or LL[idx - 2].type in CLOSING_BRACKETS
+                LL[idx - 2].type == token.COLON
+                or LL[idx - 2].type == token.NAME
+                or LL[idx - 2].type in CLOSING_BRACKETS
             ):
                 continue

@@ -2259,12 +2238,12 @@ def do_transform(
         elif right_leaves and right_leaves[-1].type == token.RPAR:
             # Special case for lambda expressions as dict's value, e.g.:
             #     my_dict = {
-            #        "key": lambda x: f"formatted: {x},
+            #        "key": lambda x: f"formatted: {x}",
             #     }
             # After wrapping the dict's value with parentheses, the string is
             # followed by a RPAR but its opening bracket is lambda's, not
             # the string's:
-            #     "key": (lambda x: f"formatted: {x}),
+            #     "key": (lambda x: f"formatted: {x}"),
             opening_bracket = right_leaves[-1].opening_bracket
             if opening_bracket is not None and opening_bracket in left_leaves:
                 index = left_leaves.index(opening_bracket)
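The new `_validate_msg` guard refuses to merge an f-string whose closing quote differs from the group's quote when its body also contains internal quotes, since requoting such an f-string could change or break it. A standalone sketch of that predicate, extracted from the leaf-based check above (the function name is hypothetical):

```python
def unsafe_fstring_merge(group_quote: str, prefix: str, value: str) -> bool:
    """True if merging `value` (a full string token, e.g. f'{x}"y"') into a
    group quoted with `group_quote` would be unsafe, per the new guard."""
    body = value[len(prefix) + 1 : -1]  # strip prefix and surrounding quotes
    return (
        "f" in prefix
        and value[-1] != group_quote        # quote type differs from the group
        and ("'" in body or '"' in body)    # and the body has internal quotes
    )


print(unsafe_fstring_merge('"', "f", "f'{x}\"y\"'"))  # True: can't requote safely
print(unsafe_fstring_merge('"', "f", 'f"{x}"'))       # False: quotes already match
```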


@@ -2,7 +2,7 @@
 import logging
 from concurrent.futures import Executor, ProcessPoolExecutor
 from datetime import datetime, timezone
-from functools import partial
+from functools import cache, partial
 from multiprocessing import freeze_support

 try:
@@ -85,12 +85,16 @@ def main(bind_host: str, bind_port: int) -> None:
     web.run_app(app, host=bind_host, port=bind_port, handle_signals=True, print=None)

+@cache
+def executor() -> Executor:
+    return ProcessPoolExecutor()
+
 def make_app() -> web.Application:
     app = web.Application(
         middlewares=[cors(allow_headers=(*BLACK_HEADERS, "Content-Type"))]
     )
-    executor = ProcessPoolExecutor()
-    app.add_routes([web.post("/", partial(handle, executor=executor))])
+    app.add_routes([web.post("/", partial(handle, executor=executor()))])
     return app
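Wrapping `executor()` in `@cache` turns it into a lazily created singleton: the `ProcessPoolExecutor` is built on first call rather than at `make_app` time, and every later call returns the same instance. A minimal illustration of that pattern (the resource here is a stand-in, not blackd's executor):

```python
from functools import cache


@cache
def expensive_resource() -> dict:
    # Created once on first call; later calls return the same cached object
    print("creating resource")
    return {"ready": True}


a = expensive_resource()  # prints "creating resource"
b = expensive_resource()  # cached: no print
print(a is b)  # True
```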


@@ -1,4 +1,4 @@
-from typing import Awaitable, Callable, Iterable
+from collections.abc import Awaitable, Callable, Iterable

 from aiohttp.typedefs import Middleware
 from aiohttp.web_middlewares import middleware


@@ -12,9 +12,9 @@ file_input: (NEWLINE | stmt)* ENDMARKER
 single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
 eval_input: testlist NEWLINE* ENDMARKER

-typevar: NAME [':' expr] ['=' expr]
-paramspec: '**' NAME ['=' expr]
-typevartuple: '*' NAME ['=' (expr|star_expr)]
+typevar: NAME [':' test] ['=' test]
+paramspec: '**' NAME ['=' test]
+typevartuple: '*' NAME ['=' (test|star_expr)]
 typeparam: typevar | paramspec | typevartuple
 typeparams: '[' typeparam (',' typeparam)* [','] ']'


@@ -21,13 +21,14 @@
 import os
 import pkgutil
 import sys
+from collections.abc import Iterable, Iterator
 from contextlib import contextmanager
 from dataclasses import dataclass, field
 from logging import Logger
-from typing import IO, Any, Iterable, Iterator, Optional, Union, cast
+from typing import IO, Any, Optional, Union, cast

 from blib2to3.pgen2.grammar import Grammar
-from blib2to3.pgen2.tokenize import GoodTokenInfo
+from blib2to3.pgen2.tokenize import TokenInfo
 from blib2to3.pytree import NL

 # Pgen imports
@@ -111,7 +112,7 @@ def __init__(self, grammar: Grammar, logger: Optional[Logger] = None) -> None:
             logger = logging.getLogger(__name__)
         self.logger = logger

-    def parse_tokens(self, tokens: Iterable[GoodTokenInfo], debug: bool = False) -> NL:
+    def parse_tokens(self, tokens: Iterable[TokenInfo], debug: bool = False) -> NL:
         """Parse a series of tokens and return the syntax tree."""
         # XXX Move the prefix computation into a wrapper around tokenize.
         proxy = TokenProxy(tokens)
@@ -179,27 +180,17 @@ def parse_tokens(self, tokens: Iterable[GoodTokenInfo], debug: bool = False) ->
         assert p.rootnode is not None
         return p.rootnode

-    def parse_stream_raw(self, stream: IO[str], debug: bool = False) -> NL:
-        """Parse a stream and return the syntax tree."""
-        tokens = tokenize.generate_tokens(stream.readline, grammar=self.grammar)
-        return self.parse_tokens(tokens, debug)
-
-    def parse_stream(self, stream: IO[str], debug: bool = False) -> NL:
-        """Parse a stream and return the syntax tree."""
-        return self.parse_stream_raw(stream, debug)
-
     def parse_file(
         self, filename: Path, encoding: Optional[str] = None, debug: bool = False
     ) -> NL:
         """Parse a file and return the syntax tree."""
         with open(filename, encoding=encoding) as stream:
-            return self.parse_stream(stream, debug)
+            text = stream.read()
+        return self.parse_string(text, debug)

     def parse_string(self, text: str, debug: bool = False) -> NL:
         """Parse a string and return the syntax tree."""
-        tokens = tokenize.generate_tokens(
-            io.StringIO(text).readline, grammar=self.grammar
-        )
+        tokens = tokenize.tokenize(text, grammar=self.grammar)
         return self.parse_tokens(tokens, debug)

     def _partially_consume_prefix(self, prefix: str, column: int) -> tuple[str, str]:
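The old `parse_string` fed a readline callable built from `io.StringIO` into a `generate_tokens`-style API; the rewrite tokenizes the whole text directly. The stdlib tokenizer still uses the readline-callable shape, which illustrates the pattern being removed (this sketch uses the stdlib `tokenize`, not blib2to3's):

```python
import io
import tokenize


def tokenize_text(text: str) -> list[tokenize.TokenInfo]:
    # The stdlib tokenizer wants a readline callable; wrapping the whole
    # text in StringIO is the pattern the old parse_string used
    return list(tokenize.generate_tokens(io.StringIO(text).readline))


toks = tokenize_text("x = 1\n")
print([tok.string for tok in toks if tok.type == tokenize.NAME])  # ['x']
```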


@@ -4,7 +4,6 @@
 """Safely evaluate Python string literals without using eval()."""

 import re
-from typing import Match

 simple_escapes: dict[str, str] = {
     "a": "\a",
@@ -20,7 +19,7 @@
 }

-def escape(m: Match[str]) -> str:
+def escape(m: re.Match[str]) -> str:
     all, tail = m.group(0, 1)
     assert all.startswith("\\")
     esc = simple_escapes.get(tail)
@@ -29,16 +28,16 @@ def escape(m: re.Match[str]) -> str:
     if tail.startswith("x"):
         hexes = tail[1:]
         if len(hexes) < 2:
-            raise ValueError("invalid hex string escape ('\\%s')" % tail)
+            raise ValueError(f"invalid hex string escape ('\\{tail}')")
         try:
             i = int(hexes, 16)
         except ValueError:
-            raise ValueError("invalid hex string escape ('\\%s')" % tail) from None
+            raise ValueError(f"invalid hex string escape ('\\{tail}')") from None
     else:
         try:
             i = int(tail, 8)
         except ValueError:
-            raise ValueError("invalid octal string escape ('\\%s')" % tail) from None
+            raise ValueError(f"invalid octal string escape ('\\{tail}')") from None
     return chr(i)
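The `escape` function is designed to be used as a `re.sub` callback that decodes one backslash escape per match. A self-contained sketch of that usage with a reduced escape table and pattern (both simplified here, not the module's full versions):

```python
import re

# Reduced escape table; the real module covers \a \b \f \n \r \t \v etc.
simple_escapes = {"n": "\n", "t": "\t", "'": "'", '"': '"', "\\": "\\"}


def escape(m: re.Match[str]) -> str:
    # Decode one escape: table lookup, then \xNN hex, then octal
    all, tail = m.group(0, 1)
    esc = simple_escapes.get(tail)
    if esc is not None:
        return esc
    if tail.startswith("x"):
        return chr(int(tail[1:], 16))
    return chr(int(tail, 8))


pattern = re.compile(r"\\(\'|\"|\\|[nt]|x[0-9a-fA-F]{2}|[0-7]{1,3})")
decoded = pattern.sub(escape, r"a\n\x41\101")
print(repr(decoded))  # 'a\nAA'
```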


@@ -9,8 +9,9 @@
 how this parsing engine works.
 """

+from collections.abc import Callable, Iterator
 from contextlib import contextmanager
-from typing import TYPE_CHECKING, Any, Callable, Iterator, Optional, Union, cast
+from typing import TYPE_CHECKING, Any, Optional, Union, cast

 from blib2to3.pgen2.grammar import Grammar
 from blib2to3.pytree import NL, Context, Leaf, Node, RawNode, convert
@@ -88,18 +89,12 @@ def backtrack(self) -> Iterator[None]:
             self.parser.is_backtracking = is_backtracking

     def add_token(self, tok_type: int, tok_val: str, raw: bool = False) -> None:
-        func: Callable[..., Any]
-        if raw:
-            func = self.parser._addtoken
-        else:
-            func = self.parser.addtoken
-
         for ilabel in self.ilabels:
             with self.switch_to(ilabel):
-                args = [tok_type, tok_val, self.context]
                 if raw:
-                    args.insert(0, ilabel)
-                func(*args)
+                    self.parser._addtoken(ilabel, tok_type, tok_val, self.context)
+                else:
+                    self.parser.addtoken(tok_type, tok_val, self.context)

     def determine_route(
         self, value: Optional[str] = None, force: bool = False


@@ -2,10 +2,11 @@
 # Licensed to PSF under a Contributor Agreement.

 import os
-from typing import IO, Any, Iterator, NoReturn, Optional, Sequence, Union
+from collections.abc import Iterator, Sequence
+from typing import IO, Any, NoReturn, Optional, Union

 from blib2to3.pgen2 import grammar, token, tokenize
-from blib2to3.pgen2.tokenize import GoodTokenInfo
+from blib2to3.pgen2.tokenize import TokenInfo

 Path = Union[str, "os.PathLike[str]"]

@@ -17,7 +18,7 @@ class PgenGrammar(grammar.Grammar):
 class ParserGenerator:
     filename: Path
     stream: IO[str]
-    generator: Iterator[GoodTokenInfo]
+    generator: Iterator[TokenInfo]
     first: dict[str, Optional[dict[str, int]]]

     def __init__(self, filename: Path, stream: Optional[IO[str]] = None) -> None:
@@ -26,8 +27,7 @@ def __init__(self, filename: Path, stream: Optional[IO[str]] = None) -> None:
             stream = open(filename, encoding="utf-8")
             close_stream = stream.close
         self.filename = filename
-        self.stream = stream
-        self.generator = tokenize.generate_tokens(stream.readline)
+        self.generator = tokenize.tokenize(stream.read())
         self.gettoken()  # Initialize lookahead
         self.dfas, self.startsymbol = self.parse()
         if close_stream is not None:
@@ -140,7 +140,7 @@ def calcfirst(self, name: str) -> None:
             if label in self.first:
                 fset = self.first[label]
                 if fset is None:
-                    raise ValueError("recursion for rule %r" % name)
+                    raise ValueError(f"recursion for rule {name!r}")
             else:
                 self.calcfirst(label)
                 fset = self.first[label]
@@ -155,8 +155,8 @@ def calcfirst(self, name: str) -> None:
             for symbol in itsfirst:
                 if symbol in inverse:
                     raise ValueError(
-                        "rule %s is ambiguous; %s is in the first sets of %s as well"
-                        " as %s" % (name, symbol, label, inverse[symbol])
+                        f"rule {name} is ambiguous; {symbol} is in the first sets of"
+                        f" {label} as well as {inverse[symbol]}"
                     )
                 inverse[symbol] = label
         self.first[name] = totalset
@@ -237,16 +237,16 @@ def dump_nfa(self, name: str, start: "NFAState", finish: "NFAState") -> None:
                     j = len(todo)
                     todo.append(next)
                 if label is None:
-                    print("    -> %d" % j)
+                    print(f"    -> {j}")
                 else:
-                    print("    %s -> %d" % (label, j))
+                    print(f"    {label} -> {j}")

     def dump_dfa(self, name: str, dfa: Sequence["DFAState"]) -> None:
         print("Dump of DFA for", name)
         for i, state in enumerate(dfa):
             print("  State", i, state.isfinal and "(final)" or "")
             for label, next in sorted(state.arcs.items()):
-                print("    %s -> %d" % (label, dfa.index(next)))
+                print(f"    {label} -> {dfa.index(next)}")

     def simplify_dfa(self, dfa: list["DFAState"]) -> None:
         # This is not theoretically optimal, but works well enough.
@@ -330,15 +330,12 @@ def parse_atom(self) -> tuple["NFAState", "NFAState"]:
             return a, z
         else:
             self.raise_error(
-                "expected (...) or NAME or STRING, got %s/%s", self.type, self.value
+                f"expected (...) or NAME or STRING, got {self.type}/{self.value}"
             )
-            raise AssertionError

     def expect(self, type: int, value: Optional[Any] = None) -> str:
         if self.type != type or (value is not None and self.value != value):
-            self.raise_error(
-                "expected %s/%s, got %s/%s", type, value, self.type, self.value
-            )
+            self.raise_error(f"expected {type}/{value}, got {self.type}/{self.value}")
         value = self.value
         self.gettoken()
         return value
@@ -350,13 +347,10 @@ def gettoken(self) -> None:
         self.type, self.value, self.begin, self.end, self.line = tup
         # print token.tok_name[self.type], repr(self.value)

-    def raise_error(self, msg: str, *args: Any) -> NoReturn:
-        if args:
-            try:
-                msg = msg % args
-            except Exception:
-                msg = " ".join([msg] + list(map(str, args)))
-        raise SyntaxError(msg, (self.filename, self.end[0], self.end[1], self.line))
+    def raise_error(self, msg: str) -> NoReturn:
+        raise SyntaxError(
+            msg, (str(self.filename), self.end[0], self.end[1], self.line)
+        )

 class NFAState:
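The simplified `raise_error` relies on `SyntaxError`'s two-argument form, where the second argument is a `(filename, lineno, offset, text)` tuple that populates the exception's location attributes. A small runnable demonstration (the wrapper name and values are illustrative):

```python
from typing import NoReturn


def raise_located(msg: str, filename: str, lineno: int, col: int, line: str) -> NoReturn:
    # SyntaxError accepts (filename, lineno, offset, text) as its second
    # argument, which is how the simplified raise_error reports position
    raise SyntaxError(msg, (filename, lineno, col, line))


try:
    raise_located("expected NAME", "grammar.txt", 3, 7, "typevar: NAME [':' test]\n")
except SyntaxError as e:
    print(e.filename, e.lineno, e.offset)  # grammar.txt 3 7
```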

[File diff suppressed because it is too large]


@@ -12,7 +12,8 @@
 # mypy: allow-untyped-defs, allow-incomplete-defs

-from typing import Any, Iterable, Iterator, Optional, TypeVar, Union
+from collections.abc import Iterable, Iterator
+from typing import Any, Optional, TypeVar, Union

 from blib2to3.pgen2.grammar import Grammar

@@ -267,11 +268,7 @@ def __init__(
     def __repr__(self) -> str:
         """Return a canonical string representation."""
         assert self.type is not None
-        return "{}({}, {!r})".format(
-            self.__class__.__name__,
-            type_repr(self.type),
-            self.children,
-        )
+        return f"{self.__class__.__name__}({type_repr(self.type)}, {self.children!r})"

     def __str__(self) -> str:
         """
@@ -420,10 +417,9 @@ def __repr__(self) -> str:
         from .pgen2.token import tok_name

         assert self.type is not None
-        return "{}({}, {!r})".format(
-            self.__class__.__name__,
-            tok_name.get(self.type, self.type),
-            self.value,
-        )
+        return (
+            f"{self.__class__.__name__}({tok_name.get(self.type, self.type)},"
+            f" {self.value!r})"
+        )

     def __str__(self) -> str:
@@ -526,7 +522,7 @@ def __repr__(self) -> str:
         args = [type_repr(self.type), self.content, self.name]
         while args and args[-1] is None:
             del args[-1]
-        return "{}({})".format(self.__class__.__name__, ", ".join(map(repr, args)))
+        return f"{self.__class__.__name__}({', '.join(map(repr, args))})"

     def _submatch(self, node, results=None) -> bool:
         raise NotImplementedError


@@ -0,0 +1,17 @@
+# regression test for #1765
+class Foo:
+    def foo(self):
+        if True:
+            content_ids: Mapping[
+                str, Optional[ContentId]
+            ] = self.publisher_content_store.store_config_contents(files)
+
+# output
+
+# regression test for #1765
+class Foo:
+    def foo(self):
+        if True:
+            content_ids: Mapping[str, Optional[ContentId]] = (
+                self.publisher_content_store.store_config_contents(files)
+            )

View File

@@ -1,4 +1,3 @@
-# flags: --preview
 # long variable name
 this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it = 0
 this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it = 1  # with a comment
@@ -32,7 +31,8 @@
 raise ValueError(err.format(key))
 concatenated_strings = "some strings that are " "concatenated implicitly, so if you put them on separate " "lines it will fit"
 del concatenated_strings, string_variable_name, normal_function_name, normal_name, need_more_to_make_the_line_long_enough
+del ([], name_1, name_2), [(), [], name_4, name_3], name_1[[name_2 for name_1 in name_0]]
+del (),

 # output
@@ -92,3 +92,9 @@
     normal_name,
     need_more_to_make_the_line_long_enough,
 )
+del (
+    ([], name_1, name_2),
+    [(), [], name_4, name_3],
+    name_1[[name_2 for name_1 in name_0]],
+)
+del ((),)

View File

@@ -1,4 +1,3 @@
-# flags: --minimum-version=3.8
 with \
     make_context_manager1() as cm1, \
     make_context_manager2() as cm2, \

View File

@@ -1,4 +1,3 @@
-# flags: --minimum-version=3.9
 with \
     make_context_manager1() as cm1, \
     make_context_manager2() as cm2, \
@@ -85,6 +84,31 @@ async def func():
     pass
+# don't remove the brackets here, it changes the meaning of the code.
+with (x, y) as z:
+    pass
+# don't remove the brackets here, it changes the meaning of the code.
+# even though the code will always trigger a runtime error
+with (name_5, name_4), name_5:
+    pass
+def test_tuple_as_contextmanager():
+    from contextlib import nullcontext
+    try:
+        with (nullcontext(),nullcontext()),nullcontext():
+            pass
+    except TypeError:
+        # test passed
+        pass
+    else:
+        # this should be a type error
+        assert False

 # output
@@ -173,3 +197,28 @@ async def func():
     some_other_function(argument1, argument2, argument3="some_value"),
 ):
     pass
+# don't remove the brackets here, it changes the meaning of the code.
+with (x, y) as z:
+    pass
+# don't remove the brackets here, it changes the meaning of the code.
+# even though the code will always trigger a runtime error
+with (name_5, name_4), name_5:
+    pass
+def test_tuple_as_contextmanager():
+    from contextlib import nullcontext
+    try:
+        with (nullcontext(), nullcontext()), nullcontext():
+            pass
+    except TypeError:
+        # test passed
+        pass
+    else:
+        # this should be a type error
+        assert False

View File

@@ -1,4 +1,3 @@
-# flags: --minimum-version=3.9
 # This file uses parenthesized context managers introduced in Python 3.9.

View File

@@ -1,4 +1,3 @@
-# flags: --preview
 """
 87 characters ............................................................................
 """

View File

@@ -0,0 +1,9 @@
# flags: --preview
def foo(): return "mock" # fmt: skip
if True: print("yay") # fmt: skip
for i in range(10): print(i) # fmt: skip
j = 1 # fmt: skip
while j < 10: j += 1 # fmt: skip
b = [c for c in "A very long string that would normally generate some kind of collapse, since it is this long"] # fmt: skip

View File

@@ -0,0 +1,6 @@
def foo():
pass
# comment 1 # fmt: skip
# comment 2

View File

@@ -1,4 +1,3 @@
-# flags: --preview
 print () # fmt: skip
 print () # fmt:skip

View File

@@ -1,4 +1,3 @@
-# flags: --preview
 x = "\x1F"
 x = "\\x1B"
 x = "\\\x1B"

View File

@@ -0,0 +1,67 @@
# Regression tests for long f-strings, including examples from issue #3623
a = (
'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
)
a = (
f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
)
a = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' + \
f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
a = f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"' + \
f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
a = (
f'bbbbbbb"{"b"}"'
'aaaaaaaa'
)
a = (
f'"{"b"}"'
)
a = (
f'\"{"b"}\"'
)
a = (
r'\"{"b"}\"'
)
# output
# Regression tests for long f-strings, including examples from issue #3623
a = (
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
)
a = (
f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
)
a = (
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
+ f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
)
a = (
f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
+ f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
)
a = f'bbbbbbb"{"b"}"' "aaaaaaaa"
a = f'"{"b"}"'
a = f'"{"b"}"'
a = r'\"{"b"}\"'
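The regression cases above lean on implicit string concatenation and on escaped vs. raw quoting. A small runnable sketch of those behaviors (interpolating a plain variable instead of the nested-quote literals, which need Python 3.12+):

```python
# Adjacent literals are joined at parse time, f-strings included.
b = "b"
a = f'"{b}"' "aaaaaaaa"
assert a == '"b"aaaaaaaa'

# An escaped quote inside an f-string vs. a raw string: the raw string keeps
# its backslashes and performs no interpolation.
escaped = f'\"{b}\"'
raw = r'\"{"b"}\"'
assert escaped == '"b"'
assert raw == '\\"{"b"}\\"'
```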

View File

@@ -1,4 +1,4 @@
-# flags: --preview --minimum-version=3.10
+# flags: --minimum-version=3.10
 # normal, short, function definition
 def foo(a, b) -> tuple[int, float]: ...

View File

@@ -0,0 +1,307 @@
# flags: --minimum-version=3.12
def plain[T, B](a: T, b: T) -> T:
return a
def arg_magic[T, B](a: T, b: T,) -> T:
return a
def type_param_magic[T, B,](a: T, b: T) -> T:
return a
def both_magic[T, B,](a: T, b: T,) -> T:
return a
def plain_multiline[
T,
B
](
a: T,
b: T
) -> T:
return a
def arg_magic_multiline[
T,
B
](
a: T,
b: T,
) -> T:
return a
def type_param_magic_multiline[
T,
B,
](
a: T,
b: T
) -> T:
return a
def both_magic_multiline[
T,
B,
](
a: T,
b: T,
) -> T:
return a
def plain_mixed1[
T,
B
](a: T, b: T) -> T:
return a
def plain_mixed2[T, B](
a: T,
b: T
) -> T:
return a
def arg_magic_mixed1[
T,
B
](a: T, b: T,) -> T:
return a
def arg_magic_mixed2[T, B](
a: T,
b: T,
) -> T:
return a
def type_param_magic_mixed1[
T,
B,
](a: T, b: T) -> T:
return a
def type_param_magic_mixed2[T, B,](
a: T,
b: T
) -> T:
return a
def both_magic_mixed1[
T,
B,
](a: T, b: T,) -> T:
return a
def both_magic_mixed2[T, B,](
a: T,
b: T,
) -> T:
return a
def something_something_function[
T: Model
](param: list[int], other_param: type[T], *, some_other_param: bool = True) -> QuerySet[
T
]:
pass
def func[A_LOT_OF_GENERIC_TYPES: AreBeingDefinedHere, LIKE_THIS, AND_THIS, ANOTHER_ONE, AND_YET_ANOTHER_ONE: ThisOneHasTyping](a: T, b: T, c: T, d: T, e: T, f: T, g: T, h: T, i: T, j: T, k: T, l: T, m: T, n: T, o: T, p: T) -> T:
return a
def with_random_comments[
Z
# bye
]():
return a
def func[
T, # comment
U # comment
,
Z: # comment
int
](): pass
def func[
T, # comment but it's long so it doesn't just move to the end of the line
U # comment comment comm comm ent ent
,
Z: # comment ent ent comm comm comment
int
](): pass
# output
def plain[T, B](a: T, b: T) -> T:
return a
def arg_magic[T, B](
a: T,
b: T,
) -> T:
return a
def type_param_magic[
T,
B,
](
a: T, b: T
) -> T:
return a
def both_magic[
T,
B,
](
a: T,
b: T,
) -> T:
return a
def plain_multiline[T, B](a: T, b: T) -> T:
return a
def arg_magic_multiline[T, B](
a: T,
b: T,
) -> T:
return a
def type_param_magic_multiline[
T,
B,
](
a: T, b: T
) -> T:
return a
def both_magic_multiline[
T,
B,
](
a: T,
b: T,
) -> T:
return a
def plain_mixed1[T, B](a: T, b: T) -> T:
return a
def plain_mixed2[T, B](a: T, b: T) -> T:
return a
def arg_magic_mixed1[T, B](
a: T,
b: T,
) -> T:
return a
def arg_magic_mixed2[T, B](
a: T,
b: T,
) -> T:
return a
def type_param_magic_mixed1[
T,
B,
](
a: T, b: T
) -> T:
return a
def type_param_magic_mixed2[
T,
B,
](
a: T, b: T
) -> T:
return a
def both_magic_mixed1[
T,
B,
](
a: T,
b: T,
) -> T:
return a
def both_magic_mixed2[
T,
B,
](
a: T,
b: T,
) -> T:
return a
def something_something_function[T: Model](
param: list[int], other_param: type[T], *, some_other_param: bool = True
) -> QuerySet[T]:
pass
def func[
A_LOT_OF_GENERIC_TYPES: AreBeingDefinedHere,
LIKE_THIS,
AND_THIS,
ANOTHER_ONE,
AND_YET_ANOTHER_ONE: ThisOneHasTyping,
](
a: T,
b: T,
c: T,
d: T,
e: T,
f: T,
g: T,
h: T,
i: T,
j: T,
k: T,
l: T,
m: T,
n: T,
o: T,
p: T,
) -> T:
return a
def with_random_comments[
Z
# bye
]():
return a
def func[T, U, Z: int](): # comment # comment # comment
pass
def func[
T, # comment but it's long so it doesn't just move to the end of the line
U, # comment comment comm comm ent ent
Z: int, # comment ent ent comm comm comment
]():
pass

View File

@@ -1,4 +1,3 @@
-# flags: --preview
 m2 = None if not isinstance(dist, Normal) else m** 2 + s * 2
 m3 = None if not isinstance(dist, Normal) else m ** 2 + s * 2
 m4 = None if not isinstance(dist, Normal) else m**2 + s * 2

View File

@@ -1,4 +1,3 @@
-# flags: --preview
 def func(
     arg1,
     arg2,

View File

@@ -1,7 +1,6 @@
-# flags: --preview
 """I am a very helpful module docstring.
-With trailing spaces (only removed with unify_docstring_detection on):
+With trailing spaces:
 Lorem ipsum dolor sit amet, consectetur adipiscing elit,
 sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
 Ut enim ad minim veniam,
@@ -39,7 +38,7 @@
 # output
 """I am a very helpful module docstring.
-With trailing spaces (only removed with unify_docstring_detection on):
+With trailing spaces:
 Lorem ipsum dolor sit amet, consectetur adipiscing elit,
 sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
 Ut enim ad minim veniam,

View File

@@ -62,5 +62,4 @@ class MultilineDocstringsAsWell:
 class SingleQuotedDocstring:
     "I'm a docstring but I don't even get triple quotes."

View File

@@ -1,4 +1,4 @@
-# flags: --preview --minimum-version=3.10
+# flags: --minimum-version=3.10
 match match:
     case "test" if case != "not very loooooooooooooog condition":  # comment
         pass

View File

@@ -1,4 +1,4 @@
-# flags: --minimum-version=3.11 --preview
+# flags: --minimum-version=3.11
 def fn(*args: *tuple[*A, B]) -> None:

View File

@@ -1,4 +1,3 @@
-# flags: --minimum-version=3.8
 def positional_only_arg(a, /):
     pass

View File

@@ -1,4 +1,3 @@
-# flags: --minimum-version=3.8
 (a := 1)
 (a := a)
 if (match := pattern.search(data)) is None:

View File

@@ -14,3 +14,8 @@
 f((a := b + c for c in range(10)), x)
 f(y=(a := b + c for c in range(10)))
 f(x, (a := b + c for c in range(10)), y=z, **q)
+# Don't remove parens when assignment expr is one of the exprs in a with statement
+with x, (a := b):
+    pass
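The new case above keeps the parentheses around the walrus with-item. A runnable sketch of the behavior, with `nullcontext` standing in for real context managers:

```python
from contextlib import nullcontext

# `with x, (a := b):` binds a to b and then enters b as a context manager;
# current CPython rejects the statement without the parentheses, so the
# formatter must not strip them.
x = nullcontext()
b = nullcontext()
with x, (a := b):
    pass
assert a is b
```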

View File

@@ -1,4 +1,3 @@
-# flags: --minimum-version=3.9
 # Unparenthesized walruses are now allowed in set literals & set comprehensions
 # since Python 3.9
 {x := 1, 2, 3}

View File

@@ -1,4 +1,3 @@
-# flags: --minimum-version=3.8
 if (foo := 0):
     pass

View File

@@ -11,6 +11,14 @@
 # exactly line length limit + 1, it won't be split like that.
 xxxxxxxxx_yyy_zzzzzzzz[xx.xxxxxx(x_yyy_zzzzzz.xxxxx[0]), x_yyy_zzzzzz.xxxxxx(xxxx=1)] = 1
# Regression test for #1187
print(
dict(
a=1,
b=2 if some_kind_of_data is not None else some_other_kind_of_data, # some explanation of why this is actually necessary
c=3,
)
)
 # output
@@ -36,3 +44,14 @@
 xxxxxxxxx_yyy_zzzzzzzz[
     xx.xxxxxx(x_yyy_zzzzzz.xxxxx[0]), x_yyy_zzzzzz.xxxxxx(xxxx=1)
 ] = 1
# Regression test for #1187
print(
dict(
a=1,
b=(
2 if some_kind_of_data is not None else some_other_kind_of_data
), # some explanation of why this is actually necessary
c=3,
)
)

View File

@@ -177,7 +177,6 @@ def test_fails_invalid_post_data(
     MyLovelyCompanyTeamProjectComponent as component,  # DRY
 )
 result = 1  # look ma, no comment migration xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 result = 1  # look ma, no comment migration xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

View File

@@ -0,0 +1,2 @@
# flags: --unstable
f"{''=}" f'{""=}'

View File

@@ -0,0 +1,180 @@
# flags: --preview
from middleman.authentication import validate_oauth_token
logger = logging.getLogger(__name__)
# case 2 comment after import
from middleman.authentication import validate_oauth_token
#comment
logger = logging.getLogger(__name__)
# case 3 comment after import
from middleman.authentication import validate_oauth_token
# comment
logger = logging.getLogger(__name__)
from middleman.authentication import validate_oauth_token
logger = logging.getLogger(__name__)
# case 4 try catch with import after import
import os
import os
try:
import os
except Exception:
pass
try:
import os
def func():
a = 1
except Exception:
pass
# case 5 multiple imports
import os
import os
import os
import os
for i in range(10):
print(i)
# case 6 import in function
def func():
print()
import os
def func():
pass
print()
def func():
import os
a = 1
print()
def func():
import os
a = 1
print()
def func():
import os
a = 1
print()
# output
from middleman.authentication import validate_oauth_token
logger = logging.getLogger(__name__)
# case 2 comment after import
from middleman.authentication import validate_oauth_token
# comment
logger = logging.getLogger(__name__)
# case 3 comment after import
from middleman.authentication import validate_oauth_token
# comment
logger = logging.getLogger(__name__)
from middleman.authentication import validate_oauth_token
logger = logging.getLogger(__name__)
# case 4 try catch with import after import
import os
import os
try:
import os
except Exception:
pass
try:
import os
def func():
a = 1
except Exception:
pass
# case 5 multiple imports
import os
import os
import os
import os
for i in range(10):
print(i)
# case 6 import in function
def func():
print()
import os
def func():
pass
print()
def func():
import os
a = 1
print()
def func():
import os
a = 1
print()
def func():
import os
a = 1
print()

View File

@@ -1,4 +1,25 @@
-# flags: --unstable
+# flags: --preview
x = {
"xx_xxxxx_xxxxxxxxxx_xxxxxxxxx_xx": (
"xx:xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxx{xx}xxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx"
)
}
x = {
"xx_xxxxx_xxxxxxxxxx_xxxxxxxxx_xx": (
"xx:xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxx{xx}xxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx"
),
}
x = {
"foo": bar,
"foo": bar,
"foo": (
xx_xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxxxxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx
),
}
x = {
"xx_xxxxx_xxxxxxxxxx_xxxxxxxxx_xx": "xx:xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxx"
}
 my_dict = {
     "something_something":
         r"Lorem ipsum dolor sit amet, an sed convenire eloquentiam \t"
@@ -6,23 +27,90 @@
         r"signiferumque, duo ea vocibus consetetur scriptorem. Facer \t",
 }
# Function calls as keys
tasks = {
get_key_name(
foo,
bar,
baz,
): src,
loop.run_in_executor(): src,
loop.run_in_executor(xx_xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxxxxx): src,
loop.run_in_executor(
xx_xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxxxxx_xxxxx_xxxxx
): src,
loop.run_in_executor(): (
xx_xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxxxxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx
),
}
# Dictionary comprehensions
tasks = {
key_name: (
xx_xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxxxxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx
)
for src in sources
}
tasks = {key_name: foobar for src in sources}
tasks = {
get_key_name(
src,
): "foo"
for src in sources
}
tasks = {
get_key_name(
foo,
bar,
baz,
): src
for src in sources
}
tasks = {
get_key_name(): (
xx_xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxxxxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx
)
for src in sources
}
tasks = {get_key_name(): foobar for src in sources}
# Delimiters inside the value
def foo():
def bar():
x = {
common.models.DateTimeField: datetime(2020, 1, 31, tzinfo=utc) + timedelta(
days=i
),
}
x = {
common.models.DateTimeField: (
datetime(2020, 1, 31, tzinfo=utc) + timedelta(days=i)
),
}
x = {
"foobar": (123 + 456),
}
x = {
"foobar": (123) + 456,
}
 my_dict = {
     "a key in my dict": a_very_long_variable * and_a_very_long_function_call() / 100000.0
 }
 my_dict = {
     "a key in my dict": a_very_long_variable * and_a_very_long_function_call() * and_another_long_func() / 100000.0
 }
 my_dict = {
     "a key in my dict": MyClass.some_attribute.first_call().second_call().third_call(some_args="some value")
 }
 {
-    'xxxxxx':
+    "xxxxxx":
         xxxxxxxxxxxxxxxxxxx.xxxxxxxxxxxxxx(
             xxxxxxxxxxxxxx={
-                'x':
+                "x":
                     xxxxxxxxxxxxxxxxxxxxxxxxxx.xxxxxxxxxxxxxxxxxxxxxxxxxxxxx(
                         xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=(
                             xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
@@ -30,8 +118,8 @@
                         xxxxxxxxxxxxx=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
                         .xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx(
                             xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx={
-                                'x': x.xx,
-                                'x': x.x,
+                                "x": x.xx,
+                                "x": x.x,
                             }))))
         }),
 }
@@ -58,7 +146,26 @@ def func():
 # output
x = {
"xx_xxxxx_xxxxxxxxxx_xxxxxxxxx_xx": (
"xx:xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxx{xx}xxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx"
)
}
x = {
"xx_xxxxx_xxxxxxxxxx_xxxxxxxxx_xx": (
"xx:xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxx{xx}xxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx"
),
}
x = {
"foo": bar,
"foo": bar,
"foo": (
xx_xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxxxxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx
),
}
x = {
"xx_xxxxx_xxxxxxxxxx_xxxxxxxxx_xx": "xx:xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxx"
}
 my_dict = {
     "something_something": (
@@ -68,12 +175,80 @@ def func():
     ),
 }
# Function calls as keys
tasks = {
get_key_name(
foo,
bar,
baz,
): src,
loop.run_in_executor(): src,
loop.run_in_executor(xx_xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxxxxx): src,
loop.run_in_executor(
xx_xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxxxxx_xxxxx_xxxxx
): src,
loop.run_in_executor(): (
xx_xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxxxxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx
),
}
# Dictionary comprehensions
tasks = {
key_name: (
xx_xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxxxxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx
)
for src in sources
}
tasks = {key_name: foobar for src in sources}
tasks = {
get_key_name(
src,
): "foo"
for src in sources
}
tasks = {
get_key_name(
foo,
bar,
baz,
): src
for src in sources
}
tasks = {
get_key_name(): (
xx_xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxxxxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx
)
for src in sources
}
tasks = {get_key_name(): foobar for src in sources}
# Delimiters inside the value
def foo():
def bar():
x = {
common.models.DateTimeField: (
datetime(2020, 1, 31, tzinfo=utc) + timedelta(days=i)
),
}
x = {
common.models.DateTimeField: (
datetime(2020, 1, 31, tzinfo=utc) + timedelta(days=i)
),
}
x = {
"foobar": 123 + 456,
}
x = {
"foobar": (123) + 456,
}
 my_dict = {
     "a key in my dict": (
         a_very_long_variable * and_a_very_long_function_call() / 100000.0
     )
 }
 my_dict = {
     "a key in my dict": (
         a_very_long_variable
@@ -82,7 +257,6 @@ def func():
         / 100000.0
     )
 }
 my_dict = {
     "a key in my dict": (
         MyClass.some_attribute.first_call()
@@ -113,8 +287,8 @@ def func():
 class Random:
     def func():
-        random_service.status.active_states.inactive = (
-            make_new_top_level_state_from_dict({
+        random_service.status.active_states.inactive = make_new_top_level_state_from_dict(
+            {
                 "topLevelBase": {
                     "secondaryBase": {
                         "timestamp": 1234,
@@ -125,5 +299,5 @@ def func():
                     ),
                 }
             },
-            })
+            }
         )

View File

@@ -279,7 +279,7 @@ def foo():
     "........................................................................... \\N{LAO KO LA}"
 )
-msg = lambda x: f"this is a very very very long lambda value {x} that doesn't fit on a single line"
+msg = lambda x: f"this is a very very very very long lambda value {x} that doesn't fit on a single line"
 dict_with_lambda_values = {
     "join": lambda j: (
@@ -329,6 +329,20 @@ def foo():
 log.info(f"""Skipping: {'a' == 'b'} {desc['ms_name']} {money=} {dte=} {pos_share=} {desc['status']} {desc['exposure_max']}""")
x = {
"xx_xxxxx_xxxxxxxxxx_xxxxxxxxx_xx": (
"xx:xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxx{xx}xxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx"
)
}
x = {
"xx_xxxxx_xxxxxxxxxx_xxxxxxxxx_xx": "xx:xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxx{xx}xxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx",
}
x = {
"xx_xxxxx_xxxxxxxxxx_xxxxxxxxx_xx": (
"xx:xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxx"
)
}
 # output
@@ -842,11 +856,9 @@ def foo():
     " \\N{LAO KO LA}"
 )
-msg = (
-    lambda x: (
-        f"this is a very very very long lambda value {x} that doesn't fit on a single"
-        " line"
-    )
-)
+msg = lambda x: (
+    f"this is a very very very very long lambda value {x} that doesn't fit on a"
+    " single line"
+)
 dict_with_lambda_values = {
@@ -882,7 +894,7 @@ def foo():
 log.info(
     "Skipping:"
-    f" {desc['db_id']} {foo('bar',x=123)} {'foo' != 'bar'} {(x := 'abc=')} {pos_share=} {desc['status']} {desc['exposure_max']}"
+    f' {desc["db_id"]} {foo("bar",x=123)} {"foo" != "bar"} {(x := "abc=")} {pos_share=} {desc["status"]} {desc["exposure_max"]}'
 )
 log.info(
@@ -902,7 +914,7 @@ def foo():
 log.info(
     "Skipping:"
-    f" {'a' == 'b' == 'c' == 'd'} {desc['ms_name']} {money=} {dte=} {pos_share=} {desc['status']} {desc['exposure_max']}"
+    f' {"a" == "b" == "c" == "d"} {desc["ms_name"]} {money=} {dte=} {pos_share=} {desc["status"]} {desc["exposure_max"]}'
 )
 log.info(
@@ -926,3 +938,17 @@ def foo():
 log.info(
     f"""Skipping: {'a' == 'b'} {desc['ms_name']} {money=} {dte=} {pos_share=} {desc['status']} {desc['exposure_max']}"""
 )
x = {
"xx_xxxxx_xxxxxxxxxx_xxxxxxxxx_xx": (
"xx:xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxx{xx}xxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx"
)
}
x = {
"xx_xxxxx_xxxxxxxxxx_xxxxxxxxx_xx": (
"xx:xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxxx{xx}xxx_xxxxx_xxxxxxxxx_xxxxxxxxxxxx_xxxx"
),
}
x = {
"xx_xxxxx_xxxxxxxxxx_xxxxxxxxx_xx": "xx:xxxxxxxxxxxxxxxxx_xxxxx_xxxxxxx_xxxxxxxxxx"
}

View File

@@ -552,6 +552,7 @@ async def foo(self):
 }

 # Regression test for https://github.com/psf/black/issues/3506.
+# Regressed again by https://github.com/psf/black/pull/4498
 s = (
     "With single quote: ' "
     f" {my_dict['foo']}"
@@ -1239,9 +1240,15 @@ async def foo(self):
 }

 # Regression test for https://github.com/psf/black/issues/3506.
-s = f"With single quote: ' {my_dict['foo']} With double quote: \" {my_dict['bar']}"
+# Regressed again by https://github.com/psf/black/pull/4498
+s = (
+    "With single quote: ' "
+    f" {my_dict['foo']}"
+    ' With double quote: " '
+    f' {my_dict["bar"]}'
+)
 s = (
     "Lorem Ipsum is simply dummy text of the printing and typesetting"
-    f" industry:'{my_dict['foo']}'"
+    f' industry:\'{my_dict["foo"]}\''
 )

View File

@@ -0,0 +1,246 @@
# flags: --unstable
items = [(x for x in [1])]
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
]
items = [
(
{"key1": "val1", "key2": "val2"}
if some_var == ""
else {"key": "val"}
)
]
items = [
(
"123456890123457890123468901234567890"
if some_var == "long strings"
else "123467890123467890"
)
]
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
and some_var == "long strings"
and {"key": "val"}
)
]
items = [
(
"123456890123457890123468901234567890"
and some_var == "long strings"
and "123467890123467890"
)
]
items = [
(
long_variable_name
and even_longer_variable_name
and yet_another_very_long_variable_name
)
]
# Shouldn't remove trailing commas
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
),
]
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
and some_var == "long strings"
and {"key": "val"}
),
]
items = [
(
"123456890123457890123468901234567890"
and some_var == "long strings"
and "123467890123467890"
),
]
items = [
(
long_variable_name
and even_longer_variable_name
and yet_another_very_long_variable_name
),
]
# Shouldn't add parentheses
items = [
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
]
items = [{"key1": "val1", "key2": "val2"} if some_var == "" else {"key": "val"}]
# Shouldn't crash with comments
items = [
( # comment
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
]
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
) # comment
]
items = [ # comment
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
]
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
] # comment
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"} # comment
if some_var == "long strings"
else {"key": "val"}
)
]
items = [ # comment
( # comment
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
]
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
) # comment
] # comment
# output
items = [(x for x in [1])]
items = [
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
]
items = [{"key1": "val1", "key2": "val2"} if some_var == "" else {"key": "val"}]
items = [
"123456890123457890123468901234567890"
if some_var == "long strings"
else "123467890123467890"
]
items = [
{"key1": "val1", "key2": "val2", "key3": "val3"}
and some_var == "long strings"
and {"key": "val"}
]
items = [
"123456890123457890123468901234567890"
and some_var == "long strings"
and "123467890123467890"
]
items = [
long_variable_name
and even_longer_variable_name
and yet_another_very_long_variable_name
]
# Shouldn't remove trailing commas
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
),
]
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
and some_var == "long strings"
and {"key": "val"}
),
]
items = [
(
"123456890123457890123468901234567890"
and some_var == "long strings"
and "123467890123467890"
),
]
items = [
(
long_variable_name
and even_longer_variable_name
and yet_another_very_long_variable_name
),
]
# Shouldn't add parentheses
items = [
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
]
items = [{"key1": "val1", "key2": "val2"} if some_var == "" else {"key": "val"}]
# Shouldn't crash with comments
items = [ # comment
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
]
items = [
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
] # comment
items = [ # comment
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
]
items = [
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
] # comment
items = [
{"key1": "val1", "key2": "val2", "key3": "val3"} # comment
if some_var == "long strings"
else {"key": "val"}
]
items = [ # comment # comment
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
]
items = [
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
] # comment # comment

View File

@@ -1,6 +1,3 @@
-# flags: --minimum-version=3.7
 def f():
     return (i * 2 async for i in arange(42))
@@ -13,6 +10,7 @@ def g():
 async def func():
+    await ...
     if test:
         out_batched = [
             i
@@ -45,6 +43,7 @@ def g():
 async def func():
+    await ...
     if test:
         out_batched = [
             i

View File

@@ -1,6 +1,3 @@
-# flags: --minimum-version=3.8
 def starred_return():
     my_list = ["value2", "value3"]
     return "value1", *my_list

View File

@@ -1,5 +1,3 @@
-# flags: --minimum-version=3.9
 @relaxed_decorator[0]
 def f():
     ...

View File

@@ -0,0 +1,157 @@
items = [(123)]
items = [(True)]
items = [(((((True)))))]
items = [(((((True,)))))]
items = [((((()))))]
items = [(x for x in [1])]
items = {(123)}
items = {(True)}
items = {(((((True)))))}
# Requires `hug_parens_with_braces_and_square_brackets` unstable style to remove parentheses
# around multiline values
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
]
items = [
(
{"key1": "val1", "key2": "val2"}
if some_var == ""
else {"key": "val"}
)
]
# Comments should not cause crashes
items = [
( # comment
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
]
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
) # comment
]
items = [ # comment
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
]
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
] # comment
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"} # comment
if some_var == "long strings"
else {"key": "val"}
)
]
items = [ # comment
( # comment
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
]
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
) # comment
] # comment
# output
items = [123]
items = [True]
items = [True]
items = [(True,)]
items = [()]
items = [(x for x in [1])]
items = {123}
items = {True}
items = {True}
# Requires `hug_parens_with_braces_and_square_brackets` unstable style to remove parentheses
# around multiline values
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
]
items = [{"key1": "val1", "key2": "val2"} if some_var == "" else {"key": "val"}]
# Comments should not cause crashes
items = [
( # comment
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
]
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
) # comment
]
items = [ # comment
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
]
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
] # comment
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"} # comment
if some_var == "long strings"
else {"key": "val"}
)
]
items = [ # comment
( # comment
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
)
]
items = [
(
{"key1": "val1", "key2": "val2", "key3": "val3"}
if some_var == "long strings"
else {"key": "val"}
) # comment
] # comment


@@ -1,4 +1,4 @@
-# flags: --minimum-version=3.10 --preview --line-length=79
+# flags: --minimum-version=3.10 --line-length=79
 match 1:
     case _ if (True):


@@ -1,4 +1,3 @@
-# flags: --minimum-version=3.9
 with (open("bla.txt")):
     pass
@@ -54,6 +53,19 @@
 with ((((CtxManager1()))) as example1, (((CtxManager2()))) as example2):
     ...
+# regression tests for #3678
+with (a, *b):
+    pass
+with (a, (b, *c)):
+    pass
+with (a for b in c):
+    pass
+with (a, (b for c in d)):
+    pass
 # output
 with open("bla.txt"):
     pass
@@ -118,3 +130,16 @@
 with CtxManager1() as example1, CtxManager2() as example2:
     ...
+# regression tests for #3678
+with (a, *b):
+    pass
+with a, (b, *c):
+    pass
+with (a for b in c):
+    pass
+with a, (b for c in d):
+    pass


@@ -0,0 +1,163 @@
# flags: --minimum-version=3.12 --skip-magic-trailing-comma
def plain[T, B](a: T, b: T) -> T:
    return a
def arg_magic[T, B](a: T, b: T,) -> T:
    return a
def type_param_magic[T, B,](a: T, b: T) -> T:
    return a
def both_magic[T, B,](a: T, b: T,) -> T:
    return a
def plain_multiline[
    T,
    B
](
    a: T,
    b: T
) -> T:
    return a
def arg_magic_multiline[
    T,
    B
](
    a: T,
    b: T,
) -> T:
    return a
def type_param_magic_multiline[
    T,
    B,
](
    a: T,
    b: T
) -> T:
    return a
def both_magic_multiline[
    T,
    B,
](
    a: T,
    b: T,
) -> T:
    return a
def plain_mixed1[
    T,
    B
](a: T, b: T) -> T:
    return a
def plain_mixed2[T, B](
    a: T,
    b: T
) -> T:
    return a
def arg_magic_mixed1[
    T,
    B
](a: T, b: T,) -> T:
    return a
def arg_magic_mixed2[T, B](
    a: T,
    b: T,
) -> T:
    return a
def type_param_magic_mixed1[
    T,
    B,
](a: T, b: T) -> T:
    return a
def type_param_magic_mixed2[T, B,](
    a: T,
    b: T
) -> T:
    return a
def both_magic_mixed1[
    T,
    B,
](a: T, b: T,) -> T:
    return a
def both_magic_mixed2[T, B,](
    a: T,
    b: T,
) -> T:
    return a
# output
def plain[T, B](a: T, b: T) -> T:
    return a
def arg_magic[T, B](a: T, b: T) -> T:
    return a
def type_param_magic[T, B](a: T, b: T) -> T:
    return a
def both_magic[T, B](a: T, b: T) -> T:
    return a
def plain_multiline[T, B](a: T, b: T) -> T:
    return a
def arg_magic_multiline[T, B](a: T, b: T) -> T:
    return a
def type_param_magic_multiline[T, B](a: T, b: T) -> T:
    return a
def both_magic_multiline[T, B](a: T, b: T) -> T:
    return a
def plain_mixed1[T, B](a: T, b: T) -> T:
    return a
def plain_mixed2[T, B](a: T, b: T) -> T:
    return a
def arg_magic_mixed1[T, B](a: T, b: T) -> T:
    return a
def arg_magic_mixed2[T, B](a: T, b: T) -> T:
    return a
def type_param_magic_mixed1[T, B](a: T, b: T) -> T:
    return a
def type_param_magic_mixed2[T, B](a: T, b: T) -> T:
    return a
def both_magic_mixed1[T, B](a: T, b: T) -> T:
    return a
def both_magic_mixed2[T, B](a: T, b: T) -> T:
    return a


@@ -20,6 +20,8 @@ def trailing_comma1[T=int,](a: str):
 def trailing_comma2[T=int](a: str,):
     pass
+def weird_syntax[T=lambda: 42, **P=lambda: 43, *Ts=lambda: 44](): pass
 # output
 type A[T = int] = float
@@ -37,25 +39,31 @@ def trailing_comma2[T=int](a: str,):
 ] = something_that_is_long
-def simple[
-    T = something_that_is_long
-](short1: int, short2: str, short3: bytes) -> float:
+def simple[T = something_that_is_long](
+    short1: int, short2: str, short3: bytes
+) -> float:
     pass
-def longer[
-    something_that_is_long = something_that_is_long
-](something_that_is_long: something_that_is_long) -> something_that_is_long:
+def longer[something_that_is_long = something_that_is_long](
+    something_that_is_long: something_that_is_long,
+) -> something_that_is_long:
     pass
 def trailing_comma1[
     T = int,
-](a: str):
+](
+    a: str,
+):
     pass
-def trailing_comma2[
-    T = int
-](a: str,):
+def trailing_comma2[T = int](
+    a: str,
+):
     pass
+def weird_syntax[T = lambda: 42, **P = lambda: 43, *Ts = lambda: 44]():
+    pass


@@ -13,6 +13,8 @@ def it_gets_worse[WhatIsTheLongestTypeVarNameYouCanThinkOfEnoughToMakeBlackSplit
 def magic[Trailing, Comma,](): pass
+def weird_syntax[T: lambda: 42, U: a or b](): pass
 # output
@@ -56,3 +58,7 @@ def magic[
     Comma,
 ]():
     pass
+def weird_syntax[T: lambda: 42, U: a or b]():
+    pass


@@ -1,4 +1,3 @@
-# flags: --preview
 def long_function_name_goes_here(
     x: Callable[List[int]]
 ) -> Union[List[int], float, str, bytes, Tuple[int]]:


@@ -1,9 +1,9 @@
 # flags: --preview
-# This is testing an issue that is specific to the preview style
+# This is testing an issue that is specific to the preview style (wrap_long_dict_values_in_parens)
 {
     "is_update": (up := commit.hash in update_hashes)
 }
 # output
-# This is testing an issue that is specific to the preview style
+# This is testing an issue that is specific to the preview style (wrap_long_dict_values_in_parens)
 {"is_update": (up := commit.hash in update_hashes)}


@@ -232,8 +232,6 @@ file_input
 fstring
 FSTRING_START
 "f'"
-FSTRING_MIDDLE
-''
 fstring_replacement_field
 LBRACE
 '{'
@@ -242,8 +240,6 @@ file_input
 RBRACE
 '}'
 /fstring_replacement_field
-FSTRING_MIDDLE
-''
 fstring_replacement_field
 LBRACE
 '{'
@@ -252,8 +248,6 @@ file_input
 RBRACE
 '}'
 /fstring_replacement_field
-FSTRING_MIDDLE
-''
 FSTRING_END
 "'"
 /fstring
@@ -399,8 +393,6 @@ file_input
 fstring
 FSTRING_START
 "f'"
-FSTRING_MIDDLE
-''
 fstring_replacement_field
 LBRACE
 '{'
@@ -419,8 +411,6 @@ file_input
 RBRACE
 '}'
 /fstring_replacement_field
-FSTRING_MIDDLE
-''
 FSTRING_END
 "'"
 /fstring
@@ -549,8 +539,6 @@ file_input
 fstring
 FSTRING_START
 "f'"
-FSTRING_MIDDLE
-''
 fstring_replacement_field
 LBRACE
 '{'
@@ -559,8 +547,6 @@ file_input
 RBRACE
 '}'
 /fstring_replacement_field
-FSTRING_MIDDLE
-''
 fstring_replacement_field
 LBRACE
 '{'
@@ -569,8 +555,6 @@ file_input
 RBRACE
 '}'
 /fstring_replacement_field
-FSTRING_MIDDLE
-''
 FSTRING_END
 "'"
 /fstring
@@ -660,8 +644,6 @@ file_input
 RBRACE
 '}'
 /fstring_replacement_field
-FSTRING_MIDDLE
-''
 FSTRING_END
 "'"
 /fstring
@@ -744,8 +726,6 @@ file_input
 RBRACE
 '}'
 /fstring_replacement_field
-FSTRING_MIDDLE
-''
 FSTRING_END
 "'"
 /fstring


@@ -18,7 +18,7 @@
 import logging
 import re
 from functools import lru_cache
-from typing import TYPE_CHECKING, FrozenSet, List, Set
+from typing import TYPE_CHECKING
 import pytest
@@ -46,8 +46,8 @@
     from _pytest.nodes import Node
-ALL_POSSIBLE_OPTIONAL_MARKERS = StashKey[FrozenSet[str]]()
-ENABLED_OPTIONAL_MARKERS = StashKey[FrozenSet[str]]()
+ALL_POSSIBLE_OPTIONAL_MARKERS = StashKey[frozenset[str]]()
+ENABLED_OPTIONAL_MARKERS = StashKey[frozenset[str]]()
 def pytest_addoption(parser: "Parser") -> None:
@@ -69,7 +69,7 @@ def pytest_configure(config: "Config") -> None:
     """
     ot_ini = config.inicfg.get("optional-tests") or []
     ot_markers = set()
-    ot_run: Set[str] = set()
+    ot_run: set[str] = set()
     if isinstance(ot_ini, str):
         ot_ini = ot_ini.strip().split("\n")
     marker_re = re.compile(r"^\s*(?P<no>no_)?(?P<marker>\w+)(:\s*(?P<description>.*))?")
@@ -103,7 +103,7 @@ def pytest_configure(config: "Config") -> None:
     store[ENABLED_OPTIONAL_MARKERS] = frozenset(ot_run)
-def pytest_collection_modifyitems(config: "Config", items: "List[Node]") -> None:
+def pytest_collection_modifyitems(config: "Config", items: "list[Node]") -> None:
     store = config._store
     all_possible_optional_markers = store[ALL_POSSIBLE_OPTIONAL_MARKERS]
     enabled_optional_markers = store[ENABLED_OPTIONAL_MARKERS]
@@ -120,7 +120,7 @@ def pytest_collection_modifyitems(config: "Config", items: "List[Node]") -> None
 @lru_cache
-def skip_mark(tests: FrozenSet[str]) -> "MarkDecorator":
+def skip_mark(tests: frozenset[str]) -> "MarkDecorator":
     names = ", ".join(sorted(tests))
     return pytest.mark.skip(reason=f"Marked with disabled optional tests ({names})")

Some files were not shown because too many files have changed in this diff