Compare commits

...

35 Commits
25.1.0 ... main

Author SHA1 Message Date
GiGaGon
7987951e24
Convert legacy string formatting to f-strings (#4685)
* the changes

* Update driver.py
2025-06-05 18:51:26 -07:00
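The pattern this commit applies (visible in the gallery.py hunk further down the page) swaps printf-style `%` interpolation for an equivalent f-string. A minimal sketch with an illustrative value:

```python
# Legacy %-interpolation vs. the equivalent f-string, mirroring the
# gallery.py change in this commit; `commit` here is an illustrative value.
commit = "7987951e24"
legacy = "%s-black" % commit
modern = f"{commit}-black"
assert legacy == modern == "7987951e24-black"
```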
GiGaGon
e5e5dad792
Fix await ellipses and remove async/await soft keyword/identifier support (#4676)
* Update tokenize.py

* Update driver.py

* Update test_black.py

* Update test_black.py

* Update python37.py

* Update tokenize.py

* Update CHANGES.md

* Update CHANGES.md

* Update faq.md

* Update driver.py

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-06-05 18:50:42 -07:00
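For context, `await ...` is syntactically valid Python even though awaiting a bare `Ellipsis` fails at runtime. A compile-only sketch (standard library only) of the construct Black previously crashed on:

```python
# `await ...` parses fine: `...` is just an expression (the Ellipsis literal).
# Running f() would raise TypeError, but a formatter only needs it to parse.
src = "async def f():\n    await ...\n"
compile(src, "<string>", "exec")  # no SyntaxError
```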
GiGaGon
24e4cb20ab
Fix backslash cr nl bug (#4673)
* Update tokenize.py

* Update CHANGES.md

* Update test_black.py

* Update test_black.py

* Update test_black.py
2025-06-05 18:49:15 -07:00
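The bug title refers to a backslash line continuation followed by a Windows-style `\r\n` line ending; such source is valid Python, as this standard-library-only sketch shows:

```python
# A backslash continuation followed by CRLF is legal: compile() normalizes
# line endings, so this source means `x = 1 + 2`. Black's tokenizer
# crashed on inputs like this before the fix (#4673).
src = "x = 1 + \\\r\n2\n"
namespace = {}
exec(compile(src, "<string>", "exec"), namespace)
assert namespace["x"] == 3
```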
GiGaGon
e7bf7b4619
Fix CI mypyc 1.16 failure (#4671) 2025-05-29 14:10:29 -07:00
cobalt
71e380aedf
CI: Remove now-uneeded workarounds (#4665) 2025-05-25 18:23:42 -05:00
dependabot[bot]
2630801f95
Bump pypa/cibuildwheel from 2.22.0 to 2.23.3 (#4660)
Bumps [pypa/cibuildwheel](https://github.com/pypa/cibuildwheel) from 2.22.0 to 2.23.3.
- [Release notes](https://github.com/pypa/cibuildwheel/releases)
- [Changelog](https://github.com/pypa/cibuildwheel/blob/main/docs/changelog.md)
- [Commits](https://github.com/pypa/cibuildwheel/compare/v2.22.0...v2.23.3)

---
updated-dependencies:
- dependency-name: pypa/cibuildwheel
  dependency-version: 2.23.3
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-05-15 07:22:11 -05:00
danigm
b0f36f5b42
Update test_code_option_safe to work with click 8.2.0 (#4666) 2025-05-15 07:04:00 -05:00
cobalt
314f8cf92b
Update Prettier pre-commit configuration (#4662)
* Update Prettier configuration

Signed-off-by: cobalt <61329810+cobaltt7@users.noreply.github.com>

* Update .github/workflows/diff_shades.yml

Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>

---------

Signed-off-by: cobalt <61329810+cobaltt7@users.noreply.github.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
2025-05-11 19:21:50 -05:00
Pedro Mezacasa Muller
d0ff3bd6cb
Fix crash when a tuple is used as a ContextManager (#4646) 2025-04-08 21:42:17 -07:00
pre-commit-ci[bot]
a41dc89f1f
[pre-commit.ci] pre-commit autoupdate (#4644)
updates:
- [github.com/pycqa/isort: 5.13.2 → 6.0.1](https://github.com/pycqa/isort/compare/5.13.2...6.0.1)
- [github.com/pycqa/flake8: 7.1.1 → 7.2.0](https://github.com/pycqa/flake8/compare/7.1.1...7.2.0)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
2025-04-07 14:45:01 -07:00
Tushar Sadhwani
950ec38c11
Disallow unwrapping tuples in an as clause (#4634) 2025-04-01 07:49:37 -07:00
Tushar Sadhwani
2c135edf37
Handle # fmt: skip followed by a comment (#4635) 2025-03-22 19:30:40 -07:00
Tushar Sadhwani
6144c46c6a
Fix parsing of walrus operator in complex with statements (#4630) 2025-03-20 14:00:11 -07:00
Tsvika Shapira
dd278cb316
update github-action to look for black version in "dependency-groups" (#4606)
"dependency-groups" is the mechanism for storing package requirements in `pyproject.toml`, recommended for formatting tools (see https://packaging.python.org/en/latest/specifications/dependency-groups/ )

This change allows the black action to also look in those locations when determining the version of black to install.
2025-03-20 08:01:31 -07:00
Tushar Sadhwani
dbb14eac93
Recursively unwrap tuples in del statements (#4628) 2025-03-19 15:02:40 -07:00
Tushar Sadhwani
5342d2eeda
Replace the blib2to3 tokenizer with pytokens (#4536) 2025-03-15 17:41:19 -07:00
Glyph
9f38928414
github is deprecating the ubuntu 20.04 actions runner image (#4607)
see https://github.com/actions/runner-images/issues/11101
2025-03-05 18:26:00 -08:00
Pedro Mezacasa Muller
3e9dd25dad
Fix bug where # fmt: skip is not being respected with one-liner functions (#4552) 2025-03-03 15:11:21 -08:00
dependabot[bot]
bb802cf19a
Bump sphinx from 8.2.1 to 8.2.3 in /docs (#4603)
Bumps [sphinx](https://github.com/sphinx-doc/sphinx) from 8.2.1 to 8.2.3.
- [Release notes](https://github.com/sphinx-doc/sphinx/releases)
- [Changelog](https://github.com/sphinx-doc/sphinx/blob/master/CHANGES.rst)
- [Commits](https://github.com/sphinx-doc/sphinx/compare/v8.2.1...v8.2.3)

---
updated-dependencies:
- dependency-name: sphinx
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-03-03 06:24:03 -08:00
Jelle Zijlstra
5ae38dd370
Fix parser for TypeVar bounds (#4602) 2025-03-03 00:20:23 -08:00
rdrll
45cbe572ee
Add regression tests for Black’s previous inconsistent quote formatting with adjacent string literals (#4580) 2025-03-02 19:23:58 -08:00
Hugo van Kemenade
fccd70cff1
Update top-pypi-packages filename (#4598)
To stay within quota, it now has just under 30 days of data, so the filename has been updated. Both will be available for a while. See https://github.com/hugovk/top-pypi-packages/pull/46.
2025-03-02 08:09:40 -08:00
🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко)
00c0d6d91a
📦 Tell git archive to include numbered tags (#4593)
The wildcard at the beginning used to match tags with arbitrary
prefixes otherwise. This patch corrects that, making it more accurate.
2025-02-28 16:09:40 -08:00
🇺🇦 Sviatoslav Sydorenko (Святослав Сидоренко)
0580ecbef3
📦 Make Git archives for tags immutable (#4592)
This change will help with reproducibility in downstreams.

Ref: https://setuptools-scm.rtfd.io/en/latest/usage/#git-archives
2025-02-27 09:08:50 -08:00
Michael R. Crusoe
ed64d89faa
additional fix for click 8.2.0 (#4591) 2025-02-27 08:46:59 -08:00
dependabot[bot]
452d3b68f4
Bump sphinx from 8.1.3 to 8.2.1 in /docs (#4587)
Bumps [sphinx](https://github.com/sphinx-doc/sphinx) from 8.1.3 to 8.2.1.
- [Release notes](https://github.com/sphinx-doc/sphinx/releases)
- [Changelog](https://github.com/sphinx-doc/sphinx/blob/v8.2.1/CHANGES.rst)
- [Commits](https://github.com/sphinx-doc/sphinx/compare/v8.1.3...v8.2.1)

---
updated-dependencies:
- dependency-name: sphinx
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-24 05:19:48 -08:00
sobolevn
256f3420b1
Add --local-partial-types and --strict-bytes to mypy (#4583) 2025-02-20 15:27:23 -08:00
dependabot[bot]
00cb6d15c5
Bump myst-parser from 4.0.0 to 4.0.1 in /docs (#4578)
Bumps [myst-parser](https://github.com/executablebooks/MyST-Parser) from 4.0.0 to 4.0.1.
- [Release notes](https://github.com/executablebooks/MyST-Parser/releases)
- [Changelog](https://github.com/executablebooks/MyST-Parser/blob/master/CHANGELOG.md)
- [Commits](https://github.com/executablebooks/MyST-Parser/compare/v4.0.0...v4.0.1)

---
updated-dependencies:
- dependency-name: myst-parser
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-02-18 21:16:59 -08:00
MeggyCal
14e1de805a
mix_stderr parameter was removed from click 8.2.0 (#4577) 2025-02-18 07:30:11 -08:00
GiGaGon
5f23701708
Fix diff shades CI (#4576) 2025-02-06 18:59:16 -08:00
GiGaGon
9c129567e7
Re-add packaging CHANGES.md comment (#4568) 2025-01-29 14:29:55 -08:00
Michał Górny
c02ca47daa
Fix mis-synced version check in black.vim (#4567)
The message has been updated to indicate Python 3.9+, but the check
still compares to 3.8.
2025-01-29 12:25:00 -08:00
Jelle Zijlstra
edaf085a18 new changelog template 2025-01-28 21:55:27 -08:00
Jelle Zijlstra
b844c8a136
unhack pyproject.toml (#4566) 2025-01-28 21:54:46 -08:00
Jelle Zijlstra
d82da0f0e9
Fix hatch build (#4565) 2025-01-28 20:52:03 -08:00
44 changed files with 618 additions and 1286 deletions

View File

@@ -1,4 +1,3 @@
 node: $Format:%H$
 node-date: $Format:%cI$
-describe-name: $Format:%(describe:tags=true,match=*[0-9]*)$
+describe-name: $Format:%(describe:tags=true,match=[0-9]*)$
-ref-names: $Format:%D$

View File

@@ -34,7 +34,8 @@ jobs:
       env:
         GITHUB_TOKEN: ${{ github.token }}
       run: >
-        python scripts/diff_shades_gha_helper.py config ${{ github.event_name }} ${{ matrix.mode }}
+        python scripts/diff_shades_gha_helper.py config ${{ github.event_name }}
+        ${{ matrix.mode }}

   analysis:
     name: analysis / ${{ matrix.mode }}
@@ -48,7 +49,7 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        include: ${{ fromJson(needs.configure.outputs.matrix )}}
+        include: ${{ fromJson(needs.configure.outputs.matrix) }}

     steps:
       - name: Checkout this repository (full clone)
@@ -110,19 +111,19 @@ jobs:
           ${{ matrix.baseline-analysis }} ${{ matrix.target-analysis }}

       - name: Upload diff report
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: ${{ matrix.mode }}-diff.html
           path: diff.html

       - name: Upload baseline analysis
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: ${{ matrix.baseline-analysis }}
           path: ${{ matrix.baseline-analysis }}

       - name: Upload target analysis
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: ${{ matrix.target-analysis }}
           path: ${{ matrix.target-analysis }}
@@ -130,14 +131,13 @@ jobs:
       - name: Generate summary file (PR only)
         if: github.event_name == 'pull_request' && matrix.mode == 'preview-changes'
         run: >
-          python helper.py comment-body
-          ${{ matrix.baseline-analysis }} ${{ matrix.target-analysis }}
-          ${{ matrix.baseline-sha }} ${{ matrix.target-sha }}
-          ${{ github.event.pull_request.number }}
+          python helper.py comment-body ${{ matrix.baseline-analysis }}
+          ${{ matrix.target-analysis }} ${{ matrix.baseline-sha }}
+          ${{ matrix.target-sha }} ${{ github.event.pull_request.number }}

       - name: Upload summary file (PR only)
         if: github.event_name == 'pull_request' && matrix.mode == 'preview-changes'
-        uses: actions/upload-artifact@v3
+        uses: actions/upload-artifact@v4
         with:
           name: .pr-comment.json
           path: .pr-comment.json

View File

@@ -92,7 +92,7 @@ jobs:
     steps:
       - uses: actions/checkout@v4
         # Keep cibuildwheel version in sync with above
-      - uses: pypa/cibuildwheel@v2.22.0
+      - uses: pypa/cibuildwheel@v2.23.3
         with:
           only: ${{ matrix.only }}

View File

@@ -13,13 +13,13 @@ jobs:
     strategy:
       fail-fast: false
       matrix:
-        os: [windows-2019, ubuntu-20.04, macos-latest]
+        os: [windows-2019, ubuntu-22.04, macos-latest]
         include:
           - os: windows-2019
             pathsep: ";"
             asset_name: black_windows.exe
             executable_mime: "application/vnd.microsoft.portable-executable"
-          - os: ubuntu-20.04
+          - os: ubuntu-22.04
             pathsep: ":"
             asset_name: black_linux
             executable_mime: "application/x-executable"

View File

@@ -24,12 +24,12 @@ repos:
         additional_dependencies: *version_check_dependencies

   - repo: https://github.com/pycqa/isort
-    rev: 5.13.2
+    rev: 6.0.1
     hooks:
       - id: isort

   - repo: https://github.com/pycqa/flake8
-    rev: 7.1.1
+    rev: 7.2.0
     hooks:
       - id: flake8
         additional_dependencies:
@@ -39,17 +39,21 @@ repos:
         exclude: ^src/blib2to3/

   - repo: https://github.com/pre-commit/mirrors-mypy
-    rev: v1.14.1
+    rev: v1.15.0
     hooks:
       - id: mypy
         exclude: ^(docs/conf.py|scripts/generate_schema.py)$
         args: []
         additional_dependencies: &mypy_deps
           - types-PyYAML
+          - types-atheris
           - tomli >= 0.2.6, < 2.0.0
-          - click >= 8.1.0, != 8.1.4, != 8.1.5
+          # Click is intentionally out-of-sync with pyproject.toml
+          # v8.2 has breaking changes. We work around them at runtime, but we need the newer stubs.
+          - click >= 8.2.0
           - packaging >= 22.0
           - platformdirs >= 2.1.0
+          - pytokens >= 0.1.10
           - pytest
           - hypothesis
           - aiohttp >= 3.7.4
@@ -62,11 +66,11 @@ repos:
         args: ["--python-version=3.10"]
         additional_dependencies: *mypy_deps

-  - repo: https://github.com/pre-commit/mirrors-prettier
-    rev: v4.0.0-alpha.8
+  - repo: https://github.com/rbubley/mirrors-prettier
+    rev: v3.5.3
     hooks:
       - id: prettier
-        types_or: [css, javascript, html, json, yaml]
+        types_or: [markdown, yaml, json]
         exclude: \.github/workflows/diff_shades\.yml

   - repo: https://github.com/pre-commit/pre-commit-hooks

View File

@@ -1,11 +1,80 @@
 # Change Log

+## Unreleased
+
+### Highlights
+
+<!-- Include any especially major or disruptive changes here -->
+
+### Stable style
+
+<!-- Changes that affect Black's stable style -->
+
+- Fix crash while formatting a long `del` statement containing tuples (#4628)
+- Fix crash while formatting expressions using the walrus operator in complex `with`
+  statements (#4630)
+- Handle `# fmt: skip` followed by a comment at the end of file (#4635)
+- Fix crash when a tuple appears in the `as` clause of a `with` statement (#4634)
+- Fix crash when tuple is used as a context manager inside a `with` statement (#4646)
+- Fix crash on a `\\r\n` (#4673)
+- Fix crash on `await ...` (where `...` is a literal `Ellipsis`) (#4676)
+- Remove support for pre-python 3.7 `await/async` as soft keywords/variable names
+  (#4676)
+
+### Preview style
+
+<!-- Changes that affect Black's preview style -->
+
+- Fix a bug where one-liner functions/conditionals marked with `# fmt: skip` would
+  still be formatted (#4552)
+
+### Configuration
+
+<!-- Changes to how Black can be configured -->
+
+### Packaging
+
+<!-- Changes to how Black is packaged, such as dependency requirements -->
+
+### Parser
+
+<!-- Changes to the parser or to version autodetection -->
+
+- Rewrite tokenizer to improve performance and compliance (#4536)
+- Fix bug where certain unusual expressions (e.g., lambdas) were not accepted in type
+  parameter bounds and defaults. (#4602)
+
+### Performance
+
+<!-- Changes that improve Black's performance. -->
+
+### Output
+
+<!-- Changes to Black's terminal output and error messages -->
+
+### _Blackd_
+
+<!-- Changes to blackd -->
+
+### Integrations
+
+<!-- For example, Docker, GitHub Actions, pre-commit, editors -->
+
+- Fix the version check in the vim file to reject Python 3.8 (#4567)
+- Enhance GitHub Action `psf/black` to read Black version from an additional section in
+  pyproject.toml: `[project.dependency-groups]` (#4606)
+
+### Documentation
+
+<!-- Major changes to documentation and policies. Small docs changes
+don't need a changelog entry. -->
+
 ## 25.1.0

 ### Highlights

-This release introduces the new 2025 stable style (#4558), stabilizing
-the following changes:
+This release introduces the new 2025 stable style (#4558), stabilizing the following
+changes:

 - Normalize casing of Unicode escape characters in strings to lowercase (#2916)
 - Fix inconsistencies in whether certain strings are detected as docstrings (#4095)
@@ -13,15 +82,16 @@ the following changes:
 - Remove redundant parentheses in if guards for case blocks (#4214)
 - Add parentheses to if clauses in case blocks when the line is too long (#4269)
 - Whitespace before `# fmt: skip` comments is no longer normalized (#4146)
-- Fix line length computation for certain expressions that involve the power operator (#4154)
+- Fix line length computation for certain expressions that involve the power operator
+  (#4154)
 - Check if there is a newline before the terminating quotes of a docstring (#4185)
 - Fix type annotation spacing between `*` and more complex type variable tuple (#4440)

 The following changes were not in any previous release:

 - Remove parentheses around sole list items (#4312)
-- Generic function definitions are now formatted more elegantly: parameters are
-  split over multiple lines first instead of type parameter definitions (#4553)
+- Generic function definitions are now formatted more elegantly: parameters are split
+  over multiple lines first instead of type parameter definitions (#4553)

 ### Stable style
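The crash fixes in the Unreleased stable-style list above all concern unusual but valid constructs. A compile-only sketch (the snippets are hand-written illustrations, not taken from Black's test suite):

```python
# Valid Python that earlier Black releases crashed on while formatting.
snippets = [
    "del (a, b), (c, d)\n",                        # tuples in a del statement (#4628)
    "with (ctx := open('f')) as fh:\n    pass\n",  # walrus in a with statement (#4630)
    "with (a, b):\n    pass\n",                    # tuple as context manager (#4646)
]
for src in snippets:
    compile(src, "<string>", "exec")  # parses; nothing is executed
```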

View File

@@ -137,8 +137,8 @@ SQLAlchemy, Poetry, PyPA applications (Warehouse, Bandersnatch, Pipenv, virtuale
 pandas, Pillow, Twisted, LocalStack, every Datadog Agent Integration, Home Assistant,
 Zulip, Kedro, OpenOA, FLORIS, ORBIT, WOMBAT, and many more.

-The following organizations use _Black_: Dropbox, KeepTruckin, Lyft, Mozilla,
-Quora, Duolingo, QuantumBlack, Tesla, Archer Aviation.
+The following organizations use _Black_: Dropbox, KeepTruckin, Lyft, Mozilla, Quora,
+Duolingo, QuantumBlack, Tesla, Archer Aviation.

 Are we missing anyone? Let us know.

View File

@@ -71,6 +71,7 @@ def read_version_specifier_from_pyproject() -> str:
         return f"=={version}"

     arrays = [
+        *pyproject.get("dependency-groups", {}).values(),
         pyproject.get("project", {}).get("dependencies"),
         *pyproject.get("project", {}).get("optional-dependencies", {}).values(),
     ]
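The one-line change above extends the list of `pyproject.toml` arrays the action scans. A hypothetical standalone sketch of the same lookup order, where the dict literal stands in for a parsed `pyproject.toml`:

```python
# `dependency-groups` values are now scanned alongside project dependencies.
pyproject = {
    "dependency-groups": {"dev": ["black==25.1.0"]},
    "project": {"dependencies": ["requests"], "optional-dependencies": {}},
}
arrays = [
    *pyproject.get("dependency-groups", {}).values(),
    pyproject.get("project", {}).get("dependencies"),
    *pyproject.get("project", {}).get("optional-dependencies", {}).values(),
]
pins = [req for arr in arrays if arr for req in arr if req.startswith("black")]
assert pins == ["black==25.1.0"]
```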

View File

@@ -75,7 +75,7 @@ def _initialize_black_env(upgrade=False):
         return True

     pyver = sys.version_info[:3]
-    if pyver < (3, 8):
+    if pyver < (3, 9):
         print("Sorry, Black requires Python 3.9+ to run.")
         return False

View File

@@ -29,8 +29,8 @@ frequently than monthly nets rapidly diminishing returns.
 **You must have `write` permissions for the _Black_ repository to cut a release.**

 The 10,000 foot view of the release process is that you prepare a release PR and then
-publish a [GitHub Release]. This triggers [release automation](#release-workflows) that builds
-all release artifacts and publishes them to the various platforms we publish to.
+publish a [GitHub Release]. This triggers [release automation](#release-workflows) that
+builds all release artifacts and publishes them to the various platforms we publish to.

 We now have a `scripts/release.py` script to help with cutting the release PRs.
@@ -96,8 +96,9 @@ In the end, use your best judgement and ask other maintainers for their thoughts
 ## Release workflows

-All of _Black_'s release automation uses [GitHub Actions]. All workflows are therefore configured
-using YAML files in the `.github/workflows` directory of the _Black_ repository.
+All of _Black_'s release automation uses [GitHub Actions]. All workflows are therefore
+configured using YAML files in the `.github/workflows` directory of the _Black_
+repository.

 They are triggered by the publication of a [GitHub Release].

View File

@@ -93,6 +93,8 @@ Support for formatting Python 2 code was removed in version 22.0. While we've ma
 plans to stop supporting older Python 3 minor versions immediately, their support might
 also be removed some time in the future without a deprecation period.

+`await`/`async` as soft keywords/identifiers are no longer supported as of 25.2.0.
+
 Runtime support for 3.6 was removed in version 22.10.0, for 3.7 in version 23.7.0, and
 for 3.8 in version 24.10.0.
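Since Python 3.7 made `async`/`await` hard keywords, code that uses them as identifiers is a syntax error in every interpreter Black still runs on, which is why dropping the soft-keyword support is safe. For instance:

```python
# `async` as a variable name was only legal in Python 3.5/3.6.
try:
    compile("async = 1\n", "<string>", "exec")
except SyntaxError:
    print("rejected: async is a hard keyword")
```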

View File

@@ -37,10 +37,10 @@ the `pyproject.toml` file. `version` can be any
 [valid version specifier](https://packaging.python.org/en/latest/glossary/#term-Version-Specifier)
 or just the version number if you want an exact version. To read the version from the
 `pyproject.toml` file instead, set `use_pyproject` to `true`. This will first look into
-the `tool.black.required-version` field, then the `project.dependencies` array and
-finally the `project.optional-dependencies` table. The action defaults to the latest
-release available on PyPI. Only versions available from PyPI are supported, so no commit
-SHAs or branch names.
+the `tool.black.required-version` field, then the `dependency-groups` table, then the
+`project.dependencies` array and finally the `project.optional-dependencies` table. The
+action defaults to the latest release available on PyPI. Only versions available from
+PyPI are supported, so no commit SHAs or branch names.

 If you want to include Jupyter Notebooks, _Black_ must be installed with the `jupyter`
 extra. Installing the extra and including Jupyter Notebook files can be configured via

View File

@@ -1,7 +1,7 @@
 # Used by ReadTheDocs; pinned requirements for stability.

-myst-parser==4.0.0
-Sphinx==8.1.3
+myst-parser==4.0.1
+Sphinx==8.2.3

 # Older versions break Sphinx even though they're declared to be supported.
 docutils==0.21.2
 sphinxcontrib-programoutput==0.18

View File

@@ -26,6 +26,9 @@ Currently, the following features are included in the preview style:
   statements, except when the line after the import is a comment or an import statement
 - `wrap_long_dict_values_in_parens`: Add parentheses around long values in dictionaries
   ([see below](labels/wrap-long-dict-values))
+- `fix_fmt_skip_in_one_liners`: Fix `# fmt: skip` behaviour on one-liner declarations,
+  such as `def foo(): return "mock"  # fmt: skip`, where previously the declaration
+  would have been incorrectly collapsed.
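A runnable illustration of the kind of one-liner the `fix_fmt_skip_in_one_liners` feature leaves untouched (the function itself is ordinary Python):

```python
# Under the preview feature, Black keeps this declaration on one line as
# written instead of collapsing or reformatting it.
def foo(): return "mock"  # fmt: skip

assert foo() == "mock"
```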
(labels/unstable-features)= (labels/unstable-features)=

View File

@@ -16,7 +16,7 @@
 PYPI_INSTANCE = "https://pypi.org/pypi"
 PYPI_TOP_PACKAGES = (
-    "https://hugovk.github.io/top-pypi-packages/top-pypi-packages-30-days.min.json"
+    "https://hugovk.github.io/top-pypi-packages/top-pypi-packages.min.json"
 )
 INTERNAL_BLACK_REPO = f"{tempfile.gettempdir()}/__black"

View File

@@ -69,6 +69,7 @@ dependencies = [
   "packaging>=22.0",
   "pathspec>=0.9.0",
   "platformdirs>=2",
+  "pytokens>=0.1.10",
   "tomli>=1.1.0; python_version < '3.11'",
   "typing_extensions>=4.0.1; python_version < '3.11'",
 ]
@@ -186,16 +187,6 @@ MYPYC_DEBUG_LEVEL = "0"
 # Black needs Clang to compile successfully on Linux.
 CC = "clang"

-[tool.cibuildwheel.macos]
-build-frontend = { name = "build", args = ["--no-isolation"] }
-# Unfortunately, hatch doesn't respect MACOSX_DEPLOYMENT_TARGET
-# Note we don't have a good test for this sed horror, so if you futz with it
-# make sure to test manually
-before-build = [
-    "python -m pip install 'hatchling==1.20.0' hatch-vcs hatch-fancy-pypi-readme 'hatch-mypyc>=0.16.0' 'mypy>=1.12' 'click>=8.1.7'",
-    """sed -i '' -e "600,700s/'10_16'/os.environ['MACOSX_DEPLOYMENT_TARGET'].replace('.', '_')/" $(python -c 'import hatchling.builders.wheel as h; print(h.__file__)') """,
-]

 [tool.isort]
 atomic = true
 profile = "black"
@@ -234,6 +225,8 @@ branch = true
 python_version = "3.9"
 mypy_path = "src"
 strict = true
+strict_bytes = true
+local_partial_types = true
 # Unreachable blocks have been an issue when compiling mypyc, let's try to avoid 'em in the first place.
 warn_unreachable = true
 implicit_reexport = true

View File

@@ -5,14 +5,11 @@
 a coverage-guided fuzzer I'm working on.
 """

-import re
-
 import hypothesmith
 from hypothesis import HealthCheck, given, settings
 from hypothesis import strategies as st

 import black
-from blib2to3.pgen2.tokenize import TokenError

 # This test uses the Hypothesis and Hypothesmith libraries to generate random
@@ -45,23 +42,7 @@ def test_idempotent_any_syntatically_valid_python(
     compile(src_contents, "<string>", "exec")  # else the bug is in hypothesmith

     # Then format the code...
-    try:
-        dst_contents = black.format_str(src_contents, mode=mode)
-    except black.InvalidInput:
-        # This is a bug - if it's valid Python code, as above, Black should be
-        # able to cope with it. See issues #970, #1012
-        # TODO: remove this try-except block when issues are resolved.
-        return
-    except TokenError as e:
-        if (  # Special-case logic for backslashes followed by newlines or end-of-input
-            e.args[0] == "EOF in multi-line statement"
-            and re.search(r"\\($|\r?\n)", src_contents) is not None
-        ):
-            # This is a bug - if it's valid Python code, as above, Black should be
-            # able to cope with it. See issue #1012.
-            # TODO: remove this block when the issue is resolved.
-            return
-        raise
+    dst_contents = black.format_str(src_contents, mode=mode)

     # And check that we got equivalent and stable output.
     black.assert_equivalent(src_contents, dst_contents)
@@ -80,7 +61,7 @@ def test_idempotent_any_syntatically_valid_python(
 try:
     import sys
-    import atheris  # type: ignore[import-not-found]
+    import atheris
 except ImportError:
     pass
 else:

View File

@@ -77,7 +77,7 @@ def blackify(base_branch: str, black_command: str, logger: logging.Logger) -> in
     git("commit", "--allow-empty", "-aqC", commit)

     for commit in commits:
-        git("branch", "-qD", "%s-black" % commit)
+        git("branch", "-qD", f"{commit}-black")

     return 0

View File

@ -4,7 +4,7 @@
from functools import lru_cache from functools import lru_cache
from typing import Final, Optional, Union from typing import Final, Optional, Union
from black.mode import Mode from black.mode import Mode, Preview
from black.nodes import ( from black.nodes import (
CLOSING_BRACKETS, CLOSING_BRACKETS,
STANDALONE_COMMENT, STANDALONE_COMMENT,
@ -270,7 +270,7 @@ def generate_ignored_nodes(
Stops at the end of the block. Stops at the end of the block.
""" """
if _contains_fmt_skip_comment(comment.value, mode): if _contains_fmt_skip_comment(comment.value, mode):
yield from _generate_ignored_nodes_from_fmt_skip(leaf, comment) yield from _generate_ignored_nodes_from_fmt_skip(leaf, comment, mode)
return return
container: Optional[LN] = container_of(leaf) container: Optional[LN] = container_of(leaf)
while container is not None and container.type != token.ENDMARKER: while container is not None and container.type != token.ENDMARKER:
@ -309,23 +309,67 @@ def generate_ignored_nodes(
def _generate_ignored_nodes_from_fmt_skip( def _generate_ignored_nodes_from_fmt_skip(
leaf: Leaf, comment: ProtoComment leaf: Leaf, comment: ProtoComment, mode: Mode
) -> Iterator[LN]: ) -> Iterator[LN]:
"""Generate all leaves that should be ignored by the `# fmt: skip` from `leaf`.""" """Generate all leaves that should be ignored by the `# fmt: skip` from `leaf`."""
prev_sibling = leaf.prev_sibling prev_sibling = leaf.prev_sibling
parent = leaf.parent parent = leaf.parent
ignored_nodes: list[LN] = []
# Need to properly format the leaf prefix to compare it to comment.value, # Need to properly format the leaf prefix to compare it to comment.value,
# which is also formatted # which is also formatted
comments = list_comments(leaf.prefix, is_endmarker=False) comments = list_comments(leaf.prefix, is_endmarker=False)
if not comments or comment.value != comments[0].value: if not comments or comment.value != comments[0].value:
return return
if prev_sibling is not None: if prev_sibling is not None:
leaf.prefix = "" leaf.prefix = leaf.prefix[comment.consumed :]
siblings = [prev_sibling]
while "\n" not in prev_sibling.prefix and prev_sibling.prev_sibling is not None: if Preview.fix_fmt_skip_in_one_liners not in mode:
prev_sibling = prev_sibling.prev_sibling siblings = [prev_sibling]
siblings.insert(0, prev_sibling) while (
yield from siblings "\n" not in prev_sibling.prefix
and prev_sibling.prev_sibling is not None
):
prev_sibling = prev_sibling.prev_sibling
siblings.insert(0, prev_sibling)
yield from siblings
return
# Generates the nodes to be ignored by `fmt: skip`.
# Nodes to ignore are the ones on the same line as the
# `# fmt: skip` comment, excluding the `# fmt: skip`
# node itself.
# Traversal process (starting at the `# fmt: skip` node):
# 1. Move to the `prev_sibling` of the current node.
# 2. If `prev_sibling` has children, go to its rightmost leaf.
    #       3. If there's no `prev_sibling`, move up to the parent
# node and repeat.
# 4. Continue until:
# a. You encounter an `INDENT` or `NEWLINE` node (indicates
# start of the line).
# b. You reach the root node.
# Include all visited LEAVES in the ignored list, except INDENT
# or NEWLINE leaves.
current_node = prev_sibling
ignored_nodes = [current_node]
if current_node.prev_sibling is None and current_node.parent is not None:
current_node = current_node.parent
while "\n" not in current_node.prefix and current_node.prev_sibling is not None:
leaf_nodes = list(current_node.prev_sibling.leaves())
current_node = leaf_nodes[-1] if leaf_nodes else current_node
if current_node.type in (token.NEWLINE, token.INDENT):
current_node.prefix = ""
break
ignored_nodes.insert(0, current_node)
if current_node.prev_sibling is None and current_node.parent is not None:
current_node = current_node.parent
yield from ignored_nodes
elif ( elif (
parent is not None and parent.type == syms.suite and leaf.type == token.NEWLINE parent is not None and parent.type == syms.suite and leaf.type == token.NEWLINE
): ):
@ -333,7 +377,6 @@ def _generate_ignored_nodes_from_fmt_skip(
# statements. The ignored nodes should be previous siblings of the # statements. The ignored nodes should be previous siblings of the
# parent suite node. # parent suite node.
leaf.prefix = "" leaf.prefix = ""
ignored_nodes: list[LN] = []
parent_sibling = parent.prev_sibling parent_sibling = parent.prev_sibling
while parent_sibling is not None and parent_sibling.type != syms.suite: while parent_sibling is not None and parent_sibling.type != syms.suite:
ignored_nodes.insert(0, parent_sibling) ignored_nodes.insert(0, parent_sibling)
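The traversal documented in `_generate_ignored_nodes_from_fmt_skip` above (walk left from the node before `# fmt: skip`, descending into each previous sibling's rightmost leaf, until the start of the line) can be sketched on a toy tree. `Node` here is a hypothetical stand-in for blib2to3's tree, not the real class, and the NEWLINE/INDENT handling is elided:

```python
# Toy model of the line-bounded backwards traversal described in the
# comments above. A "\n" in a node's prefix marks the start of a line.
class Node:
    def __init__(self, label, prefix="", children=()):
        self.label = label
        self.prefix = prefix
        self.children = list(children)
        self.parent = None
        self.prev_sibling = None
        for i, child in enumerate(self.children):
            child.parent = self
            if i > 0:
                child.prev_sibling = self.children[i - 1]

    def rightmost_leaf(self):
        node = self
        while node.children:
            node = node.children[-1]
        return node


def nodes_on_same_line(start):
    """Collect `start` and everything to its left on the same line."""
    current = start
    collected = [current]
    if current.prev_sibling is None and current.parent is not None:
        current = current.parent  # no sibling to the left: climb up
    while "\n" not in current.prefix and current.prev_sibling is not None:
        # step 1/2: move left, descending to the rightmost leaf
        current = current.prev_sibling.rightmost_leaf()
        collected.insert(0, current)
        if current.prev_sibling is None and current.parent is not None:
            current = current.parent  # step 3: climb to the parent
    return collected


# One line `a = 1`; the "\n" prefix on `a` marks the line start.
line = Node("stmt", children=[Node("a", prefix="\n"), Node("="), Node("1")])
same_line = [n.label for n in nodes_on_same_line(line.children[-1])]
```

Starting from the `1` leaf, the sketch collects the whole statement, mirroring what the real code yields as `ignored_nodes`.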


@ -40,6 +40,7 @@
ensure_visible, ensure_visible,
fstring_to_string, fstring_to_string,
get_annotation_type, get_annotation_type,
has_sibling_with_type,
is_arith_like, is_arith_like,
is_async_stmt_or_funcdef, is_async_stmt_or_funcdef,
is_atom_with_invisible_parens, is_atom_with_invisible_parens,
@ -56,6 +57,7 @@
is_rpar_token, is_rpar_token,
is_stub_body, is_stub_body,
is_stub_suite, is_stub_suite,
is_tuple,
is_tuple_containing_star, is_tuple_containing_star,
is_tuple_containing_walrus, is_tuple_containing_walrus,
is_type_ignore_comment_string, is_type_ignore_comment_string,
@ -1626,6 +1628,12 @@ def maybe_make_parens_invisible_in_atom(
node.type not in (syms.atom, syms.expr) node.type not in (syms.atom, syms.expr)
or is_empty_tuple(node) or is_empty_tuple(node)
or is_one_tuple(node) or is_one_tuple(node)
or (is_tuple(node) and parent.type == syms.asexpr_test)
or (
is_tuple(node)
and parent.type == syms.with_stmt
and has_sibling_with_type(node, token.COMMA)
)
or (is_yield(node) and parent.type != syms.expr_stmt) or (is_yield(node) and parent.type != syms.expr_stmt)
or ( or (
# This condition tries to prevent removing non-optional brackets # This condition tries to prevent removing non-optional brackets
@ -1649,6 +1657,7 @@ def maybe_make_parens_invisible_in_atom(
syms.except_clause, syms.except_clause,
syms.funcdef, syms.funcdef,
syms.with_stmt, syms.with_stmt,
syms.testlist_gexp,
syms.tname, syms.tname,
# these ones aren't useful to end users, but they do please fuzzers # these ones aren't useful to end users, but they do please fuzzers
syms.for_stmt, syms.for_stmt,


@ -203,6 +203,7 @@ class Preview(Enum):
wrap_long_dict_values_in_parens = auto() wrap_long_dict_values_in_parens = auto()
multiline_string_handling = auto() multiline_string_handling = auto()
always_one_newline_after_import = auto() always_one_newline_after_import = auto()
fix_fmt_skip_in_one_liners = auto()
UNSTABLE_FEATURES: set[Preview] = { UNSTABLE_FEATURES: set[Preview] = {


@ -603,6 +603,17 @@ def is_one_tuple(node: LN) -> bool:
) )
def is_tuple(node: LN) -> bool:
"""Return True if `node` holds a tuple."""
if node.type != syms.atom:
return False
gexp = unwrap_singleton_parenthesis(node)
if gexp is None or gexp.type != syms.testlist_gexp:
return False
return True
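A rough stdlib analogue of the new `is_tuple` helper, for readers without the blib2to3 tree in mind: decide whether a parenthesized expression actually denotes a tuple. This uses `ast` instead of black's node types, so the mechanics differ, but the distinction it draws is the same:

```python
import ast

def looks_like_tuple(source: str) -> bool:
    # Parse a single expression and check whether the top-level
    # node is a tuple (as opposed to mere grouping parentheses).
    expr = ast.parse(source, mode="eval").body
    return isinstance(expr, ast.Tuple)
```

`"(a, b)"` qualifies; `"(a)"` is just a parenthesized name and `"(a for a in b)"` is a generator expression, so neither does.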
def is_tuple_containing_walrus(node: LN) -> bool: def is_tuple_containing_walrus(node: LN) -> bool:
"""Return True if `node` holds a tuple that contains a walrus operator.""" """Return True if `node` holds a tuple that contains a walrus operator."""
if node.type != syms.atom: if node.type != syms.atom:
@ -1047,3 +1058,21 @@ def furthest_ancestor_with_last_leaf(leaf: Leaf) -> LN:
while node.parent and node.parent.children and node is node.parent.children[-1]: while node.parent and node.parent.children and node is node.parent.children[-1]:
node = node.parent node = node.parent
return node return node
def has_sibling_with_type(node: LN, type: int) -> bool:
# Check previous siblings
sibling = node.prev_sibling
while sibling is not None:
if sibling.type == type:
return True
sibling = sibling.prev_sibling
# Check next siblings
sibling = node.next_sibling
while sibling is not None:
if sibling.type == type:
return True
sibling = sibling.next_sibling
return False
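The two-direction sibling scan above can be shown self-contained. `Sib` and `link` are hypothetical helpers standing in for blib2to3's linked tree nodes; the scan logic itself matches the diff:

```python
# Minimal doubly linked sibling list standing in for blib2to3 nodes.
class Sib:
    def __init__(self, type_):
        self.type = type_
        self.prev_sibling = None
        self.next_sibling = None

def link(*types):
    nodes = [Sib(t) for t in types]
    for left, right in zip(nodes, nodes[1:]):
        left.next_sibling = right
        right.prev_sibling = left
    return nodes

def has_sibling_with_type(node, type_):
    sibling = node.prev_sibling
    while sibling is not None:  # scan to the left first
        if sibling.type == type_:
            return True
        sibling = sibling.prev_sibling
    sibling = node.next_sibling
    while sibling is not None:  # then to the right
        if sibling.type == type_:
            return True
        sibling = sibling.next_sibling
    return False

name, comma, other = link("NAME", "COMMA", "NAME")
```

Note the node's own type is never inspected, only its siblings': `comma` has no `COMMA` sibling even though it is one itself.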


@ -213,7 +213,7 @@ def _stringify_ast(node: ast.AST, parent_stack: list[ast.AST]) -> Iterator[str]:
and isinstance(node, ast.Delete) and isinstance(node, ast.Delete)
and isinstance(item, ast.Tuple) and isinstance(item, ast.Tuple)
): ):
for elt in item.elts: for elt in _unwrap_tuples(item):
yield from _stringify_ast_with_new_parent( yield from _stringify_ast_with_new_parent(
elt, parent_stack, node elt, parent_stack, node
) )
@ -250,3 +250,11 @@ def _stringify_ast(node: ast.AST, parent_stack: list[ast.AST]) -> Iterator[str]:
) )
yield f"{' ' * len(parent_stack)}) # /{node.__class__.__name__}" yield f"{' ' * len(parent_stack)}) # /{node.__class__.__name__}"
def _unwrap_tuples(node: ast.Tuple) -> Iterator[ast.AST]:
for elt in node.elts:
if isinstance(elt, ast.Tuple):
yield from _unwrap_tuples(elt)
else:
yield elt
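What the new `_unwrap_tuples` helper buys: nested tuple targets in a `del` statement flatten to their leaf elements, so equivalence checking sees the same names regardless of grouping. Reimplemented here on the stdlib `ast` module for illustration:

```python
import ast

def unwrap_tuples(node: ast.Tuple):
    # Recursively flatten nested tuples down to their leaf elements.
    for elt in node.elts:
        if isinstance(elt, ast.Tuple):
            yield from unwrap_tuples(elt)
        else:
            yield elt

delete = ast.parse("del (a, (b, (c, d))), e").body[0]
first_target = delete.targets[0]  # the nested tuple target
names = [elt.id for elt in unwrap_tuples(first_target)]
```

Without the recursion, only `a` and the inner tuple node would be visited; with it, all four names come out in order.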


@ -83,7 +83,8 @@
"hug_parens_with_braces_and_square_brackets", "hug_parens_with_braces_and_square_brackets",
"wrap_long_dict_values_in_parens", "wrap_long_dict_values_in_parens",
"multiline_string_handling", "multiline_string_handling",
"always_one_newline_after_import" "always_one_newline_after_import",
"fix_fmt_skip_in_one_liners"
] ]
}, },
"description": "Enable specific features included in the `--unstable` style. Requires `--preview`. No compatibility guarantees are provided on the behavior or existence of any unstable features." "description": "Enable specific features included in the `--unstable` style. Requires `--preview`. No compatibility guarantees are provided on the behavior or existence of any unstable features."


@ -12,9 +12,9 @@ file_input: (NEWLINE | stmt)* ENDMARKER
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
eval_input: testlist NEWLINE* ENDMARKER eval_input: testlist NEWLINE* ENDMARKER
typevar: NAME [':' expr] ['=' expr] typevar: NAME [':' test] ['=' test]
paramspec: '**' NAME ['=' expr] paramspec: '**' NAME ['=' test]
typevartuple: '*' NAME ['=' (expr|star_expr)] typevartuple: '*' NAME ['=' (test|star_expr)]
typeparam: typevar | paramspec | typevartuple typeparam: typevar | paramspec | typevartuple
typeparams: '[' typeparam (',' typeparam)* [','] ']' typeparams: '[' typeparam (',' typeparam)* [','] ']'


@ -28,7 +28,7 @@
from typing import IO, Any, Optional, Union, cast from typing import IO, Any, Optional, Union, cast
from blib2to3.pgen2.grammar import Grammar from blib2to3.pgen2.grammar import Grammar
from blib2to3.pgen2.tokenize import GoodTokenInfo from blib2to3.pgen2.tokenize import TokenInfo
from blib2to3.pytree import NL from blib2to3.pytree import NL
# Pgen imports # Pgen imports
@ -112,7 +112,7 @@ def __init__(self, grammar: Grammar, logger: Optional[Logger] = None) -> None:
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
self.logger = logger self.logger = logger
def parse_tokens(self, tokens: Iterable[GoodTokenInfo], debug: bool = False) -> NL: def parse_tokens(self, tokens: Iterable[TokenInfo], debug: bool = False) -> NL:
"""Parse a series of tokens and return the syntax tree.""" """Parse a series of tokens and return the syntax tree."""
# XXX Move the prefix computation into a wrapper around tokenize. # XXX Move the prefix computation into a wrapper around tokenize.
proxy = TokenProxy(tokens) proxy = TokenProxy(tokens)
@ -180,27 +180,17 @@ def parse_tokens(self, tokens: Iterable[GoodTokenInfo], debug: bool = False) ->
assert p.rootnode is not None assert p.rootnode is not None
return p.rootnode return p.rootnode
def parse_stream_raw(self, stream: IO[str], debug: bool = False) -> NL:
"""Parse a stream and return the syntax tree."""
tokens = tokenize.generate_tokens(stream.readline, grammar=self.grammar)
return self.parse_tokens(tokens, debug)
def parse_stream(self, stream: IO[str], debug: bool = False) -> NL:
"""Parse a stream and return the syntax tree."""
return self.parse_stream_raw(stream, debug)
def parse_file( def parse_file(
self, filename: Path, encoding: Optional[str] = None, debug: bool = False self, filename: Path, encoding: Optional[str] = None, debug: bool = False
) -> NL: ) -> NL:
"""Parse a file and return the syntax tree.""" """Parse a file and return the syntax tree."""
with open(filename, encoding=encoding) as stream: with open(filename, encoding=encoding) as stream:
return self.parse_stream(stream, debug) text = stream.read()
return self.parse_string(text, debug)
def parse_string(self, text: str, debug: bool = False) -> NL: def parse_string(self, text: str, debug: bool = False) -> NL:
"""Parse a string and return the syntax tree.""" """Parse a string and return the syntax tree."""
tokens = tokenize.generate_tokens( tokens = tokenize.tokenize(text, grammar=self.grammar)
io.StringIO(text).readline, grammar=self.grammar
)
return self.parse_tokens(tokens, debug) return self.parse_tokens(tokens, debug)
def _partially_consume_prefix(self, prefix: str, column: int) -> tuple[str, str]: def _partially_consume_prefix(self, prefix: str, column: int) -> tuple[str, str]:


@ -28,16 +28,16 @@ def escape(m: re.Match[str]) -> str:
if tail.startswith("x"): if tail.startswith("x"):
hexes = tail[1:] hexes = tail[1:]
if len(hexes) < 2: if len(hexes) < 2:
raise ValueError("invalid hex string escape ('\\%s')" % tail) raise ValueError(f"invalid hex string escape ('\\{tail}')")
try: try:
i = int(hexes, 16) i = int(hexes, 16)
except ValueError: except ValueError:
raise ValueError("invalid hex string escape ('\\%s')" % tail) from None raise ValueError(f"invalid hex string escape ('\\{tail}')") from None
else: else:
try: try:
i = int(tail, 8) i = int(tail, 8)
except ValueError: except ValueError:
raise ValueError("invalid octal string escape ('\\%s')" % tail) from None raise ValueError(f"invalid octal string escape ('\\{tail}')") from None
return chr(i) return chr(i)
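The f-string conversion above is purely mechanical, but the escape-decoding logic is compact enough to show standalone. This is a hedged rendition of just the tail-decoding branch (hex `\xNN` vs octal `\NNN`), not the full `escape` function with its regex match:

```python
def decode_escape(tail: str) -> str:
    # `tail` is the escape body after the backslash, e.g. "x41" or "101".
    if tail.startswith("x"):
        hexes = tail[1:]
        if len(hexes) < 2:
            raise ValueError(f"invalid hex string escape ('\\{tail}')")
        try:
            i = int(hexes, 16)
        except ValueError:
            raise ValueError(f"invalid hex string escape ('\\{tail}')") from None
    else:
        try:
            i = int(tail, 8)  # bare digits are treated as octal
        except ValueError:
            raise ValueError(f"invalid octal string escape ('\\{tail}')") from None
    return chr(i)
```

Both `\x41` and octal `\101` decode to the same character, and a one-digit hex escape is rejected.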


@ -89,18 +89,12 @@ def backtrack(self) -> Iterator[None]:
self.parser.is_backtracking = is_backtracking self.parser.is_backtracking = is_backtracking
def add_token(self, tok_type: int, tok_val: str, raw: bool = False) -> None: def add_token(self, tok_type: int, tok_val: str, raw: bool = False) -> None:
func: Callable[..., Any]
if raw:
func = self.parser._addtoken
else:
func = self.parser.addtoken
for ilabel in self.ilabels: for ilabel in self.ilabels:
with self.switch_to(ilabel): with self.switch_to(ilabel):
args = [tok_type, tok_val, self.context]
if raw: if raw:
args.insert(0, ilabel) self.parser._addtoken(ilabel, tok_type, tok_val, self.context)
func(*args) else:
self.parser.addtoken(tok_type, tok_val, self.context)
def determine_route( def determine_route(
self, value: Optional[str] = None, force: bool = False self, value: Optional[str] = None, force: bool = False


@ -6,7 +6,7 @@
from typing import IO, Any, NoReturn, Optional, Union from typing import IO, Any, NoReturn, Optional, Union
from blib2to3.pgen2 import grammar, token, tokenize from blib2to3.pgen2 import grammar, token, tokenize
from blib2to3.pgen2.tokenize import GoodTokenInfo from blib2to3.pgen2.tokenize import TokenInfo
Path = Union[str, "os.PathLike[str]"] Path = Union[str, "os.PathLike[str]"]
@ -18,7 +18,7 @@ class PgenGrammar(grammar.Grammar):
class ParserGenerator: class ParserGenerator:
filename: Path filename: Path
stream: IO[str] stream: IO[str]
generator: Iterator[GoodTokenInfo] generator: Iterator[TokenInfo]
first: dict[str, Optional[dict[str, int]]] first: dict[str, Optional[dict[str, int]]]
def __init__(self, filename: Path, stream: Optional[IO[str]] = None) -> None: def __init__(self, filename: Path, stream: Optional[IO[str]] = None) -> None:
@ -27,8 +27,7 @@ def __init__(self, filename: Path, stream: Optional[IO[str]] = None) -> None:
stream = open(filename, encoding="utf-8") stream = open(filename, encoding="utf-8")
close_stream = stream.close close_stream = stream.close
self.filename = filename self.filename = filename
self.stream = stream self.generator = tokenize.tokenize(stream.read())
self.generator = tokenize.generate_tokens(stream.readline)
self.gettoken() # Initialize lookahead self.gettoken() # Initialize lookahead
self.dfas, self.startsymbol = self.parse() self.dfas, self.startsymbol = self.parse()
if close_stream is not None: if close_stream is not None:
@ -141,7 +140,7 @@ def calcfirst(self, name: str) -> None:
if label in self.first: if label in self.first:
fset = self.first[label] fset = self.first[label]
if fset is None: if fset is None:
raise ValueError("recursion for rule %r" % name) raise ValueError(f"recursion for rule {name!r}")
else: else:
self.calcfirst(label) self.calcfirst(label)
fset = self.first[label] fset = self.first[label]
@ -156,8 +155,8 @@ def calcfirst(self, name: str) -> None:
for symbol in itsfirst: for symbol in itsfirst:
if symbol in inverse: if symbol in inverse:
raise ValueError( raise ValueError(
"rule %s is ambiguous; %s is in the first sets of %s as well" f"rule {name} is ambiguous; {symbol} is in the first sets of"
" as %s" % (name, symbol, label, inverse[symbol]) f" {label} as well as {inverse[symbol]}"
) )
inverse[symbol] = label inverse[symbol] = label
self.first[name] = totalset self.first[name] = totalset
@ -238,16 +237,16 @@ def dump_nfa(self, name: str, start: "NFAState", finish: "NFAState") -> None:
j = len(todo) j = len(todo)
todo.append(next) todo.append(next)
if label is None: if label is None:
print(" -> %d" % j) print(f" -> {j}")
else: else:
print(" %s -> %d" % (label, j)) print(f" {label} -> {j}")
def dump_dfa(self, name: str, dfa: Sequence["DFAState"]) -> None: def dump_dfa(self, name: str, dfa: Sequence["DFAState"]) -> None:
print("Dump of DFA for", name) print("Dump of DFA for", name)
for i, state in enumerate(dfa): for i, state in enumerate(dfa):
print(" State", i, state.isfinal and "(final)" or "") print(" State", i, state.isfinal and "(final)" or "")
for label, next in sorted(state.arcs.items()): for label, next in sorted(state.arcs.items()):
print(" %s -> %d" % (label, dfa.index(next))) print(f" {label} -> {dfa.index(next)}")
def simplify_dfa(self, dfa: list["DFAState"]) -> None: def simplify_dfa(self, dfa: list["DFAState"]) -> None:
# This is not theoretically optimal, but works well enough. # This is not theoretically optimal, but works well enough.
@ -331,15 +330,12 @@ def parse_atom(self) -> tuple["NFAState", "NFAState"]:
return a, z return a, z
else: else:
self.raise_error( self.raise_error(
"expected (...) or NAME or STRING, got %s/%s", self.type, self.value f"expected (...) or NAME or STRING, got {self.type}/{self.value}"
) )
raise AssertionError
def expect(self, type: int, value: Optional[Any] = None) -> str: def expect(self, type: int, value: Optional[Any] = None) -> str:
if self.type != type or (value is not None and self.value != value): if self.type != type or (value is not None and self.value != value):
self.raise_error( self.raise_error(f"expected {type}/{value}, got {self.type}/{self.value}")
"expected %s/%s, got %s/%s", type, value, self.type, self.value
)
value = self.value value = self.value
self.gettoken() self.gettoken()
return value return value
@ -351,12 +347,7 @@ def gettoken(self) -> None:
self.type, self.value, self.begin, self.end, self.line = tup self.type, self.value, self.begin, self.end, self.line = tup
# print token.tok_name[self.type], repr(self.value) # print token.tok_name[self.type], repr(self.value)
def raise_error(self, msg: str, *args: Any) -> NoReturn: def raise_error(self, msg: str) -> NoReturn:
if args:
try:
msg = msg % args
except Exception:
msg = " ".join([msg] + list(map(str, args)))
raise SyntaxError( raise SyntaxError(
msg, (str(self.filename), self.end[0], self.end[1], self.line) msg, (str(self.filename), self.end[0], self.end[1], self.line)
) )

File diff suppressed because it is too large


@ -268,11 +268,7 @@ def __init__(
def __repr__(self) -> str: def __repr__(self) -> str:
"""Return a canonical string representation.""" """Return a canonical string representation."""
assert self.type is not None assert self.type is not None
return "{}({}, {!r})".format( return f"{self.__class__.__name__}({type_repr(self.type)}, {self.children!r})"
self.__class__.__name__,
type_repr(self.type),
self.children,
)
def __str__(self) -> str: def __str__(self) -> str:
""" """
@ -421,10 +417,9 @@ def __repr__(self) -> str:
from .pgen2.token import tok_name from .pgen2.token import tok_name
assert self.type is not None assert self.type is not None
return "{}({}, {!r})".format( return (
self.__class__.__name__, f"{self.__class__.__name__}({tok_name.get(self.type, self.type)},"
tok_name.get(self.type, self.type), f" {self.value!r})"
self.value,
) )
def __str__(self) -> str: def __str__(self) -> str:
@ -527,7 +522,7 @@ def __repr__(self) -> str:
args = [type_repr(self.type), self.content, self.name] args = [type_repr(self.type), self.content, self.name]
while args and args[-1] is None: while args and args[-1] is None:
del args[-1] del args[-1]
return "{}({})".format(self.__class__.__name__, ", ".join(map(repr, args))) return f"{self.__class__.__name__}({', '.join(map(repr, args))})"
def _submatch(self, node, results=None) -> bool: def _submatch(self, node, results=None) -> bool:
raise NotImplementedError raise NotImplementedError


@ -31,7 +31,8 @@
raise ValueError(err.format(key)) raise ValueError(err.format(key))
concatenated_strings = "some strings that are " "concatenated implicitly, so if you put them on separate " "lines it will fit" concatenated_strings = "some strings that are " "concatenated implicitly, so if you put them on separate " "lines it will fit"
del concatenated_strings, string_variable_name, normal_function_name, normal_name, need_more_to_make_the_line_long_enough del concatenated_strings, string_variable_name, normal_function_name, normal_name, need_more_to_make_the_line_long_enough
del ([], name_1, name_2), [(), [], name_4, name_3], name_1[[name_2 for name_1 in name_0]]
del (),
# output # output
@ -91,3 +92,9 @@
normal_name, normal_name,
need_more_to_make_the_line_long_enough, need_more_to_make_the_line_long_enough,
) )
del (
([], name_1, name_2),
[(), [], name_4, name_3],
name_1[[name_2 for name_1 in name_0]],
)
del ((),)


@ -84,6 +84,31 @@ async def func():
pass pass
# don't remove the brackets here, it changes the meaning of the code.
with (x, y) as z:
pass
# don't remove the brackets here, it changes the meaning of the code.
# even though the code will always trigger a runtime error
with (name_5, name_4), name_5:
pass
def test_tuple_as_contextmanager():
from contextlib import nullcontext
try:
with (nullcontext(),nullcontext()),nullcontext():
pass
except TypeError:
# test passed
pass
else:
# this should be a type error
assert False
# output # output
@ -172,3 +197,28 @@ async def func():
some_other_function(argument1, argument2, argument3="some_value"), some_other_function(argument1, argument2, argument3="some_value"),
): ):
pass pass
# don't remove the brackets here, it changes the meaning of the code.
with (x, y) as z:
pass
# don't remove the brackets here, it changes the meaning of the code.
# even though the code will always trigger a runtime error
with (name_5, name_4), name_5:
pass
def test_tuple_as_contextmanager():
from contextlib import nullcontext
try:
with (nullcontext(), nullcontext()), nullcontext():
pass
except TypeError:
# test passed
pass
else:
# this should be a type error
assert False


@ -0,0 +1,9 @@
# flags: --preview
def foo(): return "mock" # fmt: skip
if True: print("yay") # fmt: skip
for i in range(10): print(i) # fmt: skip
j = 1 # fmt: skip
while j < 10: j += 1 # fmt: skip
b = [c for c in "A very long string that would normally generate some kind of collapse, since it is this long"] # fmt: skip


@ -0,0 +1,6 @@
def foo():
pass
# comment 1 # fmt: skip
# comment 2


@ -0,0 +1,67 @@
# Regression tests for long f-strings, including examples from issue #3623
a = (
'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
)
a = (
f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'
)
a = 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' + \
f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
a = f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"' + \
f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
a = (
f'bbbbbbb"{"b"}"'
'aaaaaaaa'
)
a = (
f'"{"b"}"'
)
a = (
f'\"{"b"}\"'
)
a = (
r'\"{"b"}\"'
)
# output
# Regression tests for long f-strings, including examples from issue #3623
a = (
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
)
a = (
f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
)
a = (
"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
+ f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
)
a = (
f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
+ f'bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb"{"b"}"'
)
a = f'bbbbbbb"{"b"}"' "aaaaaaaa"
a = f'"{"b"}"'
a = f'"{"b"}"'
a = r'\"{"b"}\"'


@ -14,3 +14,8 @@
f((a := b + c for c in range(10)), x) f((a := b + c for c in range(10)), x)
f(y=(a := b + c for c in range(10))) f(y=(a := b + c for c in range(10)))
f(x, (a := b + c for c in range(10)), y=z, **q) f(x, (a := b + c for c in range(10)), y=z, **q)
# Don't remove parens when assignment expr is one of the exprs in a with statement
with x, (a := b):
pass


@ -10,6 +10,7 @@ def g():
async def func(): async def func():
await ...
if test: if test:
out_batched = [ out_batched = [
i i
@ -42,6 +43,7 @@ def g():
async def func(): async def func():
await ...
if test: if test:
out_batched = [ out_batched = [
i i


@ -20,6 +20,8 @@ def trailing_comma1[T=int,](a: str):
def trailing_comma2[T=int](a: str,): def trailing_comma2[T=int](a: str,):
pass pass
def weird_syntax[T=lambda: 42, **P=lambda: 43, *Ts=lambda: 44](): pass
# output # output
type A[T = int] = float type A[T = int] = float
@ -61,3 +63,7 @@ def trailing_comma2[T = int](
a: str, a: str,
): ):
pass pass
def weird_syntax[T = lambda: 42, **P = lambda: 43, *Ts = lambda: 44]():
pass


@ -13,6 +13,8 @@ def it_gets_worse[WhatIsTheLongestTypeVarNameYouCanThinkOfEnoughToMakeBlackSplit
def magic[Trailing, Comma,](): pass def magic[Trailing, Comma,](): pass
def weird_syntax[T: lambda: 42, U: a or b](): pass
# output # output
@ -56,3 +58,7 @@ def magic[
Comma, Comma,
](): ]():
pass pass
def weird_syntax[T: lambda: 42, U: a or b]():
pass


@ -232,8 +232,6 @@ file_input
fstring fstring
FSTRING_START FSTRING_START
"f'" "f'"
FSTRING_MIDDLE
''
fstring_replacement_field fstring_replacement_field
LBRACE LBRACE
'{' '{'
@ -242,8 +240,6 @@ file_input
RBRACE RBRACE
'}' '}'
/fstring_replacement_field /fstring_replacement_field
FSTRING_MIDDLE
''
fstring_replacement_field fstring_replacement_field
LBRACE LBRACE
'{' '{'
@ -252,8 +248,6 @@ file_input
RBRACE RBRACE
'}' '}'
/fstring_replacement_field /fstring_replacement_field
FSTRING_MIDDLE
''
FSTRING_END FSTRING_END
"'" "'"
/fstring /fstring
@ -399,8 +393,6 @@ file_input
fstring fstring
FSTRING_START FSTRING_START
"f'" "f'"
FSTRING_MIDDLE
''
fstring_replacement_field fstring_replacement_field
LBRACE LBRACE
'{' '{'
@ -419,8 +411,6 @@ file_input
RBRACE RBRACE
'}' '}'
/fstring_replacement_field /fstring_replacement_field
FSTRING_MIDDLE
''
FSTRING_END FSTRING_END
"'" "'"
/fstring /fstring
@ -549,8 +539,6 @@ file_input
fstring fstring
FSTRING_START FSTRING_START
"f'" "f'"
FSTRING_MIDDLE
''
fstring_replacement_field fstring_replacement_field
LBRACE LBRACE
'{' '{'
@ -559,8 +547,6 @@ file_input
RBRACE RBRACE
'}' '}'
/fstring_replacement_field /fstring_replacement_field
FSTRING_MIDDLE
''
fstring_replacement_field fstring_replacement_field
LBRACE LBRACE
'{' '{'
@ -569,8 +555,6 @@ file_input
RBRACE RBRACE
'}' '}'
/fstring_replacement_field /fstring_replacement_field
FSTRING_MIDDLE
''
FSTRING_END FSTRING_END
"'" "'"
/fstring /fstring
@ -660,8 +644,6 @@ file_input
RBRACE RBRACE
'}' '}'
/fstring_replacement_field /fstring_replacement_field
FSTRING_MIDDLE
''
FSTRING_END FSTRING_END
"'" "'"
/fstring /fstring
@ -744,8 +726,6 @@ file_input
RBRACE RBRACE
'}' '}'
/fstring_replacement_field /fstring_replacement_field
FSTRING_MIDDLE
''
FSTRING_END FSTRING_END
"'" "'"
/fstring /fstring


@ -14,6 +14,7 @@
from concurrent.futures import ThreadPoolExecutor from concurrent.futures import ThreadPoolExecutor
from contextlib import contextmanager, redirect_stderr from contextlib import contextmanager, redirect_stderr
from dataclasses import fields, replace from dataclasses import fields, replace
from importlib.metadata import version as imp_version
from io import BytesIO from io import BytesIO
from pathlib import Path, WindowsPath from pathlib import Path, WindowsPath
from platform import system from platform import system
@ -25,6 +26,7 @@
import pytest import pytest
from click import unstyle from click import unstyle
from click.testing import CliRunner from click.testing import CliRunner
from packaging.version import Version
from pathspec import PathSpec from pathspec import PathSpec
import black import black
@ -114,7 +116,10 @@ class BlackRunner(CliRunner):
"""Make sure STDOUT and STDERR are kept separate when testing Black via its CLI.""" """Make sure STDOUT and STDERR are kept separate when testing Black via its CLI."""
def __init__(self) -> None: def __init__(self) -> None:
super().__init__(mix_stderr=False) if Version(imp_version("click")) >= Version("8.2.0"):
super().__init__()
else:
super().__init__(mix_stderr=False) # type: ignore
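The version gate above compares with `packaging.version.Version` rather than raw strings because version components are numeric: `"8.10"` is newer than `"8.2.0"` even though it sorts earlier lexicographically. A minimal tuple-based sketch of that ordering (the real test uses `packaging`, which also handles pre-releases and the like):

```python
def parse_version(v: str) -> tuple:
    # Split "8.2.0" into (8, 2, 0) so components compare numerically.
    return tuple(int(part) for part in v.split("."))

numeric_newer = parse_version("8.10.0") > parse_version("8.2.0")  # correct
string_newer = "8.10.0" > "8.2.0"  # lexicographic comparison gets it wrong
```

This is exactly the failure mode a naive `if click_version >= "8.2.0"` string check would hit once click reaches a two-digit minor version.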
def invokeBlack( def invokeBlack(
@ -187,10 +192,10 @@ def test_piping(self) -> None:
input=BytesIO(source.encode("utf-8")), input=BytesIO(source.encode("utf-8")),
) )
self.assertEqual(result.exit_code, 0) self.assertEqual(result.exit_code, 0)
self.assertFormatEqual(expected, result.output) self.assertFormatEqual(expected, result.stdout)
if source != result.output: if source != result.stdout:
black.assert_equivalent(source, result.output) black.assert_equivalent(source, result.stdout)
black.assert_stable(source, result.output, DEFAULT_MODE) black.assert_stable(source, result.stdout, DEFAULT_MODE)
def test_piping_diff(self) -> None: def test_piping_diff(self) -> None:
diff_header = re.compile( diff_header = re.compile(
@ -210,7 +215,7 @@ def test_piping_diff(self) -> None:
black.main, args, input=BytesIO(source.encode("utf-8")) black.main, args, input=BytesIO(source.encode("utf-8"))
) )
self.assertEqual(result.exit_code, 0) self.assertEqual(result.exit_code, 0)
actual = diff_header.sub(DETERMINISTIC_HEADER, result.output) actual = diff_header.sub(DETERMINISTIC_HEADER, result.stdout)
actual = actual.rstrip() + "\n" # the diff output has a trailing space actual = actual.rstrip() + "\n" # the diff output has a trailing space
self.assertEqual(expected, actual) self.assertEqual(expected, actual)
@ -295,7 +300,7 @@ def test_expression_diff(self) -> None:
self.assertEqual(result.exit_code, 0) self.assertEqual(result.exit_code, 0)
finally: finally:
os.unlink(tmp_file) os.unlink(tmp_file)
actual = result.output actual = result.stdout
actual = diff_header.sub(DETERMINISTIC_HEADER, actual) actual = diff_header.sub(DETERMINISTIC_HEADER, actual)
if expected != actual: if expected != actual:
dump = black.dump_to_file(actual) dump = black.dump_to_file(actual)
@ -404,7 +409,7 @@ def test_skip_magic_trailing_comma(self) -> None:
self.assertEqual(result.exit_code, 0) self.assertEqual(result.exit_code, 0)
finally: finally:
os.unlink(tmp_file) os.unlink(tmp_file)
actual = result.output actual = result.stdout
actual = diff_header.sub(DETERMINISTIC_HEADER, actual) actual = diff_header.sub(DETERMINISTIC_HEADER, actual)
actual = actual.rstrip() + "\n" # the diff output has a trailing space actual = actual.rstrip() + "\n" # the diff output has a trailing space
if expected != actual: if expected != actual:
@ -417,21 +422,6 @@ def test_skip_magic_trailing_comma(self) -> None:
) )
self.assertEqual(expected, actual, msg) self.assertEqual(expected, actual, msg)
@patch("black.dump_to_file", dump_to_stderr)
def test_async_as_identifier(self) -> None:
source_path = get_case_path("miscellaneous", "async_as_identifier")
_, source, expected = read_data_from_file(source_path)
actual = fs(source)
self.assertFormatEqual(expected, actual)
major, minor = sys.version_info[:2]
if major < 3 or (major <= 3 and minor < 7):
black.assert_equivalent(source, actual)
black.assert_stable(source, actual, DEFAULT_MODE)
# ensure black can parse this when the target is 3.6
self.invokeBlack([str(source_path), "--target-version", "py36"])
# but not on 3.7, because async/await is no longer an identifier
self.invokeBlack([str(source_path), "--target-version", "py37"], exit_code=123)
@patch("black.dump_to_file", dump_to_stderr) @patch("black.dump_to_file", dump_to_stderr)
def test_python37(self) -> None: def test_python37(self) -> None:
source_path = get_case_path("cases", "python37") source_path = get_case_path("cases", "python37")
@ -444,8 +434,6 @@ def test_python37(self) -> None:
black.assert_stable(source, actual, DEFAULT_MODE) black.assert_stable(source, actual, DEFAULT_MODE)
# ensure black can parse this when the target is 3.7 # ensure black can parse this when the target is 3.7
self.invokeBlack([str(source_path), "--target-version", "py37"]) self.invokeBlack([str(source_path), "--target-version", "py37"])
# but not on 3.6, because we use async as a reserved keyword
self.invokeBlack([str(source_path), "--target-version", "py36"], exit_code=123)
def test_tab_comment_indentation(self) -> None: def test_tab_comment_indentation(self) -> None:
contents_tab = "if 1:\n\tif 2:\n\t\tpass\n\t# comment\n\tpass\n" contents_tab = "if 1:\n\tif 2:\n\t\tpass\n\t# comment\n\tpass\n"
@ -458,17 +446,6 @@ def test_tab_comment_indentation(self) -> None:
self.assertFormatEqual(contents_spc, fs(contents_spc))
self.assertFormatEqual(contents_spc, fs(contents_tab))
- # mixed tabs and spaces (valid Python 2 code)
- contents_tab = "if 1:\n if 2:\n\t\tpass\n\t# comment\n pass\n"
- contents_spc = "if 1:\n if 2:\n pass\n # comment\n pass\n"
- self.assertFormatEqual(contents_spc, fs(contents_spc))
- self.assertFormatEqual(contents_spc, fs(contents_tab))
- contents_tab = "if 1:\n if 2:\n\t\tpass\n\t\t# comment\n pass\n"
- contents_spc = "if 1:\n if 2:\n pass\n # comment\n pass\n"
- self.assertFormatEqual(contents_spc, fs(contents_spc))
- self.assertFormatEqual(contents_spc, fs(contents_tab))
def test_false_positive_symlink_output_issue_3384(self) -> None:
# Emulate the behavior when using the CLI (`black ./child --verbose`), which
# involves patching some `pathlib.Path` methods. In particular, `is_dir` is
@@ -1826,7 +1803,7 @@ def test_bpo_2142_workaround(self) -> None:
self.assertEqual(result.exit_code, 0)
finally:
os.unlink(tmp_file)
- actual = result.output
+ actual = result.stdout
actual = diff_header.sub(DETERMINISTIC_HEADER, actual)
self.assertEqual(actual, expected)
@@ -1836,7 +1813,7 @@ def compare_results(
) -> None:
"""Helper method to test the value and exit code of a click Result."""
assert (
- result.output == expected_value
+ result.stdout == expected_value
), "The output did not match the expected value."
assert result.exit_code == expected_exit_code, "The exit code is incorrect."
@@ -1913,7 +1890,8 @@ def test_code_option_safe(self) -> None:
args = ["--safe", "--code", code]
result = CliRunner().invoke(black.main, args)
- self.compare_results(result, error_msg, 123)
+ assert error_msg == result.output
+ assert result.exit_code == 123
def test_code_option_fast(self) -> None:
"""Test that the code option ignores errors when the sanity checks fail."""
@@ -1975,7 +1953,7 @@ def test_for_handled_unexpected_eof_error(self) -> None:
with pytest.raises(black.parsing.InvalidInput) as exc_info:
black.lib2to3_parse("print(", {})
- exc_info.match("Cannot parse: 2:0: EOF in multi-line statement")
+ exc_info.match("Cannot parse: 1:6: Unexpected EOF in multi-line statement")
def test_line_ranges_with_code_option(self) -> None:
code = textwrap.dedent("""\
@@ -2070,6 +2048,26 @@ def test_lines_with_leading_tabs_expanded(self) -> None:
assert lines_with_leading_tabs_expanded("\t\tx") == [f"{tab}{tab}x"]
assert lines_with_leading_tabs_expanded("\tx\n y") == [f"{tab}x", " y"]
+ def test_backslash_carriage_return(self) -> None:
+ # These tests are here instead of in the normal cases because
+ # of git's newline normalization and because it's hard to
+ # get `\r` vs `\r\n` vs `\n` to display properly in editors
+ assert black.format_str("x=\\\r\n1", mode=black.FileMode()) == "x = 1\n"
+ assert black.format_str("x=\\\n1", mode=black.FileMode()) == "x = 1\n"
+ assert black.format_str("x=\\\r1", mode=black.FileMode()) == "x = 1\n"
+ assert (
+ black.format_str("class A\\\r\n:...", mode=black.FileMode())
+ == "class A: ...\n"
+ )
+ assert (
+ black.format_str("class A\\\n:...", mode=black.FileMode())
+ == "class A: ...\n"
+ )
+ assert (
+ black.format_str("class A\\\r:...", mode=black.FileMode())
+ == "class A: ...\n"
+ )
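The new assertions all pin down one rule: a backslash followed by any newline flavor (`\r\n`, `\r`, or `\n`) is an explicit line continuation and should produce identical output. A minimal standalone sketch of that normalization, assuming only the behavior the assertions above describe (this is not Black's tokenizer, just an illustration):

```python
import re

# Illustrative only: a backslash followed by any newline flavor --
# "\r\n", "\r", or "\n" -- splices the two physical lines into one
# logical line. Note the alternation tries "\r\n" before "\r" so a
# CRLF pair is consumed as a unit.
CONTINUATION = re.compile(r"\\(\r\n|\r|\n)")


def join_continuations(src: str) -> str:
    """Remove explicit line continuations from source text."""
    return CONTINUATION.sub("", src)


# All three newline spellings collapse to the same logical line.
for newline in ("\r\n", "\r", "\n"):
    assert join_continuations(f"x=\\{newline}1") == "x=1"
```

The bug fixed in #4673 was precisely the `\r`-only case: treating `\r` as ordinary whitespace instead of a newline made `x=\` + CR + `1` tokenize differently from the other two spellings.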
class TestCaching:
def test_get_cache_dir(


@@ -1,6 +1,5 @@
"""Tests for the blib2to3 tokenizer."""
- import io
import sys
import textwrap
from dataclasses import dataclass
@@ -19,16 +18,10 @@ class Token:
def get_tokens(text: str) -> list[Token]:
"""Return the tokens produced by the tokenizer."""
- readline = io.StringIO(text).readline
- tokens: list[Token] = []
- def tokeneater(
- type: int, string: str, start: tokenize.Coord, end: tokenize.Coord, line: str
- ) -> None:
- tokens.append(Token(token.tok_name[type], string, start, end))
- tokenize.tokenize(readline, tokeneater)
- return tokens
+ return [
+ Token(token.tok_name[tok_type], string, start, end)
+ for tok_type, string, start, end, _ in tokenize.tokenize(text)
+ ]
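The rewritten helper consumes blib2to3's `tokenize.tokenize` as an iterator of 5-tuples rather than feeding it a callback. The same comprehension pattern works against the standard-library `tokenize` module, with two differences worth noting: the stdlib generator wants a `readline` callable instead of the text itself, and the fifth tuple element is the source line. A self-contained sketch (the `Token` dataclass here is a local stand-in for the one in the test file):

```python
import io
import token
import tokenize
from dataclasses import dataclass


@dataclass
class Token:
    type: str      # symbolic token name, e.g. "NAME" or "OP"
    string: str    # the matched source text
    start: tuple[int, int]  # (line, column), 1-based line
    end: tuple[int, int]


def get_stdlib_tokens(text: str) -> list[Token]:
    """Collect stdlib tokenizer output as Token records, discarding the
    trailing source-line element of each 5-tuple."""
    return [
        Token(token.tok_name[tok_type], string, start, end)
        for tok_type, string, start, end, _ in tokenize.generate_tokens(
            io.StringIO(text).readline
        )
    ]


names = [tok.type for tok in get_stdlib_tokens("x = 1\n")]
# names == ["NAME", "OP", "NUMBER", "NEWLINE", "ENDMARKER"]
```

Switching from the callback (`tokeneater`) API to a generator is what lets `get_tokens` collapse to a single comprehension.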
def assert_tokenizes(text: str, tokens: list[Token]) -> None:
@@ -69,11 +62,9 @@ def test_fstring() -> None:
'f"{x}"',
[
Token("FSTRING_START", 'f"', (1, 0), (1, 2)),
- Token("FSTRING_MIDDLE", "", (1, 2), (1, 2)),
- Token("LBRACE", "{", (1, 2), (1, 3)),
+ Token("OP", "{", (1, 2), (1, 3)),
Token("NAME", "x", (1, 3), (1, 4)),
- Token("RBRACE", "}", (1, 4), (1, 5)),
- Token("FSTRING_MIDDLE", "", (1, 5), (1, 5)),
+ Token("OP", "}", (1, 4), (1, 5)),
Token("FSTRING_END", '"', (1, 5), (1, 6)),
Token("ENDMARKER", "", (2, 0), (2, 0)),
],
@@ -82,13 +73,11 @@ def test_fstring() -> None:
'f"{x:y}"\n',
[
Token(type="FSTRING_START", string='f"', start=(1, 0), end=(1, 2)),
- Token(type="FSTRING_MIDDLE", string="", start=(1, 2), end=(1, 2)),
- Token(type="LBRACE", string="{", start=(1, 2), end=(1, 3)),
+ Token(type="OP", string="{", start=(1, 2), end=(1, 3)),
Token(type="NAME", string="x", start=(1, 3), end=(1, 4)),
Token(type="OP", string=":", start=(1, 4), end=(1, 5)),
Token(type="FSTRING_MIDDLE", string="y", start=(1, 5), end=(1, 6)),
- Token(type="RBRACE", string="}", start=(1, 6), end=(1, 7)),
- Token(type="FSTRING_MIDDLE", string="", start=(1, 7), end=(1, 7)),
+ Token(type="OP", string="}", start=(1, 6), end=(1, 7)),
Token(type="FSTRING_END", string='"', start=(1, 7), end=(1, 8)),
Token(type="NEWLINE", string="\n", start=(1, 8), end=(1, 9)),
Token(type="ENDMARKER", string="", start=(2, 0), end=(2, 0)),
@@ -99,10 +88,9 @@ def test_fstring() -> None:
[
Token(type="FSTRING_START", string='f"', start=(1, 0), end=(1, 2)),
Token(type="FSTRING_MIDDLE", string="x\\\n", start=(1, 2), end=(2, 0)),
- Token(type="LBRACE", string="{", start=(2, 0), end=(2, 1)),
+ Token(type="OP", string="{", start=(2, 0), end=(2, 1)),
Token(type="NAME", string="a", start=(2, 1), end=(2, 2)),
- Token(type="RBRACE", string="}", start=(2, 2), end=(2, 3)),
- Token(type="FSTRING_MIDDLE", string="", start=(2, 3), end=(2, 3)),
+ Token(type="OP", string="}", start=(2, 2), end=(2, 3)),
Token(type="FSTRING_END", string='"', start=(2, 3), end=(2, 4)),
Token(type="NEWLINE", string="\n", start=(2, 4), end=(2, 5)),
Token(type="ENDMARKER", string="", start=(3, 0), end=(3, 0)),

tox.ini

@@ -13,18 +13,16 @@ skip_install = True
recreate = True
deps =
-r{toxinidir}/test_requirements.txt
- ; parallelization is disabled on CI because pytest-dev/pytest-xdist#620 occurs too frequently
- ; local runs can stay parallelized since they aren't rolling the dice so many times as like on CI
commands =
pip install -e .[d]
coverage erase
pytest tests --run-optional no_jupyter \
- !ci: --numprocesses auto \
+ --numprocesses auto \
--cov {posargs}
pip install -e .[jupyter]
pytest tests --run-optional jupyter \
-m jupyter \
- !ci: --numprocesses auto \
+ --numprocesses auto \
--cov --cov-append {posargs}
coverage report
@@ -34,20 +32,15 @@ skip_install = True
recreate = True
deps =
-r{toxinidir}/test_requirements.txt
- ; a separate worker is required in ci due to https://foss.heptapod.net/pypy/pypy/-/issues/3317
- ; this seems to cause tox to wait forever
- ; remove this when pypy releases the bugfix
commands =
pip install -e .[d]
pytest tests \
--run-optional no_jupyter \
- !ci: --numprocesses auto \
- ci: --numprocesses 1
+ --numprocesses auto
pip install -e .[jupyter]
pytest tests --run-optional jupyter \
-m jupyter \
- !ci: --numprocesses auto \
- ci: --numprocesses 1
+ --numprocesses auto
[testenv:{,ci-}311]
setenv =
@@ -59,22 +52,17 @@ deps =
; We currently need > aiohttp 3.8.1 that is on PyPI for 3.11
git+https://github.com/aio-libs/aiohttp
-r{toxinidir}/test_requirements.txt
- ; a separate worker is required in ci due to https://foss.heptapod.net/pypy/pypy/-/issues/3317
- ; this seems to cause tox to wait forever
- ; remove this when pypy releases the bugfix
commands =
pip install -e .[d]
coverage erase
pytest tests \
--run-optional no_jupyter \
- !ci: --numprocesses auto \
- ci: --numprocesses 1 \
+ --numprocesses auto \
--cov {posargs}
pip install -e .[jupyter]
pytest tests --run-optional jupyter \
-m jupyter \
- !ci: --numprocesses auto \
- ci: --numprocesses 1 \
+ --numprocesses auto \
--cov --cov-append {posargs}
coverage report