Compare commits

No commits in common. "main" and "19.10b0" have entirely different histories.

457 changed files with 10705 additions and 44380 deletions

26
.appveyor.yml Normal file

@@ -0,0 +1,26 @@
install:
- C:\Python36\python.exe -m pip install mypy
- C:\Python36\python.exe -m pip install -e .[d]
# Not a C# project
build: off
test_script:
- C:\Python36\python.exe tests/test_black.py
- C:\Python36\python.exe -m mypy black.py blackd.py tests/test_black.py
after_test:
- C:\Python36\python.exe -m pip install pyinstaller
- "%CMD_IN_ENV% C:\\Python36\\python.exe -m PyInstaller --clean -F --add-data
blib2to3/;blib2to3 black.py"
artifacts:
- path: dist/black.exe
deploy:
provider: GitHub
description: ""
auth_token:
secure: uplI9CJ2dTGcEBCbZuIn+Qb4rC38hOoRSH9lcwuGCr5g9fSnhK1MZdNT6Cjf/mFL
on:
APPVEYOR_REPO_TAG: true
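
The PyInstaller step above bundles the `blib2to3` grammar files next to the executable. Note that `--add-data` separates source and destination with `;` on Windows but `:` on POSIX (compare the equivalent `.travis.yml` step further down). A minimal sketch of the same step outside AppVeyor, assuming `black.py` and `blib2to3/` sit at the repository root as in this revision:

```sh
python -m pip install pyinstaller
# Windows form; on Linux/macOS the separator becomes blib2to3/:blib2to3
python -m PyInstaller --clean -F --add-data "blib2to3/;blib2to3" black.py
```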

5
.coveragerc Normal file

@@ -0,0 +1,5 @@
[report]
omit =
blib2to3/*
tests/data/*
*/site-packages/*

.flake8

@@ -1,8 +1,5 @@
[flake8]
# B905 should be enabled when we drop support for 3.9
ignore = E203, E266, E501, E701, E704, W503, B905, B907
# line length is intentionally set to 80 here because black uses Bugbear
# See https://black.readthedocs.io/en/stable/guides/using_black_with_other_tools.html#bugbear for more details
ignore = E203, E266, E501, W503
max-line-length = 80
max-complexity = 18
select = B,C,E,F,W,T4,B9

.git_archival.txt

@@ -1,3 +0,0 @@
node: $Format:%H$
node-date: $Format:%cI$
describe-name: $Format:%(describe:tags=true,match=[0-9]*)$

2
.gitattributes vendored

@@ -1,2 +0,0 @@
.git_archival.txt export-subst
*.py diff=python

.github/ISSUE_TEMPLATE/bug_report.md

@@ -1,66 +1,35 @@
---
name: Bug report
about: Create a report to help us improve Black's quality
about: Create a report to help us improve
title: ""
labels: "T: bug"
labels: bug
assignees: ""
---
<!--
Please make sure that the bug is not already fixed either in newer versions or the
current development version. To confirm this, you have three options:
**Describe the bug** A clear and concise description of what the bug is.
1. Update Black's version if a newer release exists: `pip install -U black`
2. Use the online formatter at <https://black.vercel.app/?version=main>, which will use
the latest main branch. Note that the online formatter currently runs on
an older version of Python and may not support newer syntax, such as the
extended f-string syntax added in Python 3.12.
3. Or run _Black_ on your machine:
**To Reproduce** Steps to reproduce the behavior:
1. Take this file '...'
2. Run _Black_ on it with these arguments '....'
3. See error
**Expected behavior** A clear and concise description of what you expected to happen.
**Environment (please complete the following information):**
- Version: [e.g. master]
- OS and Python version: [e.g. Linux/Python 3.7.4rc1]
**Does this bug also happen on master?** To answer this, you have two options:
1. Use the online formatter at https://black.now.sh/?version=master, which will use the
latest master branch.
2. Or run _Black_ on your machine:
- create a new virtualenv (make sure it's the same Python version);
- clone this repository;
- run `pip install -e .[d]`;
- run `pip install -r test_requirements.txt`
- make sure it's sane by running `python -m pytest`; and
- run `pip install -e .`;
- make sure it's sane by running `python setup.py test`; and
- run `black` like you did last time.
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
<!--
Minimal steps to reproduce the behavior with source code and Black's configuration.
-->
For example, take this code:
```python
this = "code"
```
And run it with these arguments:
```sh
$ black file.py --target-version py39
```
The resulting error is:
> cannot format file.py: INTERNAL ERROR: ...
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Environment**
<!-- Please complete the following information: -->
- Black's version: <!-- e.g. [main] -->
- OS and Python version: <!-- e.g. [Linux/Python 3.7.4rc1] -->
**Additional context**
<!-- Add any other context about the problem here. -->
**Additional context** Add any other context about the problem here.
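
Taken together, the verification steps in the newer template amount to roughly this shell session (a sketch; the virtualenv path and `file.py` are illustrative):

```sh
python -m venv .venv && . .venv/bin/activate   # fresh env, same Python version
git clone https://github.com/psf/black && cd black
pip install -e ".[d]"
pip install -r test_requirements.txt
python -m pytest                       # sanity check
black file.py --target-version py39    # reproduce against main
```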

.github/ISSUE_TEMPLATE/config.yml

@@ -1,12 +0,0 @@
# See also: https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/configuring-issue-templates-for-your-repository#configuring-the-template-chooser
# This is the default and blank issues are useful so let's keep 'em.
blank_issues_enabled: true
contact_links:
- name: Chat on Python Discord
url: https://discord.gg/RtVdv86PrH
about: |
User support, questions, and other lightweight requests can be
handled via the #black-formatter text channel we have on Python
Discord.

.github/ISSUE_TEMPLATE/docs-issue.md

@@ -1,27 +0,0 @@
---
name: Documentation
about: Report a problem with or suggest something for the documentation
title: ""
labels: "T: documentation"
assignees: ""
---
**Is this related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is.
e.g. I'm always frustrated when [...] / I wished that [...] -->
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to
happen or see changed. -->
**Describe alternatives you've considered**
<!-- A clear and concise description of any
alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the issue
here. -->

.github/ISSUE_TEMPLATE/feature_request.md

@@ -2,26 +2,18 @@
name: Feature request
about: Suggest an idea for this project
title: ""
labels: "T: enhancement"
labels: enhancement
assignees: ""
---
**Is your feature request related to a problem? Please describe.**
**Is your feature request related to a problem? Please describe.** A clear and concise
description of what the problem is. Ex. I'm always frustrated when [...]
<!-- A clear and concise description of what the problem is.
e.g. I'm always frustrated when [...] -->
**Describe the solution you'd like** A clear and concise description of what you want to
happen.
**Describe the solution you'd like**
**Describe alternatives you've considered** A clear and concise description of any
alternative solutions or features you've considered.
<!-- A clear and concise description of what you want to
happen. -->
**Describe alternatives you've considered**
<!-- A clear and concise description of any
alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request
here. -->
**Additional context** Add any other context or screenshots about the feature request
here.

.github/ISSUE_TEMPLATE/style_issue.md

@@ -1,37 +1,29 @@
---
name: Code style issue
about: Help us improve the Black code style
name: Style issue
about: Help us improve the Black style
title: ""
labels: "T: style"
labels: design
assignees: ""
---
**Describe the style change**
**Describe the style change** A clear and concise description of how the style can be
improved.
<!-- A clear and concise description of how the style can be
improved. -->
**Examples in the current _Black_ style** Think of some short code snippets that show
how the current _Black_ style is not great:
**Examples in the current _Black_ style**
<!-- Think of some short code snippets that show
how the current _Black_ style is not great: -->
```python
```
def f():
"Make sure this code is blackened"""
pass
```
**Desired style**
**Desired style** How do you think _Black_ should format the above snippets:
<!-- How do you think _Black_ should format the above snippets: -->
```python
```
def f(
):
pass
```
**Additional context**
<!-- Add any other context about the problem here. -->
**Additional context** Add any other context about the problem here.

.github/PULL_REQUEST_TEMPLATE.md

@@ -1,36 +0,0 @@
<!-- Hello! Thanks for submitting a PR. To help make things go a bit more
smoothly we would appreciate that you go through this template. -->
### Description
<!-- Good things to put here include: reasoning for the change (please link
any relevant issues!), any noteworthy (or hacky) choices to be aware of,
or what the problem resolved here looked like ... we won't mind a ranty
story :) -->
### Checklist - did you ...
<!-- If any of the following items aren't relevant for your contribution
please still tick them so we know you've gone through the checklist.
All user-facing changes should get an entry. Otherwise, signal to us
this should get the magical label to silence the CHANGELOG entry check.
Tests are required for bugfixes and new features. Documentation changes
are necessary for formatting and most enhancement changes. -->
- [ ] Add an entry in `CHANGES.md` if necessary?
- [ ] Add / update tests if necessary?
- [ ] Add new / update outdated documentation?
<!-- Just as a reminder, everyone in all psf/black spaces including PRs
must follow the PSF Code of Conduct (link below).
Finally, once again thanks for your time and effort. If you have any
feedback in regards to your experience contributing here, please
let us know!
Helpful links:
PSF COC: https://www.python.org/psf/conduct/
Contributing docs: https://black.readthedocs.io/en/latest/contributing/index.html
Chat on Python Discord: https://discord.gg/RtVdv86PrH -->

.github/dependabot.yml

@@ -1,16 +0,0 @@
# https://docs.github.com/en/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically/configuration-options-for-dependency-updates
version: 2
updates:
- package-ecosystem: "github-actions"
# Workflow files in .github/workflows will be checked
directory: "/"
schedule:
interval: "weekly"
labels: ["skip news", "C: dependencies"]
- package-ecosystem: "pip"
directory: "docs/"
schedule:
interval: "weekly"
labels: ["skip news", "C: dependencies", "T: documentation"]

.github/workflows/changelog.yml

@@ -1,24 +0,0 @@
name: changelog
on:
pull_request:
types: [opened, synchronize, labeled, unlabeled, reopened]
permissions:
contents: read
jobs:
build:
name: Changelog Entry Check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Grep CHANGES.md for PR number
if: contains(github.event.pull_request.labels.*.name, 'skip news') != true
run: |
grep -Pz "\((\n\s*)?#${{ github.event.pull_request.number }}(\n\s*)?\)" CHANGES.md || \
(echo "Please add '(#${{ github.event.pull_request.number }})' change line to CHANGES.md (or if appropriate, ask a maintainer to add the 'skip news' label)" && \
exit 1)
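
The `grep -Pz` pattern above only requires that the PR number appear in parentheses somewhere in CHANGES.md, optionally wrapped across a line break. For illustration (the PR number and entry text here are made up):

```sh
# Passes when CHANGES.md contains a line such as
#   - Fix a parser crash on unclosed strings (#1234)
# Also passes when the parenthesized reference wraps:
#   - A long entry whose pull request reference wraps (
#     #1234)
grep -Pz "\((\n\s*)?#1234(\n\s*)?\)" CHANGES.md
```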

.github/workflows/diff_shades.yml

@@ -1,155 +0,0 @@
name: diff-shades
on:
push:
branches: [main]
paths: ["src/**", "pyproject.toml", ".github/workflows/*"]
pull_request:
paths: ["src/**", "pyproject.toml", ".github/workflows/*"]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.run_id }}
cancel-in-progress: true
jobs:
configure:
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.set-config.outputs.matrix }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install diff-shades and support dependencies
run: |
python -m pip install 'click>=8.1.7' packaging urllib3
python -m pip install https://github.com/ichard26/diff-shades/archive/stable.zip
- name: Calculate run configuration & metadata
id: set-config
env:
GITHUB_TOKEN: ${{ github.token }}
run: >
python scripts/diff_shades_gha_helper.py config ${{ github.event_name }}
${{ matrix.mode }}
analysis:
name: analysis / ${{ matrix.mode }}
needs: configure
runs-on: ubuntu-latest
env:
HATCH_BUILD_HOOKS_ENABLE: "1"
# Clang is less picky with the C code it's given than gcc (and may
# generate faster binaries too).
CC: clang-18
strategy:
fail-fast: false
matrix:
include: ${{ fromJson(needs.configure.outputs.matrix) }}
steps:
- name: Checkout this repository (full clone)
uses: actions/checkout@v4
with:
# The baseline revision could be rather old so a full clone is ideal.
fetch-depth: 0
- uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install diff-shades and support dependencies
run: |
python -m pip install https://github.com/ichard26/diff-shades/archive/stable.zip
python -m pip install 'click>=8.1.7' packaging urllib3
# After checking out old revisions, this might not exist so we'll use a copy.
cat scripts/diff_shades_gha_helper.py > helper.py
git config user.name "diff-shades-gha"
git config user.email "diff-shades-gha@example.com"
- name: Attempt to use cached baseline analysis
id: baseline-cache
uses: actions/cache@v4
with:
path: ${{ matrix.baseline-analysis }}
key: ${{ matrix.baseline-cache-key }}
- name: Build and install baseline revision
if: steps.baseline-cache.outputs.cache-hit != 'true'
env:
GITHUB_TOKEN: ${{ github.token }}
run: >
${{ matrix.baseline-setup-cmd }}
&& python -m pip install .
- name: Analyze baseline revision
if: steps.baseline-cache.outputs.cache-hit != 'true'
run: >
diff-shades analyze -v --work-dir projects-cache/
${{ matrix.baseline-analysis }} ${{ matrix.force-flag }}
- name: Build and install target revision
env:
GITHUB_TOKEN: ${{ github.token }}
run: >
${{ matrix.target-setup-cmd }}
&& python -m pip install .
- name: Analyze target revision
run: >
diff-shades analyze -v --work-dir projects-cache/
${{ matrix.target-analysis }} --repeat-projects-from
${{ matrix.baseline-analysis }} ${{ matrix.force-flag }}
- name: Generate HTML diff report
run: >
diff-shades --dump-html diff.html compare --diff
${{ matrix.baseline-analysis }} ${{ matrix.target-analysis }}
- name: Upload diff report
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.mode }}-diff.html
path: diff.html
- name: Upload baseline analysis
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.baseline-analysis }}
path: ${{ matrix.baseline-analysis }}
- name: Upload target analysis
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.target-analysis }}
path: ${{ matrix.target-analysis }}
- name: Generate summary file (PR only)
if: github.event_name == 'pull_request' && matrix.mode == 'preview-changes'
run: >
python helper.py comment-body ${{ matrix.baseline-analysis }}
${{ matrix.target-analysis }} ${{ matrix.baseline-sha }}
${{ matrix.target-sha }} ${{ github.event.pull_request.number }}
- name: Upload summary file (PR only)
if: github.event_name == 'pull_request' && matrix.mode == 'preview-changes'
uses: actions/upload-artifact@v4
with:
name: .pr-comment.json
path: .pr-comment.json
- name: Verify zero changes (PR only)
if: matrix.mode == 'assert-no-changes'
run: >
diff-shades compare --check ${{ matrix.baseline-analysis }} ${{ matrix.target-analysis }}
|| (echo "Please verify you didn't change the stable code style unintentionally!" && exit 1)
- name: Check for failed files for target revision
# Even if the previous step failed, we should still check for failed files.
if: always()
run: >
diff-shades show-failed --check --show-log ${{ matrix.target-analysis }}
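
Outside CI, the analysis job boils down to running diff-shades twice and diffing the results. A rough local equivalent, using the install and analyze commands from the workflow (the analysis file names are illustrative):

```sh
python -m pip install https://github.com/ichard26/diff-shades/archive/stable.zip
python -m pip install .            # with the baseline revision checked out
diff-shades analyze -v --work-dir projects-cache/ baseline.json
python -m pip install .            # with the target revision checked out
diff-shades analyze -v --work-dir projects-cache/ target.json \
  --repeat-projects-from baseline.json
diff-shades --dump-html diff.html compare --diff baseline.json target.json
```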

.github/workflows/diff_shades_comment.yml

@@ -1,49 +0,0 @@
name: diff-shades-comment
on:
workflow_run:
workflows: [diff-shades]
types: [completed]
permissions:
pull-requests: write
jobs:
comment:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "*"
- name: Install support dependencies
run: |
python -m pip install pip --upgrade
python -m pip install click packaging urllib3
- name: Get details from initial workflow run
id: metadata
env:
GITHUB_TOKEN: ${{ github.token }}
run: >
python scripts/diff_shades_gha_helper.py comment-details
${{github.event.workflow_run.id }}
- name: Try to find pre-existing PR comment
if: steps.metadata.outputs.needs-comment == 'true'
id: find-comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e
with:
issue-number: ${{ steps.metadata.outputs.pr-number }}
comment-author: "github-actions[bot]"
body-includes: "diff-shades"
- name: Create or update PR comment
if: steps.metadata.outputs.needs-comment == 'true'
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ steps.metadata.outputs.pr-number }}
body: ${{ steps.metadata.outputs.comment-body }}
edit-mode: replace

.github/workflows/doc.yml

@@ -1,40 +0,0 @@
name: Documentation
on: [push, pull_request]
permissions:
contents: read
jobs:
build:
# We want to run on external PRs, but not on our own internal PRs as they'll be run
# by the push to the branch. Without this if check, checks are duplicated since
# internal PRs match both the push and pull_request events.
if:
github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
github.repository
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, windows-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v4
- name: Set up latest Python
uses: actions/setup-python@v5
with:
python-version: "3.13"
allow-prereleases: true
- name: Install dependencies
run: |
python -m pip install uv
python -m uv venv
python -m uv pip install -e ".[d]"
python -m uv pip install -r "docs/requirements.txt"
- name: Build documentation
run: sphinx-build -a -b html -W --keep-going docs/ docs/_build

.github/workflows/docker.yml

@@ -1,69 +0,0 @@
name: docker
on:
push:
branches:
- "main"
release:
types: [published]
permissions:
contents: read
jobs:
docker:
if: github.repository == 'psf/black'
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Check + set version tag
run:
echo "GIT_TAG=$(git describe --candidates=0 --tags 2> /dev/null || echo
latest_non_release)" >> $GITHUB_ENV
- name: Build and push
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: pyfound/black:latest,pyfound/black:${{ env.GIT_TAG }}
- name: Build and push latest_release tag
if:
${{ github.event_name == 'release' && github.event.action == 'published' &&
!github.event.release.prerelease }}
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: pyfound/black:latest_release
- name: Build and push latest_prerelease tag
if:
${{ github.event_name == 'release' && github.event.action == 'published' &&
github.event.release.prerelease }}
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: pyfound/black:latest_prerelease
- name: Image digest
run: echo ${{ steps.docker_build.outputs.digest }}
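
The build steps map onto a single buildx invocation. An approximate local equivalent of the "Build and push" step (assumes a configured buildx builder and a registry login; the tag is the workflow's default):

```sh
docker buildx build --platform linux/amd64,linux/arm64 \
  --tag pyfound/black:latest --push .
```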

.github/workflows/fuzz.yml

@@ -1,43 +0,0 @@
name: Fuzz
on: [push, pull_request]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
# We want to run on external PRs, but not on our own internal PRs as they'll be run
# by the push to the branch. Without this if check, checks are duplicated since
# internal PRs match both the push and pull_request events.
if:
github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
github.repository
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12.4", "3.13"]
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
allow-prereleases: true
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install --upgrade tox
- name: Run fuzz tests
run: |
tox -e fuzz

.github/workflows/lint.yml

@@ -1,48 +0,0 @@
name: Lint + format ourselves
on: [push, pull_request]
jobs:
build:
# We want to run on external PRs, but not on our own internal PRs as they'll be run
# by the push to the branch. Without this if check, checks are duplicated since
# internal PRs match both the push and pull_request events.
if:
github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
github.repository
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Assert PR target is main
if: github.event_name == 'pull_request' && github.repository == 'psf/black'
run: |
if [ "$GITHUB_BASE_REF" != "main" ]; then
echo "::error::PR targeting '$GITHUB_BASE_REF', please refile targeting 'main'." && exit 1
fi
- name: Set up latest Python
uses: actions/setup-python@v5
with:
python-version: "3.13"
allow-prereleases: true
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install -e '.'
python -m pip install tox
- name: Run pre-commit hooks
uses: pre-commit/action@v3.0.1
- name: Format ourselves
run: |
tox -e run_self
- name: Regenerate schema
run: |
tox -e generate_schema
git diff --exit-code

.github/workflows/pypi_upload.yml

@@ -1,130 +0,0 @@
name: Build and publish
on:
release:
types: [published]
pull_request:
push:
branches:
- main
permissions:
contents: read
jobs:
main:
name: sdist + pure wheel
runs-on: ubuntu-latest
if: github.event_name == 'release'
steps:
- uses: actions/checkout@v4
- name: Set up latest Python
uses: actions/setup-python@v5
with:
python-version: "3.13"
allow-prereleases: true
- name: Install latest pip, build, twine
run: |
python -m pip install --upgrade --disable-pip-version-check pip
python -m pip install --upgrade build twine
- name: Build wheel and source distributions
run: python -m build
- if: github.event_name == 'release'
name: Upload to PyPI via Twine
env:
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
run: twine upload --verbose -u '__token__' dist/*
generate_wheels_matrix:
name: generate wheels matrix
runs-on: ubuntu-latest
outputs:
include: ${{ steps.set-matrix.outputs.include }}
steps:
- uses: actions/checkout@v4
# Keep cibuildwheel version in sync with below
- name: Install cibuildwheel and pypyp
run: |
pipx install cibuildwheel==2.22.0
pipx install pypyp==1.3.0
- name: generate matrix
if: github.event_name != 'pull_request'
run: |
{
cibuildwheel --print-build-identifiers --platform linux \
| pyp 'json.dumps({"only": x, "os": "ubuntu-latest"})' \
&& cibuildwheel --print-build-identifiers --platform macos \
| pyp 'json.dumps({"only": x, "os": "macos-latest"})' \
&& cibuildwheel --print-build-identifiers --platform windows \
| pyp 'json.dumps({"only": x, "os": "windows-latest"})'
} | pyp 'json.dumps(list(map(json.loads, lines)))' > /tmp/matrix
env:
CIBW_ARCHS_LINUX: x86_64
CIBW_ARCHS_MACOS: x86_64 arm64
CIBW_ARCHS_WINDOWS: AMD64
- name: generate matrix (PR)
if: github.event_name == 'pull_request'
run: |
{
cibuildwheel --print-build-identifiers --platform linux \
| pyp 'json.dumps({"only": x, "os": "ubuntu-latest"})'
} | pyp 'json.dumps(list(map(json.loads, lines)))' > /tmp/matrix
env:
CIBW_BUILD: "cp39-* cp313-*"
CIBW_ARCHS_LINUX: x86_64
- id: set-matrix
run: echo "include=$(cat /tmp/matrix)" | tee -a $GITHUB_OUTPUT
mypyc:
name: mypyc wheels ${{ matrix.only }}
needs: generate_wheels_matrix
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
include: ${{ fromJson(needs.generate_wheels_matrix.outputs.include) }}
steps:
- uses: actions/checkout@v4
# Keep cibuildwheel version in sync with above
- uses: pypa/cibuildwheel@v2.23.3
with:
only: ${{ matrix.only }}
- name: Upload wheels as workflow artifacts
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.only }}-mypyc-wheels
path: ./wheelhouse/*.whl
- if: github.event_name == 'release'
name: Upload wheels to PyPI via Twine
env:
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
run: pipx run twine upload --verbose -u '__token__' wheelhouse/*.whl
update-stable-branch:
name: Update stable branch
needs: [main, mypyc]
runs-on: ubuntu-latest
if: github.event_name == 'release'
permissions:
contents: write
steps:
- name: Checkout stable branch
uses: actions/checkout@v4
with:
ref: stable
fetch-depth: 0
- if: github.event_name == 'release'
name: Update stable branch to release tag & push
run: |
git reset --hard ${{ github.event.release.tag_name }}
git push
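
The `generate_wheels_matrix` job above turns cibuildwheel's build identifiers into the JSON matrix consumed by the `mypyc` job. A sketch of what the pipeline emits, one JSON object per identifier (the identifiers shown are typical cibuildwheel names, listed here for illustration):

```sh
cibuildwheel --print-build-identifiers --platform linux \
  | pyp 'json.dumps({"only": x, "os": "ubuntu-latest"})'
# -> {"only": "cp39-manylinux_x86_64", "os": "ubuntu-latest"}
# -> {"only": "cp313-manylinux_x86_64", "os": "ubuntu-latest"}
# ...the final pyp stage merges the objects into a single JSON list
```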

.github/workflows/release_tests.yml

@@ -1,56 +0,0 @@
name: Release tool CI
on:
push:
paths:
- .github/workflows/release_tests.yml
- release.py
- release_tests.py
pull_request:
paths:
- .github/workflows/release_tests.yml
- release.py
- release_tests.py
jobs:
build:
# We want to run on external PRs, but not on our own internal PRs as they'll be run
# by the push to the branch. Without this if check, checks are duplicated since
# internal PRs match both the push and pull_request events.
if:
github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
github.repository
name: Running python ${{ matrix.python-version }} on ${{matrix.os}}
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ["3.13"]
os: [macOS-latest, ubuntu-latest, windows-latest]
steps:
- uses: actions/checkout@v4
with:
# Give us all history, branches and tags
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
allow-prereleases: true
- name: Print Python Version
run: python --version --version && which python
- name: Print Git Version
run: git --version && which git
- name: Update pip, setuptools + wheels
run: |
python -m pip install --upgrade pip setuptools wheel
- name: Run unit tests via coverage + print report
run: |
python -m pip install coverage
coverage run scripts/release_tests.py
coverage report --show-missing

.github/workflows/test.yml

@@ -1,110 +1,30 @@
name: Test
on:
push:
paths-ignore:
- "docs/**"
- "*.md"
pull_request:
paths-ignore:
- "docs/**"
- "*.md"
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.run_id }}
cancel-in-progress: true
on: [push, pull_request]
jobs:
main:
# We want to run on external PRs, but not on our own internal PRs as they'll be run
# by the push to the branch. Without this if check, checks are duplicated since
# internal PRs match both the push and pull_request events.
if:
github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
github.repository
build:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12.4", "3.13", "pypy-3.9"]
python-version: [3.6, 3.7]
os: [ubuntu-latest, macOS-latest, windows-latest]
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v1
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
uses: actions/setup-python@v1
with:
python-version: ${{ matrix.python-version }}
allow-prereleases: true
- name: Install tox
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install --upgrade tox
python -m pip install --upgrade coverage
python -m pip install -e ".[d]"
- name: Unit tests
if: "!startsWith(matrix.python-version, 'pypy')"
run:
tox -e ci-py$(echo ${{ matrix.python-version }} | tr -d '.') -- -v --color=yes
- name: Unit tests (pypy)
if: "startsWith(matrix.python-version, 'pypy')"
run: tox -e ci-pypy3 -- -v --color=yes
- name: Upload coverage to Coveralls
# Upload coverage if we are on the main repository and
# we're running on Linux (this action only supports Linux)
if:
github.repository == 'psf/black' && matrix.os == 'ubuntu-latest' &&
!startsWith(matrix.python-version, 'pypy')
uses: AndreMiras/coveralls-python-action@ac868b9540fad490f7ca82b8ca00480fd751ed19
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
parallel: true
flag-name: py${{ matrix.python-version }}-${{ matrix.os }}
debug: true
coveralls-finish:
needs: main
if: github.repository == 'psf/black'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Send finished signal to Coveralls
uses: AndreMiras/coveralls-python-action@ac868b9540fad490f7ca82b8ca00480fd751ed19
with:
parallel-finished: true
debug: true
uvloop:
if:
github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
github.repository
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macOS-latest]
steps:
- uses: actions/checkout@v4
- name: Set up latest Python
uses: actions/setup-python@v5
with:
python-version: "3.12.4"
- name: Install black with uvloop
run: |
python -m pip install pip --upgrade --disable-pip-version-check
python -m pip install -e ".[uvloop]"
- name: Format ourselves
run: python -m black --check src/ tests/
coverage run tests/test_black.py
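
In the newer workflow, the tox environment name is derived from the matrix entry by stripping dots, so for `python-version: "3.11"` the unit-test step expands to:

```sh
tox -e ci-py311 -- -v --color=yes
```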

.github/workflows/upload_binary.yml

@@ -1,63 +0,0 @@
name: Publish executables
on:
release:
types: [published]
permissions:
contents: write # actions/upload-release-asset needs this.
jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [windows-2019, ubuntu-22.04, macos-latest]
include:
- os: windows-2019
pathsep: ";"
asset_name: black_windows.exe
executable_mime: "application/vnd.microsoft.portable-executable"
- os: ubuntu-22.04
pathsep: ":"
asset_name: black_linux
executable_mime: "application/x-executable"
- os: macos-latest
pathsep: ":"
asset_name: black_macos
executable_mime: "application/x-mach-binary"
steps:
- uses: actions/checkout@v4
- name: Set up latest Python
uses: actions/setup-python@v5
with:
python-version: "3.12.4"
- name: Install Black and PyInstaller
run: |
python -m pip install --upgrade pip wheel
python -m pip install .[colorama]
python -m pip install pyinstaller
- name: Build executable with PyInstaller
run: >
python -m PyInstaller -F --name ${{ matrix.asset_name }} --add-data
'src/blib2to3${{ matrix.pathsep }}blib2to3' src/black/__main__.py
- name: Quickly test executable
run: |
./dist/${{ matrix.asset_name }} --version
./dist/${{ matrix.asset_name }} src --verbose
- name: Upload binary as release asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ github.event.release.upload_url }}
asset_path: dist/${{ matrix.asset_name }}
asset_name: ${{ matrix.asset_name }}
asset_content_type: ${{ matrix.executable_mime }}

18
.gitignore vendored

@@ -1,28 +1,14 @@
.venv
.coverage
.coverage.*
_build
.DS_Store
.vscode
.python-version
docs/_static/pypi.svg
.tox
__pycache__
# Packaging artifacts
black.egg-info
black.dist-info
build/
dist/
pip-wheel-metadata/
.eggs
src/_black_version.py
_black_version.py
.idea
.dmypy.json
*.swp
.hypothesis/
venv/
.ipynb_checkpoints/
node_modules/
.eggs

.pre-commit-config.yaml

@@ -1,83 +1,30 @@
# Note: don't use this config for your own repositories. Instead, see
# "Version control integration" in docs/integrations/source_version_control.md
exclude: ^(profiling/|tests/data/)
# "Version control integration" in README.md.
exclude: ^(blib2to3/|profiling/|tests/data/)
repos:
- repo: local
hooks:
- id: check-pre-commit-rev-in-example
name: Check pre-commit rev in example
language: python
entry: python -m scripts.check_pre_commit_rev_in_example
files: '(CHANGES\.md|source_version_control\.md)$'
additional_dependencies:
&version_check_dependencies [
commonmark==0.9.1,
pyyaml==6.0.1,
beautifulsoup4==4.9.3,
]
- id: black
name: black
language: system
entry: black
require_serial: true
types: [python]
- id: check-version-in-the-basics-example
name: Check black version in the basics example
language: python
entry: python -m scripts.check_version_in_basics_example
files: '(CHANGES\.md|the_basics\.md)$'
additional_dependencies: *version_check_dependencies
- repo: https://github.com/pycqa/isort
rev: 6.0.1
hooks:
- id: isort
- repo: https://github.com/pycqa/flake8
rev: 7.2.0
- repo: https://gitlab.com/pycqa/flake8
rev: 3.7.8
hooks:
- id: flake8
additional_dependencies:
- flake8-bugbear==24.2.6
- flake8-comprehensions
- flake8-simplify
exclude: ^src/blib2to3/
additional_dependencies: [flake8-bugbear]
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.15.0
rev: v0.740
hooks:
- id: mypy
exclude: ^(docs/conf.py|scripts/generate_schema.py)$
args: []
additional_dependencies: &mypy_deps
- types-PyYAML
- types-atheris
- tomli >= 0.2.6, < 2.0.0
- click >= 8.2.0
# Click is intentionally out-of-sync with pyproject.toml
# v8.2 has breaking changes. We work around them at runtime, but we need the newer stubs.
- packaging >= 22.0
- platformdirs >= 2.1.0
- pytokens >= 0.1.10
- pytest
- hypothesis
- aiohttp >= 3.7.4
- types-commonmark
- urllib3
- hypothesmith
- id: mypy
name: mypy (Python 3.10)
files: scripts/generate_schema.py
args: ["--python-version=3.10"]
additional_dependencies: *mypy_deps
exclude: ^docs/conf.py
- repo: https://github.com/rbubley/mirrors-prettier
rev: v3.5.3
- repo: https://github.com/prettier/prettier
rev: 1.18.2
hooks:
- id: prettier
types_or: [markdown, yaml, json]
exclude: \.github/workflows/diff_shades\.yml
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v5.0.0
hooks:
- id: end-of-file-fixer
- id: trailing-whitespace
ci:
autoupdate_schedule: quarterly
args: [--prose-wrap=always, --print-width=88]

.pre-commit-hooks.yaml

@@ -1,20 +1,8 @@
# Note that we recommend using https://github.com/psf/black-pre-commit-mirror instead
# This will work about 2x as fast as using the hooks in this repository
- id: black
name: black
description: "Black: The uncompromising Python code formatter"
entry: black
language: python
minimum_pre_commit_version: 2.9.2
language_version: python3
require_serial: true
types_or: [python, pyi]
- id: black-jupyter
name: black-jupyter
description:
"Black: The uncompromising Python code formatter (with Jupyter Notebook support)"
entry: black
language: python
minimum_pre_commit_version: 2.9.2
require_serial: true
types_or: [python, pyi, jupyter]
additional_dependencies: [".[jupyter]"]
types: [python]
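
From a user's side, the mirror recommended above is consumed through their own `.pre-commit-config.yaml`; a minimal sketch (the `rev` value is an illustrative release tag):

```sh
cat >> .pre-commit-config.yaml <<'EOF'
repos:
  - repo: https://github.com/psf/black-pre-commit-mirror
    rev: 24.4.2  # illustrative; pin to a real release tag
    hooks:
      - id: black
EOF
```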

.prettierrc.yaml

@@ -1,3 +0,0 @@
proseWrap: always
printWidth: 88
endOfLine: auto

.readthedocs.yaml

@@ -1,21 +0,0 @@
version: 2
formats:
- htmlzip
build:
os: ubuntu-22.04
tools:
python: "3.11"
python:
install:
- requirements: docs/requirements.txt
- method: pip
path: .
extra_requirements:
- d
sphinx:
configuration: docs/conf.py

43
.travis.yml Normal file

@@ -0,0 +1,43 @@
language: python
cache:
pip: true
directories:
- $HOME/.cache/pre-commit
env:
- TEST_CMD="coverage run tests/test_black.py"
install:
- pip install coverage coveralls pre-commit
- pip install -e '.[d]'
script:
- $TEST_CMD
after_success:
- coveralls
notifications:
on_success: change
on_failure: always
matrix:
include:
- name: "lint"
python: 3.7
env:
- TEST_CMD="pre-commit run --all-files"
- name: "3.6"
python: 3.6
- name: "3.7"
python: 3.7
- name: "3.8"
python: 3.8
before_deploy:
- pip install pyinstaller
- pyinstaller --clean -F --add-data blib2to3/:blib2to3 black.py
deploy:
provider: releases
api_key:
secure: chYvcmnRqRKtfBcAZRj62rEv0ziWuHMl6MnfQbd1MOVQ4njntI8+CCPk118dW6MWSfwTqyMFy+t9gAgQYhjkLEHMS2aK9Z2wCWki1MkBrkMw5tYoLFvPu0KQ9rIVihxsr93a/am6Oh/Hp+1uuc4zWPUf1ubX+QlCzsxjCzVso1kTJjjdN04UxvkcFR+sY2d9Qyy9WcdifChnLwdmIJKIoVOE7Imm820nzImJHkJh8iSnjBjL98gvPPeC/nWTltsbErvf2mCv4NIjzjQZvHa87c7rSJGbliNrAxCSyyvBX+JNeS8U2fGLE83do0HieyjdPbTuc27e2nsrrihgPh+hXbiJerljclfp5hsJ5qGz5sS9MU1fR7sSLiQQ2v0TYB5RRwd34TgGiLwFAZZmgZOfMUCtefCKvP8qvELMSNd99+msfPEHiuhADF0bKPTbCUa6BgUHNr6woOLmHerjPHd6NI/a8Skz/uQB4xr3spLSmfUmX0fEqyYUDphkGPNH8IsvC1/F2isecW9kOzEWmB5oCmpMTGm4TIf3C01Nx+9PVwB2Z+30hhbfIEBxD4loRFmh/hU5TIQEpneF8yoIfe9EnMaoZbq86xhADZXvLIZvpXUdm1NQZDG6na2S1fwyOUKQsW6BWLcfoZZwZlrXrViD1jBsHBV++s+lxShTeTCszlo=
file:
- dist/black
skip_cleanup: true
on:
condition: $TRAVIS_PYTHON_VERSION == '3.6'
repo: psf/black
tags: true

AUTHORS.md

@@ -1,197 +0,0 @@
# Authors
Glued together by [Łukasz Langa](mailto:lukasz@langa.pl).
Maintained with:
- [Carol Willing](mailto:carolcode@willingconsulting.com)
- [Carl Meyer](mailto:carl@oddbird.net)
- [Jelle Zijlstra](mailto:jelle.zijlstra@gmail.com)
- [Mika Naylor](mailto:mail@autophagy.io)
- [Zsolt Dollenstein](mailto:zsol.zsol@gmail.com)
- [Cooper Lees](mailto:me@cooperlees.com)
- [Richard Si](mailto:sichard26@gmail.com)
- [Felix Hildén](mailto:felix.hilden@gmail.com)
- [Batuhan Taskaya](mailto:batuhan@python.org)
- [Shantanu Jain](mailto:hauntsaninja@gmail.com)
Multiple contributions by:
- [Abdur-Rahmaan Janhangeer](mailto:arj.python@gmail.com)
- [Adam Johnson](mailto:me@adamj.eu)
- [Adam Williamson](mailto:adamw@happyassassin.net)
- [Alexander Huynh](mailto:ahrex-gh-psf-black@e.sc)
- [Alexandr Artemyev](mailto:mogost@gmail.com)
- [Alex Vandiver](mailto:github@chmrr.net)
- [Allan Simon](mailto:allan.simon@supinfo.com)
- Anders-Petter Ljungquist
- [Amethyst Reese](mailto:amy@n7.gg)
- [Andrew Thorp](mailto:andrew.thorp.dev@gmail.com)
- [Andrew Zhou](mailto:andrewfzhou@gmail.com)
- [Andrey](mailto:dyuuus@yandex.ru)
- [Andy Freeland](mailto:andy@andyfreeland.net)
- [Anthony Sottile](mailto:asottile@umich.edu)
- [Antonio Ossa Guerra](mailto:aaossa+black@uc.cl)
- [Arjaan Buijk](mailto:arjaan.buijk@gmail.com)
- [Arnav Borbornah](mailto:arnavborborah11@gmail.com)
- [Artem Malyshev](mailto:proofit404@gmail.com)
- [Asger Hautop Drewsen](mailto:asgerdrewsen@gmail.com)
- [Augie Fackler](mailto:raf@durin42.com)
- [Aviskar KC](mailto:aviskarkc10@gmail.com)
- Batuhan Taşkaya
- [Benjamin Wohlwend](mailto:bw@piquadrat.ch)
- [Benjamin Woodruff](mailto:github@benjam.info)
- [Bharat Raghunathan](mailto:bharatraghunthan9767@gmail.com)
- [Brandt Bucher](mailto:brandtbucher@gmail.com)
- [Brett Cannon](mailto:brett@python.org)
- [Bryan Bugyi](mailto:bryan.bugyi@rutgers.edu)
- [Bryan Forbes](mailto:bryan@reigndropsfall.net)
- [Calum Lind](mailto:calumlind@gmail.com)
- [Charles](mailto:peacech@gmail.com)
- Charles Reid
- [Christian Clauss](mailto:cclauss@bluewin.ch)
- [Christian Heimes](mailto:christian@python.org)
- [Chuck Wooters](mailto:chuck.wooters@microsoft.com)
- [Chris Rose](mailto:offline@offby1.net)
- Codey Oxley
- [Cong](mailto:congusbongus@gmail.com)
- [Cooper Ry Lees](mailto:me@cooperlees.com)
- [Dan Davison](mailto:dandavison7@gmail.com)
- [Daniel Hahler](mailto:github@thequod.de)
- [Daniel M. Capella](mailto:polycitizen@gmail.com)
- Daniele Esposti
- [David Hotham](mailto:david.hotham@metaswitch.com)
- [David Lukes](mailto:dafydd.lukes@gmail.com)
- [David Szotten](mailto:davidszotten@gmail.com)
- [Denis Laxalde](mailto:denis@laxalde.org)
- [Douglas Thor](mailto:dthor@transphormusa.com)
- dylanjblack
- [Eli Treuherz](mailto:eli@treuherz.com)
- [Emil Hessman](mailto:emil@hessman.se)
- [Felix Kohlgrüber](mailto:felix.kohlgrueber@gmail.com)
- [Florent Thiery](mailto:fthiery@gmail.com)
- Francisco
- [Giacomo Tagliabue](mailto:giacomo.tag@gmail.com)
- [Greg Gandenberger](mailto:ggandenberger@shoprunner.com)
- [Gregory P. Smith](mailto:greg@krypto.org)
- Gustavo Camargo
- hauntsaninja
- [Hadi Alqattan](mailto:alqattanhadizaki@gmail.com)
- [Hassan Abouelela](mailto:hassan@hassanamr.com)
- [Heaford](mailto:dan@heaford.com)
- [Hugo Barrera](mailto:hugo@barrera.io)
- Hugo van Kemenade
- [Hynek Schlawack](mailto:hs@ox.cx)
- [Ionite](mailto:dev@ionite.io)
- [Ivan Katanić](mailto:ivan.katanic@gmail.com)
- [Jakub Kadlubiec](mailto:jakub.kadlubiec@skyscanner.net)
- [Jakub Warczarek](mailto:jakub.warczarek@gmail.com)
- [Jan Hnátek](mailto:jan.hnatek@gmail.com)
- [Jason Fried](mailto:me@jasonfried.info)
- [Jason Friedland](mailto:jason@friedland.id.au)
- [jgirardet](mailto:ijkl@netc.fr)
- Jim Brännlund
- [Jimmy Jia](mailto:tesrin@gmail.com)
- [Joe Antonakakis](mailto:jma353@cornell.edu)
- [Jon Dufresne](mailto:jon.dufresne@gmail.com)
- [Jonas Obrist](mailto:ojiidotch@gmail.com)
- [Jonty Wareing](mailto:jonty@jonty.co.uk)
- [Jose Nazario](mailto:jose.monkey.org@gmail.com)
- [Joseph Larson](mailto:larson.joseph@gmail.com)
- [Josh Bode](mailto:joshbode@fastmail.com)
- [Josh Holland](mailto:anowlcalledjosh@gmail.com)
- [Joshua Cannon](mailto:joshdcannon@gmail.com)
- [José Padilla](mailto:jpadilla@webapplicate.com)
- [Juan Luis Cano Rodríguez](mailto:hello@juanlu.space)
- [kaiix](mailto:kvn.hou@gmail.com)
- [Katie McLaughlin](mailto:katie@glasnt.com)
- Katrin Leinweber
- [Keith Smiley](mailto:keithbsmiley@gmail.com)
- [Kenyon Ralph](mailto:kenyon@kenyonralph.com)
- [Kevin Kirsche](mailto:Kev.Kirsche+GitHub@gmail.com)
- [Kyle Hausmann](mailto:kyle.hausmann@gmail.com)
- [Kyle Sunden](mailto:sunden@wisc.edu)
- Lawrence Chan
- [Linus Groh](mailto:mail@linusgroh.de)
- [Loren Carvalho](mailto:comradeloren@gmail.com)
- [Luka Sterbic](mailto:luka.sterbic@gmail.com)
- [LukasDrude](mailto:mail@lukas-drude.de)
- Mahmoud Hossam
- Mariatta
- [Matt VanEseltine](mailto:vaneseltine@gmail.com)
- [Matthew Clapp](mailto:itsayellow+dev@gmail.com)
- [Matthew Walster](mailto:matthew@walster.org)
- Max Smolens
- [Michael Aquilina](mailto:michaelaquilina@gmail.com)
- [Michael Flaxman](mailto:michael.flaxman@gmail.com)
- [Michael J. Sullivan](mailto:sully@msully.net)
- [Michael McClimon](mailto:michael@mcclimon.org)
- [Miguel Gaiowski](mailto:miggaiowski@gmail.com)
- [Mike](mailto:roshi@fedoraproject.org)
- [mikehoyio](mailto:mikehoy@gmail.com)
- [Min ho Kim](mailto:minho42@gmail.com)
- [Miroslav Shubernetskiy](mailto:miroslav@miki725.com)
- MomIsBestFriend
- [Nathan Goldbaum](mailto:ngoldbau@illinois.edu)
- [Nathan Hunt](mailto:neighthan.hunt@gmail.com)
- [Neraste](mailto:neraste.herr10@gmail.com)
- [Nikolaus Waxweiler](mailto:madigens@gmail.com)
- [Ofek Lev](mailto:ofekmeister@gmail.com)
- [Osaetin Daniel](mailto:osaetindaniel@gmail.com)
- [otstrel](mailto:otstrel@gmail.com)
- [Pablo Galindo](mailto:Pablogsal@gmail.com)
- [Paul Ganssle](mailto:p.ganssle@gmail.com)
- [Paul Meinhardt](mailto:mnhrdt@gmail.com)
- [Peter Bengtsson](mailto:mail@peterbe.com)
- [Peter Grayson](mailto:pete@jpgrayson.net)
- [Peter Stensmyr](mailto:peter.stensmyr@gmail.com)
- pmacosta
- [Quentin Pradet](mailto:quentin@pradet.me)
- [Ralf Schmitt](mailto:ralf@systemexit.de)
- [Ramón Valles](mailto:mroutis@protonmail.com)
- [Richard Fearn](mailto:richardfearn@gmail.com)
- [Rishikesh Jha](mailto:rishijha424@gmail.com)
- [Rupert Bedford](mailto:rupert@rupertb.com)
- Russell Davis
- [Sagi Shadur](mailto:saroad2@gmail.com)
- [Rémi Verschelde](mailto:rverschelde@gmail.com)
- [Sami Salonen](mailto:sakki@iki.fi)
- [Samuel Cormier-Iijima](mailto:samuel@cormier-iijima.com)
- [Sanket Dasgupta](mailto:sanketdasgupta@gmail.com)
- Sergi
- [Scott Stevenson](mailto:scott@stevenson.io)
- Shantanu
- [shaoran](mailto:shaoran@sakuranohana.org)
- [Shinya Fujino](mailto:shf0811@gmail.com)
- springstan
- [Stavros Korokithakis](mailto:hi@stavros.io)
- [Stephen Rosen](mailto:sirosen@globus.org)
- [Steven M. Vascellaro](mailto:S.Vascellaro@gmail.com)
- [Sunil Kapil](mailto:snlkapil@gmail.com)
- [Sébastien Eustace](mailto:sebastien.eustace@gmail.com)
- [Tal Amuyal](mailto:TalAmuyal@gmail.com)
- [Terrance](mailto:git@terrance.allofti.me)
- [Thom Lu](mailto:thomas.c.lu@gmail.com)
- [Thomas Grainger](mailto:tagrain@gmail.com)
- [Tim Gates](mailto:tim.gates@iress.com)
- [Tim Swast](mailto:swast@google.com)
- [Timo](mailto:timo_tk@hotmail.com)
- Toby Fleming
- [Tom Christie](mailto:tom@tomchristie.com)
- [Tony Narlock](mailto:tony@git-pull.com)
- [Tsuyoshi Hombashi](mailto:tsuyoshi.hombashi@gmail.com)
- [Tushar Chandra](mailto:tusharchandra2018@u.northwestern.edu)
- [Tushar Sadhwani](mailto:tushar.sadhwani000@gmail.com)
- [Tzu-ping Chung](mailto:uranusjr@gmail.com)
- [Utsav Shah](mailto:ukshah2@illinois.edu)
- utsav-dbx
- vezeli
- [Ville Skyttä](mailto:ville.skytta@iki.fi)
- [Vishwas B Sharma](mailto:sharma.vishwas88@gmail.com)
- [Vlad Emelianov](mailto:volshebnyi@gmail.com)
- [williamfzc](mailto:178894043@qq.com)
- [wouter bolsterlee](mailto:wouter@bolsterl.ee)
- Yazdan
- [Yngve Høiseth](mailto:yngve@hoiseth.net)
- [Yurii Karabas](mailto:1998uriyyo@gmail.com)
- [Zac Hatfield-Dodds](mailto:zac@zhd.dev)

1997
CHANGES.md

File diff suppressed because it is too large

CITATION.cff

@@ -1,22 +0,0 @@
cff-version: 1.2.0
title: "Black: The uncompromising Python code formatter"
message: >-
If you use this software, please cite it using the metadata from this file.
type: software
authors:
- family-names: Langa
given-names: Łukasz
- name: "contributors to Black"
repository-code: "https://github.com/psf/black"
url: "https://black.readthedocs.io/en/stable/"
abstract: >-
Black is the uncompromising Python code formatter. By using it, you agree to cede
control over minutiae of hand-formatting. In return, Black gives you speed,
determinism, and freedom from pycodestyle nagging about formatting. You will save time
and mental energy for more important matters.
Blackened code looks the same regardless of the project you're reading. Formatting
becomes transparent after a while and you can focus on the content instead.
Black makes code review faster by producing the smallest diffs possible.
license: MIT

CONTRIBUTING.md

@@ -1,13 +1,54 @@
# Contributing to _Black_
Welcome future contributor! We're happy to see you willing to make the project better.
Welcome! Happy to see you willing to make the project better. Have you read the entire
[user documentation](https://black.readthedocs.io/en/latest/) yet?
If you aren't familiar with _Black_, or are looking for documentation on something
specific, the [user documentation](https://black.readthedocs.io/en/latest/) is the best
place to look.
## Bird's eye view
For getting started on contributing, please read the
[contributing documentation](https://black.readthedocs.org/en/latest/contributing/) for
all you need to know.
In terms of inspiration, _Black_ is about as configurable as _gofmt_. This is
deliberate.
Thank you, and we look forward to your contributions!
Bug reports and fixes are always welcome! Please follow the
[issue template on GitHub](https://github.com/psf/black/issues/new) for best results.
Before you suggest a new feature or configuration knob, ask yourself why you want it. If
it enables better integration with some workflow, fixes an inconsistency, speeds things
up, and so on - go for it! On the other hand, if your answer is "because I don't like a
particular formatting" then you're not ready to embrace _Black_ yet. Such changes are
unlikely to get accepted. You can still try but prepare to be disappointed.
## Technicalities
Development on the latest version of Python is preferred. As of this writing it's 3.8.
You can use any operating system. I am using macOS myself and CentOS at work.
Install all development dependencies using:
```
$ pipenv install --dev
$ pipenv shell
$ pre-commit install
```
If you haven't used `pipenv` before but are comfortable with virtualenvs, just run
`pip install pipenv` in the virtualenv you're already using and invoke the command above
from the cloned _Black_ repo. It will do the correct thing.
Before submitting pull requests, run tests with:
```
$ python setup.py test
```
## Hygiene
If you're fixing a bug, add a test. Run it first to confirm it fails, then fix the bug,
run it again to confirm it's really fixed.
If adding a new feature, add a test. In fact, always add a test. But wait, before adding
any large feature, first open an issue for us to discuss the idea first.
## Finally
Thanks again for your interest in improving the project! You're taking action when most
people decide to sit and watch.

Dockerfile

@@ -1,22 +0,0 @@
FROM python:3.12-slim AS builder
RUN mkdir /src
COPY . /src/
ENV VIRTUAL_ENV=/opt/venv
ENV HATCH_BUILD_HOOKS_ENABLE=1
# Install build tools to compile black + dependencies
RUN apt update && apt install -y build-essential git python3-dev
RUN python -m venv $VIRTUAL_ENV
RUN python -m pip install --no-cache-dir hatch hatch-fancy-pypi-readme hatch-vcs
RUN . /opt/venv/bin/activate && pip install --no-cache-dir --upgrade pip setuptools \
&& cd /src && hatch build -t wheel \
&& pip install --no-cache-dir dist/*-cp* \
&& pip install black[colorama,d,uvloop]
FROM python:3.12-slim
# copy only Python packages to limit the image size
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
CMD ["/opt/venv/bin/black"]
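
A typical invocation of the resulting image, mounting the project to be formatted into the container (the mount paths are illustrative):

```sh
docker run --rm --volume "$(pwd)":/src --workdir /src pyfound/black black .
```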

3
MANIFEST.in Normal file

@@ -0,0 +1,3 @@
include *.rst *.md LICENSE
recursive-include blib2to3 *.txt *.py LICENSE
recursive-include tests *.txt *.out *.diff *.py *.pyi *.pie *.toml

32
Pipfile Normal file

@@ -0,0 +1,32 @@
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"
[packages]
aiohttp = ">=3.3.2"
attrs = ">=18.1.0"
click = ">=6.5"
appdirs = "*"
toml = ">=0.9.4"
black = {path = ".",extras = ["d"],editable = true}
aiohttp-cors = "*"
typed-ast = "==1.4.0"
regex = ">=2019.8"
pathspec = ">=0.6"
[dev-packages]
pre-commit = "*"
coverage = "*"
docutils = "==0.15" # not a direct dependency, see https://github.com/pypa/pipenv/issues/3865
flake8 = "*"
flake8-bugbear = "*"
flake8-mypy = "*"
mypy = ">=0.740"
readme_renderer = "*"
recommonmark = "*"
Sphinx = "*"
setuptools = ">=39.2.0"
twine = ">=1.11.0"
wheel = ">=0.31.1"
setuptools-scm = "*"

735
Pipfile.lock generated Normal file

@@ -0,0 +1,735 @@
{
"_meta": {
"hash": {
"sha256": "0ae3ab3331dc4448fc4def999af38d91c13a566a482a6935c4a880f6bd9a6614"
},
"pipfile-spec": 6,
"requires": {},
"sources": [
{
"name": "pypi",
"url": "https://pypi.python.org/simple",
"verify_ssl": true
}
]
},
"default": {
"aiohttp": {
"hashes": [
"sha256:1e984191d1ec186881ffaed4581092ba04f7c61582a177b187d3a2f07ed9719e",
"sha256:259ab809ff0727d0e834ac5e8a283dc5e3e0ecc30c4d80b3cd17a4139ce1f326",
"sha256:2f4d1a4fdce595c947162333353d4a44952a724fba9ca3205a3df99a33d1307a",
"sha256:32e5f3b7e511aa850829fbe5aa32eb455e5534eaa4b1ce93231d00e2f76e5654",
"sha256:344c780466b73095a72c616fac5ea9c4665add7fc129f285fbdbca3cccf4612a",
"sha256:460bd4237d2dbecc3b5ed57e122992f60188afe46e7319116da5eb8a9dfedba4",
"sha256:4c6efd824d44ae697814a2a85604d8e992b875462c6655da161ff18fd4f29f17",
"sha256:50aaad128e6ac62e7bf7bd1f0c0a24bc968a0c0590a726d5a955af193544bcec",
"sha256:6206a135d072f88da3e71cc501c59d5abffa9d0bb43269a6dcd28d66bfafdbdd",
"sha256:65f31b622af739a802ca6fd1a3076fd0ae523f8485c52924a89561ba10c49b48",
"sha256:ae55bac364c405caa23a4f2d6cfecc6a0daada500274ffca4a9230e7129eac59",
"sha256:b778ce0c909a2653741cb4b1ac7015b5c130ab9c897611df43ae6a58523cb965"
],
"index": "pypi",
"version": "==3.6.2"
},
"aiohttp-cors": {
"hashes": [
"sha256:0451ba59fdf6909d0e2cd21e4c0a43752bc0703d33fc78ae94d9d9321710193e",
"sha256:4d39c6d7100fd9764ed1caf8cebf0eb01bf5e3f24e2e073fda6234bc48b19f5d"
],
"index": "pypi",
"version": "==0.7.0"
},
"appdirs": {
"hashes": [
"sha256:9e5896d1372858f8dd3344faf4e5014d21849c756c8d5701f78f8a103b372d92",
"sha256:d8b24664561d0d34ddfaec54636d502d7cea6e29c3eaf68f3df6180863e2166e"
],
"index": "pypi",
"version": "==1.4.3"
},
"async-timeout": {
"hashes": [
"sha256:0c3c816a028d47f659d6ff5c745cb2acf1f966da1fe5c19c77a70282b25f4c5f",
"sha256:4291ca197d287d274d0b6cb5d6f8f8f82d434ed288f962539ff18cc9012f9ea3"
],
"version": "==3.0.1"
},
"attrs": {
"hashes": [
"sha256:08a96c641c3a74e44eb59afb61a24f2cb9f4d7188748e76ba4bb5edfa3cb7d1c",
"sha256:f7b7ce16570fe9965acd6d30101a28f62fb4a7f9e926b3bbc9b61f8b04247e72"
],
"index": "pypi",
"version": "==19.3.0"
},
"black": {
"editable": true,
"extras": [
"d"
],
"path": "."
},
"chardet": {
"hashes": [
"sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae",
"sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691"
],
"version": "==3.0.4"
},
"click": {
"hashes": [
"sha256:2335065e6395b9e67ca716de5f7526736bfa6ceead690adf616d925bdc622b13",
"sha256:5b94b49521f6456670fdb30cd82a4eca9412788a93fa6dd6df72c94d5a8ff2d7"
],
"index": "pypi",
"version": "==7.0"
},
"idna": {
"hashes": [
"sha256:c357b3f628cf53ae2c4c05627ecc484553142ca23264e593d327bcde5e9c3407",
"sha256:ea8b7f6188e6fa117537c3df7da9fc686d485087abf6ac197f9c46432f7e4a3c"
],
"version": "==2.8"
},
"multidict": {
"hashes": [
"sha256:024b8129695a952ebd93373e45b5d341dbb87c17ce49637b34000093f243dd4f",
"sha256:041e9442b11409be5e4fc8b6a97e4bcead758ab1e11768d1e69160bdde18acc3",
"sha256:045b4dd0e5f6121e6f314d81759abd2c257db4634260abcfe0d3f7083c4908ef",
"sha256:047c0a04e382ef8bd74b0de01407e8d8632d7d1b4db6f2561106af812a68741b",
"sha256:068167c2d7bbeebd359665ac4fff756be5ffac9cda02375b5c5a7c4777038e73",
"sha256:148ff60e0fffa2f5fad2eb25aae7bef23d8f3b8bdaf947a65cdbe84a978092bc",
"sha256:1d1c77013a259971a72ddaa83b9f42c80a93ff12df6a4723be99d858fa30bee3",
"sha256:1d48bc124a6b7a55006d97917f695effa9725d05abe8ee78fd60d6588b8344cd",
"sha256:31dfa2fc323097f8ad7acd41aa38d7c614dd1960ac6681745b6da124093dc351",
"sha256:34f82db7f80c49f38b032c5abb605c458bac997a6c3142e0d6c130be6fb2b941",
"sha256:3d5dd8e5998fb4ace04789d1d008e2bb532de501218519d70bb672c4c5a2fc5d",
"sha256:4a6ae52bd3ee41ee0f3acf4c60ceb3f44e0e3bc52ab7da1c2b2aa6703363a3d1",
"sha256:4b02a3b2a2f01d0490dd39321c74273fed0568568ea0e7ea23e02bd1fb10a10b",
"sha256:4b843f8e1dd6a3195679d9838eb4670222e8b8d01bc36c9894d6c3538316fa0a",
"sha256:5de53a28f40ef3c4fd57aeab6b590c2c663de87a5af76136ced519923d3efbb3",
"sha256:61b2b33ede821b94fa99ce0b09c9ece049c7067a33b279f343adfe35108a4ea7",
"sha256:6a3a9b0f45fd75dc05d8e93dc21b18fc1670135ec9544d1ad4acbcf6b86781d0",
"sha256:76ad8e4c69dadbb31bad17c16baee61c0d1a4a73bed2590b741b2e1a46d3edd0",
"sha256:7ba19b777dc00194d1b473180d4ca89a054dd18de27d0ee2e42a103ec9b7d014",
"sha256:7c1b7eab7a49aa96f3db1f716f0113a8a2e93c7375dd3d5d21c4941f1405c9c5",
"sha256:7fc0eee3046041387cbace9314926aa48b681202f8897f8bff3809967a049036",
"sha256:8ccd1c5fff1aa1427100ce188557fc31f1e0a383ad8ec42c559aabd4ff08802d",
"sha256:8e08dd76de80539d613654915a2f5196dbccc67448df291e69a88712ea21e24a",
"sha256:c18498c50c59263841862ea0501da9f2b3659c00db54abfbf823a80787fde8ce",
"sha256:c49db89d602c24928e68c0d510f4fcf8989d77defd01c973d6cbe27e684833b1",
"sha256:ce20044d0317649ddbb4e54dab3c1bcc7483c78c27d3f58ab3d0c7e6bc60d26a",
"sha256:d1071414dd06ca2eafa90c85a079169bfeb0e5f57fd0b45d44c092546fcd6fd9",
"sha256:d3be11ac43ab1a3e979dac80843b42226d5d3cccd3986f2e03152720a4297cd7",
"sha256:db603a1c235d110c860d5f39988ebc8218ee028f07a7cbc056ba6424372ca31b"
],
"version": "==4.5.2"
},
"pathspec": {
"hashes": [
"sha256:e285ccc8b0785beadd4c18e5708b12bb8fcf529a1e61215b3feff1d1e559ea5c"
],
"index": "pypi",
"version": "==0.6.0"
},
"regex": {
"hashes": [
"sha256:1e9f9bc44ca195baf0040b1938e6801d2f3409661c15fe57f8164c678cfc663f",
"sha256:2079f83ac094b436a8078988e38af44286c67eb5a259becfc563e8a81cb34a11",
"sha256:43dbcc90e1bd42d905abc8dac75a64768d4b621c9e01adfcd8a57df1af65793e",
"sha256:587b62d48ca359d2d4f02d486f1f0aa9a20fbaf23a9d4198c4bed72ab2f6c849",
"sha256:835ccdcdc612821edf132c20aef3eaaecfb884c9454fdc480d5887562594ac61",
"sha256:93f6c9da57e704e128d90736430c5c59dd733327882b371b0cae8833106c2a21",
"sha256:a46f27d267665016acb3ec8c6046ec5eae8cf80befe85ba47f43c6f5ec636dcd",
"sha256:c5c8999b3a341b21ac2c6ec704cfcccbc50f1fedd61b6a8ee915ca7fd4b0a557",
"sha256:d4d1829cf97632673aa49f378b0a2c3925acd795148c5ace8ef854217abbee89",
"sha256:d96479257e8e4d1d7800adb26bf9c5ca5bab1648a1eddcac84d107b73dc68327",
"sha256:f20f4912daf443220436759858f96fefbfc6c6ba9e67835fd6e4e9b73582791a",
"sha256:f2b37b5b2c2a9d56d9e88efef200ec09c36c7f323f9d58d0b985a90923df386d",
"sha256:fe765b809a1f7ce642c2edeee351e7ebd84391640031ba4b60af8d91a9045890"
],
"index": "pypi",
"version": "==2019.8.19"
},
"toml": {
"hashes": [
"sha256:229f81c57791a41d65e399fc06bf0848bab550a9dfd5ed66df18ce5f05e73d5c",
"sha256:235682dd292d5899d361a811df37e04a8828a5b1da3115886b73cf81ebc9100e"
],
"index": "pypi",
"version": "==0.10.0"
},
"typed-ast": {
"hashes": [
"sha256:1170afa46a3799e18b4c977777ce137bb53c7485379d9706af8a59f2ea1aa161",
"sha256:18511a0b3e7922276346bcb47e2ef9f38fb90fd31cb9223eed42c85d1312344e",
"sha256:262c247a82d005e43b5b7f69aff746370538e176131c32dda9cb0f324d27141e",
"sha256:2b907eb046d049bcd9892e3076c7a6456c93a25bebfe554e931620c90e6a25b0",
"sha256:354c16e5babd09f5cb0ee000d54cfa38401d8b8891eefa878ac772f827181a3c",
"sha256:48e5b1e71f25cfdef98b013263a88d7145879fbb2d5185f2a0c79fa7ebbeae47",
"sha256:4e0b70c6fc4d010f8107726af5fd37921b666f5b31d9331f0bd24ad9a088e631",
"sha256:630968c5cdee51a11c05a30453f8cd65e0cc1d2ad0d9192819df9978984529f4",
"sha256:66480f95b8167c9c5c5c87f32cf437d585937970f3fc24386f313a4c97b44e34",
"sha256:71211d26ffd12d63a83e079ff258ac9d56a1376a25bc80b1cdcdf601b855b90b",
"sha256:7954560051331d003b4e2b3eb822d9dd2e376fa4f6d98fee32f452f52dd6ebb2",
"sha256:838997f4310012cf2e1ad3803bce2f3402e9ffb71ded61b5ee22617b3a7f6b6e",
"sha256:95bd11af7eafc16e829af2d3df510cecfd4387f6453355188342c3e79a2ec87a",
"sha256:bc6c7d3fa1325a0c6613512a093bc2a2a15aeec350451cbdf9e1d4bffe3e3233",
"sha256:cc34a6f5b426748a507dd5d1de4c1978f2eb5626d51326e43280941206c209e1",
"sha256:d755f03c1e4a51e9b24d899561fec4ccaf51f210d52abdf8c07ee2849b212a36",
"sha256:d7c45933b1bdfaf9f36c579671fec15d25b06c8398f113dab64c18ed1adda01d",
"sha256:d896919306dd0aa22d0132f62a1b78d11aaf4c9fc5b3410d3c666b818191630a",
"sha256:fdc1c9bbf79510b76408840e009ed65958feba92a88833cdceecff93ae8fff66",
"sha256:ffde2fbfad571af120fcbfbbc61c72469e72f550d676c3342492a9dfdefb8f12"
],
"index": "pypi",
"version": "==1.4.0"
},
"yarl": {
"hashes": [
"sha256:024ecdc12bc02b321bc66b41327f930d1c2c543fa9a561b39861da9388ba7aa9",
"sha256:2f3010703295fbe1aec51023740871e64bb9664c789cba5a6bdf404e93f7568f",
"sha256:3890ab952d508523ef4881457c4099056546593fa05e93da84c7250516e632eb",
"sha256:3e2724eb9af5dc41648e5bb304fcf4891adc33258c6e14e2a7414ea32541e320",
"sha256:5badb97dd0abf26623a9982cd448ff12cb39b8e4c94032ccdedf22ce01a64842",
"sha256:73f447d11b530d860ca1e6b582f947688286ad16ca42256413083d13f260b7a0",
"sha256:7ab825726f2940c16d92aaec7d204cfc34ac26c0040da727cf8ba87255a33829",
"sha256:b25de84a8c20540531526dfbb0e2d2b648c13fd5dd126728c496d7c3fea33310",
"sha256:c6e341f5a6562af74ba55205dbd56d248daf1b5748ec48a0200ba227bb9e33f4",
"sha256:c9bb7c249c4432cd47e75af3864bc02d26c9594f49c82e2a28624417f0ae63b8",
"sha256:e060906c0c585565c718d1c3841747b61c5439af2211e185f6739a9412dfbde1"
],
"version": "==1.3.0"
}
},
"develop": {
"alabaster": {
"hashes": [
"sha256:446438bdcca0e05bd45ea2de1668c1d9b032e1a9154c2c259092d77031ddd359",
"sha256:a661d72d58e6ea8a57f7a86e37d86716863ee5e92788398526d58b26a4e4dc02"
],
"version": "==0.7.12"
},
"aspy.yaml": {
"hashes": [
"sha256:463372c043f70160a9ec950c3f1e4c3a82db5fca01d334b6bc89c7164d744bdc",
"sha256:e7c742382eff2caed61f87a39d13f99109088e5e93f04d76eb8d4b28aa143f45"
],
"version": "==1.3.0"
},
"attrs": {
"hashes": [
"sha256:08a96c641c3a74e44eb59afb61a24f2cb9f4d7188748e76ba4bb5edfa3cb7d1c",
"sha256:f7b7ce16570fe9965acd6d30101a28f62fb4a7f9e926b3bbc9b61f8b04247e72"
],
"index": "pypi",
"version": "==19.3.0"
},
"babel": {
"hashes": [
"sha256:af92e6106cb7c55286b25b38ad7695f8b4efb36a90ba483d7f7a6628c46158ab",
"sha256:e86135ae101e31e2c8ec20a4e0c5220f4eed12487d5cf3f78be7e98d3a57fc28"
],
"version": "==2.7.0"
},
"bleach": {
"hashes": [
"sha256:213336e49e102af26d9cde77dd2d0397afabc5a6bf2fed985dc35b5d1e285a16",
"sha256:3fdf7f77adcf649c9911387df51254b813185e32b2c6619f690b593a617e19fa"
],
"version": "==3.1.0"
},
"certifi": {
"hashes": [
"sha256:e4f3620cfea4f83eedc95b24abd9cd56f3c4b146dd0177e83a21b4eb49e21e50",
"sha256:fd7c7c74727ddcf00e9acd26bba8da604ffec95bf1c2144e67aff7a8b50e6cef"
],
"version": "==2019.9.11"
},
"cfgv": {
"hashes": [
"sha256:edb387943b665bf9c434f717bf630fa78aecd53d5900d2e05da6ad6048553144",
"sha256:fbd93c9ab0a523bf7daec408f3be2ed99a980e20b2d19b50fc184ca6b820d289"
],
"version": "==2.0.1"
},
"chardet": {
"hashes": [
"sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae",
"sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691"
],
"version": "==3.0.4"
},
"commonmark": {
"hashes": [
"sha256:452f9dc859be7f06631ddcb328b6919c67984aca654e5fefb3914d54691aed60",
"sha256:da2f38c92590f83de410ba1a3cbceafbc74fee9def35f9251ba9a971d6d66fd9"
],
"version": "==0.9.1"
},
"coverage": {
"hashes": [
"sha256:08907593569fe59baca0bf152c43f3863201efb6113ecb38ce7e97ce339805a6",
"sha256:0be0f1ed45fc0c185cfd4ecc19a1d6532d72f86a2bac9de7e24541febad72650",
"sha256:141f08ed3c4b1847015e2cd62ec06d35e67a3ac185c26f7635f4406b90afa9c5",
"sha256:19e4df788a0581238e9390c85a7a09af39c7b539b29f25c89209e6c3e371270d",
"sha256:23cc09ed395b03424d1ae30dcc292615c1372bfba7141eb85e11e50efaa6b351",
"sha256:245388cda02af78276b479f299bbf3783ef0a6a6273037d7c60dc73b8d8d7755",
"sha256:331cb5115673a20fb131dadd22f5bcaf7677ef758741312bee4937d71a14b2ef",
"sha256:386e2e4090f0bc5df274e720105c342263423e77ee8826002dcffe0c9533dbca",
"sha256:3a794ce50daee01c74a494919d5ebdc23d58873747fa0e288318728533a3e1ca",
"sha256:60851187677b24c6085248f0a0b9b98d49cba7ecc7ec60ba6b9d2e5574ac1ee9",
"sha256:63a9a5fc43b58735f65ed63d2cf43508f462dc49857da70b8980ad78d41d52fc",
"sha256:6b62544bb68106e3f00b21c8930e83e584fdca005d4fffd29bb39fb3ffa03cb5",
"sha256:6ba744056423ef8d450cf627289166da65903885272055fb4b5e113137cfa14f",
"sha256:7494b0b0274c5072bddbfd5b4a6c6f18fbbe1ab1d22a41e99cd2d00c8f96ecfe",
"sha256:826f32b9547c8091679ff292a82aca9c7b9650f9fda3e2ca6bf2ac905b7ce888",
"sha256:93715dffbcd0678057f947f496484e906bf9509f5c1c38fc9ba3922893cda5f5",
"sha256:9a334d6c83dfeadae576b4d633a71620d40d1c379129d587faa42ee3e2a85cce",
"sha256:af7ed8a8aa6957aac47b4268631fa1df984643f07ef00acd374e456364b373f5",
"sha256:bf0a7aed7f5521c7ca67febd57db473af4762b9622254291fbcbb8cd0ba5e33e",
"sha256:bf1ef9eb901113a9805287e090452c05547578eaab1b62e4ad456fcc049a9b7e",
"sha256:c0afd27bc0e307a1ffc04ca5ec010a290e49e3afbe841c5cafc5c5a80ecd81c9",
"sha256:dd579709a87092c6dbee09d1b7cfa81831040705ffa12a1b248935274aee0437",
"sha256:df6712284b2e44a065097846488f66840445eb987eb81b3cc6e4149e7b6982e1",
"sha256:e07d9f1a23e9e93ab5c62902833bf3e4b1f65502927379148b6622686223125c",
"sha256:e2ede7c1d45e65e209d6093b762e98e8318ddeff95317d07a27a2140b80cfd24",
"sha256:e4ef9c164eb55123c62411f5936b5c2e521b12356037b6e1c2617cef45523d47",
"sha256:eca2b7343524e7ba246cab8ff00cab47a2d6d54ada3b02772e908a45675722e2",
"sha256:eee64c616adeff7db37cc37da4180a3a5b6177f5c46b187894e633f088fb5b28",
"sha256:ef824cad1f980d27f26166f86856efe11eff9912c4fed97d3804820d43fa550c",
"sha256:efc89291bd5a08855829a3c522df16d856455297cf35ae827a37edac45f466a7",
"sha256:fa964bae817babece5aa2e8c1af841bebb6d0b9add8e637548809d040443fee0",
"sha256:ff37757e068ae606659c28c3bd0d923f9d29a85de79bf25b2b34b148473b5025"
],
"index": "pypi",
"version": "==4.5.4"
},
"docutils": {
"hashes": [
"sha256:54a349c622ff31c91cbec43b0b512f113b5b24daf00e2ea530bb1bd9aac14849",
"sha256:d2ddba74835cb090a1b627d3de4e7835c628d07ee461f7b4480f51af2fe4d448"
],
"index": "pypi",
"version": "==0.15"
},
"entrypoints": {
"hashes": [
"sha256:589f874b313739ad35be6e0cd7efde2a4e9b6fea91edcc34e58ecbb8dbe56d19",
"sha256:c70dd71abe5a8c85e55e12c19bd91ccfeec11a6e99044204511f9ed547d48451"
],
"version": "==0.3"
},
"flake8": {
"hashes": [
"sha256:19241c1cbc971b9962473e4438a2ca19749a7dd002dd1a946eaba171b4114548",
"sha256:8e9dfa3cecb2400b3738a42c54c3043e821682b9c840b0448c0503f781130696"
],
"index": "pypi",
"version": "==3.7.8"
},
"flake8-bugbear": {
"hashes": [
"sha256:d8c466ea79d5020cb20bf9f11cf349026e09517a42264f313d3f6fddb83e0571",
"sha256:ded4d282778969b5ab5530ceba7aa1a9f1b86fa7618fc96a19a1d512331640f8"
],
"index": "pypi",
"version": "==19.8.0"
},
"flake8-mypy": {
"hashes": [
"sha256:47120db63aff631ee1f84bac6fe8e64731dc66da3efc1c51f85e15ade4a3ba18",
"sha256:cff009f4250e8391bf48990093cff85802778c345c8449d6498b62efefeebcbc"
],
"index": "pypi",
"version": "==17.8.0"
},
"identify": {
"hashes": [
"sha256:4f1fe9a59df4e80fcb0213086fcf502bc1765a01ea4fe8be48da3b65afd2a017",
"sha256:d8919589bd2a5f99c66302fec0ef9027b12ae150b0b0213999ad3f695fc7296e"
],
"version": "==1.4.7"
},
"idna": {
"hashes": [
"sha256:c357b3f628cf53ae2c4c05627ecc484553142ca23264e593d327bcde5e9c3407",
"sha256:ea8b7f6188e6fa117537c3df7da9fc686d485087abf6ac197f9c46432f7e4a3c"
],
"version": "==2.8"
},
"imagesize": {
"hashes": [
"sha256:3f349de3eb99145973fefb7dbe38554414e5c30abd0c8e4b970a7c9d09f3a1d8",
"sha256:f3832918bc3c66617f92e35f5d70729187676313caa60c187eb0f28b8fe5e3b5"
],
"version": "==1.1.0"
},
"importlib-metadata": {
"hashes": [
"sha256:aa18d7378b00b40847790e7c27e11673d7fed219354109d0e7b9e5b25dc3ad26",
"sha256:d5f18a79777f3aa179c145737780282e27b508fc8fd688cb17c7a813e8bd39af"
],
"markers": "python_version < '3.8'",
"version": "==0.23"
},
"jinja2": {
"hashes": [
"sha256:74320bb91f31270f9551d46522e33af46a80c3d619f4a4bf42b3164d30b5911f",
"sha256:9fe95f19286cfefaa917656583d020be14e7859c6b0252588391e47db34527de"
],
"version": "==2.10.3"
},
"markupsafe": {
"hashes": [
"sha256:00bc623926325b26bb9605ae9eae8a215691f33cae5df11ca5424f06f2d1f473",
"sha256:09027a7803a62ca78792ad89403b1b7a73a01c8cb65909cd876f7fcebd79b161",
"sha256:09c4b7f37d6c648cb13f9230d847adf22f8171b1ccc4d5682398e77f40309235",
"sha256:1027c282dad077d0bae18be6794e6b6b8c91d58ed8a8d89a89d59693b9131db5",
"sha256:24982cc2533820871eba85ba648cd53d8623687ff11cbb805be4ff7b4c971aff",
"sha256:29872e92839765e546828bb7754a68c418d927cd064fd4708fab9fe9c8bb116b",
"sha256:43a55c2930bbc139570ac2452adf3d70cdbb3cfe5912c71cdce1c2c6bbd9c5d1",
"sha256:46c99d2de99945ec5cb54f23c8cd5689f6d7177305ebff350a58ce5f8de1669e",
"sha256:500d4957e52ddc3351cabf489e79c91c17f6e0899158447047588650b5e69183",
"sha256:535f6fc4d397c1563d08b88e485c3496cf5784e927af890fb3c3aac7f933ec66",
"sha256:62fe6c95e3ec8a7fad637b7f3d372c15ec1caa01ab47926cfdf7a75b40e0eac1",
"sha256:6dd73240d2af64df90aa7c4e7481e23825ea70af4b4922f8ede5b9e35f78a3b1",
"sha256:717ba8fe3ae9cc0006d7c451f0bb265ee07739daf76355d06366154ee68d221e",
"sha256:79855e1c5b8da654cf486b830bd42c06e8780cea587384cf6545b7d9ac013a0b",
"sha256:7c1699dfe0cf8ff607dbdcc1e9b9af1755371f92a68f706051cc8c37d447c905",
"sha256:88e5fcfb52ee7b911e8bb6d6aa2fd21fbecc674eadd44118a9cc3863f938e735",
"sha256:8defac2f2ccd6805ebf65f5eeb132adcf2ab57aa11fdf4c0dd5169a004710e7d",
"sha256:98c7086708b163d425c67c7a91bad6e466bb99d797aa64f965e9d25c12111a5e",
"sha256:9add70b36c5666a2ed02b43b335fe19002ee5235efd4b8a89bfcf9005bebac0d",
"sha256:9bf40443012702a1d2070043cb6291650a0841ece432556f784f004937f0f32c",
"sha256:ade5e387d2ad0d7ebf59146cc00c8044acbd863725f887353a10df825fc8ae21",
"sha256:b00c1de48212e4cc9603895652c5c410df699856a2853135b3967591e4beebc2",
"sha256:b1282f8c00509d99fef04d8ba936b156d419be841854fe901d8ae224c59f0be5",
"sha256:b2051432115498d3562c084a49bba65d97cf251f5a331c64a12ee7e04dacc51b",
"sha256:ba59edeaa2fc6114428f1637ffff42da1e311e29382d81b339c1817d37ec93c6",
"sha256:c8716a48d94b06bb3b2524c2b77e055fb313aeb4ea620c8dd03a105574ba704f",
"sha256:cd5df75523866410809ca100dc9681e301e3c27567cf498077e8551b6d20e42f",
"sha256:e249096428b3ae81b08327a63a485ad0878de3fb939049038579ac0ef61e17e7"
],
"version": "==1.1.1"
},
"mccabe": {
"hashes": [
"sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42",
"sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"
],
"version": "==0.6.1"
},
"more-itertools": {
"hashes": [
"sha256:409cd48d4db7052af495b09dec721011634af3753ae1ef92d2b32f73a745f832",
"sha256:92b8c4b06dac4f0611c0729b2f2ede52b2e1bac1ab48f089c7ddc12e26bb60c4"
],
"version": "==7.2.0"
},
"mypy": {
"hashes": [
"sha256:1521c186a3d200c399bd5573c828ea2db1362af7209b2adb1bb8532cea2fb36f",
"sha256:31a046ab040a84a0fc38bc93694876398e62bc9f35eca8ccbf6418b7297f4c00",
"sha256:3b1a411909c84b2ae9b8283b58b48541654b918e8513c20a400bb946aa9111ae",
"sha256:48c8bc99380575deb39f5d3400ebb6a8a1cb5cc669bbba4d3bb30f904e0a0e7d",
"sha256:540c9caa57a22d0d5d3c69047cc9dd0094d49782603eb03069821b41f9e970e9",
"sha256:672e418425d957e276c291930a3921b4a6413204f53fe7c37cad7bc57b9a3391",
"sha256:6ed3b9b3fdc7193ea7aca6f3c20549b377a56f28769783a8f27191903a54170f",
"sha256:9371290aa2cad5ad133e4cdc43892778efd13293406f7340b9ffe99d5ec7c1d9",
"sha256:ace6ac1d0f87d4072f05b5468a084a45b4eda970e4d26704f201e06d47ab2990",
"sha256:b428f883d2b3fe1d052c630642cc6afddd07d5cd7873da948644508be3b9d4a7",
"sha256:d5bf0e6ec8ba346a2cf35cb55bf4adfddbc6b6576fcc9e10863daa523e418dbb",
"sha256:d7574e283f83c08501607586b3167728c58e8442947e027d2d4c7dcd6d82f453",
"sha256:dc889c84241a857c263a2b1cd1121507db7d5b5f5e87e77147097230f374d10b",
"sha256:f4748697b349f373002656bf32fede706a0e713d67bfdcf04edf39b1f61d46eb"
],
"index": "pypi",
"version": "==0.740"
},
"mypy-extensions": {
"hashes": [
"sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d",
"sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"
],
"version": "==0.4.3"
},
"nodeenv": {
"hashes": [
"sha256:ad8259494cf1c9034539f6cced78a1da4840a4b157e23640bc4a0c0546b0cb7a"
],
"version": "==1.3.3"
},
"packaging": {
"hashes": [
"sha256:28b924174df7a2fa32c1953825ff29c61e2f5e082343165438812f00d3a7fc47",
"sha256:d9551545c6d761f3def1677baf08ab2a3ca17c56879e70fecba2fc4dde4ed108"
],
"version": "==19.2"
},
"pkginfo": {
"hashes": [
"sha256:7424f2c8511c186cd5424bbf31045b77435b37a8d604990b79d4e70d741148bb",
"sha256:a6d9e40ca61ad3ebd0b72fbadd4fba16e4c0e4df0428c041e01e06eb6ee71f32"
],
"version": "==1.5.0.1"
},
"pre-commit": {
"hashes": [
"sha256:2080b7a375e54e4fdc41bd0606193d0bad06b4d4f23526ffb73f0b512b537353",
"sha256:8dbdad6e79fa438b9ae4dafc91c85baf93a5ff29aeff5d6899396d5aebe2d338"
],
"index": "pypi",
"version": "==1.19.0"
},
"pycodestyle": {
"hashes": [
"sha256:95a2219d12372f05704562a14ec30bc76b05a5b297b21a5dfe3f6fac3491ae56",
"sha256:e40a936c9a450ad81df37f549d676d127b1b66000a6c500caa2b085bc0ca976c"
],
"version": "==2.5.0"
},
"pyflakes": {
"hashes": [
"sha256:17dbeb2e3f4d772725c777fabc446d5634d1038f234e77343108ce445ea69ce0",
"sha256:d976835886f8c5b31d47970ed689944a0262b5f3afa00a5a7b4dc81e5449f8a2"
],
"version": "==2.1.1"
},
"pygments": {
"hashes": [
"sha256:71e430bc85c88a430f000ac1d9b331d2407f681d6f6aec95e8bcfbc3df5b0127",
"sha256:881c4c157e45f30af185c1ffe8d549d48ac9127433f2c380c24b84572ad66297"
],
"version": "==2.4.2"
},
"pyparsing": {
"hashes": [
"sha256:6f98a7b9397e206d78cc01df10131398f1c8b8510a2f4d97d9abd82e1aacdd80",
"sha256:d9338df12903bbf5d65a0e4e87c2161968b10d2e489652bb47001d82a9b028b4"
],
"version": "==2.4.2"
},
"pytz": {
"hashes": [
"sha256:1c557d7d0e871de1f5ccd5833f60fb2550652da6be2693c1e02300743d21500d",
"sha256:b02c06db6cf09c12dd25137e563b31700d3b80fcc4ad23abb7a315f2789819be"
],
"version": "==2019.3"
},
"pyyaml": {
"hashes": [
"sha256:0113bc0ec2ad727182326b61326afa3d1d8280ae1122493553fd6f4397f33df9",
"sha256:01adf0b6c6f61bd11af6e10ca52b7d4057dd0be0343eb9283c878cf3af56aee4",
"sha256:5124373960b0b3f4aa7df1707e63e9f109b5263eca5976c66e08b1c552d4eaf8",
"sha256:5ca4f10adbddae56d824b2c09668e91219bb178a1eee1faa56af6f99f11bf696",
"sha256:7907be34ffa3c5a32b60b95f4d95ea25361c951383a894fec31be7252b2b6f34",
"sha256:7ec9b2a4ed5cad025c2278a1e6a19c011c80a3caaac804fd2d329e9cc2c287c9",
"sha256:87ae4c829bb25b9fe99cf71fbb2140c448f534e24c998cc60f39ae4f94396a73",
"sha256:9de9919becc9cc2ff03637872a440195ac4241c80536632fffeb6a1e25a74299",
"sha256:a5a85b10e450c66b49f98846937e8cfca1db3127a9d5d1e31ca45c3d0bef4c5b",
"sha256:b0997827b4f6a7c286c01c5f60384d218dca4ed7d9efa945c3e1aa623d5709ae",
"sha256:b631ef96d3222e62861443cc89d6563ba3eeb816eeb96b2629345ab795e53681",
"sha256:bf47c0607522fdbca6c9e817a6e81b08491de50f3766a7a0e6a5be7905961b41",
"sha256:f81025eddd0327c7d4cfe9b62cf33190e1e736cc6e97502b3ec425f574b3e7a8"
],
"version": "==5.1.2"
},
"readme-renderer": {
"hashes": [
"sha256:bb16f55b259f27f75f640acf5e00cf897845a8b3e4731b5c1a436e4b8529202f",
"sha256:c8532b79afc0375a85f10433eca157d6b50f7d6990f337fa498c96cd4bfc203d"
],
"index": "pypi",
"version": "==24.0"
},
"recommonmark": {
"hashes": [
"sha256:29cd4faeb6c5268c633634f2d69aef9431e0f4d347f90659fd0aab20e541efeb",
"sha256:2ec4207a574289355d5b6ae4ae4abb29043346ca12cdd5f07d374dc5987d2852"
],
"index": "pypi",
"version": "==0.6.0"
},
"requests": {
"hashes": [
"sha256:11e007a8a2aa0323f5a921e9e6a2d7e4e67d9877e85773fba9ba6419025cbeb4",
"sha256:9cf5292fcd0f598c671cfc1e0d7d1a7f13bb8085e9a590f48c010551dc6c4b31"
],
"version": "==2.22.0"
},
"requests-toolbelt": {
"hashes": [
"sha256:380606e1d10dc85c3bd47bf5a6095f815ec007be7a8b69c878507068df059e6f",
"sha256:968089d4584ad4ad7c171454f0a5c6dac23971e9472521ea3b6d49d610aa6fc0"
],
"version": "==0.9.1"
},
"setuptools-scm": {
"hashes": [
"sha256:1f11cb2eea431346d46589c2dafcafe2e7dc1c7b2c70bc4c3752d2048ad5c148",
"sha256:bd25e1fb5e4d603dcf490f1fde40fb4c595b357795674c3e5cb7f6217ab39ea5"
],
"index": "pypi",
"version": "==3.3.3"
},
"six": {
"hashes": [
"sha256:3350809f0555b11f552448330d0b52d5f24c91a322ea4a15ef22629740f3761c",
"sha256:d16a0141ec1a18405cd4ce8b4613101da75da0e9a7aec5bdd4fa804d0e0eba73"
],
"version": "==1.12.0"
},
"snowballstemmer": {
"hashes": [
"sha256:209f257d7533fdb3cb73bdbd24f436239ca3b2fa67d56f6ff88e86be08cc5ef0",
"sha256:df3bac3df4c2c01363f3dd2cfa78cce2840a79b9f1c2d2de9ce8d31683992f52"
],
"version": "==2.0.0"
},
"sphinx": {
"hashes": [
"sha256:31088dfb95359384b1005619827eaee3056243798c62724fd3fa4b84ee4d71bd",
"sha256:52286a0b9d7caa31efee301ec4300dbdab23c3b05da1c9024b4e84896fb73d79"
],
"index": "pypi",
"version": "==2.2.1"
},
"sphinxcontrib-applehelp": {
"hashes": [
"sha256:edaa0ab2b2bc74403149cb0209d6775c96de797dfd5b5e2a71981309efab3897",
"sha256:fb8dee85af95e5c30c91f10e7eb3c8967308518e0f7488a2828ef7bc191d0d5d"
],
"version": "==1.0.1"
},
"sphinxcontrib-devhelp": {
"hashes": [
"sha256:6c64b077937330a9128a4da74586e8c2130262f014689b4b89e2d08ee7294a34",
"sha256:9512ecb00a2b0821a146736b39f7aeb90759834b07e81e8cc23a9c70bacb9981"
],
"version": "==1.0.1"
},
"sphinxcontrib-htmlhelp": {
"hashes": [
"sha256:4670f99f8951bd78cd4ad2ab962f798f5618b17675c35c5ac3b2132a14ea8422",
"sha256:d4fd39a65a625c9df86d7fa8a2d9f3cd8299a3a4b15db63b50aac9e161d8eff7"
],
"version": "==1.0.2"
},
"sphinxcontrib-jsmath": {
"hashes": [
"sha256:2ec2eaebfb78f3f2078e73666b1415417a116cc848b72e5172e596c871103178",
"sha256:a9925e4a4587247ed2191a22df5f6970656cb8ca2bd6284309578f2153e0c4b8"
],
"version": "==1.0.1"
},
"sphinxcontrib-qthelp": {
"hashes": [
"sha256:513049b93031beb1f57d4daea74068a4feb77aa5630f856fcff2e50de14e9a20",
"sha256:79465ce11ae5694ff165becda529a600c754f4bc459778778c7017374d4d406f"
],
"version": "==1.0.2"
},
"sphinxcontrib-serializinghtml": {
"hashes": [
"sha256:c0efb33f8052c04fd7a26c0a07f1678e8512e0faec19f4aa8f2473a8b81d5227",
"sha256:db6615af393650bf1151a6cd39120c29abaf93cc60db8c48eb2dddbfdc3a9768"
],
"version": "==1.1.3"
},
"toml": {
"hashes": [
"sha256:229f81c57791a41d65e399fc06bf0848bab550a9dfd5ed66df18ce5f05e73d5c",
"sha256:235682dd292d5899d361a811df37e04a8828a5b1da3115886b73cf81ebc9100e"
],
"index": "pypi",
"version": "==0.10.0"
},
"tqdm": {
"hashes": [
"sha256:abc25d0ce2397d070ef07d8c7e706aede7920da163c64997585d42d3537ece3d",
"sha256:dd3fcca8488bb1d416aa7469d2f277902f26260c45aa86b667b074cd44b3b115"
],
"version": "==4.36.1"
},
"twine": {
"hashes": [
"sha256:5319dd3e02ac73fcddcd94f035b9631589ab5d23e1f4699d57365199d85261e1",
"sha256:9fe7091715c7576df166df8ef6654e61bada39571783f2fd415bdcba867c6993"
],
"index": "pypi",
"version": "==2.0.0"
},
"typed-ast": {
"hashes": [
"sha256:1170afa46a3799e18b4c977777ce137bb53c7485379d9706af8a59f2ea1aa161",
"sha256:18511a0b3e7922276346bcb47e2ef9f38fb90fd31cb9223eed42c85d1312344e",
"sha256:262c247a82d005e43b5b7f69aff746370538e176131c32dda9cb0f324d27141e",
"sha256:2b907eb046d049bcd9892e3076c7a6456c93a25bebfe554e931620c90e6a25b0",
"sha256:354c16e5babd09f5cb0ee000d54cfa38401d8b8891eefa878ac772f827181a3c",
"sha256:48e5b1e71f25cfdef98b013263a88d7145879fbb2d5185f2a0c79fa7ebbeae47",
"sha256:4e0b70c6fc4d010f8107726af5fd37921b666f5b31d9331f0bd24ad9a088e631",
"sha256:630968c5cdee51a11c05a30453f8cd65e0cc1d2ad0d9192819df9978984529f4",
"sha256:66480f95b8167c9c5c5c87f32cf437d585937970f3fc24386f313a4c97b44e34",
"sha256:71211d26ffd12d63a83e079ff258ac9d56a1376a25bc80b1cdcdf601b855b90b",
"sha256:7954560051331d003b4e2b3eb822d9dd2e376fa4f6d98fee32f452f52dd6ebb2",
"sha256:838997f4310012cf2e1ad3803bce2f3402e9ffb71ded61b5ee22617b3a7f6b6e",
"sha256:95bd11af7eafc16e829af2d3df510cecfd4387f6453355188342c3e79a2ec87a",
"sha256:bc6c7d3fa1325a0c6613512a093bc2a2a15aeec350451cbdf9e1d4bffe3e3233",
"sha256:cc34a6f5b426748a507dd5d1de4c1978f2eb5626d51326e43280941206c209e1",
"sha256:d755f03c1e4a51e9b24d899561fec4ccaf51f210d52abdf8c07ee2849b212a36",
"sha256:d7c45933b1bdfaf9f36c579671fec15d25b06c8398f113dab64c18ed1adda01d",
"sha256:d896919306dd0aa22d0132f62a1b78d11aaf4c9fc5b3410d3c666b818191630a",
"sha256:fdc1c9bbf79510b76408840e009ed65958feba92a88833cdceecff93ae8fff66",
"sha256:ffde2fbfad571af120fcbfbbc61c72469e72f550d676c3342492a9dfdefb8f12"
],
"index": "pypi",
"version": "==1.4.0"
},
"typing-extensions": {
"hashes": [
"sha256:2ed632b30bb54fc3941c382decfd0ee4148f5c591651c9272473fea2c6397d95",
"sha256:b1edbbf0652660e32ae780ac9433f4231e7339c7f9a8057d0f042fcbcea49b87",
"sha256:d8179012ec2c620d3791ca6fe2bf7979d979acdbef1fca0bc56b37411db682ed"
],
"version": "==3.7.4"
},
"urllib3": {
"hashes": [
"sha256:3de946ffbed6e6746608990594d08faac602528ac7015ac28d33cee6a45b7398",
"sha256:9a107b99a5393caf59c7aa3c1249c16e6879447533d0887f4336dde834c7be86"
],
"version": "==1.25.6"
},
"virtualenv": {
"hashes": [
"sha256:11cb4608930d5fd3afb545ecf8db83fa50e1f96fc4fca80c94b07d2c83146589",
"sha256:d257bb3773e48cac60e475a19b608996c73f4d333b3ba2e4e57d5ac6134e0136"
],
"version": "==16.7.7"
},
"webencodings": {
"hashes": [
"sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78",
"sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"
],
"version": "==0.5.1"
},
"wheel": {
"hashes": [
"sha256:10c9da68765315ed98850f8e048347c3eb06dd81822dc2ab1d4fde9dc9702646",
"sha256:f4da1763d3becf2e2cd92a14a7c920f0f00eca30fdde9ea992c836685b9faf28"
],
"index": "pypi",
"version": "==0.33.6"
},
"zipp": {
"hashes": [
"sha256:3718b1cbcd963c7d4c5511a8240812904164b7f381b647143a89d3b98f9bcd8e",
"sha256:f06903e9f1f43b12d371004b4ac7b06ab39a44adc747266928ae6debfa7b3335"
],
"version": "==0.6.0"
}
}
}

1491
README.md

File diff suppressed because it is too large

View File

@ -1,11 +0,0 @@
# Security Policy
## Supported Versions
Only the latest non-prerelease version is supported.
## Security contact information
To report a security vulnerability, please use the
[Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the
fix and disclosure.

View File

@ -1,79 +0,0 @@
name: "Black"
description: "The uncompromising Python code formatter."
author: "Łukasz Langa and contributors to Black"
inputs:
options:
description:
"Options passed to Black. Use `black --help` to see available options. Default:
'--check --diff'"
required: false
default: "--check --diff"
src:
description: "Source to run Black. Default: '.'"
required: false
default: "."
jupyter:
description:
"Set this option to true to include Jupyter Notebook files. Default: false"
required: false
default: false
black_args:
description: "[DEPRECATED] Black input arguments."
required: false
default: ""
deprecationMessage:
"Input `with.black_args` is deprecated. Use `with.options` and `with.src` instead."
version:
description: 'Python version specifier (PEP 440) - e.g. "21.5b1"'
required: false
default: ""
use_pyproject:
description: Read Black version specifier from pyproject.toml if `true`.
required: false
default: "false"
summary:
description: "Whether to add the output to the workflow summary"
required: false
default: true
branding:
color: "black"
icon: "check-circle"
runs:
using: composite
steps:
- name: black
run: |
# Even when black fails, do not close the shell
set +e
if [ "$RUNNER_OS" == "Windows" ]; then
runner="python"
else
runner="python3"
fi
out=$(${runner} $GITHUB_ACTION_PATH/action/main.py)
exit_code=$?
# Display the raw output in the step
echo "${out}"
if [ "${{ inputs.summary }}" == "true" ]; then
# Display the Markdown output in the job summary
echo "\`\`\`python" >> $GITHUB_STEP_SUMMARY
echo "${out}" >> $GITHUB_STEP_SUMMARY
echo "\`\`\`" >> $GITHUB_STEP_SUMMARY
fi
# Exit with the exit-code returned by Black
exit ${exit_code}
env:
# TODO: Remove once https://github.com/actions/runner/issues/665 is fixed.
INPUT_OPTIONS: ${{ inputs.options }}
INPUT_SRC: ${{ inputs.src }}
INPUT_JUPYTER: ${{ inputs.jupyter }}
INPUT_BLACK_ARGS: ${{ inputs.black_args }}
INPUT_VERSION: ${{ inputs.version }}
INPUT_USE_PYPROJECT: ${{ inputs.use_pyproject }}
pythonioencoding: utf-8
shell: bash

View File

@ -1,182 +0,0 @@
import os
import re
import shlex
import shutil
import sys
from pathlib import Path
from subprocess import PIPE, STDOUT, run
from typing import Union
ACTION_PATH = Path(os.environ["GITHUB_ACTION_PATH"])
ENV_PATH = ACTION_PATH / ".black-env"
ENV_BIN = ENV_PATH / ("Scripts" if sys.platform == "win32" else "bin")
OPTIONS = os.getenv("INPUT_OPTIONS", default="")
SRC = os.getenv("INPUT_SRC", default="")
JUPYTER = os.getenv("INPUT_JUPYTER") == "true"
BLACK_ARGS = os.getenv("INPUT_BLACK_ARGS", default="")
VERSION = os.getenv("INPUT_VERSION", default="")
USE_PYPROJECT = os.getenv("INPUT_USE_PYPROJECT") == "true"
BLACK_VERSION_RE = re.compile(r"^black([^A-Z0-9._-]+.*)$", re.IGNORECASE)
EXTRAS_RE = re.compile(r"\[.*\]")
EXPORT_SUBST_FAIL_RE = re.compile(r"\$Format:.*\$")
def determine_version_specifier() -> str:
"""Determine the version of Black to install.
The version can be specified either via the `with.version` input or via the
pyproject.toml file if `with.use_pyproject` is set to `true`.
"""
if USE_PYPROJECT and VERSION:
print(
"::error::'with.version' and 'with.use_pyproject' inputs are "
"mutually exclusive.",
file=sys.stderr,
flush=True,
)
sys.exit(1)
if USE_PYPROJECT:
return read_version_specifier_from_pyproject()
elif VERSION and VERSION[0] in "0123456789":
return f"=={VERSION}"
else:
return VERSION
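# Illustrative note (not part of the action): with VERSION="24.3.0" the
# function above returns "==24.3.0"; with a non-numeric specifier such as
# VERSION=">=23.1" the value is passed through unchanged; with
# USE_PYPROJECT=true the specifier is read from pyproject.toml instead,
# and combining both inputs is an error.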
def read_version_specifier_from_pyproject() -> str:
if sys.version_info < (3, 11):
print(
"::error::'with.use_pyproject' input requires Python 3.11 or later.",
file=sys.stderr,
flush=True,
)
sys.exit(1)
import tomllib # type: ignore[import-not-found,unreachable]
try:
with Path("pyproject.toml").open("rb") as fp:
pyproject = tomllib.load(fp)
except FileNotFoundError:
print(
"::error::'with.use_pyproject' input requires a pyproject.toml file.",
file=sys.stderr,
flush=True,
)
sys.exit(1)
version = pyproject.get("tool", {}).get("black", {}).get("required-version")
if version is not None:
return f"=={version}"
arrays = [
*pyproject.get("dependency-groups", {}).values(),
pyproject.get("project", {}).get("dependencies"),
*pyproject.get("project", {}).get("optional-dependencies", {}).values(),
]
for array in arrays:
version = find_black_version_in_array(array)
if version is not None:
break
if version is None:
print(
"::error::'black' dependency missing from pyproject.toml.",
file=sys.stderr,
flush=True,
)
sys.exit(1)
return version
def find_black_version_in_array(array: object) -> Union[str, None]:
if not isinstance(array, list):
return None
try:
for item in array:
# Rudimentary PEP 508 parsing.
item = item.split(";")[0]
item = EXTRAS_RE.sub("", item).strip()
if item == "black":
print(
"::error::Version specifier missing for 'black' dependency in "
"pyproject.toml.",
file=sys.stderr,
flush=True,
)
sys.exit(1)
elif m := BLACK_VERSION_RE.match(item):
return m.group(1).strip()
except TypeError:
pass
return None
run([sys.executable, "-m", "venv", str(ENV_PATH)], check=True)
version_specifier = determine_version_specifier()
if JUPYTER:
extra_deps = "[colorama,jupyter]"
else:
extra_deps = "[colorama]"
if version_specifier:
req = f"black{extra_deps}{version_specifier}"
else:
describe_name = ""
with open(ACTION_PATH / ".git_archival.txt", encoding="utf-8") as fp:
for line in fp:
if line.startswith("describe-name: "):
describe_name = line[len("describe-name: ") :].rstrip()
break
if not describe_name:
print("::error::Failed to detect action version.", file=sys.stderr, flush=True)
sys.exit(1)
# expected format is one of:
# - 23.1.0
# - 23.1.0-51-g448bba7
# - $Format:%(describe:tags=true,match=*[0-9]*)$ (if export-subst fails)
if (
describe_name.count("-") < 2
and EXPORT_SUBST_FAIL_RE.match(describe_name) is None
):
# the action's commit matches a tag exactly, install exact version from PyPI
req = f"black{extra_deps}=={describe_name}"
else:
# the action's commit does not match any tag, install from the local git repo
req = f".{extra_deps}"
print(f"Installing {req}...", flush=True)
pip_proc = run(
[str(ENV_BIN / "python"), "-m", "pip", "install", req],
stdout=PIPE,
stderr=STDOUT,
encoding="utf-8",
cwd=ACTION_PATH,
)
if pip_proc.returncode:
print(pip_proc.stdout)
print("::error::Failed to install Black.", file=sys.stderr, flush=True)
sys.exit(pip_proc.returncode)
base_cmd = [str(ENV_BIN / "black")]
if BLACK_ARGS:
# TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.
proc = run(
[*base_cmd, *shlex.split(BLACK_ARGS)],
stdout=PIPE,
stderr=STDOUT,
encoding="utf-8",
)
else:
proc = run(
[*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)],
stdout=PIPE,
stderr=STDOUT,
encoding="utf-8",
)
shutil.rmtree(ENV_PATH, ignore_errors=True)
print(proc.stdout)
sys.exit(proc.returncode)
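The dependency scan implemented above is easy to exercise on its own. Below is a minimal standalone sketch using the same two regular expressions on made-up dependency arrays (the pins shown are hypothetical, and the bare-"black" error branch is omitted):
import re
BLACK_VERSION_RE = re.compile(r"^black([^A-Z0-9._-]+.*)$", re.IGNORECASE)
EXTRAS_RE = re.compile(r"\[.*\]")
def specifier_from(array):
    # Happy path of find_black_version_in_array: rudimentary PEP 508 parsing.
    for item in array:
        item = item.split(";")[0]               # drop environment markers
        item = EXTRAS_RE.sub("", item).strip()  # drop extras such as [jupyter]
        if m := BLACK_VERSION_RE.match(item):
            return m.group(1).strip()
    return None
print(specifier_from(["click>=8.0", "black[jupyter]==24.3.0 ; python_version >= '3.9'"]))  # ==24.3.0
print(specifier_from(["black >= 23.1, < 24"]))  # >= 23.1, < 24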

View File

@ -1,243 +0,0 @@
python3 << EndPython3
import collections
import os
import sys
import vim
def strtobool(text):
if text.lower() in ['y', 'yes', 't', 'true', 'on', '1']:
return True
if text.lower() in ['n', 'no', 'f', 'false', 'off', '0']:
return False
raise ValueError(f"{text} is not convertible to boolean")
class Flag(collections.namedtuple("FlagBase", "name, cast")):
@property
def var_name(self):
return self.name.replace("-", "_")
@property
def vim_rc_name(self):
name = self.var_name
if name == "line_length":
name = name.replace("_", "")
return "g:black_" + name
FLAGS = [
Flag(name="line_length", cast=int),
Flag(name="fast", cast=strtobool),
Flag(name="skip_string_normalization", cast=strtobool),
Flag(name="quiet", cast=strtobool),
Flag(name="skip_magic_trailing_comma", cast=strtobool),
Flag(name="preview", cast=strtobool),
]
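# Note (illustrative): vim_rc_name maps line_length to g:black_linelength
# (the underscore is dropped), while every other flag keeps its underscores,
# e.g. skip_string_normalization reads g:black_skip_string_normalization.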
def _get_python_binary(exec_prefix, pyver):
try:
default = vim.eval("g:pymode_python").strip()
except vim.error:
default = ""
if default and os.path.exists(default):
return default
if sys.platform[:3] == "win":
return exec_prefix / 'python.exe'
bin_path = exec_prefix / "bin"
exec_path = (bin_path / f"python{pyver[0]}.{pyver[1]}").resolve()
if exec_path.exists():
return exec_path
# It is possible that some environments may only have python3
exec_path = (bin_path / f"python3").resolve()
if exec_path.exists():
return exec_path
raise ValueError("python executable not found")
def _get_pip(venv_path):
if sys.platform[:3] == "win":
return venv_path / 'Scripts' / 'pip.exe'
return venv_path / 'bin' / 'pip'
def _get_virtualenv_site_packages(venv_path, pyver):
if sys.platform[:3] == "win":
return venv_path / 'Lib' / 'site-packages'
return venv_path / 'lib' / f'python{pyver[0]}.{pyver[1]}' / 'site-packages'
def _initialize_black_env(upgrade=False):
if vim.eval("g:black_use_virtualenv ? 'true' : 'false'") == "false":
if upgrade:
print("Upgrade disabled due to g:black_use_virtualenv being disabled.")
print("Either use your system package manager (or pip) to upgrade black separately,")
print("or modify your vimrc to have 'let g:black_use_virtualenv = 1'.")
return False
else:
# Nothing needs to be done.
return True
pyver = sys.version_info[:3]
if pyver < (3, 9):
print("Sorry, Black requires Python 3.9+ to run.")
return False
from pathlib import Path
import subprocess
import venv
virtualenv_path = Path(vim.eval("g:black_virtualenv")).expanduser()
virtualenv_site_packages = str(_get_virtualenv_site_packages(virtualenv_path, pyver))
first_install = False
if not virtualenv_path.is_dir():
print('Please wait, one-time setup for Black.')
_executable = sys.executable
_base_executable = getattr(sys, "_base_executable", _executable)
try:
executable = str(_get_python_binary(Path(sys.exec_prefix), pyver))
sys.executable = executable
sys._base_executable = executable
print(f'Creating a virtualenv in {virtualenv_path}...')
print('(this path can be customized in .vimrc by setting g:black_virtualenv)')
venv.create(virtualenv_path, with_pip=True)
except Exception:
print('Encountered exception while creating virtualenv (see traceback below).')
print(f'Removing {virtualenv_path}...')
import shutil
shutil.rmtree(virtualenv_path)
raise
finally:
sys.executable = _executable
sys._base_executable = _base_executable
first_install = True
if first_install:
print('Installing Black with pip...')
if upgrade:
print('Upgrading Black with pip...')
if first_install or upgrade:
subprocess.run([str(_get_pip(virtualenv_path)), 'install', '-U', 'black'], stdout=subprocess.PIPE)
print('DONE! You are all set, thanks for waiting ✨ 🍰 ✨')
if first_install:
print('Pro-tip: to upgrade Black in the future, use the :BlackUpgrade command and restart Vim.\n')
if virtualenv_site_packages not in sys.path:
sys.path.insert(0, virtualenv_site_packages)
return True
if _initialize_black_env():
import black
import time
def get_target_version(tv):
if isinstance(tv, black.TargetVersion):
return tv
ret = None
try:
ret = black.TargetVersion[tv.upper()]
except KeyError:
print(f"WARNING: Target version {tv!r} not recognized by Black, using default target")
return ret
def Black(**kwargs):
"""
kwargs allows you to override the ``target_versions`` argument of
``black.FileMode``.
``target_version`` needs to be cleaned up because ``black.FileMode``
expects ``target_versions`` to be a set of TargetVersion enums.
kwargs["target_version"] may be a string so that it is quicker to type.
``target_version`` (singular) is also used instead of ``target_versions``
to stay consistent with Black's documentation of the pyproject.toml
structure.
"""
start = time.time()
configs = get_configs()
black_kwargs = {}
if "target_version" in kwargs:
target_version = kwargs["target_version"]
if not isinstance(target_version, (list, set)):
target_version = [target_version]
target_version = set(filter(lambda x: x, map(lambda tv: get_target_version(tv), target_version)))
black_kwargs["target_versions"] = target_version
mode = black.FileMode(
line_length=configs["line_length"],
string_normalization=not configs["skip_string_normalization"],
is_pyi=vim.current.buffer.name.endswith('.pyi'),
magic_trailing_comma=not configs["skip_magic_trailing_comma"],
preview=configs["preview"],
**black_kwargs,
)
quiet = configs["quiet"]
buffer_str = '\n'.join(vim.current.buffer) + '\n'
try:
new_buffer_str = black.format_file_contents(
buffer_str,
fast=configs["fast"],
mode=mode,
)
except black.NothingChanged:
if not quiet:
print(f'Black: already well formatted, good job. (took {time.time() - start:.4f}s)')
except Exception as exc:
print(f'Black: {exc}')
else:
current_buffer = vim.current.window.buffer
cursors = []
for i, tabpage in enumerate(vim.tabpages):
if tabpage.valid:
for j, window in enumerate(tabpage.windows):
if window.valid and window.buffer == current_buffer:
cursors.append((i, j, window.cursor))
vim.current.buffer[:] = new_buffer_str.split('\n')[:-1]
for i, j, cursor in cursors:
window = vim.tabpages[i].windows[j]
try:
window.cursor = cursor
except vim.error:
window.cursor = (len(window.buffer), 0)
if not quiet:
print(f'Black: reformatted in {time.time() - start:.4f}s.')
def get_configs():
filename = vim.eval("@%")
path_pyproject_toml = black.find_pyproject_toml((filename,))
if path_pyproject_toml:
toml_config = black.parse_pyproject_toml(path_pyproject_toml)
else:
toml_config = {}
return {
flag.var_name: toml_config.get(flag.name, flag.cast(vim.eval(flag.vim_rc_name)))
for flag in FLAGS
}
def BlackUpgrade():
_initialize_black_env(upgrade=True)
def BlackVersion():
print(f'Black, version {black.__version__} on Python {sys.version}.')
EndPython3
function black#Black(...)
let kwargs = {}
for arg in a:000
let arg_list = split(arg, '=')
let kwargs[arg_list[0]] = arg_list[1]
endfor
python3 << EOF
import vim
kwargs = vim.eval("kwargs")
EOF
:py3 Black(**kwargs)
endfunction
function black#BlackUpgrade()
:py3 BlackUpgrade()
endfunction
function black#BlackVersion()
:py3 BlackVersion()
endfunction

4139
black.py Normal file

File diff suppressed because it is too large

View File

@ -1,27 +1,17 @@
import asyncio
import logging
from concurrent.futures import Executor, ProcessPoolExecutor
from datetime import datetime, timezone
from functools import cache, partial
from datetime import datetime
from functools import partial
import logging
from multiprocessing import freeze_support
from typing import Set, Tuple
try:
from aiohttp import web
from multidict import MultiMapping
from .middlewares import cors
except ImportError as ie:
raise ImportError(
f"aiohttp dependency is not installed: {ie}. "
+ "Please re-install black with the '[d]' extra install "
+ "to obtain aiohttp_cors: `pip install black[d]`"
) from None
from aiohttp import web
import aiohttp_cors
import black
import click
import black
from _black_version import version as __version__
from black.concurrency import maybe_install_uvloop
# This is used internally by tests to shut down the server prematurely
_stop_signal = asyncio.Event()
@ -30,12 +20,7 @@
PROTOCOL_VERSION_HEADER = "X-Protocol-Version"
LINE_LENGTH_HEADER = "X-Line-Length"
PYTHON_VARIANT_HEADER = "X-Python-Variant"
SKIP_SOURCE_FIRST_LINE = "X-Skip-Source-First-Line"
SKIP_STRING_NORMALIZATION_HEADER = "X-Skip-String-Normalization"
SKIP_MAGIC_TRAILING_COMMA = "X-Skip-Magic-Trailing-Comma"
PREVIEW = "X-Preview"
UNSTABLE = "X-Unstable"
ENABLE_UNSTABLE_FEATURE = "X-Enable-Unstable-Feature"
FAST_OR_SAFE_HEADER = "X-Fast-Or-Safe"
DIFF_HEADER = "X-Diff"
@ -43,12 +28,7 @@
PROTOCOL_VERSION_HEADER,
LINE_LENGTH_HEADER,
PYTHON_VARIANT_HEADER,
SKIP_SOURCE_FIRST_LINE,
SKIP_STRING_NORMALIZATION_HEADER,
SKIP_MAGIC_TRAILING_COMMA,
PREVIEW,
UNSTABLE,
ENABLE_UNSTABLE_FEATURE,
FAST_OR_SAFE_HEADER,
DIFF_HEADER,
]
@ -57,25 +37,15 @@
BLACK_VERSION_HEADER = "X-Black-Version"
class HeaderError(Exception):
pass
class InvalidVariantHeader(Exception):
pass
@click.command(context_settings={"help_option_names": ["-h", "--help"]})
@click.option(
"--bind-host",
type=str,
help="Address to bind the server to.",
default="localhost",
show_default=True,
)
@click.option(
"--bind-port", type=int, help="Port to listen on", default=45484, show_default=True
"--bind-host", type=str, help="Address to bind the server to.", default="localhost"
)
@click.option("--bind-port", type=int, help="Port to listen on", default=45484)
@click.version_option(version=black.__version__)
def main(bind_host: str, bind_port: int) -> None:
logging.basicConfig(level=logging.INFO)
@ -85,16 +55,21 @@ def main(bind_host: str, bind_port: int) -> None:
web.run_app(app, host=bind_host, port=bind_port, handle_signals=True, print=None)
@cache
def executor() -> Executor:
return ProcessPoolExecutor()
def make_app() -> web.Application:
app = web.Application(
middlewares=[cors(allow_headers=(*BLACK_HEADERS, "Content-Type"))]
app = web.Application()
executor = ProcessPoolExecutor()
cors = aiohttp_cors.setup(app)
resource = cors.add(app.router.add_resource("/"))
cors.add(
resource.add_route("POST", partial(handle, executor=executor)),
{
"*": aiohttp_cors.ResourceOptions(
allow_headers=(*BLACK_HEADERS, "Content-Type"), expose_headers="*"
)
},
)
app.add_routes([web.post("/", partial(handle, executor=executor()))])
return app
@ -105,48 +80,54 @@ async def handle(request: web.Request, executor: Executor) -> web.Response:
return web.Response(
status=501, text="This server only supports protocol version 1"
)
try:
line_length = int(
request.headers.get(LINE_LENGTH_HEADER, black.DEFAULT_LINE_LENGTH)
)
except ValueError:
return web.Response(status=400, text="Invalid line length header value")
if PYTHON_VARIANT_HEADER in request.headers:
value = request.headers[PYTHON_VARIANT_HEADER]
try:
pyi, versions = parse_python_variant_header(value)
except InvalidVariantHeader as e:
return web.Response(
status=400,
text=f"Invalid value for {PYTHON_VARIANT_HEADER}: {e.args[0]}",
)
else:
pyi = False
versions = set()
skip_string_normalization = bool(
request.headers.get(SKIP_STRING_NORMALIZATION_HEADER, False)
)
fast = False
if request.headers.get(FAST_OR_SAFE_HEADER, "safe") == "fast":
fast = True
try:
mode = parse_mode(request.headers)
except HeaderError as e:
return web.Response(status=400, text=e.args[0])
mode = black.FileMode(
target_versions=versions,
is_pyi=pyi,
line_length=line_length,
string_normalization=not skip_string_normalization,
)
req_bytes = await request.content.read()
charset = request.charset if request.charset is not None else "utf8"
req_str = req_bytes.decode(charset)
then = datetime.now(timezone.utc)
header = ""
if mode.skip_source_first_line:
first_newline_position: int = req_str.find("\n") + 1
header = req_str[:first_newline_position]
req_str = req_str[first_newline_position:]
then = datetime.utcnow()
loop = asyncio.get_event_loop()
formatted_str = await loop.run_in_executor(
executor, partial(black.format_file_contents, req_str, fast=fast, mode=mode)
)
# Preserve CRLF line endings
nl = req_str.find("\n")
if nl > 0 and req_str[nl - 1] == "\r":
formatted_str = formatted_str.replace("\n", "\r\n")
# If, after swapping line endings, nothing changed, then say so
if formatted_str == req_str:
raise black.NothingChanged
# Put the source first line back
req_str = header + req_str
formatted_str = header + formatted_str
# Only output the diff in the HTTP response
only_diff = bool(request.headers.get(DIFF_HEADER, False))
if only_diff:
now = datetime.now(timezone.utc)
src_name = f"In\t{then}"
dst_name = f"Out\t{now}"
now = datetime.utcnow()
src_name = f"In\t{then} +0000"
dst_name = f"Out\t{now} +0000"
loop = asyncio.get_event_loop()
formatted_str = await loop.run_in_executor(
executor,
@ -168,58 +149,7 @@ async def handle(request: web.Request, executor: Executor) -> web.Response:
return web.Response(status=500, headers=headers, text=str(e))
def parse_mode(headers: MultiMapping[str]) -> black.Mode:
try:
line_length = int(headers.get(LINE_LENGTH_HEADER, black.DEFAULT_LINE_LENGTH))
except ValueError:
raise HeaderError("Invalid line length header value") from None
if PYTHON_VARIANT_HEADER in headers:
value = headers[PYTHON_VARIANT_HEADER]
try:
pyi, versions = parse_python_variant_header(value)
except InvalidVariantHeader as e:
raise HeaderError(
f"Invalid value for {PYTHON_VARIANT_HEADER}: {e.args[0]}",
) from None
else:
pyi = False
versions = set()
skip_string_normalization = bool(
headers.get(SKIP_STRING_NORMALIZATION_HEADER, False)
)
skip_magic_trailing_comma = bool(headers.get(SKIP_MAGIC_TRAILING_COMMA, False))
skip_source_first_line = bool(headers.get(SKIP_SOURCE_FIRST_LINE, False))
preview = bool(headers.get(PREVIEW, False))
unstable = bool(headers.get(UNSTABLE, False))
enable_features: set[black.Preview] = set()
enable_unstable_features = headers.get(ENABLE_UNSTABLE_FEATURE, "").split(",")
for piece in enable_unstable_features:
piece = piece.strip()
if piece:
try:
enable_features.add(black.Preview[piece])
except KeyError:
raise HeaderError(
f"Invalid value for {ENABLE_UNSTABLE_FEATURE}: {piece}",
) from None
return black.FileMode(
target_versions=versions,
is_pyi=pyi,
line_length=line_length,
skip_source_first_line=skip_source_first_line,
string_normalization=not skip_string_normalization,
magic_trailing_comma=not skip_magic_trailing_comma,
preview=preview,
unstable=unstable,
enabled_features=enable_features,
)
def parse_python_variant_header(value: str) -> tuple[bool, set[black.TargetVersion]]:
def parse_python_variant_header(value: str) -> Tuple[bool, Set[black.TargetVersion]]:
if value == "pyi":
return True, set()
else:
@ -238,8 +168,10 @@ def parse_python_variant_header(value: str) -> tuple[bool, set[black.TargetVersi
raise InvalidVariantHeader("major version must be 2 or 3")
if len(rest) > 0:
minor = int(rest[0])
if major == 2:
raise InvalidVariantHeader("Python 2 is not supported")
if major == 2 and minor != 7:
raise InvalidVariantHeader(
"minor version must be 7 for Python 2"
)
else:
# Default to lowest supported minor version.
minor = 7 if major == 2 else 3
@ -248,13 +180,13 @@ def parse_python_variant_header(value: str) -> tuple[bool, set[black.TargetVersi
raise InvalidVariantHeader(f"3.{minor} is not supported")
versions.add(black.TargetVersion[version_str])
except (KeyError, ValueError):
raise InvalidVariantHeader("expected e.g. '3.7', 'py3.5'") from None
raise InvalidVariantHeader("expected e.g. '3.7', 'py3.5'")
return False, versions
def patched_main() -> None:
maybe_install_uvloop()
freeze_support()
black.patch_click()
main()
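For reference, here is a minimal client sketch for the protocol handled above. It assumes a server already running locally (started with `blackd`) on the default port 45484 and uses only headers present in the listing; an already well formatted input yields an empty 204 response instead.
import urllib.request
req = urllib.request.Request(
    "http://localhost:45484/",
    data=b"def f( ):return   1\n",
    headers={
        "X-Protocol-Version": "1",  # any other value yields a 501
        "X-Line-Length": "88",
        "X-Fast-Or-Safe": "safe",
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # the reformatted source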

View File

@ -12,17 +12,11 @@ file_input: (NEWLINE | stmt)* ENDMARKER
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
eval_input: testlist NEWLINE* ENDMARKER
typevar: NAME [':' test] ['=' test]
paramspec: '**' NAME ['=' test]
typevartuple: '*' NAME ['=' (test|star_expr)]
typeparam: typevar | paramspec | typevartuple
typeparams: '[' typeparam (',' typeparam)* [','] ']'
decorator: '@' namedexpr_test NEWLINE
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef | async_funcdef)
async_funcdef: ASYNC funcdef
funcdef: 'def' NAME [typeparams] parameters ['->' test] ':' suite
funcdef: 'def' NAME parameters ['->' test] ':' suite
parameters: '(' [typedargslist] ')'
# The following definition for typedarglist is equivalent to this set of rules:
@ -30,7 +24,7 @@ parameters: '(' [typedargslist] ')'
# arguments = argument (',' argument)*
# argument = tfpdef ['=' test]
# kwargs = '**' tname [',']
# args = '*' [tname_star]
# args = '*' [tname]
# kwonly_kwargs = (',' argument)* [',' [kwargs]]
# args_kwonly_kwargs = args kwonly_kwargs | kwargs
# poskeyword_args_kwonly_kwargs = arguments [',' [args_kwonly_kwargs]]
@ -40,15 +34,14 @@ parameters: '(' [typedargslist] ')'
# It needs to be fully expanded to allow our LL(1) parser to work on it.
typedargslist: tfpdef ['=' test] (',' tfpdef ['=' test])* ',' '/' [
',' [((tfpdef ['=' test] ',')* ('*' [tname_star] (',' tname ['=' test])*
',' [((tfpdef ['=' test] ',')* ('*' [tname] (',' tname ['=' test])*
[',' ['**' tname [',']]] | '**' tname [','])
| tfpdef ['=' test] (',' tfpdef ['=' test])* [','])]
] | ((tfpdef ['=' test] ',')* ('*' [tname_star] (',' tname ['=' test])*
] | ((tfpdef ['=' test] ',')* ('*' [tname] (',' tname ['=' test])*
[',' ['**' tname [',']]] | '**' tname [','])
| tfpdef ['=' test] (',' tfpdef ['=' test])* [','])
tname: NAME [':' test]
tname_star: NAME [':' (test|star_expr)]
tfpdef: tname | '(' tfplist ')'
tfplist: tfpdef (',' tfpdef)* [',']
@ -80,21 +73,23 @@ vfplist: vfpdef (',' vfpdef)* [',']
stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (type_stmt | expr_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | assert_stmt)
small_stmt: (expr_stmt | print_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | exec_stmt | assert_stmt)
expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |
('=' (yield_expr|testlist_star_expr))*)
annassign: ':' test ['=' (yield_expr|testlist_star_expr)]
annassign: ':' test ['=' test]
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |
'<<=' | '>>=' | '**=' | '//=')
# For normal and annotated assignments, additional restrictions enforced by the interpreter
print_stmt: 'print' ( [ test (',' test)* [','] ] |
'>>' test [ (',' test)+ [','] ] )
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist_star_expr]
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test | ',' test [',' test]]]
import_stmt: import_name | import_from
@ -107,23 +102,24 @@ import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: ('global' | 'nonlocal') NAME (',' NAME)*
exec_stmt: 'exec' expr ['in' test [',' test]]
assert_stmt: 'assert' test [',' test]
type_stmt: "type" NAME [typeparams] '=' test
compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt | match_stmt
compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt
async_stmt: ASYNC (funcdef | with_stmt | for_stmt)
if_stmt: 'if' namedexpr_test ':' suite ('elif' namedexpr_test ':' suite)* ['else' ':' suite]
while_stmt: 'while' namedexpr_test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist_star_expr ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
((except_clause ':' suite)+
['else' ':' suite]
['finally' ':' suite] |
'finally' ':' suite))
with_stmt: 'with' asexpr_test (',' asexpr_test)* ':' suite
with_stmt: 'with' with_item (',' with_item)* ':' suite
with_item: test ['as' expr]
with_var: 'as' expr
# NB compile.c makes sure that the default except clause is last
except_clause: 'except' ['*'] [test [(',' | 'as') test]]
except_clause: 'except' [test [(',' | 'as') test]]
suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT
# Backward compatibility cruft to support:
@ -135,15 +131,7 @@ testlist_safe: old_test [(',' old_test)+ [',']]
old_test: or_test | old_lambdef
old_lambdef: 'lambda' [varargslist] ':' old_test
namedexpr_test: asexpr_test [':=' asexpr_test]
# This is not actually a real rule; since the parser is very limited in
# its strategy for match/case rules, we insert a virtual case
# (<expr> as <expr>) as a valid expression. Unless a better approach is
# found, the only side effect of this seems to be allowing more input to
# be parsed (input which would then fail in ast.parse()).
asexpr_test: test ['as' test]
namedexpr_test: test [':=' test]
test: or_test ['if' or_test 'else' test] | lambdef
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
@ -163,22 +151,22 @@ atom: ('(' [yield_expr|testlist_gexp] ')' |
'[' [listmaker] ']' |
'{' [dictsetmaker] '}' |
'`' testlist1 '`' |
NAME | NUMBER | (STRING | fstring)+ | '.' '.' '.')
NAME | NUMBER | STRING+ | '.' '.' '.')
listmaker: (namedexpr_test|star_expr) ( old_comp_for | (',' (namedexpr_test|star_expr))* [','] )
testlist_gexp: (namedexpr_test|star_expr) ( old_comp_for | (',' (namedexpr_test|star_expr))* [','] )
lambdef: 'lambda' [varargslist] ':' test
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: (subscript|star_expr) (',' (subscript|star_expr))* [',']
subscript: test [':=' test] | [test] ':' [test] [sliceop]
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
dictsetmaker: ( ((test ':' asexpr_test | '**' expr)
(comp_for | (',' (test ':' asexpr_test | '**' expr))* [','])) |
((test [':=' test] | star_expr)
(comp_for | (',' (test [':=' test] | star_expr))* [','])) )
dictsetmaker: ( ((test ':' test | '**' expr)
(comp_for | (',' (test ':' test | '**' expr))* [','])) |
((test | star_expr)
(comp_for | (',' (test | star_expr))* [','])) )
classdef: 'class' NAME [typeparams] ['(' [arglist] ')'] ':' suite
classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
arglist: argument (',' argument)* [',']
@ -190,9 +178,8 @@ arglist: argument (',' argument)* [',']
# multiple (test comp_for) arguments are blocked; keyword unpackings
# that precede iterable unpackings are blocked; etc.
argument: ( test [comp_for] |
test ':=' test [comp_for] |
test 'as' test |
test '=' asexpr_test |
test ':=' test |
test '=' test |
'**' test |
'*' test )
@ -225,37 +212,4 @@ testlist1: test (',' test)*
encoding_decl: NAME
yield_expr: 'yield' [yield_arg]
yield_arg: 'from' test | testlist_star_expr
# 3.10 match statement definition
# PS: normally the grammar is much more restricted, but rather than
# encoding the exact same DSL in an LL(1) parser, we simply accept an
# expression here and let the ast.parse() step of safe mode reject
# invalid grammar.
# The reason the real grammar is more restricted is that patterns are a
# sort of DSL (more advanced than our LHS on assignments, but still a
# very limited Python subset). They are not really expressions, but that
# hardly matters: if we can parse them, that is enough to reformat them.
match_stmt: "match" subject_expr ':' NEWLINE INDENT case_block+ DEDENT
# This is more permissive than the actual version. For example it
# accepts `match *something:`, even though single-item starred expressions
# are forbidden.
subject_expr: (namedexpr_test|star_expr) (',' (namedexpr_test|star_expr))* [',']
# cases
case_block: "case" patterns [guard] ':' suite
guard: 'if' namedexpr_test
patterns: pattern (',' pattern)* [',']
pattern: (expr|star_expr) ['as' expr]
fstring: FSTRING_START fstring_middle* FSTRING_END
fstring_middle: fstring_replacement_field | FSTRING_MIDDLE
fstring_replacement_field: '{' (yield_expr | testlist_star_expr) ['='] [ "!" NAME ] [ ':' fstring_format_spec* ] '}'
fstring_format_spec: FSTRING_MIDDLE | fstring_replacement_field
yield_arg: 'from' test | testlist
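As the comments above note, the match grammar is deliberately more permissive than CPython's, and safe mode's ast.parse() is what ultimately rejects invalid subjects. A small illustration (the snippet itself is made up):
import ast
src = "match *point:\n    case _:\n        pass\n"
try:
    ast.parse(src)  # CPython forbids a lone starred subject expression
except SyntaxError as err:
    print("rejected by ast.parse():", err.msg)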

16
blib2to3/README Normal file
View File

@ -0,0 +1,16 @@
A subset of lib2to3 taken from Python 3.7.0b2.
Commit hash: 9c17e3a1987004b8bcfbe423953aad84493a7984
Reasons for forking:
- consistent handling of f-strings for users of Python < 3.6.2
- backport of BPO-33064 that fixes parsing files with trailing commas after
*args and **kwargs
- backport of GH-6143 that restores the ability to reformat legacy usage of
`async`
- support all types of string literals
- better ability to debug (better reprs)
- INDENT and DEDENT don't hold whitespace and comment prefixes
- ability to Cythonize
Change Log:
- Changes default logger used by Driver

1
blib2to3/__init__.pyi Normal file
View File

@ -0,0 +1 @@
# Stubs for lib2to3 (Python 3.6)

View File

@ -0,0 +1,10 @@
# Stubs for lib2to3.pgen2 (Python 3.6)
import os
import sys
from typing import Text, Union
if sys.version_info >= (3, 6):
_Path = Union[Text, os.PathLike]
else:
_Path = Text

View File

@ -1,8 +1,6 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# mypy: ignore-errors
"""Convert graminit.[ch] spit out by pgen to Python code.
Pgen is the Python parser generator. It is useful to quickly create a
@ -29,7 +27,7 @@
"""
# Python imports
import re
import regex as re
# Local imports
from pgen2 import grammar, token
@ -63,7 +61,7 @@ def parse_graminit_h(self, filename):
try:
f = open(filename)
except OSError as err:
print(f"Can't open {filename}: {err}")
print("Can't open %s: %s" % (filename, err))
return False
self.symbol2number = {}
self.number2symbol = {}
@ -72,7 +70,7 @@ def parse_graminit_h(self, filename):
lineno += 1
mo = re.match(r"^#define\s+(\w+)\s+(\d+)$", line)
if not mo and line.strip():
print(f"{filename}({lineno}): can't parse {line.strip()}")
print("%s(%s): can't parse %s" % (filename, lineno, line.strip()))
else:
symbol, number = mo.groups()
number = int(number)
@ -113,7 +111,7 @@ def parse_graminit_c(self, filename):
try:
f = open(filename)
except OSError as err:
print(f"Can't open {filename}: {err}")
print("Can't open %s: %s" % (filename, err))
return False
# The code below essentially uses f's iterator-ness!
lineno = 0

View File

@ -16,117 +16,36 @@
__all__ = ["Driver", "load_grammar"]
# Python imports
import codecs
import io
import logging
import os
import logging
import pkgutil
import sys
from collections.abc import Iterable, Iterator
from contextlib import contextmanager
from dataclasses import dataclass, field
from logging import Logger
from typing import IO, Any, Optional, Union, cast
from blib2to3.pgen2.grammar import Grammar
from blib2to3.pgen2.tokenize import TokenInfo
from blib2to3.pytree import NL
# Pgen imports
from . import grammar, parse, pgen, token, tokenize
Path = Union[str, "os.PathLike[str]"]
from . import grammar, parse, token, tokenize, pgen
@dataclass
class ReleaseRange:
start: int
end: Optional[int] = None
tokens: list[Any] = field(default_factory=list)
def lock(self) -> None:
total_eaten = len(self.tokens)
self.end = self.start + total_eaten
class TokenProxy:
def __init__(self, generator: Any) -> None:
self._tokens = generator
self._counter = 0
self._release_ranges: list[ReleaseRange] = []
@contextmanager
def release(self) -> Iterator["TokenProxy"]:
release_range = ReleaseRange(self._counter)
self._release_ranges.append(release_range)
try:
yield self
finally:
# Lock the last release range to the final position that
# has been eaten.
release_range.lock()
def eat(self, point: int) -> Any:
eaten_tokens = self._release_ranges[-1].tokens
if point < len(eaten_tokens):
return eaten_tokens[point]
else:
while point >= len(eaten_tokens):
token = next(self._tokens)
eaten_tokens.append(token)
return token
def __iter__(self) -> "TokenProxy":
return self
def __next__(self) -> Any:
# If the current position is already compromised (looked up)
# return the eaten token, if not just go further on the given
# token producer.
for release_range in self._release_ranges:
assert release_range.end is not None
start, end = release_range.start, release_range.end
if start <= self._counter < end:
token = release_range.tokens[self._counter - start]
break
else:
token = next(self._tokens)
self._counter += 1
return token
def can_advance(self, to: int) -> bool:
# Try to eat, fail if it can't. The eat operation is cached
# so there won't be any additional cost of eating here
try:
self.eat(to)
except StopIteration:
return False
else:
return True
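# Illustration (not part of the module): given proxy = TokenProxy(iter(tokens)),
# code inside a `with proxy.release():` block can peek ahead via eat(i) or
# can_advance(i) without consuming anything; when the block exits, the release
# range is locked and __next__ replays the peeked tokens in order before
# pulling new ones, so lookahead (e.g. for soft keywords) never loses or
# duplicates tokens.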
class Driver:
def __init__(self, grammar: Grammar, logger: Optional[Logger] = None) -> None:
class Driver(object):
def __init__(self, grammar, convert=None, logger=None):
self.grammar = grammar
if logger is None:
logger = logging.getLogger(__name__)
self.logger = logger
self.convert = convert
def parse_tokens(self, tokens: Iterable[TokenInfo], debug: bool = False) -> NL:
def parse_tokens(self, tokens, debug=False):
"""Parse a series of tokens and return the syntax tree."""
# XXX Move the prefix computation into a wrapper around tokenize.
proxy = TokenProxy(tokens)
p = parse.Parser(self.grammar)
p.setup(proxy=proxy)
p = parse.Parser(self.grammar, self.convert)
p.setup()
lineno = 1
column = 0
indent_columns: list[int] = []
indent_columns = []
type = value = start = end = line_text = None
prefix = ""
for quintuple in proxy:
for quintuple in tokens:
type, value, start, end, line_text = quintuple
if start != (lineno, column):
assert (lineno, column) <= start, ((lineno, column), start)
@ -148,7 +67,6 @@ def parse_tokens(self, tokens: Iterable[TokenInfo], debug: bool = False) -> NL:
if type == token.OP:
type = grammar.opmap[value]
if debug:
assert type is not None
self.logger.debug(
"%s %r (prefix=%r)", token.tok_name[type], value, prefix
)
@ -160,7 +78,7 @@ def parse_tokens(self, tokens: Iterable[TokenInfo], debug: bool = False) -> NL:
elif type == token.DEDENT:
_indent_col = indent_columns.pop()
prefix, _prefix = self._partially_consume_prefix(prefix, _indent_col)
if p.addtoken(cast(int, type), value, (prefix, start)):
if p.addtoken(type, value, (prefix, start)):
if debug:
self.logger.debug("Stop.")
break
@ -168,33 +86,37 @@ def parse_tokens(self, tokens: Iterable[TokenInfo], debug: bool = False) -> NL:
if type in {token.INDENT, token.DEDENT}:
prefix = _prefix
lineno, column = end
# FSTRING_MIDDLE is the only token that can end with a newline, and
# `end` will point to the next line. For that case, don't increment lineno.
if value.endswith("\n") and type != token.FSTRING_MIDDLE:
if value.endswith("\n"):
lineno += 1
column = 0
else:
# We never broke out -- EOF is too soon (how can this happen???)
assert start is not None
raise parse.ParseError("incomplete input", type, value, (prefix, start))
assert p.rootnode is not None
return p.rootnode
def parse_file(
self, filename: Path, encoding: Optional[str] = None, debug: bool = False
) -> NL:
"""Parse a file and return the syntax tree."""
with open(filename, encoding=encoding) as stream:
text = stream.read()
return self.parse_string(text, debug)
def parse_string(self, text: str, debug: bool = False) -> NL:
"""Parse a string and return the syntax tree."""
tokens = tokenize.tokenize(text, grammar=self.grammar)
def parse_stream_raw(self, stream, debug=False):
"""Parse a stream and return the syntax tree."""
tokens = tokenize.generate_tokens(stream.readline, grammar=self.grammar)
return self.parse_tokens(tokens, debug)
def _partially_consume_prefix(self, prefix: str, column: int) -> tuple[str, str]:
lines: list[str] = []
def parse_stream(self, stream, debug=False):
"""Parse a stream and return the syntax tree."""
return self.parse_stream_raw(stream, debug)
def parse_file(self, filename, encoding=None, debug=False):
"""Parse a file and return the syntax tree."""
with io.open(filename, "r", encoding=encoding) as stream:
return self.parse_stream(stream, debug)
def parse_string(self, text, debug=False):
"""Parse a string and return the syntax tree."""
tokens = tokenize.generate_tokens(
io.StringIO(text).readline, grammar=self.grammar
)
return self.parse_tokens(tokens, debug)
def _partially_consume_prefix(self, prefix, column):
lines = []
current_line = ""
current_column = 0
wait_for_nl = False
@ -215,15 +137,13 @@ def _partially_consume_prefix(self, prefix: str, column: int) -> tuple[str, str]
elif char == "\n":
# unexpected empty line
current_column = 0
elif char == "\f":
current_column = 0
else:
# indent is finished
wait_for_nl = True
return "".join(lines), current_line
def _generate_pickle_name(gt: Path, cache_dir: Optional[Path] = None) -> str:
def _generate_pickle_name(gt, cache_dir=None):
head, tail = os.path.splitext(gt)
if tail == ".txt":
tail = ""
@ -234,32 +154,27 @@ def _generate_pickle_name(gt: Path, cache_dir: Optional[Path] = None) -> str:
return name
def load_grammar(
gt: str = "Grammar.txt",
gp: Optional[str] = None,
save: bool = True,
force: bool = False,
logger: Optional[Logger] = None,
) -> Grammar:
def load_grammar(gt="Grammar.txt", gp=None, save=True, force=False, logger=None):
"""Load the grammar (maybe from a pickle)."""
if logger is None:
logger = logging.getLogger(__name__)
gp = _generate_pickle_name(gt) if gp is None else gp
if force or not _newer(gp, gt):
g: grammar.Grammar = pgen.generate_grammar(gt)
logger.info("Generating grammar tables from %s", gt)
g = pgen.generate_grammar(gt)
if save:
logger.info("Writing grammar tables to %s", gp)
try:
g.dump(gp)
except OSError:
# Ignore error, caching is not vital.
pass
except OSError as e:
logger.info("Writing failed: %s", e)
else:
g = grammar.Grammar()
g.load(gp)
return g
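
Callers normally reach this through load_packaged_grammar, but calling it directly works too. A sketch, assuming a pgen-format Grammar.txt in the working directory:

    import logging

    logging.basicConfig(level=logging.INFO)
    g = load_grammar("Grammar.txt", save=True, force=False)
    # The first run logs progress and writes the pickle cache; later runs
    # load the pickle as long as it is newer than the grammar text.
    print(len(g.dfas), "rules")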
def _newer(a: str, b: str) -> bool:
def _newer(a, b):
"""Inquire whether file a was written since file b."""
if not os.path.exists(a):
return False
@ -268,9 +183,7 @@ def _newer(a: str, b: str) -> bool:
return os.path.getmtime(a) >= os.path.getmtime(b)
def load_packaged_grammar(
package: str, grammar_source: str, cache_dir: Optional[Path] = None
) -> grammar.Grammar:
def load_packaged_grammar(package, grammar_source, cache_dir=None):
"""Normally, loads a pickled grammar by doing
pkgutil.get_data(package, pickled_grammar)
where *pickled_grammar* is computed from *grammar_source* by adding the
@ -286,19 +199,18 @@ def load_packaged_grammar(
return load_grammar(grammar_source, gp=gp)
pickled_name = _generate_pickle_name(os.path.basename(grammar_source), cache_dir)
data = pkgutil.get_data(package, pickled_name)
assert data is not None
g = grammar.Grammar()
g.loads(data)
return g
def main(*args: str) -> bool:
def main(*args):
"""Main program, when run as a script: produce grammar pickle files.
Calls load_grammar for each argument, a path to a grammar text file.
"""
if not args:
args = tuple(sys.argv[1:])
args = sys.argv[1:]
logging.basicConfig(level=logging.INFO, stream=sys.stdout, format="%(message)s")
for gt in args:
load_grammar(gt, save=True, force=True)

24
blib2to3/pgen2/driver.pyi Normal file

@ -0,0 +1,24 @@
# Stubs for lib2to3.pgen2.driver (Python 3.6)
import os
import sys
from typing import Any, Callable, IO, Iterable, List, Optional, Text, Tuple, Union
from logging import Logger
from blib2to3.pytree import _Convert, _NL
from blib2to3.pgen2 import _Path
from blib2to3.pgen2.grammar import Grammar
class Driver:
grammar: Grammar
logger: Logger
convert: _Convert
def __init__(self, grammar: Grammar, convert: Optional[_Convert] = ..., logger: Optional[Logger] = ...) -> None: ...
def parse_tokens(self, tokens: Iterable[Any], debug: bool = ...) -> _NL: ...
def parse_stream_raw(self, stream: IO[Text], debug: bool = ...) -> _NL: ...
def parse_stream(self, stream: IO[Text], debug: bool = ...) -> _NL: ...
def parse_file(self, filename: _Path, encoding: Optional[Text] = ..., debug: bool = ...) -> _NL: ...
def parse_string(self, text: Text, debug: bool = ...) -> _NL: ...
def load_grammar(gt: Text = ..., gp: Optional[Text] = ..., save: bool = ..., force: bool = ..., logger: Optional[Logger] = ...) -> Grammar: ...


@ -16,19 +16,12 @@
import os
import pickle
import tempfile
from typing import Any, Optional, TypeVar, Union
# Local imports
from . import token
_P = TypeVar("_P", bound="Grammar")
Label = tuple[int, Optional[str]]
DFA = list[list[tuple[int, int]]]
DFAS = tuple[DFA, dict[int, int]]
Path = Union[str, "os.PathLike[str]"]
class Grammar:
class Grammar(object):
"""Pgen parsing tables conversion class.
Once initialized, this class supplies the grammar tables for the
@ -82,53 +75,38 @@ class Grammar:
"""
def __init__(self) -> None:
self.symbol2number: dict[str, int] = {}
self.number2symbol: dict[int, str] = {}
self.states: list[DFA] = []
self.dfas: dict[int, DFAS] = {}
self.labels: list[Label] = [(0, "EMPTY")]
self.keywords: dict[str, int] = {}
self.soft_keywords: dict[str, int] = {}
self.tokens: dict[int, int] = {}
self.symbol2label: dict[str, int] = {}
self.version: tuple[int, int] = (0, 0)
def __init__(self):
self.symbol2number = {}
self.number2symbol = {}
self.states = []
self.dfas = {}
self.labels = [(0, "EMPTY")]
self.keywords = {}
self.tokens = {}
self.symbol2label = {}
self.start = 256
# Python 3.7+ parses async as a keyword, not an identifier
self.async_keywords = False
def dump(self, filename: Path) -> None:
def dump(self, filename):
"""Dump the grammar tables to a pickle file."""
# mypyc generates objects that don't have a __dict__, but they
# do have __getstate__ methods that will return an equivalent
# dictionary
if hasattr(self, "__dict__"):
d = self.__dict__
else:
d = self.__getstate__() # type: ignore
with tempfile.NamedTemporaryFile(
dir=os.path.dirname(filename), delete=False
) as f:
pickle.dump(d, f, pickle.HIGHEST_PROTOCOL)
pickle.dump(self.__dict__, f, pickle.HIGHEST_PROTOCOL)
os.replace(f.name, filename)
def _update(self, attrs: dict[str, Any]) -> None:
for k, v in attrs.items():
setattr(self, k, v)
def load(self, filename: Path) -> None:
def load(self, filename):
"""Load the grammar tables from a pickle file."""
with open(filename, "rb") as f:
d = pickle.load(f)
self._update(d)
self.__dict__.update(d)
def loads(self, pkl: bytes) -> None:
def loads(self, pkl):
"""Load the grammar tables from a pickle bytes object."""
self._update(pickle.loads(pkl))
self.__dict__.update(pickle.loads(pkl))
def copy(self: _P) -> _P:
def copy(self):
"""
Copy the grammar.
"""
@ -138,7 +116,6 @@ def copy(self: _P) -> _P:
"number2symbol",
"dfas",
"keywords",
"soft_keywords",
"tokens",
"symbol2label",
):
@ -146,11 +123,10 @@ def copy(self: _P) -> _P:
new.labels = self.labels[:]
new.states = self.states[:]
new.start = self.start
new.version = self.version
new.async_keywords = self.async_keywords
return new
def report(self) -> None:
def report(self):
"""Dump the grammar tables to standard output, for debugging."""
from pprint import pprint
@ -218,7 +194,6 @@ def report(self) -> None:
//= DOUBLESLASHEQUAL
-> RARROW
:= COLONEQUAL
! BANG
"""
opmap = {}
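
The dump/load/loads trio is purely a cache mechanism: dump pickles the instance dictionary atomically (temp file plus os.replace), and load/loads copy the pickled attributes back in. A round-trip sketch on a toy instance:

    import os
    import tempfile

    g = Grammar()
    g.symbol2number["file_input"] = 257

    path = os.path.join(tempfile.mkdtemp(), "demo.pickle")
    g.dump(path)

    g2 = Grammar()
    g2.load(path)
    assert g2.symbol2number == {"file_input": 257}

    g3 = Grammar()
    with open(path, "rb") as f:
        g3.loads(f.read())   # same result, from bytes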


@ -0,0 +1,30 @@
# Stubs for lib2to3.pgen2.grammar (Python 3.6)
from blib2to3.pgen2 import _Path
from typing import Any, Dict, List, Optional, Text, Tuple, TypeVar
_P = TypeVar('_P')
_Label = Tuple[int, Optional[Text]]
_DFA = List[List[Tuple[int, int]]]
_DFAS = Tuple[_DFA, Dict[int, int]]
class Grammar:
symbol2number: Dict[Text, int]
number2symbol: Dict[int, Text]
states: List[_DFA]
dfas: Dict[int, _DFAS]
labels: List[_Label]
keywords: Dict[Text, int]
tokens: Dict[int, int]
symbol2label: Dict[Text, int]
start: int
async_keywords: bool
def __init__(self) -> None: ...
def dump(self, filename: _Path) -> None: ...
def load(self, filename: _Path) -> None: ...
def copy(self: _P) -> _P: ...
def report(self) -> None: ...
opmap_raw: Text
opmap: Dict[Text, Text]


@ -3,9 +3,9 @@
"""Safely evaluate Python string literals without using eval()."""
import re
import regex as re
simple_escapes: dict[str, str] = {
simple_escapes = {
"a": "\a",
"b": "\b",
"f": "\f",
@ -19,7 +19,7 @@
}
def escape(m: re.Match[str]) -> str:
def escape(m):
all, tail = m.group(0, 1)
assert all.startswith("\\")
esc = simple_escapes.get(tail)
@ -28,20 +28,20 @@ def escape(m: re.Match[str]) -> str:
if tail.startswith("x"):
hexes = tail[1:]
if len(hexes) < 2:
raise ValueError(f"invalid hex string escape ('\\{tail}')")
raise ValueError("invalid hex string escape ('\\%s')" % tail)
try:
i = int(hexes, 16)
except ValueError:
raise ValueError(f"invalid hex string escape ('\\{tail}')") from None
raise ValueError("invalid hex string escape ('\\%s')" % tail) from None
else:
try:
i = int(tail, 8)
except ValueError:
raise ValueError(f"invalid octal string escape ('\\{tail}')") from None
raise ValueError("invalid octal string escape ('\\%s')" % tail) from None
return chr(i)
def evalString(s: str) -> str:
def evalString(s):
assert s.startswith("'") or s.startswith('"'), repr(s[:1])
q = s[0]
if s[:3] == q * 3:
@ -52,7 +52,7 @@ def evalString(s: str) -> str:
return re.sub(r"\\(\'|\"|\\|[abfnrtv]|x.{0,2}|[0-7]{1,3})", escape, s)
def test() -> None:
def test():
for i in range(256):
c = chr(i)
s = repr(c)
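
The test() loop above checks exactly the round trip evalString is meant to provide. Two concrete cases, assuming this module is importable as blib2to3.pgen2.literals (the quote-stripping at the top of evalString is elided from this hunk):

    from blib2to3.pgen2.literals import evalString

    assert evalString(r"'a\x41\101\n'") == "aAA\n"   # hex, octal, simple escape
    assert evalString('"tab:\\t"') == "tab:\t"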


@ -0,0 +1,9 @@
# Stubs for lib2to3.pgen2.literals (Python 3.6)
from typing import Dict, Match, Text
simple_escapes: Dict[Text, Text]
def escape(m: Match) -> Text: ...
def evalString(s: Text) -> Text: ...
def test() -> None: ...

203
blib2to3/pgen2/parse.py Normal file

@ -0,0 +1,203 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
"""Parser engine for the grammar tables generated by pgen.
The grammar table must be loaded first.
See Parser/parser.c in the Python distribution for additional info on
how this parsing engine works.
"""
# Local imports
from . import token
class ParseError(Exception):
"""Exception to signal the parser is stuck."""
def __init__(self, msg, type, value, context):
Exception.__init__(
self, "%s: type=%r, value=%r, context=%r" % (msg, type, value, context)
)
self.msg = msg
self.type = type
self.value = value
self.context = context
class Parser(object):
"""Parser engine.
The proper usage sequence is:
p = Parser(grammar, [converter]) # create instance
p.setup([start]) # prepare for parsing
<for each input token>:
if p.addtoken(...): # parse a token; may raise ParseError
break
root = p.rootnode # root of abstract syntax tree
A Parser instance may be reused by calling setup() repeatedly.
A Parser instance contains state pertaining to the current token
sequence, and should not be used concurrently by different threads
to parse separate token sequences.
See driver.py for how to get input tokens by tokenizing a file or
string.
Parsing is complete when addtoken() returns True; the root of the
abstract syntax tree can then be retrieved from the rootnode
instance variable. When a syntax error occurs, addtoken() raises
the ParseError exception. There is no error recovery; the parser
cannot be used after a syntax error was reported (but it can be
reinitialized by calling setup()).
"""
def __init__(self, grammar, convert=None):
"""Constructor.
The grammar argument is a grammar.Grammar instance; see the
grammar module for more information.
The parser is not ready yet for parsing; you must call the
setup() method to get it started.
The optional convert argument is a function mapping concrete
syntax tree nodes to abstract syntax tree nodes. If not
given, no conversion is done and the syntax tree produced is
the concrete syntax tree. If given, it must be a function of
two arguments, the first being the grammar (a grammar.Grammar
instance), and the second being the concrete syntax tree node
to be converted. The syntax tree is converted from the bottom
up.
A concrete syntax tree node is a (type, value, context, nodes)
tuple, where type is the node type (a token or symbol number),
value is None for symbols and a string for tokens, context is
None or an opaque value used for error reporting (typically a
(lineno, offset) pair), and nodes is a list of children for
symbols, and None for tokens.
An abstract syntax tree node may be anything; this is entirely
up to the converter function.
"""
self.grammar = grammar
self.convert = convert or (lambda grammar, node: node)
def setup(self, start=None):
"""Prepare for parsing.
This *must* be called before starting to parse.
The optional argument is an alternative start symbol; it
defaults to the grammar's start symbol.
You can use a Parser instance to parse any number of programs;
each time you call setup() the parser is reset to an initial
state determined by the (implicit or explicit) start symbol.
"""
if start is None:
start = self.grammar.start
# Each stack entry is a tuple: (dfa, state, node).
# A node is a tuple: (type, value, context, children),
# where children is a list of nodes or None, and context may be None.
newnode = (start, None, None, [])
stackentry = (self.grammar.dfas[start], 0, newnode)
self.stack = [stackentry]
self.rootnode = None
self.used_names = set() # Aliased to self.rootnode.used_names in pop()
def addtoken(self, type, value, context):
"""Add a token; return True iff this is the end of the program."""
# Map from token to label
ilabel = self.classify(type, value, context)
# Loop until the token is shifted; may raise exceptions
while True:
dfa, state, node = self.stack[-1]
states, first = dfa
arcs = states[state]
# Look for a state with this label
for i, newstate in arcs:
t, v = self.grammar.labels[i]
if ilabel == i:
# Look it up in the list of labels
assert t < 256
# Shift a token; we're done with it
self.shift(type, value, newstate, context)
# Pop while we are in an accept-only state
state = newstate
while states[state] == [(0, state)]:
self.pop()
if not self.stack:
# Done parsing!
return True
dfa, state, node = self.stack[-1]
states, first = dfa
# Done with this token
return False
elif t >= 256:
# See if it's a symbol and if we're in its first set
itsdfa = self.grammar.dfas[t]
itsstates, itsfirst = itsdfa
if ilabel in itsfirst:
# Push a symbol
self.push(t, self.grammar.dfas[t], newstate, context)
break # To continue the outer while loop
else:
if (0, state) in arcs:
# An accepting state, pop it and try something else
self.pop()
if not self.stack:
# Done parsing, but another token is input
raise ParseError("too much input", type, value, context)
else:
# No success finding a transition
raise ParseError("bad input", type, value, context)
def classify(self, type, value, context):
"""Turn a token into a label. (Internal)"""
if type == token.NAME:
# Keep a listing of all used names
self.used_names.add(value)
# Check for reserved words
ilabel = self.grammar.keywords.get(value)
if ilabel is not None:
return ilabel
ilabel = self.grammar.tokens.get(type)
if ilabel is None:
raise ParseError("bad token", type, value, context)
return ilabel
def shift(self, type, value, newstate, context):
"""Shift a token. (Internal)"""
dfa, state, node = self.stack[-1]
newnode = (type, value, context, None)
newnode = self.convert(self.grammar, newnode)
if newnode is not None:
node[-1].append(newnode)
self.stack[-1] = (dfa, newstate, node)
def push(self, type, newdfa, newstate, context):
"""Push a nonterminal. (Internal)"""
dfa, state, node = self.stack[-1]
newnode = (type, None, context, [])
self.stack[-1] = (dfa, newstate, node)
self.stack.append((newdfa, 0, newnode))
def pop(self):
"""Pop a nonterminal. (Internal)"""
popdfa, popstate, popnode = self.stack.pop()
newnode = self.convert(self.grammar, popnode)
if newnode is not None:
if self.stack:
dfa, state, node = self.stack[-1]
node[-1].append(newnode)
else:
self.rootnode = newnode
self.rootnode.used_names = self.used_names
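
The docstring's usage sequence, made concrete. This is a hedged sketch against the 19.10b0-era API shown above (Parser still takes a converter; pytree.convert is the one the driver itself uses). As driver.py does, COMMENT/NL trivia must be filtered out and OP tokens remapped through grammar.opmap:

    from io import StringIO
    from blib2to3 import pygram, pytree
    from blib2to3.pgen2 import parse, token, tokenize

    pygram.initialize(cache_dir=None)
    grammar = pygram.python_grammar_no_print_statement

    p = parse.Parser(grammar, pytree.convert)
    p.setup()
    for type_, value, start, end, line in tokenize.generate_tokens(
        StringIO("x = 1\n").readline, grammar=grammar
    ):
        if type_ in (tokenize.COMMENT, tokenize.NL):
            continue                      # whitespace-only trivia
        if type_ == token.OP:
            type_ = grammar.opmap[value]  # e.g. "=" -> EQUAL
        if p.addtoken(type_, value, ("", start)):
            break                         # True means the program is complete
    print(p.rootnode)                     # a Node for file_input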

29
blib2to3/pgen2/parse.pyi Normal file

@ -0,0 +1,29 @@
# Stubs for lib2to3.pgen2.parse (Python 3.6)
from typing import Any, Dict, List, Optional, Sequence, Set, Text, Tuple
from blib2to3.pgen2.grammar import Grammar, _DFAS
from blib2to3.pytree import _NL, _Convert, _RawNode
_Context = Sequence[Any]
class ParseError(Exception):
msg: Text
type: int
value: Optional[Text]
context: _Context
def __init__(self, msg: Text, type: int, value: Optional[Text], context: _Context) -> None: ...
class Parser:
grammar: Grammar
convert: _Convert
stack: List[Tuple[_DFAS, int, _RawNode]]
rootnode: Optional[_NL]
used_names: Set[Text]
def __init__(self, grammar: Grammar, convert: Optional[_Convert] = ...) -> None: ...
def setup(self, start: Optional[int] = ...) -> None: ...
def addtoken(self, type: int, value: Optional[Text], context: _Context) -> bool: ...
def classify(self, type: int, value: Optional[Text], context: _Context) -> int: ...
def shift(self, type: int, value: Optional[Text], newstate: int, context: _Context) -> None: ...
def push(self, type: int, newdfa: _DFAS, newstate: int, context: _Context) -> None: ...
def pop(self) -> None: ...


@ -1,33 +1,23 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
import os
from collections.abc import Iterator, Sequence
from typing import IO, Any, NoReturn, Optional, Union
from blib2to3.pgen2 import grammar, token, tokenize
from blib2to3.pgen2.tokenize import TokenInfo
Path = Union[str, "os.PathLike[str]"]
# Pgen imports
from . import grammar, token, tokenize
class PgenGrammar(grammar.Grammar):
pass
class ParserGenerator:
filename: Path
stream: IO[str]
generator: Iterator[TokenInfo]
first: dict[str, Optional[dict[str, int]]]
def __init__(self, filename: Path, stream: Optional[IO[str]] = None) -> None:
class ParserGenerator(object):
def __init__(self, filename, stream=None):
close_stream = None
if stream is None:
stream = open(filename, encoding="utf-8")
stream = open(filename)
close_stream = stream.close
self.filename = filename
self.generator = tokenize.tokenize(stream.read())
self.stream = stream
self.generator = tokenize.generate_tokens(stream.readline)
self.gettoken() # Initialize lookahead
self.dfas, self.startsymbol = self.parse()
if close_stream is not None:
@ -35,7 +25,7 @@ def __init__(self, filename: Path, stream: Optional[IO[str]] = None) -> None:
self.first = {} # map from symbol name to set of tokens
self.addfirstsets()
def make_grammar(self) -> PgenGrammar:
def make_grammar(self):
c = PgenGrammar()
names = list(self.dfas.keys())
names.sort()
@ -60,9 +50,8 @@ def make_grammar(self) -> PgenGrammar:
c.start = c.symbol2number[self.startsymbol]
return c
def make_first(self, c: PgenGrammar, name: str) -> dict[int, int]:
def make_first(self, c, name):
rawfirst = self.first[name]
assert rawfirst is not None
first = {}
for label in sorted(rawfirst):
ilabel = self.make_label(c, label)
@ -70,7 +59,7 @@ def make_first(self, c: PgenGrammar, name: str) -> dict[int, int]:
first[ilabel] = 1
return first
def make_label(self, c: PgenGrammar, label: str) -> int:
def make_label(self, c, label):
# XXX Maybe this should be a method on a subclass of converter?
ilabel = len(c.labels)
if label[0].isalpha():
@ -99,17 +88,12 @@ def make_label(self, c: PgenGrammar, label: str) -> int:
assert label[0] in ('"', "'"), label
value = eval(label)
if value[0].isalpha():
if label[0] == '"':
keywords = c.soft_keywords
else:
keywords = c.keywords
# A keyword
if value in keywords:
return keywords[value]
if value in c.keywords:
return c.keywords[value]
else:
c.labels.append((token.NAME, value))
keywords[value] = ilabel
c.keywords[value] = ilabel
return ilabel
else:
# An operator (any non-numeric token)
@ -121,7 +105,7 @@ def make_label(self, c: PgenGrammar, label: str) -> int:
c.tokens[itoken] = ilabel
return ilabel
def addfirstsets(self) -> None:
def addfirstsets(self):
names = list(self.dfas.keys())
names.sort()
for name in names:
@ -129,41 +113,41 @@ def addfirstsets(self) -> None:
self.calcfirst(name)
# print name, self.first[name].keys()
def calcfirst(self, name: str) -> None:
def calcfirst(self, name):
dfa = self.dfas[name]
self.first[name] = None # dummy to detect left recursion
state = dfa[0]
totalset: dict[str, int] = {}
totalset = {}
overlapcheck = {}
for label in state.arcs:
for label, next in state.arcs.items():
if label in self.dfas:
if label in self.first:
fset = self.first[label]
if fset is None:
raise ValueError(f"recursion for rule {name!r}")
raise ValueError("recursion for rule %r" % name)
else:
self.calcfirst(label)
fset = self.first[label]
assert fset is not None
totalset.update(fset)
overlapcheck[label] = fset
else:
totalset[label] = 1
overlapcheck[label] = {label: 1}
inverse: dict[str, str] = {}
inverse = {}
for label, itsfirst in overlapcheck.items():
for symbol in itsfirst:
if symbol in inverse:
raise ValueError(
f"rule {name} is ambiguous; {symbol} is in the first sets of"
f" {label} as well as {inverse[symbol]}"
"rule %s is ambiguous; %s is in the"
" first sets of %s as well as %s"
% (name, symbol, label, inverse[symbol])
)
inverse[symbol] = label
self.first[name] = totalset
def parse(self) -> tuple[dict[str, list["DFAState"]], str]:
def parse(self):
dfas = {}
startsymbol: Optional[str] = None
startsymbol = None
# MSTART: (NEWLINE | RULE)* ENDMARKER
while self.type != token.ENDMARKER:
while self.type == token.NEWLINE:
@ -176,17 +160,16 @@ def parse(self) -> tuple[dict[str, list["DFAState"]], str]:
# self.dump_nfa(name, a, z)
dfa = self.make_dfa(a, z)
# self.dump_dfa(name, dfa)
# oldlen = len(dfa)
oldlen = len(dfa)
self.simplify_dfa(dfa)
# newlen = len(dfa)
newlen = len(dfa)
dfas[name] = dfa
# print name, oldlen, newlen
if startsymbol is None:
startsymbol = name
assert startsymbol is not None
return dfas, startsymbol
def make_dfa(self, start: "NFAState", finish: "NFAState") -> list["DFAState"]:
def make_dfa(self, start, finish):
# To turn an NFA into a DFA, we define the states of the DFA
# to correspond to *sets* of states of the NFA. Then do some
# state reduction. Let's represent sets as dicts with 1 for
@ -194,12 +177,12 @@ def make_dfa(self, start: "NFAState", finish: "NFAState") -> list["DFAState"]:
assert isinstance(start, NFAState)
assert isinstance(finish, NFAState)
def closure(state: NFAState) -> dict[NFAState, int]:
base: dict[NFAState, int] = {}
def closure(state):
base = {}
addclosure(state, base)
return base
def addclosure(state: NFAState, base: dict[NFAState, int]) -> None:
def addclosure(state, base):
assert isinstance(state, NFAState)
if state in base:
return
@ -210,7 +193,7 @@ def addclosure(state: NFAState, base: dict[NFAState, int]) -> None:
states = [DFAState(closure(start), finish)]
for state in states: # NB states grows while we're iterating
arcs: dict[str, dict[NFAState, int]] = {}
arcs = {}
for nfastate in state.nfaset:
for label, next in nfastate.arcs:
if label is not None:
@ -225,7 +208,7 @@ def addclosure(state: NFAState, base: dict[NFAState, int]) -> None:
state.addarc(st, label)
return states # List of DFAState instances; first one is start
def dump_nfa(self, name: str, start: "NFAState", finish: "NFAState") -> None:
def dump_nfa(self, name, start, finish):
print("Dump of NFA for", name)
todo = [start]
for i, state in enumerate(todo):
@ -237,18 +220,18 @@ def dump_nfa(self, name: str, start: "NFAState", finish: "NFAState") -> None:
j = len(todo)
todo.append(next)
if label is None:
print(f" -> {j}")
print(" -> %d" % j)
else:
print(f" {label} -> {j}")
print(" %s -> %d" % (label, j))
def dump_dfa(self, name: str, dfa: Sequence["DFAState"]) -> None:
def dump_dfa(self, name, dfa):
print("Dump of DFA for", name)
for i, state in enumerate(dfa):
print(" State", i, state.isfinal and "(final)" or "")
for label, next in sorted(state.arcs.items()):
print(f" {label} -> {dfa.index(next)}")
print(" %s -> %d" % (label, dfa.index(next)))
def simplify_dfa(self, dfa: list["DFAState"]) -> None:
def simplify_dfa(self, dfa):
# This is not theoretically optimal, but works well enough.
# Algorithm: repeatedly look for two states that have the same
# set of arcs (same labels pointing to the same nodes) and
@ -269,7 +252,7 @@ def simplify_dfa(self, dfa: list["DFAState"]) -> None:
changes = True
break
def parse_rhs(self) -> tuple["NFAState", "NFAState"]:
def parse_rhs(self):
# RHS: ALT ('|' ALT)*
a, z = self.parse_alt()
if self.value != "|":
@ -286,7 +269,7 @@ def parse_rhs(self) -> tuple["NFAState", "NFAState"]:
z.addarc(zz)
return aa, zz
def parse_alt(self) -> tuple["NFAState", "NFAState"]:
def parse_alt(self):
# ALT: ITEM+
a, b = self.parse_item()
while self.value in ("(", "[") or self.type in (token.NAME, token.STRING):
@ -295,7 +278,7 @@ def parse_alt(self) -> tuple["NFAState", "NFAState"]:
b = d
return a, b
def parse_item(self) -> tuple["NFAState", "NFAState"]:
def parse_item(self):
# ITEM: '[' RHS ']' | ATOM ['+' | '*']
if self.value == "[":
self.gettoken()
@ -315,7 +298,7 @@ def parse_item(self) -> tuple["NFAState", "NFAState"]:
else:
return a, a
def parse_atom(self) -> tuple["NFAState", "NFAState"]:
def parse_atom(self):
# ATOM: '(' RHS ')' | NAME | STRING
if self.value == "(":
self.gettoken()
@ -330,47 +313,46 @@ def parse_atom(self) -> tuple["NFAState", "NFAState"]:
return a, z
else:
self.raise_error(
f"expected (...) or NAME or STRING, got {self.type}/{self.value}"
"expected (...) or NAME or STRING, got %s/%s", self.type, self.value
)
def expect(self, type: int, value: Optional[Any] = None) -> str:
def expect(self, type, value=None):
if self.type != type or (value is not None and self.value != value):
self.raise_error(f"expected {type}/{value}, got {self.type}/{self.value}")
self.raise_error(
"expected %s/%s, got %s/%s", type, value, self.type, self.value
)
value = self.value
self.gettoken()
return value
def gettoken(self) -> None:
def gettoken(self):
tup = next(self.generator)
while tup[0] in (tokenize.COMMENT, tokenize.NL):
tup = next(self.generator)
self.type, self.value, self.begin, self.end, self.line = tup
# print token.tok_name[self.type], repr(self.value)
def raise_error(self, msg: str) -> NoReturn:
raise SyntaxError(
msg, (str(self.filename), self.end[0], self.end[1], self.line)
)
def raise_error(self, msg, *args):
if args:
try:
msg = msg % args
except:
msg = " ".join([msg] + list(map(str, args)))
raise SyntaxError(msg, (self.filename, self.end[0], self.end[1], self.line))
class NFAState:
arcs: list[tuple[Optional[str], "NFAState"]]
def __init__(self) -> None:
class NFAState(object):
def __init__(self):
self.arcs = [] # list of (label, NFAState) pairs
def addarc(self, next: "NFAState", label: Optional[str] = None) -> None:
def addarc(self, next, label=None):
assert label is None or isinstance(label, str)
assert isinstance(next, NFAState)
self.arcs.append((label, next))
class DFAState:
nfaset: dict[NFAState, Any]
isfinal: bool
arcs: dict[str, "DFAState"]
def __init__(self, nfaset: dict[NFAState, Any], final: NFAState) -> None:
class DFAState(object):
def __init__(self, nfaset, final):
assert isinstance(nfaset, dict)
assert isinstance(next(iter(nfaset)), NFAState)
assert isinstance(final, NFAState)
@ -378,18 +360,18 @@ def __init__(self, nfaset: dict[NFAState, Any], final: NFAState) -> None:
self.isfinal = final in nfaset
self.arcs = {} # map from label to DFAState
def addarc(self, next: "DFAState", label: str) -> None:
def addarc(self, next, label):
assert isinstance(label, str)
assert label not in self.arcs
assert isinstance(next, DFAState)
self.arcs[label] = next
def unifystate(self, old: "DFAState", new: "DFAState") -> None:
def unifystate(self, old, new):
for label, next in self.arcs.items():
if next is old:
self.arcs[label] = new
def __eq__(self, other: Any) -> bool:
def __eq__(self, other):
# Equality test -- ignore the nfaset instance variable
assert isinstance(other, DFAState)
if self.isfinal != other.isfinal:
@ -403,9 +385,9 @@ def __eq__(self, other: Any) -> bool:
return False
return True
__hash__: Any = None # For Py3 compatibility.
__hash__ = None # For Py3 compatibility.
def generate_grammar(filename: Path = "Grammar.txt") -> PgenGrammar:
def generate_grammar(filename="Grammar.txt"):
p = ParserGenerator(filename)
return p.make_grammar()
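
End to end, the generator turns a pgen grammar text into DFA tables. A sketch on a hypothetical two-rule grammar file:

    import os
    import tempfile

    src = "start: expr ENDMARKER\nexpr: NAME '+' NAME\n"   # toy grammar
    path = os.path.join(tempfile.mkdtemp(), "toy.txt")
    with open(path, "w") as f:
        f.write(src)

    g = generate_grammar(path)
    print(g.symbol2number)   # expected: {'start': 256, 'expr': 257}
    print(g.keywords)        # {} -- '+' becomes an operator label, not a keyword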

49
blib2to3/pgen2/pgen.pyi Normal file

@ -0,0 +1,49 @@
# Stubs for lib2to3.pgen2.pgen (Python 3.6)
from typing import Any, Dict, IO, Iterable, Iterator, List, Optional, Text, Tuple
from mypy_extensions import NoReturn
from blib2to3.pgen2 import _Path, grammar
from blib2to3.pgen2.tokenize import _TokenInfo
class PgenGrammar(grammar.Grammar): ...
class ParserGenerator:
filename: _Path
stream: IO[Text]
generator: Iterator[_TokenInfo]
first: Dict[Text, Dict[Text, int]]
def __init__(self, filename: _Path, stream: Optional[IO[Text]] = ...) -> None: ...
def make_grammar(self) -> PgenGrammar: ...
def make_first(self, c: PgenGrammar, name: Text) -> Dict[int, int]: ...
def make_label(self, c: PgenGrammar, label: Text) -> int: ...
def addfirstsets(self) -> None: ...
def calcfirst(self, name: Text) -> None: ...
def parse(self) -> Tuple[Dict[Text, List[DFAState]], Text]: ...
def make_dfa(self, start: NFAState, finish: NFAState) -> List[DFAState]: ...
def dump_nfa(self, name: Text, start: NFAState, finish: NFAState) -> List[DFAState]: ...
def dump_dfa(self, name: Text, dfa: Iterable[DFAState]) -> None: ...
def simplify_dfa(self, dfa: List[DFAState]) -> None: ...
def parse_rhs(self) -> Tuple[NFAState, NFAState]: ...
def parse_alt(self) -> Tuple[NFAState, NFAState]: ...
def parse_item(self) -> Tuple[NFAState, NFAState]: ...
def parse_atom(self) -> Tuple[NFAState, NFAState]: ...
def expect(self, type: int, value: Optional[Any] = ...) -> Text: ...
def gettoken(self) -> None: ...
def raise_error(self, msg: str, *args: Any) -> NoReturn: ...
class NFAState:
arcs: List[Tuple[Optional[Text], NFAState]]
def __init__(self) -> None: ...
def addarc(self, next: NFAState, label: Optional[Text] = ...) -> None: ...
class DFAState:
nfaset: Dict[NFAState, Any]
isfinal: bool
arcs: Dict[Text, DFAState]
def __init__(self, nfaset: Dict[NFAState, Any], final: NFAState) -> None: ...
def addarc(self, next: DFAState, label: Text) -> None: ...
def unifystate(self, old: DFAState, new: DFAState) -> None: ...
def __eq__(self, other: Any) -> bool: ...
def generate_grammar(filename: _Path = ...) -> PgenGrammar: ...

86
blib2to3/pgen2/token.py Normal file

@ -0,0 +1,86 @@
"""Token constants (from "token.h")."""
# Taken from Python (r53757) and modified to include some tokens
# originally monkeypatched in by pgen2.tokenize
# --start constants--
ENDMARKER = 0
NAME = 1
NUMBER = 2
STRING = 3
NEWLINE = 4
INDENT = 5
DEDENT = 6
LPAR = 7
RPAR = 8
LSQB = 9
RSQB = 10
COLON = 11
COMMA = 12
SEMI = 13
PLUS = 14
MINUS = 15
STAR = 16
SLASH = 17
VBAR = 18
AMPER = 19
LESS = 20
GREATER = 21
EQUAL = 22
DOT = 23
PERCENT = 24
BACKQUOTE = 25
LBRACE = 26
RBRACE = 27
EQEQUAL = 28
NOTEQUAL = 29
LESSEQUAL = 30
GREATEREQUAL = 31
TILDE = 32
CIRCUMFLEX = 33
LEFTSHIFT = 34
RIGHTSHIFT = 35
DOUBLESTAR = 36
PLUSEQUAL = 37
MINEQUAL = 38
STAREQUAL = 39
SLASHEQUAL = 40
PERCENTEQUAL = 41
AMPEREQUAL = 42
VBAREQUAL = 43
CIRCUMFLEXEQUAL = 44
LEFTSHIFTEQUAL = 45
RIGHTSHIFTEQUAL = 46
DOUBLESTAREQUAL = 47
DOUBLESLASH = 48
DOUBLESLASHEQUAL = 49
AT = 50
ATEQUAL = 51
OP = 52
COMMENT = 53
NL = 54
RARROW = 55
AWAIT = 56
ASYNC = 57
ERRORTOKEN = 58
COLONEQUAL = 59
N_TOKENS = 60
NT_OFFSET = 256
# --end constants--
tok_name = {}
for _name, _value in list(globals().items()):
if type(_value) is type(0):
tok_name[_value] = _name
def ISTERMINAL(x):
return x < NT_OFFSET
def ISNONTERMINAL(x):
return x >= NT_OFFSET
def ISEOF(x):
return x == ENDMARKER
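
A quick look at how the generated table and the three predicates behave:

    from blib2to3.pgen2 import token

    print(token.tok_name[token.COLONEQUAL])          # COLONEQUAL
    print(token.ISTERMINAL(token.NAME))              # True  (token numbers < 256)
    print(token.ISNONTERMINAL(token.NT_OFFSET + 1))  # True  (symbols >= 256)
    print(token.ISEOF(token.ENDMARKER))              # True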

74
blib2to3/pgen2/token.pyi Normal file

@ -0,0 +1,74 @@
# Stubs for lib2to3.pgen2.token (Python 3.6)
import sys
from typing import Dict, Text
ENDMARKER: int
NAME: int
NUMBER: int
STRING: int
NEWLINE: int
INDENT: int
DEDENT: int
LPAR: int
RPAR: int
LSQB: int
RSQB: int
COLON: int
COMMA: int
SEMI: int
PLUS: int
MINUS: int
STAR: int
SLASH: int
VBAR: int
AMPER: int
LESS: int
GREATER: int
EQUAL: int
DOT: int
PERCENT: int
BACKQUOTE: int
LBRACE: int
RBRACE: int
EQEQUAL: int
NOTEQUAL: int
LESSEQUAL: int
GREATEREQUAL: int
TILDE: int
CIRCUMFLEX: int
LEFTSHIFT: int
RIGHTSHIFT: int
DOUBLESTAR: int
PLUSEQUAL: int
MINEQUAL: int
STAREQUAL: int
SLASHEQUAL: int
PERCENTEQUAL: int
AMPEREQUAL: int
VBAREQUAL: int
CIRCUMFLEXEQUAL: int
LEFTSHIFTEQUAL: int
RIGHTSHIFTEQUAL: int
DOUBLESTAREQUAL: int
DOUBLESLASH: int
DOUBLESLASHEQUAL: int
OP: int
COMMENT: int
NL: int
if sys.version_info >= (3,):
RARROW: int
if sys.version_info >= (3, 5):
AT: int
ATEQUAL: int
AWAIT: int
ASYNC: int
ERRORTOKEN: int
COLONEQUAL: int
N_TOKENS: int
NT_OFFSET: int
tok_name: Dict[int, Text]
def ISTERMINAL(x: int) -> bool: ...
def ISNONTERMINAL(x: int) -> bool: ...
def ISEOF(x: int) -> bool: ...

652
blib2to3/pgen2/tokenize.py Normal file

@ -0,0 +1,652 @@
# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation.
# All rights reserved.
"""Tokenization help for Python programs.
generate_tokens(readline) is a generator that breaks a stream of
text into Python tokens. It accepts a readline-like method which is called
repeatedly to get the next line of input (or "" for EOF). It generates
5-tuples with these members:
the token type (see token.py)
the token (a string)
the starting (row, column) indices of the token (a 2-tuple of ints)
the ending (row, column) indices of the token (a 2-tuple of ints)
the original line (string)
It is designed to match the working of the Python tokenizer exactly, except
that it produces COMMENT tokens for comments and gives type OP for all
operators.
Older entry points
tokenize_loop(readline, tokeneater)
tokenize(readline, tokeneater=printtoken)
are the same, except instead of generating tokens, tokeneater is a callback
function to which the 5 fields described above are passed as 5 arguments,
each time a new token is found."""
__author__ = "Ka-Ping Yee <ping@lfw.org>"
__credits__ = "GvR, ESR, Tim Peters, Thomas Wouters, Fred Drake, Skip Montanaro"
import regex as re
from codecs import BOM_UTF8, lookup
from blib2to3.pgen2.token import *
from . import token
__all__ = [x for x in dir(token) if x[0] != "_"] + [
"tokenize",
"generate_tokens",
"untokenize",
]
del token
try:
bytes
except NameError:
# Support bytes type in Python <= 2.5, so 2to3 turns itself into
# valid Python 3 code.
bytes = str
def group(*choices):
return "(" + "|".join(choices) + ")"
def any(*choices):
return group(*choices) + "*"
def maybe(*choices):
return group(*choices) + "?"
def _combinations(*l):
return set(x + y for x in l for y in l + ("",) if x.casefold() != y.casefold())
Whitespace = r"[ \f\t]*"
Comment = r"#[^\r\n]*"
Ignore = Whitespace + any(r"\\\r?\n" + Whitespace) + maybe(Comment)
Name = r"\w+" # this is invalid but it's fine because Name comes after Number in all groups
Binnumber = r"0[bB]_?[01]+(?:_[01]+)*"
Hexnumber = r"0[xX]_?[\da-fA-F]+(?:_[\da-fA-F]+)*[lL]?"
Octnumber = r"0[oO]?_?[0-7]+(?:_[0-7]+)*[lL]?"
Decnumber = group(r"[1-9]\d*(?:_\d+)*[lL]?", "0[lL]?")
Intnumber = group(Binnumber, Hexnumber, Octnumber, Decnumber)
Exponent = r"[eE][-+]?\d+(?:_\d+)*"
Pointfloat = group(r"\d+(?:_\d+)*\.(?:\d+(?:_\d+)*)?", r"\.\d+(?:_\d+)*") + maybe(
Exponent
)
Expfloat = r"\d+(?:_\d+)*" + Exponent
Floatnumber = group(Pointfloat, Expfloat)
Imagnumber = group(r"\d+(?:_\d+)*[jJ]", Floatnumber + r"[jJ]")
Number = group(Imagnumber, Floatnumber, Intnumber)
# Tail end of ' string.
Single = r"[^'\\]*(?:\\.[^'\\]*)*'"
# Tail end of " string.
Double = r'[^"\\]*(?:\\.[^"\\]*)*"'
# Tail end of ''' string.
Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''"
# Tail end of """ string.
Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""'
_litprefix = r"(?:[uUrRbBfF]|[rR][fFbB]|[fFbBuU][rR])?"
Triple = group(_litprefix + "'''", _litprefix + '"""')
# Single-line ' or " string.
String = group(
_litprefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*'",
_litprefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*"',
)
# Because of leftmost-then-longest match semantics, be sure to put the
# longest operators first (e.g., if = came before ==, == would get
# recognized as two instances of =).
Operator = group(
r"\*\*=?",
r">>=?",
r"<<=?",
r"<>",
r"!=",
r"//=?",
r"->",
r"[+\-*/%&@|^=<>:]=?",
r"~",
)
Bracket = "[][(){}]"
Special = group(r"\r?\n", r"[:;.,`@]")
Funny = group(Operator, Bracket, Special)
PlainToken = group(Number, Funny, String, Name)
Token = Ignore + PlainToken
# First (or only) line of ' or " string.
ContStr = group(
_litprefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*" + group("'", r"\\\r?\n"),
_litprefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*' + group('"', r"\\\r?\n"),
)
PseudoExtras = group(r"\\\r?\n", Comment, Triple)
PseudoToken = Whitespace + group(PseudoExtras, Number, Funny, ContStr, Name)
tokenprog = re.compile(Token, re.UNICODE)
pseudoprog = re.compile(PseudoToken, re.UNICODE)
single3prog = re.compile(Single3)
double3prog = re.compile(Double3)
_strprefixes = (
_combinations("r", "R", "f", "F")
| _combinations("r", "R", "b", "B")
| {"u", "U", "ur", "uR", "Ur", "UR"}
)
endprogs = {
"'": re.compile(Single),
'"': re.compile(Double),
"'''": single3prog,
'"""': double3prog,
**{f"{prefix}'''": single3prog for prefix in _strprefixes},
**{f'{prefix}"""': double3prog for prefix in _strprefixes},
**{prefix: None for prefix in _strprefixes},
}
triple_quoted = (
{"'''", '"""'}
| {f"{prefix}'''" for prefix in _strprefixes}
| {f'{prefix}"""' for prefix in _strprefixes}
)
single_quoted = (
{"'", '"'}
| {f"{prefix}'" for prefix in _strprefixes}
| {f'{prefix}"' for prefix in _strprefixes}
)
tabsize = 8
class TokenError(Exception):
pass
class StopTokenizing(Exception):
pass
def printtoken(type, token, xxx_todo_changeme, xxx_todo_changeme1, line): # for testing
(srow, scol) = xxx_todo_changeme
(erow, ecol) = xxx_todo_changeme1
print(
"%d,%d-%d,%d:\t%s\t%s" % (srow, scol, erow, ecol, tok_name[type], repr(token))
)
def tokenize(readline, tokeneater=printtoken):
"""
The tokenize() function accepts two parameters: one representing the
input stream, and one providing an output mechanism for tokenize().
The first parameter, readline, must be a callable object which provides
the same interface as the readline() method of built-in file objects.
Each call to the function should return one line of input as a string.
The second parameter, tokeneater, must also be a callable object. It is
called once for each token, with five arguments, corresponding to the
tuples generated by generate_tokens().
"""
try:
tokenize_loop(readline, tokeneater)
except StopTokenizing:
pass
# backwards compatible interface
def tokenize_loop(readline, tokeneater):
for token_info in generate_tokens(readline):
tokeneater(*token_info)
class Untokenizer:
def __init__(self):
self.tokens = []
self.prev_row = 1
self.prev_col = 0
def add_whitespace(self, start):
row, col = start
assert row <= self.prev_row
col_offset = col - self.prev_col
if col_offset:
self.tokens.append(" " * col_offset)
def untokenize(self, iterable):
for t in iterable:
if len(t) == 2:
self.compat(t, iterable)
break
tok_type, token, start, end, line = t
self.add_whitespace(start)
self.tokens.append(token)
self.prev_row, self.prev_col = end
if tok_type in (NEWLINE, NL):
self.prev_row += 1
self.prev_col = 0
return "".join(self.tokens)
def compat(self, token, iterable):
startline = False
indents = []
toks_append = self.tokens.append
toknum, tokval = token
if toknum in (NAME, NUMBER):
tokval += " "
if toknum in (NEWLINE, NL):
startline = True
for tok in iterable:
toknum, tokval = tok[:2]
if toknum in (NAME, NUMBER, ASYNC, AWAIT):
tokval += " "
if toknum == INDENT:
indents.append(tokval)
continue
elif toknum == DEDENT:
indents.pop()
continue
elif toknum in (NEWLINE, NL):
startline = True
elif startline and indents:
toks_append(indents[-1])
startline = False
toks_append(tokval)
cookie_re = re.compile(r"^[ \t\f]*#.*?coding[:=][ \t]*([-\w.]+)", re.ASCII)
blank_re = re.compile(br"^[ \t\f]*(?:[#\r\n]|$)", re.ASCII)
def _get_normal_name(orig_enc):
"""Imitates get_normal_name in tokenizer.c."""
# Only care about the first 12 characters.
enc = orig_enc[:12].lower().replace("_", "-")
if enc == "utf-8" or enc.startswith("utf-8-"):
return "utf-8"
if enc in ("latin-1", "iso-8859-1", "iso-latin-1") or enc.startswith(
("latin-1-", "iso-8859-1-", "iso-latin-1-")
):
return "iso-8859-1"
return orig_enc
def detect_encoding(readline):
"""
The detect_encoding() function is used to detect the encoding that should
be used to decode a Python source file. It requires one argument, readline,
in the same way as the tokenize() generator.
It will call readline a maximum of twice, and return the encoding used
(as a string) and a list of any lines (left as bytes) it has read
in.
It detects the encoding from the presence of a utf-8 bom or an encoding
cookie as specified in pep-0263. If both a bom and a cookie are present, but
disagree, a SyntaxError will be raised. If the encoding cookie is an invalid
charset, raise a SyntaxError. Note that if a utf-8 bom is found,
'utf-8-sig' is returned.
If no encoding is specified, then the default of 'utf-8' will be returned.
"""
bom_found = False
encoding = None
default = "utf-8"
def read_or_stop():
try:
return readline()
except StopIteration:
return bytes()
def find_cookie(line):
try:
line_string = line.decode("ascii")
except UnicodeDecodeError:
return None
match = cookie_re.match(line_string)
if not match:
return None
encoding = _get_normal_name(match.group(1))
try:
codec = lookup(encoding)
except LookupError:
# This behaviour mimics the Python interpreter
raise SyntaxError("unknown encoding: " + encoding)
if bom_found:
if codec.name != "utf-8":
# This behaviour mimics the Python interpreter
raise SyntaxError("encoding problem: utf-8")
encoding += "-sig"
return encoding
first = read_or_stop()
if first.startswith(BOM_UTF8):
bom_found = True
first = first[3:]
default = "utf-8-sig"
if not first:
return default, []
encoding = find_cookie(first)
if encoding:
return encoding, [first]
if not blank_re.match(first):
return default, [first]
second = read_or_stop()
if not second:
return default, [first]
encoding = find_cookie(second)
if encoding:
return encoding, [first, second]
return default, [first, second]
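
A sketch of the sniffing contract; BytesIO stands in for a real file opened in binary mode:

    from io import BytesIO
    from blib2to3.pgen2.tokenize import detect_encoding

    src = b"# -*- coding: iso-8859-1 -*-\nx = 1\n"
    encoding, lines = detect_encoding(BytesIO(src).readline)
    print(encoding)   # iso-8859-1
    print(lines)      # the raw line(s) consumed while sniffing, still bytes
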
def untokenize(iterable):
"""Transform tokens back into Python source code.
Each element returned by the iterable must be a token sequence
with at least two elements, a token number and token value. If
only two tokens are passed, the resulting output is poor.
Round-trip invariant for full input:
Untokenized source will match input source exactly
Round-trip invariant for limited input:
# Output text will tokenize back to the input
t1 = [tok[:2] for tok in generate_tokens(f.readline)]
newcode = untokenize(t1)
readline = iter(newcode.splitlines(1)).next
t2 = [tok[:2] for tok in generate_tokens(readline)]
assert t1 == t2
"""
ut = Untokenizer()
return ut.untokenize(iterable)
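
The full-fidelity path in one assert: with complete 5-tuples the source comes back exactly, while the 2-tuple compat path, as the docstring warns, only approximates spacing.

    from io import StringIO
    from blib2to3.pgen2.tokenize import generate_tokens, untokenize

    code = "a = b + 1\n"
    toks = list(generate_tokens(StringIO(code).readline))
    assert untokenize(toks) == code
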
def generate_tokens(readline, grammar=None):
"""
The generate_tokens() generator requires one argument, readline, which
must be a callable object which provides the same interface as the
readline() method of built-in file objects. Each call to the function
should return one line of input as a string. Alternately, readline
can be a callable function terminating with StopIteration:
readline = open(myfile).next # Example of alternate readline
The generator produces 5-tuples with these members: the token type; the
token string; a 2-tuple (srow, scol) of ints specifying the row and
column where the token begins in the source; a 2-tuple (erow, ecol) of
ints specifying the row and column where the token ends in the source;
and the line on which the token was found. The line passed is the
logical line; continuation lines are included.
"""
lnum = parenlev = continued = 0
numchars = "0123456789"
contstr, needcont = "", 0
contline = None
indents = [0]
# If we know we're parsing 3.7+, we can unconditionally parse `async` and
# `await` as keywords.
async_keywords = False if grammar is None else grammar.async_keywords
# 'stashed' and 'async_*' are used for async/await parsing
stashed = None
async_def = False
async_def_indent = 0
async_def_nl = False
while 1: # loop over lines in stream
try:
line = readline()
except StopIteration:
line = ""
lnum = lnum + 1
pos, max = 0, len(line)
if contstr: # continued string
if not line:
raise TokenError("EOF in multi-line string", strstart)
endmatch = endprog.match(line)
if endmatch:
pos = end = endmatch.end(0)
yield (
STRING,
contstr + line[:end],
strstart,
(lnum, end),
contline + line,
)
contstr, needcont = "", 0
contline = None
elif needcont and line[-2:] != "\\\n" and line[-3:] != "\\\r\n":
yield (
ERRORTOKEN,
contstr + line,
strstart,
(lnum, len(line)),
contline,
)
contstr = ""
contline = None
continue
else:
contstr = contstr + line
contline = contline + line
continue
elif parenlev == 0 and not continued: # new statement
if not line:
break
column = 0
while pos < max: # measure leading whitespace
if line[pos] == " ":
column = column + 1
elif line[pos] == "\t":
column = (column // tabsize + 1) * tabsize
elif line[pos] == "\f":
column = 0
else:
break
pos = pos + 1
if pos == max:
break
if stashed:
yield stashed
stashed = None
if line[pos] in "\r\n": # skip blank lines
yield (NL, line[pos:], (lnum, pos), (lnum, len(line)), line)
continue
if line[pos] == "#": # skip comments
comment_token = line[pos:].rstrip("\r\n")
nl_pos = pos + len(comment_token)
yield (
COMMENT,
comment_token,
(lnum, pos),
(lnum, pos + len(comment_token)),
line,
)
yield (NL, line[nl_pos:], (lnum, nl_pos), (lnum, len(line)), line)
continue
if column > indents[-1]: # count indents
indents.append(column)
yield (INDENT, line[:pos], (lnum, 0), (lnum, pos), line)
while column < indents[-1]: # count dedents
if column not in indents:
raise IndentationError(
"unindent does not match any outer indentation level",
("<tokenize>", lnum, pos, line),
)
indents = indents[:-1]
if async_def and async_def_indent >= indents[-1]:
async_def = False
async_def_nl = False
async_def_indent = 0
yield (DEDENT, "", (lnum, pos), (lnum, pos), line)
if async_def and async_def_nl and async_def_indent >= indents[-1]:
async_def = False
async_def_nl = False
async_def_indent = 0
else: # continued statement
if not line:
raise TokenError("EOF in multi-line statement", (lnum, 0))
continued = 0
while pos < max:
pseudomatch = pseudoprog.match(line, pos)
if pseudomatch: # scan for tokens
start, end = pseudomatch.span(1)
spos, epos, pos = (lnum, start), (lnum, end), end
token, initial = line[start:end], line[start]
if initial in numchars or (
initial == "." and token != "."
): # ordinary number
yield (NUMBER, token, spos, epos, line)
elif initial in "\r\n":
newline = NEWLINE
if parenlev > 0:
newline = NL
elif async_def:
async_def_nl = True
if stashed:
yield stashed
stashed = None
yield (newline, token, spos, epos, line)
elif initial == "#":
assert not token.endswith("\n")
if stashed:
yield stashed
stashed = None
yield (COMMENT, token, spos, epos, line)
elif token in triple_quoted:
endprog = endprogs[token]
endmatch = endprog.match(line, pos)
if endmatch: # all on one line
pos = endmatch.end(0)
token = line[start:pos]
if stashed:
yield stashed
stashed = None
yield (STRING, token, spos, (lnum, pos), line)
else:
strstart = (lnum, start) # multiple lines
contstr = line[start:]
contline = line
break
elif (
initial in single_quoted
or token[:2] in single_quoted
or token[:3] in single_quoted
):
if token[-1] == "\n": # continued string
strstart = (lnum, start)
endprog = (
endprogs[initial]
or endprogs[token[1]]
or endprogs[token[2]]
)
contstr, needcont = line[start:], 1
contline = line
break
else: # ordinary string
if stashed:
yield stashed
stashed = None
yield (STRING, token, spos, epos, line)
elif initial.isidentifier(): # ordinary name
if token in ("async", "await"):
if async_keywords or async_def:
yield (
ASYNC if token == "async" else AWAIT,
token,
spos,
epos,
line,
)
continue
tok = (NAME, token, spos, epos, line)
if token == "async" and not stashed:
stashed = tok
continue
if token in ("def", "for"):
if stashed and stashed[0] == NAME and stashed[1] == "async":
if token == "def":
async_def = True
async_def_indent = indents[-1]
yield (
ASYNC,
stashed[1],
stashed[2],
stashed[3],
stashed[4],
)
stashed = None
if stashed:
yield stashed
stashed = None
yield tok
elif initial == "\\": # continued stmt
# This yield is new; needed for better idempotency:
if stashed:
yield stashed
stashed = None
yield (NL, token, spos, (lnum, pos), line)
continued = 1
else:
if initial in "([{":
parenlev = parenlev + 1
elif initial in ")]}":
parenlev = parenlev - 1
if stashed:
yield stashed
stashed = None
yield (OP, token, spos, epos, line)
else:
yield (ERRORTOKEN, line[pos], (lnum, pos), (lnum, pos + 1), line)
pos = pos + 1
if stashed:
yield stashed
stashed = None
for indent in indents[1:]: # pop remaining indent levels
yield (DEDENT, "", (lnum, 0), (lnum, 0), "")
yield (ENDMARKER, "", (lnum, 0), (lnum, 0), "")
if __name__ == "__main__": # testing
import sys
if len(sys.argv) > 1:
tokenize(open(sys.argv[1]).readline)
else:
tokenize(sys.stdin.readline)
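
Outside this script mode, generate_tokens is the entry point the driver and pgen both use. A sketch of iterating it directly:

    from io import StringIO
    from blib2to3.pgen2.tokenize import generate_tokens, tok_name

    for type_, value, start, end, line in generate_tokens(
        StringIO("x = 1  # answer\n").readline
    ):
        print(tok_name[type_], repr(value), start, end)
    # Expected sequence: NAME, OP, NUMBER, COMMENT, NEWLINE, ENDMARKER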


@ -0,0 +1,32 @@
# Stubs for lib2to3.pgen2.tokenize (Python 3.6)
# NOTE: Only elements from __all__ are present.
from typing import Callable, Iterable, Iterator, List, Optional, Text, Tuple
from blib2to3.pgen2.token import * # noqa
from blib2to3.pygram import Grammar
_Coord = Tuple[int, int]
_TokenEater = Callable[[int, Text, _Coord, _Coord, Text], None]
_TokenInfo = Tuple[int, Text, _Coord, _Coord, Text]
class TokenError(Exception): ...
class StopTokenizing(Exception): ...
def tokenize(readline: Callable[[], Text], tokeneater: _TokenEater = ...) -> None: ...
class Untokenizer:
tokens: List[Text]
prev_row: int
prev_col: int
def __init__(self) -> None: ...
def add_whitespace(self, start: _Coord) -> None: ...
def untokenize(self, iterable: Iterable[_TokenInfo]) -> Text: ...
def compat(self, token: Tuple[int, Text], iterable: Iterable[_TokenInfo]) -> None: ...
def untokenize(iterable: Iterable[_TokenInfo]) -> Text: ...
def generate_tokens(
readline: Callable[[], Text],
grammar: Optional[Grammar] = ...
) -> Iterator[_TokenInfo]: ...

63
blib2to3/pygram.py Normal file

@ -0,0 +1,63 @@
# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
"""Export the Python grammar and symbols."""
# Python imports
import os
# Local imports
from .pgen2 import token
from .pgen2 import driver
# The grammar file
_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), "Grammar.txt")
_PATTERN_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), "PatternGrammar.txt")
class Symbols(object):
def __init__(self, grammar):
"""Initializer.
Creates an attribute for each grammar symbol (nonterminal),
whose value is the symbol's type (an int >= 256).
"""
for name, symbol in grammar.symbol2number.items():
setattr(self, name, symbol)
def initialize(cache_dir=None):
global python_grammar
global python_grammar_no_print_statement
global python_grammar_no_print_statement_no_exec_statement
global python_grammar_no_print_statement_no_exec_statement_async_keywords
global python_symbols
global pattern_grammar
global pattern_symbols
# Python 2
python_grammar = driver.load_packaged_grammar("blib2to3", _GRAMMAR_FILE, cache_dir)
python_symbols = Symbols(python_grammar)
# Python 2 + from __future__ import print_function
python_grammar_no_print_statement = python_grammar.copy()
del python_grammar_no_print_statement.keywords["print"]
# Python 3.0-3.6
python_grammar_no_print_statement_no_exec_statement = python_grammar.copy()
del python_grammar_no_print_statement_no_exec_statement.keywords["print"]
del python_grammar_no_print_statement_no_exec_statement.keywords["exec"]
# Python 3.7+
python_grammar_no_print_statement_no_exec_statement_async_keywords = (
python_grammar_no_print_statement_no_exec_statement.copy()
)
python_grammar_no_print_statement_no_exec_statement_async_keywords.async_keywords = (
True
)
pattern_grammar = driver.load_packaged_grammar(
"blib2to3", _PATTERN_GRAMMAR_FILE, cache_dir
)
pattern_symbols = Symbols(pattern_grammar)
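
Because the module-level grammars only exist after initialize() runs, typical use (against the 19.10b0-style Driver signature shown earlier, with pytree.convert as the converter) looks like:

    from blib2to3 import pygram, pytree
    from blib2to3.pgen2 import driver

    pygram.initialize(cache_dir=None)   # must run before touching the globals

    d = driver.Driver(
        pygram.python_grammar_no_print_statement_no_exec_statement,
        convert=pytree.convert,
    )
    tree = d.parse_string("name = 'value'\n")
    print(pytree.type_repr(tree.type))  # file_input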

126
blib2to3/pygram.pyi Normal file

@ -0,0 +1,126 @@
# Stubs for lib2to3.pygram (Python 3.6)
import os
from typing import Any, Union
from blib2to3.pgen2.grammar import Grammar
class Symbols:
def __init__(self, grammar: Grammar) -> None: ...
class python_symbols(Symbols):
and_expr: int
and_test: int
annassign: int
arglist: int
argument: int
arith_expr: int
assert_stmt: int
async_funcdef: int
async_stmt: int
atom: int
augassign: int
break_stmt: int
classdef: int
comp_for: int
comp_if: int
comp_iter: int
comp_op: int
comparison: int
compound_stmt: int
continue_stmt: int
decorated: int
decorator: int
decorators: int
del_stmt: int
dictsetmaker: int
dotted_as_name: int
dotted_as_names: int
dotted_name: int
encoding_decl: int
eval_input: int
except_clause: int
exec_stmt: int
expr: int
expr_stmt: int
exprlist: int
factor: int
file_input: int
flow_stmt: int
for_stmt: int
funcdef: int
global_stmt: int
if_stmt: int
import_as_name: int
import_as_names: int
import_from: int
import_name: int
import_stmt: int
lambdef: int
listmaker: int
namedexpr_test: int
not_test: int
old_comp_for: int
old_comp_if: int
old_comp_iter: int
old_lambdef: int
old_test: int
or_test: int
parameters: int
pass_stmt: int
power: int
print_stmt: int
raise_stmt: int
return_stmt: int
shift_expr: int
simple_stmt: int
single_input: int
sliceop: int
small_stmt: int
star_expr: int
stmt: int
subscript: int
subscriptlist: int
suite: int
term: int
test: int
testlist: int
testlist1: int
testlist_gexp: int
testlist_safe: int
testlist_star_expr: int
tfpdef: int
tfplist: int
tname: int
trailer: int
try_stmt: int
typedargslist: int
varargslist: int
vfpdef: int
vfplist: int
vname: int
while_stmt: int
with_item: int
with_stmt: int
with_var: int
xor_expr: int
yield_arg: int
yield_expr: int
yield_stmt: int
class pattern_symbols(Symbols):
Alternative: int
Alternatives: int
Details: int
Matcher: int
NegatedUnit: int
Repeater: int
Unit: int
python_grammar: Grammar
python_grammar_no_print_statement: Grammar
python_grammar_no_print_statement_no_exec_statement: Grammar
python_grammar_no_print_statement_no_exec_statement_async_keywords: Grammar
python_grammar_no_exec_statement: Grammar
pattern_grammar: Grammar
def initialize(cache_dir: Union[str, os.PathLike, None]) -> None: ...


@ -10,48 +10,31 @@
There's also a pattern matching implementation here.
"""
# mypy: allow-untyped-defs, allow-incomplete-defs
from collections.abc import Iterable, Iterator
from typing import Any, Optional, TypeVar, Union
from blib2to3.pgen2.grammar import Grammar
__author__ = "Guido van Rossum <guido@python.org>"
import sys
from io import StringIO
HUGE: int = 0x7FFFFFFF # maximum repeat count, default max
HUGE = 0x7FFFFFFF # maximum repeat count, default max
_type_reprs: dict[int, Union[str, int]] = {}
_type_reprs = {}
def type_repr(type_num: int) -> Union[str, int]:
def type_repr(type_num):
global _type_reprs
if not _type_reprs:
from . import pygram
if not hasattr(pygram, "python_symbols"):
pygram.initialize(cache_dir=None)
from .pygram import python_symbols
# printing tokens is possible but not as useful
# from .pgen2 import token // token.__dict__.items():
for name in dir(pygram.python_symbols):
val = getattr(pygram.python_symbols, name)
for name, val in python_symbols.__dict__.items():
if type(val) == int:
_type_reprs[val] = name
return _type_reprs.setdefault(type_num, type_num)
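# Illustrative (assuming grammars are initialized): type_repr(python_symbols.funcdef)
# returns the string "funcdef", while a plain token number (< 256, e.g. 1) is not
# in the mapping and is returned unchanged.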
_P = TypeVar("_P", bound="Base")
class Base(object):
NL = Union["Node", "Leaf"]
Context = tuple[str, tuple[int, int]]
RawNode = tuple[int, Optional[str], Optional[Context], Optional[list[NL]]]
class Base:
"""
Abstract base class for Node and Leaf.
@ -62,18 +45,18 @@ class Base:
"""
# Default values for instance variables
type: int # int: token number (< 256) or symbol number (>= 256)
parent: Optional["Node"] = None # Parent node pointer, or None
children: list[NL] # List of subnodes
was_changed: bool = False
was_checked: bool = False
type = None # int: token number (< 256) or symbol number (>= 256)
parent = None # Parent node pointer, or None
children = () # Tuple of subnodes
was_changed = False
was_checked = False
def __new__(cls, *args, **kwds):
"""Constructor that prevents Base from being instantiated."""
assert cls is not Base, "Cannot instantiate Base"
return object.__new__(cls)
def __eq__(self, other: Any) -> bool:
def __eq__(self, other):
"""
Compare two nodes for equality.
@ -83,11 +66,9 @@ def __eq__(self, other: Any) -> bool:
return NotImplemented
return self._eq(other)
@property
def prefix(self) -> str:
raise NotImplementedError
__hash__ = None # For Py3 compatibility.
def _eq(self: _P, other: _P) -> bool:
def _eq(self, other):
"""
Compare two nodes for equality.
@ -98,10 +79,7 @@ def _eq(self: _P, other: _P) -> bool:
"""
raise NotImplementedError
def __deepcopy__(self: _P, memo: Any) -> _P:
return self.clone()
def clone(self: _P) -> _P:
def clone(self):
"""
Return a cloned (deep) copy of self.
@ -109,7 +87,7 @@ def clone(self: _P) -> _P:
"""
raise NotImplementedError
def post_order(self) -> Iterator[NL]:
def post_order(self):
"""
Return a post-order iterator for the tree.
@ -117,7 +95,7 @@ def post_order(self) -> Iterator[NL]:
"""
raise NotImplementedError
def pre_order(self) -> Iterator[NL]:
def pre_order(self):
"""
Return a pre-order iterator for the tree.
@ -125,7 +103,7 @@ def pre_order(self) -> Iterator[NL]:
"""
raise NotImplementedError
def replace(self, new: Union[NL, list[NL]]) -> None:
def replace(self, new):
"""Replace this node with a new one in the parent."""
assert self.parent is not None, str(self)
assert new is not None
@ -149,23 +127,23 @@ def replace(self, new: Union[NL, list[NL]]) -> None:
x.parent = self.parent
self.parent = None
def get_lineno(self) -> Optional[int]:
def get_lineno(self):
"""Return the line number which generated the invocant node."""
node = self
while not isinstance(node, Leaf):
if not node.children:
return None
return
node = node.children[0]
return node.lineno
def changed(self) -> None:
def changed(self):
if self.was_changed:
return
if self.parent:
self.parent.changed()
self.was_changed = True
def remove(self) -> Optional[int]:
def remove(self):
"""
Remove the node from the tree. Returns the position of the node in its
parent's children before it was removed.
@ -178,10 +156,9 @@ def remove(self) -> Optional[int]:
self.parent.invalidate_sibling_maps()
self.parent = None
return i
return None
@property
def next_sibling(self) -> Optional[NL]:
def next_sibling(self):
"""
The node immediately following the invocant in their parent's children
list. If the invocant does not have a next sibling, it is None
@ -191,11 +168,10 @@ def next_sibling(self) -> Optional[NL]:
if self.parent.next_sibling_map is None:
self.parent.update_sibling_maps()
assert self.parent.next_sibling_map is not None
return self.parent.next_sibling_map[id(self)]
@property
def prev_sibling(self) -> Optional[NL]:
def prev_sibling(self):
"""
The node immediately preceding the invocant in their parent's children
list. If the invocant does not have a previous sibling, it is None.
@ -205,19 +181,18 @@ def prev_sibling(self) -> Optional[NL]:
if self.parent.prev_sibling_map is None:
self.parent.update_sibling_maps()
assert self.parent.prev_sibling_map is not None
return self.parent.prev_sibling_map[id(self)]
def leaves(self) -> Iterator["Leaf"]:
def leaves(self):
for child in self.children:
yield from child.leaves()
def depth(self) -> int:
def depth(self):
if self.parent is None:
return 0
return 1 + self.parent.depth()
def get_suffix(self) -> str:
def get_suffix(self):
"""
Return the string immediately following the invocant node. This is
effectively equivalent to node.next_sibling.prefix
@ -225,24 +200,19 @@ def get_suffix(self) -> str:
next_sib = self.next_sibling
if next_sib is None:
return ""
prefix = next_sib.prefix
return prefix
return next_sib.prefix
if sys.version_info < (3, 0):
def __str__(self):
return str(self).encode("ascii")
class Node(Base):
"""Concrete implementation for interior nodes."""
fixers_applied: Optional[list[Any]]
used_names: Optional[set[str]]
def __init__(
self,
type: int,
children: list[NL],
context: Optional[Any] = None,
prefix: Optional[str] = None,
fixers_applied: Optional[list[Any]] = None,
) -> None:
def __init__(self, type, children, context=None, prefix=None, fixers_applied=None):
"""
Initializer.
@ -265,12 +235,15 @@ def __init__(
else:
self.fixers_applied = None
def __repr__(self) -> str:
def __repr__(self):
"""Return a canonical string representation."""
assert self.type is not None
return f"{self.__class__.__name__}({type_repr(self.type)}, {self.children!r})"
return "%s(%s, %r)" % (
self.__class__.__name__,
type_repr(self.type),
self.children,
)
def __str__(self) -> str:
def __unicode__(self):
"""
Return a pretty string representation.
@ -278,12 +251,14 @@ def __str__(self) -> str:
"""
return "".join(map(str, self.children))
def _eq(self, other: Base) -> bool:
if sys.version_info > (3, 0):
__str__ = __unicode__
def _eq(self, other):
"""Compare two nodes for equality."""
return (self.type, self.children) == (other.type, other.children)
def clone(self) -> "Node":
assert self.type is not None
def clone(self):
"""Return a cloned (deep) copy of self."""
return Node(
self.type,
@ -291,20 +266,20 @@ def clone(self) -> "Node":
fixers_applied=self.fixers_applied,
)
def post_order(self) -> Iterator[NL]:
def post_order(self):
"""Return a post-order iterator for the tree."""
for child in self.children:
yield from child.post_order()
yield self
def pre_order(self) -> Iterator[NL]:
def pre_order(self):
"""Return a pre-order iterator for the tree."""
yield self
for child in self.children:
yield from child.pre_order()
@property
def prefix(self) -> str:
def prefix(self):
"""
The whitespace and comments preceding this node in the input.
"""
@ -313,11 +288,11 @@ def prefix(self) -> str:
return self.children[0].prefix
@prefix.setter
def prefix(self, prefix: str) -> None:
def prefix(self, prefix):
if self.children:
self.children[0].prefix = prefix
def set_child(self, i: int, child: NL) -> None:
def set_child(self, i, child):
"""
Equivalent to 'node.children[i] = child'. This method also sets the
child's parent attribute appropriately.
@ -328,7 +303,7 @@ def set_child(self, i: int, child: NL) -> None:
self.changed()
self.invalidate_sibling_maps()
def insert_child(self, i: int, child: NL) -> None:
def insert_child(self, i, child):
"""
Equivalent to 'node.children.insert(i, child)'. This method also sets
the child's parent attribute appropriately.
@ -338,7 +313,7 @@ def insert_child(self, i: int, child: NL) -> None:
self.changed()
self.invalidate_sibling_maps()
def append_child(self, child: NL) -> None:
def append_child(self, child):
"""
Equivalent to 'node.children.append(child)'. This method also sets the
child's parent attribute appropriately.
@ -348,16 +323,14 @@ def append_child(self, child: NL) -> None:
self.changed()
self.invalidate_sibling_maps()
def invalidate_sibling_maps(self) -> None:
self.prev_sibling_map: Optional[dict[int, Optional[NL]]] = None
self.next_sibling_map: Optional[dict[int, Optional[NL]]] = None
def invalidate_sibling_maps(self):
self.prev_sibling_map = None
self.next_sibling_map = None
def update_sibling_maps(self) -> None:
_prev: dict[int, Optional[NL]] = {}
_next: dict[int, Optional[NL]] = {}
self.prev_sibling_map = _prev
self.next_sibling_map = _next
previous: Optional[NL] = None
def update_sibling_maps(self):
self.prev_sibling_map = _prev = {}
self.next_sibling_map = _next = {}
previous = None
for current in self.children:
_prev[id(current)] = previous
_next[id(previous)] = current
@ -366,40 +339,21 @@ def update_sibling_maps(self) -> None:
class Leaf(Base):
"""Concrete implementation for leaf nodes."""
# Default values for instance variables
value: str
fixers_applied: list[Any]
bracket_depth: int
# Changed later in brackets.py
opening_bracket: Optional["Leaf"] = None
used_names: Optional[set[str]]
_prefix = "" # Whitespace and comments preceding this token in the input
lineno: int = 0 # Line where this token starts in the input
column: int = 0 # Column where this token starts in the input
# If not None, this Leaf is created by converting a block of fmt off/skip
# code, and `fmt_pass_converted_first_leaf` points to the first Leaf in the
# converted code.
fmt_pass_converted_first_leaf: Optional["Leaf"] = None
lineno = 0 # Line where this token starts in the input
column = 0 # Column where this token starts in the input
def __init__(
self,
type: int,
value: str,
context: Optional[Context] = None,
prefix: Optional[str] = None,
fixers_applied: list[Any] = [],
opening_bracket: Optional["Leaf"] = None,
fmt_pass_converted_first_leaf: Optional["Leaf"] = None,
) -> None:
def __init__(self, type, value, context=None, prefix=None, fixers_applied=[]):
"""
Initializer.
Takes a type constant (a token number < 256), a string value, and an
optional context keyword argument.
"""
assert 0 <= type < 256, type
if context is not None:
self._prefix, (self.lineno, self.column) = context
@ -407,35 +361,34 @@ def __init__(
self.value = value
if prefix is not None:
self._prefix = prefix
self.fixers_applied: Optional[list[Any]] = fixers_applied[:]
self.children = []
self.opening_bracket = opening_bracket
self.fmt_pass_converted_first_leaf = fmt_pass_converted_first_leaf
self.fixers_applied = fixers_applied[:]
def __repr__(self) -> str:
def __repr__(self):
"""Return a canonical string representation."""
from .pgen2.token import tok_name
assert self.type is not None
return (
f"{self.__class__.__name__}({tok_name.get(self.type, self.type)},"
f" {self.value!r})"
return "%s(%s, %r)" % (
self.__class__.__name__,
tok_name.get(self.type, self.type),
self.value,
)
def __str__(self) -> str:
def __unicode__(self):
"""
Return a pretty string representation.
This reproduces the input source exactly.
"""
return self._prefix + str(self.value)
return self.prefix + str(self.value)
def _eq(self, other: "Leaf") -> bool:
if sys.version_info > (3, 0):
__str__ = __unicode__
def _eq(self, other):
"""Compare two nodes for equality."""
return (self.type, self.value) == (other.type, other.value)
def clone(self) -> "Leaf":
assert self.type is not None
def clone(self):
"""Return a cloned (deep) copy of self."""
return Leaf(
self.type,
@ -444,31 +397,31 @@ def clone(self) -> "Leaf":
fixers_applied=self.fixers_applied,
)
def leaves(self) -> Iterator["Leaf"]:
def leaves(self):
yield self
def post_order(self) -> Iterator["Leaf"]:
def post_order(self):
"""Return a post-order iterator for the tree."""
yield self
def pre_order(self) -> Iterator["Leaf"]:
def pre_order(self):
"""Return a pre-order iterator for the tree."""
yield self
@property
def prefix(self) -> str:
def prefix(self):
"""
The whitespace and comments preceding this token in the input.
"""
return self._prefix
@prefix.setter
def prefix(self, prefix: str) -> None:
def prefix(self, prefix):
self.changed()
self._prefix = prefix
def convert(gr: Grammar, raw_node: RawNode) -> NL:
def convert(gr, raw_node):
"""
Convert raw node information to a Node or Leaf instance.
@ -480,18 +433,15 @@ def convert(gr: Grammar, raw_node: RawNode) -> NL:
if children or type in gr.number2symbol:
# If there's exactly one child, return that child instead of
# creating a new node.
assert children is not None
if len(children) == 1:
return children[0]
return Node(type, children, context=context)
else:
return Leaf(type, value or "", context=context)
return Leaf(type, value, context=context)
_Results = dict[str, NL]
class BasePattern(object):
class BasePattern:
"""
A pattern is a tree matching pattern.
@ -507,27 +457,22 @@ class BasePattern:
"""
# Defaults for instance variables
type: Optional[int]
type = None # Node type (token if < 256, symbol if >= 256)
content: Any = None # Optional content matching pattern
name: Optional[str] = None # Optional name used to store match in results dict
content = None # Optional content matching pattern
name = None # Optional name used to store match in results dict
def __new__(cls, *args, **kwds):
"""Constructor that prevents BasePattern from being instantiated."""
assert cls is not BasePattern, "Cannot instantiate BasePattern"
return object.__new__(cls)
def __repr__(self) -> str:
assert self.type is not None
def __repr__(self):
args = [type_repr(self.type), self.content, self.name]
while args and args[-1] is None:
del args[-1]
return f"{self.__class__.__name__}({', '.join(map(repr, args))})"
return "%s(%s)" % (self.__class__.__name__, ", ".join(map(repr, args)))
def _submatch(self, node, results=None) -> bool:
raise NotImplementedError
def optimize(self) -> "BasePattern":
def optimize(self):
"""
A subclass can define this as a hook for optimizations.
@ -535,7 +480,7 @@ def optimize(self) -> "BasePattern":
"""
return self
def match(self, node: NL, results: Optional[_Results] = None) -> bool:
def match(self, node, results=None):
"""
Does this pattern exactly match a node?
@ -549,19 +494,18 @@ def match(self, node: NL, results: Optional[_Results] = None) -> bool:
if self.type is not None and node.type != self.type:
return False
if self.content is not None:
r: Optional[_Results] = None
r = None
if results is not None:
r = {}
if not self._submatch(node, r):
return False
if r:
assert results is not None
results.update(r)
if results is not None and self.name:
results[self.name] = node
return True
def match_seq(self, nodes: list[NL], results: Optional[_Results] = None) -> bool:
def match_seq(self, nodes, results=None):
"""
Does this pattern exactly match a sequence of nodes?
@ -571,24 +515,19 @@ def match_seq(self, nodes: list[NL], results: Optional[_Results] = None) -> bool
return False
return self.match(nodes[0], results)
def generate_matches(self, nodes: list[NL]) -> Iterator[tuple[int, _Results]]:
def generate_matches(self, nodes):
"""
Generator yielding all matches for this pattern.
Default implementation for non-wildcard patterns.
"""
r: _Results = {}
r = {}
if nodes and self.match(nodes[0], r):
yield 1, r
class LeafPattern(BasePattern):
def __init__(
self,
type: Optional[int] = None,
content: Optional[str] = None,
name: Optional[str] = None,
) -> None:
def __init__(self, type=None, content=None, name=None):
"""
Initializer. Takes optional type, content, and name.
@ -608,7 +547,7 @@ def __init__(
self.content = content
self.name = name
def match(self, node: NL, results=None) -> bool:
def match(self, node, results=None):
"""Override match() to insist on a leaf node."""
if not isinstance(node, Leaf):
return False
@ -631,14 +570,10 @@ def _submatch(self, node, results=None):
class NodePattern(BasePattern):
wildcards: bool = False
def __init__(
self,
type: Optional[int] = None,
content: Optional[Iterable[str]] = None,
name: Optional[str] = None,
) -> None:
wildcards = False
def __init__(self, type=None, content=None, name=None):
"""
Initializer. Takes optional type, content, and name.
@ -658,19 +593,16 @@ def __init__(
assert type >= 256, type
if content is not None:
assert not isinstance(content, str), repr(content)
newcontent = list(content)
for i, item in enumerate(newcontent):
content = list(content)
for i, item in enumerate(content):
assert isinstance(item, BasePattern), (i, item)
# I don't even think this code is used anywhere, but it does cause
# unreachable errors from mypy. This function's signature does look
# odd though *shrug*.
if isinstance(item, WildcardPattern): # type: ignore[unreachable]
self.wildcards = True # type: ignore[unreachable]
if isinstance(item, WildcardPattern):
self.wildcards = True
self.type = type
self.content = newcontent # TODO: this is unbound when content is None
self.content = content
self.name = name
def _submatch(self, node, results=None) -> bool:
def _submatch(self, node, results=None):
"""
Match the pattern's content to the node's children.
@ -699,6 +631,7 @@ def _submatch(self, node, results=None) -> bool:
class WildcardPattern(BasePattern):
"""
A wildcard pattern can match zero or more nodes.
@ -711,16 +644,7 @@ class WildcardPattern(BasePattern):
except it always uses non-greedy matching.
"""
min: int
max: int
def __init__(
self,
content: Optional[str] = None,
min: int = 0,
max: int = HUGE,
name: Optional[str] = None,
) -> None:
def __init__(self, content=None, min=0, max=HUGE, name=None):
"""
Initializer.
@ -745,20 +669,17 @@ def __init__(
"""
assert 0 <= min <= max <= HUGE, (min, max)
if content is not None:
f = lambda s: tuple(s)
wrapped_content = tuple(map(f, content)) # Protect against alterations
content = tuple(map(tuple, content)) # Protect against alterations
# Check sanity of alternatives
assert len(wrapped_content), repr(
wrapped_content
) # Can't have zero alternatives
for alt in wrapped_content:
assert len(content), repr(content) # Can't have zero alternatives
for alt in content:
assert len(alt), repr(alt) # Can have empty alternatives
self.content = wrapped_content
self.content = content
self.min = min
self.max = max
self.name = name
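# Illustrative (lib2to3 semantics): content is a sequence of alternatives, each
# itself a sequence of sub-patterns, so e.g.
#   WildcardPattern(content=[[LeafPattern(token.NAME)]], min=0, max=2)
# matches zero to two consecutive NAME leaves.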
def optimize(self) -> Any:
def optimize(self):
"""Optimize certain stacked wildcard patterns."""
subpattern = None
if (
@ -786,11 +707,11 @@ def optimize(self) -> Any:
)
return self
def match(self, node, results=None) -> bool:
def match(self, node, results=None):
"""Does this pattern exactly match a node?"""
return self.match_seq([node], results)
def match_seq(self, nodes, results=None) -> bool:
def match_seq(self, nodes, results=None):
"""Does this pattern exactly match a sequence of nodes?"""
for c, r in self.generate_matches(nodes):
if c == len(nodes):
@ -801,7 +722,7 @@ def match_seq(self, nodes, results=None) -> bool:
return True
return False
def generate_matches(self, nodes) -> Iterator[tuple[int, _Results]]:
def generate_matches(self, nodes):
"""
Generator yielding matches for a sequence of nodes.
@ -846,7 +767,7 @@ def generate_matches(self, nodes) -> Iterator[tuple[int, _Results]]:
if hasattr(sys, "getrefcount"):
sys.stderr = save_stderr
def _iterative_matches(self, nodes) -> Iterator[tuple[int, _Results]]:
def _iterative_matches(self, nodes):
"""Helper to iteratively yield the matches."""
nodelen = len(nodes)
if 0 >= self.min:
@ -875,10 +796,10 @@ def _iterative_matches(self, nodes) -> Iterator[tuple[int, _Results]]:
new_results.append((c0 + c1, r))
results = new_results
def _bare_name_matches(self, nodes) -> tuple[int, _Results]:
def _bare_name_matches(self, nodes):
"""Special optimized matcher for bare_name."""
count = 0
r = {} # type: _Results
r = {}
done = False
max = len(nodes)
while not done and count < max:
@ -888,11 +809,10 @@ def _bare_name_matches(self, nodes) -> tuple[int, _Results]:
count += 1
done = False
break
assert self.name is not None
r[self.name] = nodes[:count]
return count, r
def _recursive_matches(self, nodes, count) -> Iterator[tuple[int, _Results]]:
def _recursive_matches(self, nodes, count):
"""Helper to recursively yield the matches."""
assert self.content is not None
if count >= self.min:
@ -908,7 +828,7 @@ def _recursive_matches(self, nodes, count) -> Iterator[tuple[int, _Results]]:
class NegatedPattern(BasePattern):
def __init__(self, content: Optional[BasePattern] = None) -> None:
def __init__(self, content=None):
"""
Initializer.
@ -921,15 +841,15 @@ def __init__(self, content: Optional[BasePattern] = None) -> None:
assert isinstance(content, BasePattern), repr(content)
self.content = content
def match(self, node, results=None) -> bool:
def match(self, node):
# We never match a node in its entirety
return False
def match_seq(self, nodes, results=None) -> bool:
def match_seq(self, nodes):
# We only match an empty sequence of nodes in its entirety
return len(nodes) == 0
def generate_matches(self, nodes: list[NL]) -> Iterator[tuple[int, _Results]]:
def generate_matches(self, nodes):
if self.content is None:
# Return a match if there is an empty sequence
if len(nodes) == 0:
@ -941,9 +861,7 @@ def generate_matches(self, nodes: list[NL]) -> Iterator[tuple[int, _Results]]:
yield 0, {}
def generate_matches(
patterns: list[BasePattern], nodes: list[NL]
) -> Iterator[tuple[int, _Results]]:
def generate_matches(patterns, nodes):
"""
Generator yielding matches for a sequence of patterns and nodes.
@ -955,7 +873,7 @@ def generate_matches(
(count, results) tuples where:
count: the entire sequence of patterns matches nodes[:count];
results: dict containing named submatches.
"""
"""
if not patterns:
yield 0, {}
else:

blib2to3/pytree.pyi Normal file

@ -0,0 +1,89 @@
# Stubs for lib2to3.pytree (Python 3.6)
import sys
from typing import Any, Callable, Dict, Iterator, List, Optional, Text, Tuple, TypeVar, Union
from blib2to3.pgen2.grammar import Grammar
_P = TypeVar('_P')
_NL = Union[Node, Leaf]
_Context = Tuple[Text, int, int]
_Results = Dict[Text, _NL]
_RawNode = Tuple[int, Text, _Context, Optional[List[_NL]]]
_Convert = Callable[[Grammar, _RawNode], Any]
HUGE: int
def type_repr(type_num: int) -> Text: ...
class Base:
type: int
parent: Optional[Node]
prefix: Text
children: List[_NL]
was_changed: bool
was_checked: bool
def __eq__(self, other: Any) -> bool: ...
def _eq(self: _P, other: _P) -> bool: ...
def clone(self: _P) -> _P: ...
def post_order(self) -> Iterator[_NL]: ...
def pre_order(self) -> Iterator[_NL]: ...
def replace(self, new: Union[_NL, List[_NL]]) -> None: ...
def get_lineno(self) -> int: ...
def changed(self) -> None: ...
def remove(self) -> Optional[int]: ...
@property
def next_sibling(self) -> Optional[_NL]: ...
@property
def prev_sibling(self) -> Optional[_NL]: ...
def leaves(self) -> Iterator[Leaf]: ...
def depth(self) -> int: ...
def get_suffix(self) -> Text: ...
if sys.version_info < (3,):
def get_prefix(self) -> Text: ...
def set_prefix(self, prefix: Text) -> None: ...
class Node(Base):
fixers_applied: List[Any]
def __init__(self, type: int, children: List[_NL], context: Optional[Any] = ..., prefix: Optional[Text] = ..., fixers_applied: Optional[List[Any]] = ...) -> None: ...
def set_child(self, i: int, child: _NL) -> None: ...
def insert_child(self, i: int, child: _NL) -> None: ...
def append_child(self, child: _NL) -> None: ...
class Leaf(Base):
lineno: int
column: int
value: Text
fixers_applied: List[Any]
def __init__(self, type: int, value: Text, context: Optional[_Context] = ..., prefix: Optional[Text] = ..., fixers_applied: List[Any] = ...) -> None: ...
# bolted on attributes by Black
bracket_depth: int
opening_bracket: Leaf
def convert(gr: Grammar, raw_node: _RawNode) -> _NL: ...
class BasePattern:
type: int
content: Optional[Text]
name: Optional[Text]
def optimize(self) -> BasePattern: ... # sic, subclasses are free to optimize themselves into different patterns
def match(self, node: _NL, results: Optional[_Results] = ...) -> bool: ...
def match_seq(self, nodes: List[_NL], results: Optional[_Results] = ...) -> bool: ...
def generate_matches(self, nodes: List[_NL]) -> Iterator[Tuple[int, _Results]]: ...
class LeafPattern(BasePattern):
def __init__(self, type: Optional[int] = ..., content: Optional[Text] = ..., name: Optional[Text] = ...) -> None: ...
class NodePattern(BasePattern):
wildcards: bool
def __init__(self, type: Optional[int] = ..., content: Optional[Text] = ..., name: Optional[Text] = ...) -> None: ...
class WildcardPattern(BasePattern):
min: int
max: int
def __init__(self, content: Optional[Text] = ..., min: int = ..., max: int = ..., name: Optional[Text] = ...) -> None: ...
class NegatedPattern(BasePattern):
def __init__(self, content: Optional[Text] = ...) -> None: ...
def generate_matches(patterns: List[BasePattern], nodes: List[_NL]) -> Iterator[Tuple[int, _Results]]: ...
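# Illustrative sketch of the API stubbed above (assumes initialized grammars):
#
#   from blib2to3 import pygram
#   from blib2to3.pgen2 import token
#   from blib2to3.pytree import Leaf, LeafPattern, Node
#
#   pygram.initialize(cache_dir=None)
#   # Build `x = 1` by hand; `prefix` holds the whitespace before each leaf.
#   name = Leaf(token.NAME, "x")
#   eq = Leaf(token.EQUAL, "=", prefix=" ")
#   one = Leaf(token.NUMBER, "1", prefix=" ")
#   stmt = Node(pygram.python_symbols.expr_stmt, [name, eq, one])
#   assert str(stmt) == "x = 1"
#   assert name.next_sibling is eq and one.prev_sibling is eq
#   # Match any NUMBER leaf, recording it under the name "num".
#   results = {}
#   assert LeafPattern(token.NUMBER, name="num").match(one, results)
#   assert results["num"] is one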

docs/Makefile

@ -17,4 +17,4 @@ help:
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

docs/_static/license.svg

@ -1 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="78" height="20"><linearGradient id="b" x2="0" y2="100%"><stop offset="0" stop-color="#bbb" stop-opacity=".1"/><stop offset="1" stop-opacity=".1"/></linearGradient><clipPath id="a"><rect width="78" height="20" rx="3" fill="#fff"/></clipPath><g clip-path="url(#a)"><path fill="#555" d="M0 0h47v20H0z"/><path fill="#7900CA" d="M47 0h31v20H47z"/><path fill="url(#b)" d="M0 0h78v20H0z"/></g><g fill="#fff" text-anchor="middle" font-family="DejaVu Sans,Verdana,Geneva,sans-serif" font-size="110"><text x="245" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="370">license</text><text x="245" y="140" transform="scale(.1)" textLength="370">license</text><text x="615" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="210">MIT</text><text x="615" y="140" transform="scale(.1)" textLength="210">MIT</text></g> </svg>
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="78" height="20"><linearGradient id="b" x2="0" y2="100%"><stop offset="0" stop-color="#bbb" stop-opacity=".1"/><stop offset="1" stop-opacity=".1"/></linearGradient><clipPath id="a"><rect width="78" height="20" rx="3" fill="#fff"/></clipPath><g clip-path="url(#a)"><path fill="#555" d="M0 0h47v20H0z"/><path fill="#7900CA" d="M47 0h31v20H47z"/><path fill="url(#b)" d="M0 0h78v20H0z"/></g><g fill="#fff" text-anchor="middle" font-family="DejaVu Sans,Verdana,Geneva,sans-serif" font-size="110"><text x="245" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="370">license</text><text x="245" y="140" transform="scale(.1)" textLength="370">license</text><text x="615" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="210">MIT</text><text x="615" y="140" transform="scale(.1)" textLength="210">MIT</text></g> </svg>

(image size: 950 B before, 949 B after)

Binary file not shown (97 KiB before, 79 KiB after).

docs/authors.md

@ -1,3 +0,0 @@
```{include} ../AUTHORS.md
```

docs/authors.md Symbolic link

@ -0,0 +1 @@
_build/generated/authors.md

docs/blackd.md Symbolic link

@ -0,0 +1 @@
_build/generated/blackd.md

docs/change_log.md

@ -1,3 +0,0 @@
```{include} ../CHANGES.md
```

docs/change_log.md Symbolic link

@ -0,0 +1 @@
_build/generated/change_log.md


@ -1,3 +0,0 @@
[flake8]
max-line-length = 88
extend-ignore = E203,E701


@ -1,3 +0,0 @@
[flake8]
max-line-length = 88
extend-ignore = E203,E701


@ -1,3 +0,0 @@
[flake8]
max-line-length = 88
extend-ignore = E203,E701


@ -1,2 +0,0 @@
[*.py]
profile = black


@ -1,2 +0,0 @@
[settings]
profile = black


@ -1,2 +0,0 @@
[tool.isort]
profile = 'black'


@ -1,2 +0,0 @@
[isort]
profile = black


@ -1,3 +0,0 @@
[pycodestyle]
max-line-length = 88
ignore = E203,E701


@ -1,3 +0,0 @@
[pycodestyle]
max-line-length = 88
ignore = E203,E701


@ -1,3 +0,0 @@
[pycodestyle]
max-line-length = 88
ignore = E203,E701


@ -1,2 +0,0 @@
[format]
max-line-length = 88


@ -1,2 +0,0 @@
[tool.pylint.format]
max-line-length = "88"


@ -1,2 +0,0 @@
[pylint]
max-line-length = 88

docs/conf.py

@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
#
@ -11,92 +12,103 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import re
import string
from importlib.metadata import version
from pathlib import Path
import regex as re
import shutil
import string
from setuptools_scm import get_version
from recommonmark.parser import CommonMarkParser
from sphinx.application import Sphinx
CURRENT_DIR = Path(__file__).parent
def make_pypi_svg(version: str) -> None:
template: Path = CURRENT_DIR / "_static" / "pypi_template.svg"
target: Path = CURRENT_DIR / "_static" / "pypi.svg"
with open(str(template), encoding="utf8") as f:
svg: str = string.Template(f.read()).substitute(version=version)
def make_pypi_svg(version):
template = CURRENT_DIR / "_static" / "pypi_template.svg"
target = CURRENT_DIR / "_static" / "pypi.svg"
with open(str(template), "r", encoding="utf8") as f:
svg = string.Template(f.read()).substitute(version=version)
with open(str(target), "w", encoding="utf8") as f:
f.write(svg)
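# Illustrative: string.Template("v$version").substitute(version="19.10b0")
# -> "v19.10b0"; pypi_template.svg presumably embeds a $version placeholder.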
def replace_pr_numbers_with_links(content: str) -> str:
"""Replaces all PR numbers with the corresponding GitHub link."""
return re.sub(r"#(\d+)", r"[#\1](https://github.com/psf/black/pull/\1)", content)
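# Illustrative: replace_pr_numbers_with_links("Fixed by #3139")
# -> "Fixed by [#3139](https://github.com/psf/black/pull/3139)"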
def make_filename(line):
non_letters = re.compile(r"[^a-z]+")
filename = line[3:].rstrip().lower()
filename = non_letters.sub("_", filename)
if filename.startswith("_"):
filename = filename[1:]
if filename.endswith("_"):
filename = filename[:-1]
return filename + ".md"
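# Illustrative: make_filename("## Installation and usage\n") -> "installation_and_usage.md"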
def handle_include_read(
app: Sphinx,
relative_path: Path,
parent_docname: str,
content: list[str],
) -> None:
"""Handler for the include-read sphinx event."""
if parent_docname == "change_log":
content[0] = replace_pr_numbers_with_links(content[0])
def generate_sections_from_readme():
target_dir = CURRENT_DIR / "_build" / "generated"
readme = CURRENT_DIR / ".." / "README.md"
shutil.rmtree(str(target_dir), ignore_errors=True)
target_dir.mkdir(parents=True)
output = None
target_dir = target_dir.relative_to(CURRENT_DIR)
with open(str(readme), "r", encoding="utf8") as f:
for line in f:
if line.startswith("## "):
if output is not None:
output.close()
filename = make_filename(line)
output_path = CURRENT_DIR / filename
if output_path.is_symlink() or output_path.is_file():
output_path.unlink()
output_path.symlink_to(target_dir / filename)
output = open(str(output_path), "w", encoding="utf8")
output.write(
"[//]: # (NOTE: THIS FILE IS AUTOGENERATED FROM README.md)\n\n"
)
def setup(app: Sphinx) -> None:
"""Sets up a minimal sphinx extension."""
app.connect("include-read", handle_include_read)
if output is None:
continue
if line.startswith("##"):
line = line[1:]
output.write(line)
# Necessary so Click doesn't hit an encode error when called by
# sphinxcontrib-programoutput on Windows.
os.putenv("pythonioencoding", "utf-8")
# -- Project information -----------------------------------------------------
project = "Black"
copyright = "2018-Present, Łukasz Langa and contributors to Black"
copyright = "2018, Łukasz Langa and contributors to Black"
author = "Łukasz Langa and contributors to Black"
# Autopopulate version
# The version, including alpha/beta/rc tags, but not commit hash and datestamps
release = version("black").split("+")[0]
# The full version, including alpha/beta/rc tags.
release = get_version(root=CURRENT_DIR.parent)
# The short X.Y version.
version = release
for sp in "abcfr":
version = version.split(sp)[0]
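# Illustrative: a release of "19.10b0" yields the short version "19.10"
# (truncated at the first a/b/c/f/r); a plain "22.1.0" passes through unchanged.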
make_pypi_svg(release)
generate_sections_from_readme()
# -- General configuration ---------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = "4.4"
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.intersphinx",
"sphinx.ext.napoleon",
"myst_parser",
"sphinxcontrib.programoutput",
"sphinx_copybutton",
]
# If you need extensions of a certain version or higher, list them here.
needs_extensions = {"myst_parser": "0.13.7"}
extensions = ["sphinx.ext.autodoc", "sphinx.ext.intersphinx", "sphinx.ext.napoleon"]
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
source_parsers = {".md": CommonMarkParser}
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
source_suffix = [".rst", ".md"]
@ -109,40 +121,47 @@ def setup(app: Sphinx) -> None:
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = "en"
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path .
exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"
# We need headers to be linkable to so ask MyST-Parser to autogenerate anchor IDs for
# headers up to and including level 3.
myst_heading_anchors = 3
# Prettier support formatting some MyST syntax but not all, so let's disable the
# unsupported yet still enabled by default ones.
myst_disable_syntax = [
"colon_fence",
"myst_block_break",
"myst_line_comment",
"math_block",
]
# Optional MyST Syntaxes
myst_enable_extensions = []
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "furo"
html_logo = "_static/logo2-readme.png"
html_theme = "alabaster"
html_sidebars = {
"**": [
"about.html",
"navigation.html",
"relations.html",
"sourcelink.html",
"searchbox.html",
]
}
html_theme_options = {
"show_related": False,
"description": "“Any color you like.”",
"github_button": True,
"github_user": "psf",
"github_repo": "black",
"github_type": "star",
"show_powered_by": True,
"fixed_sidebar": True,
"logo": "logo2.png",
"travis_button": True,
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
@ -168,16 +187,33 @@ def setup(app: Sphinx) -> None:
# -- Options for LaTeX output ------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [(
master_doc,
"black.tex",
"Documentation for Black",
"Łukasz Langa and contributors to Black",
"manual",
)]
latex_documents = [
(
master_doc,
"black.tex",
"Documentation for Black",
"Łukasz Langa and contributors to Black",
"manual",
)
]
# -- Options for manual page output ------------------------------------------
@ -192,15 +228,17 @@ def setup(app: Sphinx) -> None:
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [(
master_doc,
"Black",
"Documentation for Black",
author,
"Black",
"The uncompromising Python code formatter",
"Miscellaneous",
)]
texinfo_documents = [
(
master_doc,
"Black",
"Documentation for Black",
author,
"Black",
"The uncompromising Python code formatter",
"Miscellaneous",
)
]
# -- Options for Epub output -------------------------------------------------
@ -228,14 +266,7 @@ def setup(app: Sphinx) -> None:
autodoc_member_order = "bysource"
# -- sphinx-copybutton configuration ----------------------------------------
copybutton_prompt_text = (
r">>> |\.\.\. |> |\$ |\# | In \[\d*\]: | {2,5}\.\.\.: | {5,8}: "
)
copybutton_prompt_is_regexp = True
copybutton_remove_prompts = True
# -- Options for intersphinx extension ---------------------------------------
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {"<name>": ("https://docs.python.org/3/", None)}
intersphinx_mapping = {"https://docs.python.org/3/": None}

docs/contributing.md Symbolic link

@ -0,0 +1 @@
../CONTRIBUTING.md

docs/contributing/gauging_changes.md

@ -1,58 +0,0 @@
# Gauging changes
A lot of the time, your change will affect formatting and/or performance. Quantifying
these changes is hard, so we have tooling to help make it easier.
It's recommended you evaluate the quantifiable changes your _Black_ formatting
modification causes before submitting a PR. Think about whether the change seems disruptive
enough to cause frustration to projects that are already "black formatted".
## diff-shades
diff-shades is a tool that runs _Black_ across a list of open-source projects, recording
the results. The main highlight feature of diff-shades is being able to compare two
revisions of _Black_. This is incredibly useful as it allows us to see what exact
changes will occur from, say, merging a certain PR.
For more information, please see the [diff-shades documentation][diff-shades].
### CI integration
diff-shades is also the tool behind the "diff-shades results comparing ..." /
"diff-shades reports zero changes ..." comments on PRs. The project has a GitHub Actions
workflow that analyzes and compares two revisions of _Black_ according to these rules:
| | Baseline revision | Target revision |
| --------------------- | ----------------------- | ---------------------------- |
| On PRs | latest commit on `main` | PR commit with `main` merged |
| On pushes (main only) | latest PyPI version | the pushed commit |
For pushes to main, there's only one analysis job named `preview-changes` where the
preview style is used for all projects.
PRs get one additional analysis job: `assert-no-changes`. It's similar to
`preview-changes` but runs with the stable code style. It will fail if changes were
made. This makes sure code won't be reformatted again and again within the same year,
in accordance with Black's stability policy.
Additionally for PRs, a PR comment will be posted embedding a summary of the preview
changes and links to further information. If there's a pre-existing diff-shades comment,
it'll be updated instead the next time the workflow is triggered on the same PR.
```{note}
The `preview-changes` job will only fail intentionally when a file fails to
format during analysis. Otherwise a failure indicates a bug in the workflow.
```
The workflow uploads several artifacts upon completion:
- The raw analyses (.json)
- HTML diffs (.html)
- `.pr-comment.json` (if triggered by a PR)
The last one is downloaded by the `diff-shades-comment` workflow and shouldn't be
downloaded locally. The HTML diffs come in handy for push-based where there's no PR to
post a comment. And the analyses exist just in case you want to do further analysis
using the collected data locally.
[diff-shades]: https://github.com/ichard26/diff-shades#readme

docs/contributing/index.md

@ -1,45 +0,0 @@
# Contributing
```{toctree}
---
hidden:
---
the_basics
gauging_changes
issue_triage
release_process
```
Welcome! Happy to see you willing to make the project better. Have you read the entire
[user documentation](https://black.readthedocs.io/en/latest/) yet?
```{rubric} Bird's eye view
```
In terms of inspiration, _Black_ is about as configurable as _gofmt_ (which is to say,
not very). This is deliberate. _Black_ aims to provide a consistent style and take away
opportunities for arguing about style.
Bug reports and fixes are always welcome! Please follow the
[issue templates on GitHub](https://github.com/psf/black/issues/new/choose) for best
results.
Before you suggest a new feature or configuration knob, ask yourself why you want it. If
it enables better integration with some workflow, fixes an inconsistency, speeds things
up, and so on - go for it! On the other hand, if your answer is "because I don't like a
particular formatting" then you're not ready to embrace _Black_ yet. Such changes are
unlikely to get accepted. You can still try but prepare to be disappointed.
```{rubric} Contents
```
This section covers the following topics:
- {doc}`the_basics`
- {doc}`gauging_changes`
- {doc}`release_process`
For an overview of contributing to _Black_, please check out {doc}`the_basics`.

docs/contributing/issue_triage.md

@ -1,169 +0,0 @@
# Issue triage
Currently, _Black_ uses the issue tracker for bugs, feature requests, proposed style
modifications, and general user support. Each of these issues has to be triaged so it
can eventually be resolved. This document outlines the triaging process and
also the current guidelines and recommendations.
```{tip}
If you're looking for a way to contribute without submitting patches, this might be
the area for you. Since _Black_ is a popular project, its issue tracker is quite busy
and always needs more attention than is available. While triage isn't the most
glamorous or technically challenging form of contribution, it's still important.
For example, we would love to know whether that old bug report is still reproducible!
You can easily get started by reading over this document and then responding to issues.
If you contribute enough and have stayed for a long enough time, you may even be
given Triage permissions!
```
## The basics
_Black_ gets a whole bunch of different issues, they range from bug reports to user
support issues. To triage is to identify, organize, and kickstart the issue's journey
through its lifecycle to resolution.
More specifically, to triage an issue means to:
- identify what type and categories the issue falls under
- confirm bugs
- ask questions / for further information if necessary
- link related issues
- provide initial feedback / support
Note that triage is typically the first response to an issue, so don't fret if the issue
doesn't make much progress after initial triage. The main goal of triaging is to prepare
the issue for future more specific development or discussion, so _eventually_ it will be
resolved.
The lifecycle of a bug report or user support issue typically goes something like this:
1. _the issue is waiting for triage_
2. **identified** - has been marked with a type label and other relevant labels, more
details or a functional reproduction may still be needed (and therefore should be
marked with `S: needs repro` or `S: awaiting response`)
3. **confirmed** - the issue can be reproduced and necessary details have been provided
4. **discussion** - initial triage has been done and now the general details on how the
issue should be best resolved are being hashed out
5. **awaiting fix** - no further discussion on the issue is necessary and a resolving PR
is the next step
6. **closed** - the issue has been resolved, reasons include:
- the issue couldn't be reproduced
- the issue has been fixed
- duplicate of another pre-existing issue or is invalid
For enhancement, documentation, and style issues, the lifecycle looks very similar but
the details are different:
1. _the issue is waiting for triage_
2. **identified** - has been marked with a type label and other relevant labels
3. **discussion** - the merits of the suggested changes are currently being discussed, a
PR would be acceptable but would be at significant risk of being rejected
4. **accepted & awaiting PR** - it's been determined the suggested changes are OK and a
PR would be welcomed (`S: accepted`)
5. **closed** - the issue has been resolved, reasons include:
- the suggested changes were implemented
- it was rejected (due to technical concerns, ethos conflicts, etc.)
- duplicate of a pre-existing issue or is invalid
**Note**: documentation issues don't use the `S: accepted` label currently since they're
less likely to be rejected.
## Labelling
We use labels to organize, track progress, and help effectively divvy up work.
Our labels are divided up into several groups identified by their prefix:
- **T - Type**: the general flavor of issue / PR
- **C - Category**: areas of concerns, ranges from bug types to project maintenance
- **F - Formatting Area**: like C but for formatting specifically
- **S - Status**: what stage of resolution is this issue currently in?
- **R - Resolution**: how / why was the issue / PR resolved?
We also have a few standalone labels:
- **`good first issue`**: issues that are beginner-friendly (and will show up in GitHub
banners for first-time visitors to the repository)
- **`help wanted`**: complex issues that need and are looking for a fair bit of work as
to progress (will also show up in various GitHub pages)
- **`skip news`**: for PRs that are trivial and don't need a CHANGELOG entry (and skips
the CHANGELOG entry check)
```{note}
We do use labels for PRs, in particular the `skip news` label, but we aren't that
rigorous about it. Just follow your judgement on what labels make sense for the
specific PR (if any even make sense).
```
## Projects
For more general and broad goals we use projects to track work. Some may be long-term
projects with no true end (e.g. the "Amazing documentation" project) while others may be
more focused and have a definite end (like the "Getting to beta" project).
```{note}
To modify GitHub Projects you need the [Write repository permission level or higher](https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-permission-levels-for-an-organization#repository-access-for-each-permission-level).
```
## Closing issues
Closing an issue signifies the issue has reached the end of its life, so closing issues
should be done with care. The following is the general recommendation for each type of
issue. Note that these are only guidelines and if your judgement says something else
it's totally cool to go with it instead.
For most issues, closing the issue manually or automatically after a resolving PR is
ideal. For bug reports specifically, if the bug has already been fixed, try to check in
with the issue opener that their specific case has been resolved before closing. Note
that we close issues as soon as they're fixed in the `main` branch. This doesn't
necessarily mean they've been released yet.
Design and enhancement issues should also be closed when it's clear the proposed change
won't be implemented, whether that has been determined after a lot of discussion or just
simply goes against _Black_'s ethos. If such an issue turns heated, closing and locking
is acceptable if it's severe enough (although checking in with the core team is probably
a good idea).
User support issues are best closed by the author or when it's clear the issue has been
resolved in some sort of manner.
Duplicates and invalid issues should always be closed since they serve no purpose and
add noise to an already busy issue tracker. But be careful to make sure it's truly
a duplicate and not just very similar before labelling and closing an issue as
duplicate.
## Common reports
Some issues are frequently opened, like issues about _Black_ formatted code causing E203
messages. Even though these issues are probably heavily duplicated, they still require
triage, sucking up valuable time from other things (although they usually skip most of
their lifecycle since they're closed on triage).
Here are some of the most common issues, along with pre-made responses you can use:
### "The trailing comma isn't being removed by Black!"
```text
Black used to remove the trailing comma if the expression fits in a single line, but this was changed by #826 and #1288. Now a trailing comma tells Black to always explode the expression. This change was made mostly for the cases where you _know_ a collection or whatever will grow in the future. Having it always exploded as one element per line reduces diff noise when adding elements. Before the "magic trailing comma" feature, you couldn't anticipate a collection's growth reliably since collections that fitted in one line were ruthlessly collapsed regardless of your intentions. One of Black's goals is reducing diff noise, so this was a good pragmatic change.
So no, this is not a bug, but an intended feature. Anyway, [here's the documentation](https://github.com/psf/black/blob/master/docs/the_black_code_style.md#the-magic-trailing-comma) on the "magic trailing comma", including the ability to skip this functionality with the `--skip-magic-trailing-comma` option. Hopefully that helps solve the possible confusion.
```
### "Black formatted code is violating Flake8's E203!"
```text
Hi,
This is expected behaviour, please see the documentation regarding this case (emphasis
mine):
> PEP 8 recommends to treat : in slices as a binary operator with the lowest priority, and to leave an equal amount of space on either side, **except if a parameter is omitted (e.g. ham[1 + 1 :])**. It recommends no spaces around : operators for “simple expressions” (ham[lower:upper]), and **extra space for “complex expressions” (ham[lower : upper + offset])**. **Black treats anything more than variable names as “complex” (ham[lower : upper + 1]).** It also states that for extended slices, both : operators have to have the same amount of spacing, except if a parameter is omitted (ham[1 + 1 ::]). Black enforces these rules consistently.
> This behaviour may raise E203 whitespace before ':' warnings in style guide enforcement tools like Flake8. **Since E203 is not PEP 8 compliant, you should tell Flake8 to ignore these warnings**.
https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#slices
Have a good day!
```

docs/contributing/release_process.md

@ -1,174 +0,0 @@
# Release process
_Black_ has had a lot of work put into standardizing and automating its release
process. This document sets out to explain how everything works and how to release
_Black_ using said automation.
## Release cadence
**We aim to release whatever is on `main` every 1-2 months.** This ensures merged
improvements and bugfixes are shipped to users reasonably quickly, while not massively
fracturing the user-base with too many versions. This also keeps the workload on
maintainers consistent and predictable.
If there's not much new on `main` to justify a release, it's acceptable to skip a
month's release. Ideally, January releases should not be skipped because, as per our
[stability policy](labels/stability-policy), the first release in a new calendar year
may make changes to the _stable_ style. While the policy applies to the first release
(instead of only January releases), confining changes to the stable style to January
will keep things predictable (and nicer) for users.
Unless there is a serious regression or bug that requires immediate patching, **there
should not be more than one release per month**. While version numbers are cheap,
releases require a maintainer to both commit to do the actual cutting of a release, but
also to be able to deal with the potential fallout post-release. Releasing more
frequently than monthly nets rapidly diminishing returns.
## Cutting a release
**You must have `write` permissions for the _Black_ repository to cut a release.**
The 10,000 foot view of the release process is that you prepare a release PR and then
publish a [GitHub Release]. This triggers [release automation](#release-workflows) that
builds all release artifacts and publishes them to the various platforms we publish to.
We now have a `scripts/release.py` script to help with cutting the release PRs.
- `python3 scripts/release.py --help` is your friend.
- `release.py` has only been tested in Python 3.12 (so get with the times :D)
To cut a release:
1. Determine the release's version number
- **_Black_ follows the [CalVer] versioning standard using the `YY.M.N` format**
- So unless there already has been a release during this month, `N` should be `0`
- Example: the first release in January, 2022 → `22.1.0`
- `release.py` will calculate this and log to stderr for your copy-paste pleasure (see the sketch of this scheme after this list)
1. File a PR editing `CHANGES.md` and the docs to version the latest changes
- Run `python3 scripts/release.py [--debug]` to generate most changes
- Sub-headings in the template need manual removal if they have no bullet points
_PR welcome to improve :D_
1. If `release.py` fails, edit manually; otherwise, yay, skip this step!
1. Replace the `## Unreleased` header with the version number
1. Remove any empty sections for the current release
1. (_optional_) Read through and copy-edit the changelog (eg. by moving entries,
fixing typos, or rephrasing entries)
1. Double-check that no changelog entries since the last release were put in the
wrong section (e.g., run `git diff <last release> CHANGES.md`)
1. Update references to the latest version in
{doc}`/integrations/source_version_control` and
{doc}`/usage_and_configuration/the_basics`
- Example PR: [GH-3139]
1. Once the release PR is merged, wait until all CI passes
- If CI does not pass, **stop** and investigate the failure(s) as generally we'd want
to fix failing CI before cutting a release
1. [Draft a new GitHub Release][new-release]
1. Click `Choose a tag` and type in the version number, then select the
`Create new tag: YY.M.N on publish` option that appears
1. Verify that the new tag targets the `main` branch
1. You can leave the release title blank, GitHub will default to the tag name
1. Copy and paste the _raw changelog Markdown_ for the current release into the
description box
1. Publish the GitHub Release, triggering [release automation](#release-workflows) that
will handle the rest
1. Once CI is done, add and commit (git push, no review) a new empty template for the next
release to CHANGES.md _(the template can be copy-pasted from release.py should it
fail)_
1. `python3 scripts/release.py --add-changes-template|-a [--debug]`
1. Should that fail, fall back to copying and pasting
1. At this point, you're basically done. It's good practice to go and [watch and verify
that all the release workflows pass][black-actions], although you will receive a
GitHub notification should something fail.
- If something fails, don't panic. Please go read the respective workflow's logs and
configuration file to reverse-engineer your way to a fix/solution.
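For illustration, a hypothetical sketch of the `YY.M.N` scheme from step 1
(`release.py` computes this for you; the variable names here are made up):

```python
import datetime

today = datetime.date.today()
n = 0  # N: releases already cut this month
print(f"{today:%y}.{today.month}.{n}")  # first release of January 2022 -> "22.1.0"
```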
Congratulations! You've successfully cut a new release of _Black_. Go and stand up and
take a break, you deserve it.
```{important}
Once the release artifacts reach PyPI, you may see new issues being filed indicating
regressions. While regressions are not great, they don't automatically mean a hotfix
release is warranted. Unless the regressions are serious and impact many users, a hotfix
release is probably unnecessary.
In the end, use your best judgement and ask other maintainers for their thoughts.
```
## Release workflows
All of _Black_'s release automation uses [GitHub Actions]. All workflows are therefore
configured using YAML files in the `.github/workflows` directory of the _Black_
repository.
They are triggered by the publication of a [GitHub Release].
Below are descriptions of our release workflows.
### Publish to PyPI
This is our main workflow. It builds an [sdist] and [wheels] to upload to PyPI where the
vast majority of users will download Black from. It's divided into three job groups:
#### sdist + pure wheel
This single job builds the sdist and pure Python wheel (i.e., a wheel that only contains
Python code) using [build] and then uploads them to PyPI using [twine]. These artifacts
are general-purpose and can be used on basically any platform supported by Python.
#### mypyc wheels (…)
We use [mypyc] to compile _Black_ into a CPython C extension for significantly improved
performance. Wheels built with mypyc are platform and Python version specific.
[Supported platforms are documented in the FAQ](labels/mypyc-support).
These matrix jobs use [cibuildwheel] which handles the complicated task of building C
extensions for many environments for us. Since building these wheels is slow, there are
multiple mypyc wheels jobs (hence the term "matrix"), each building for a specific platform
(as noted in the job name in parentheses).
Like the previous job group, the built wheels are uploaded to PyPI using [twine].
#### Update stable branch
So this job doesn't _really_ belong here, but updating the `stable` branch after the
other PyPI jobs pass (they must pass for this job to start) makes the most sense. This
saves us from remembering to update the branch sometime after cutting the release.
- _Currently this workflow uses an API token associated with @ambv's PyPI account_
### Publish executables
This workflow builds native executables for multiple platforms using [PyInstaller]. This
allows people to download the executable for their platform and run _Black_ without a
[Python runtime](https://wiki.python.org/moin/PythonImplementations) installed.
The created binaries are stored on the associated GitHub Release for download over _IPv4
only_ (GitHub still does not have IPv6 access 😢).
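Under the hood, each build is a one-file PyInstaller invocation along these lines (a
sketch; `entry_point.py` is a hypothetical stand-in, see the workflow for the exact
flags):
```console
(.venv)$ pip install pyinstaller
(.venv)$ pyinstaller --onefile --name black entry_point.py  # entry_point.py is hypothetical
```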
### docker
This workflow uses the QEMU-powered `buildx` feature of Docker to upload an `arm64` and
`amd64`/`x86_64` build of the official _Black_ Docker image™.
- _Currently this workflow uses an API token associated with @cooperlees' account_
```{note}
This also runs on each push to `main`.
```
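The multi-architecture build and push reduces to a single `buildx` invocation (a
sketch; the image tag shown here is an assumption):
```console
$ docker buildx build --platform linux/amd64,linux/arm64 \
    --tag pyfound/black:latest --push .  # image tag is an assumption
```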
[black-actions]: https://github.com/psf/black/actions
[build]: https://pypa-build.readthedocs.io/
[calver]: https://calver.org
[cibuildwheel]: https://cibuildwheel.readthedocs.io/
[gh-3139]: https://github.com/psf/black/pull/3139
[github actions]: https://github.com/features/actions
[github release]: https://github.com/psf/black/releases
[new-release]: https://github.com/psf/black/releases/new
[mypyc]: https://mypyc.readthedocs.io/
[mypyc-platform-support]:
/faq.html#what-is-compiled-yes-no-all-about-in-the-version-output
[pyinstaller]: https://www.pyinstaller.org/
[sdist]:
https://packaging.python.org/en/latest/glossary/#term-Source-Distribution-or-sdist
[twine]: https://twine.readthedocs.io/
[wheels]: https://packaging.python.org/en/latest/glossary/#term-Wheel

@ -1,158 +0,0 @@
# The basics
An overview of contributing to the _Black_ project.
## Technicalities
Development on the latest version of Python is preferred. You can use any operating
system.
First clone the _Black_ repository:
```console
$ git clone https://github.com/psf/black.git
$ cd black
```
Then install development dependencies inside a virtual environment of your choice, for
example:
```console
$ python3 -m venv .venv
$ source .venv/bin/activate # activation for Linux and macOS
$ .venv\Scripts\activate # activation for Windows
(.venv)$ pip install -r test_requirements.txt
(.venv)$ pip install -e ".[d]"
(.venv)$ pre-commit install
```
Before submitting pull requests, run lints and tests with the following commands from
the root of the _Black_ repository:
```console
# Linting
(.venv)$ pre-commit run -a
# Unit tests
(.venv)$ tox -e py
# Optional Fuzz testing
(.venv)$ tox -e fuzz
# Format Black itself
(.venv)$ tox -e run_self
```
### Development
Further examples of invoking the tests:
```console
# Run all of the above, in parallel
(.venv)$ tox --parallel=auto
# Run tests on a specific python version
(.venv)$ tox -e py39
# Run an individual test
(.venv)$ pytest -k <test name>
# Pass arguments to pytest
(.venv)$ tox -e py -- --no-cov
# Print full tree diff, see documentation below
(.venv)$ tox -e py -- --print-full-tree
# Disable diff printing, see documentation below
(.venv)$ tox -e py -- --print-tree-diff=False
```
### Testing
All aspects of the _Black_ style should be tested. Normally, tests should be created as
files in the `tests/data/cases` directory. These files consist of up to three parts:
- A line that starts with `# flags: ` followed by a set of command-line options. For
example, if the line is `# flags: --preview --skip-magic-trailing-comma`, the test
case will be run with preview mode on and the magic trailing comma off. The options
accepted are mostly a subset of those of _Black_ itself, except for the
`--minimum-version=` flag, which should be used when testing a grammar feature that
works only in newer versions of Python. This flag ensures that we don't try to
validate the AST on older versions and tests that we autodetect the Python version
correctly when the feature is used. For the exact flags accepted, see the function
`get_flags_parser` in `tests/util.py`. If this line is omitted, the default options
are used.
- A block of Python code used as input for the formatter.
- The line `# output`, followed by the output of _Black_ when run on the previous block.
If this is omitted, the test asserts that _Black_ will leave the input code unchanged.
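For example, a test case file using all three parts could look like this (an
illustrative sketch, not an actual file from the repository):
```python
# flags: --skip-magic-trailing-comma
x = [1, 2, 3,]

# output

x = [1, 2, 3]
```
With the flag, the trailing comma is not treated as magic, so _Black_ simply removes
it; without the flag, the list would instead be exploded across multiple lines.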
_Black_ has two pytest command-line options affecting test files in `tests/data/` that
are split into an input part and an output part, separated by a line with `# output`.
These can be passed to `pytest` through `tox`, or directly to pytest if not using
`tox`.
#### `--print-full-tree`
Upon a failing test, print the full concrete syntax tree (CST) as it is after
processing the input ("actual") and the tree that's yielded after parsing the output
("expected"). Note that a test can fail with different output even though the CSTs are
the same. This used to be the default, but now defaults to `False`.
#### `--print-tree-diff`
Upon a failing test, print the diff of the trees as described above. This is the
default. To turn it off, pass `--print-tree-diff=False`.
### News / Changelog Requirement
_Black_ has CI that will check for an entry corresponding to your PR in `CHANGES.md`.
If you feel your PR does not require a changelog entry, please state that in a comment
and a maintainer can add a `skip news` label to make the CI pass. Otherwise, please
ensure you have a line in the following format added below the appropriate header:
```md
- `Black` is now more awesome (#X)
```
<!---
The Next PR Number link uses HTML because of a bug in MyST-Parser that double-escapes the ampersand, causing the query parameters to not be processed.
MyST-Parser issue: https://github.com/executablebooks/MyST-Parser/issues/760
MyST-Parser stalled fix PR: https://github.com/executablebooks/MyST-Parser/pull/929
-->
Note that X should be your PR number, not issue number! To work out X, please use
<a href="https://ichard26.github.io/next-pr-number/?owner=psf&name=black">Next PR
Number</a>. This is not perfect, but it saves a lot of release overhead as the releaser
no longer needs to go back and work out what to add to `CHANGES.md` for each release.
### Style Changes
If a change would affect the advertised code style, please modify the documentation (The
_Black_ code style) to reflect that change. Patches that fix unintended bugs in
formatting don't need to be mentioned separately though. If the change is implemented
with the `--preview` flag, please include the change in the future style document
instead and write the changelog entry under the dedicated "Preview style" heading.
### Docs Testing
If you make changes to docs, you can test that they still build locally too.
```console
(.venv)$ pip install -r docs/requirements.txt
(.venv)$ pip install -e ".[d]"
(.venv)$ sphinx-build -a -b html -W docs/ docs/_build/
```
## Hygiene
If you're fixing a bug, add a test. Run it first to confirm it fails, then fix the bug,
and run the test again to confirm it's really fixed.
If adding a new feature, add a test. In fact, always add a test. If adding a large
feature, please open an issue first to discuss it.
## Finally
Thanks again for your interest in improving the project! You're taking action when most
people decide to sit and watch.

@ -0,0 +1 @@
_build/generated/contributing_to_black.md
