Compare commits


No commits in common. "main" and "18.5b0" have entirely different histories.
main ... 18.5b0

481 changed files with 7960 additions and 131844 deletions

.appveyor.yml Normal file
@@ -0,0 +1,10 @@
install:
- C:\Python36\python.exe -m pip install mypy
- C:\Python36\python.exe -m pip install -e .
# Not a C# project
build: off
test_script:
- C:\Python36\python.exe tests/test_black.py
- C:\Python36\python.exe -m mypy black.py tests/test_black.py

.coveragerc Normal file
@@ -0,0 +1,4 @@
[report]
omit =
blib2to3/*
*/site-packages/*

.flake8
@@ -1,8 +1,8 @@
-# This is an example .flake8 config, used when developing *Black* itself.
-# Keep in sync with setup.cfg which is used for source packages.
 [flake8]
-# B905 should be enabled when we drop support for 3.9
-ignore = E203, E266, E501, E701, E704, W503, B905, B907
-# line length is intentionally set to 80 here because black uses Bugbear
-# See https://black.readthedocs.io/en/stable/guides/using_black_with_other_tools.html#bugbear for more details
+ignore = E203, E266, E501, W503
 max-line-length = 80
 max-complexity = 18
 select = B,C,E,F,W,T4,B9

.git_archival.txt
@@ -1,3 +0,0 @@
node: $Format:%H$
node-date: $Format:%cI$
describe-name: $Format:%(describe:tags=true,match=[0-9]*)$

.gitattributes vendored
@@ -1,2 +0,0 @@
.git_archival.txt export-subst
*.py diff=python

CODE_OF_CONDUCT.md
@@ -1,11 +1,13 @@
 # Treat each other well

-Everyone participating in the _Black_ project, and in particular in the issue tracker,
-pull requests, and social media activity, is expected to treat other people with respect
-and more generally to follow the guidelines articulated in the
-[Python Community Code of Conduct](https://www.python.org/psf/codeofconduct/).
+Everyone participating in the Black project, and in particular in the
+issue tracker, pull requests, and social media activity, is expected
+to treat other people with respect and more generally to follow the
+guidelines articulated in the [Python Community Code of
+Conduct](https://www.python.org/psf/codeofconduct/).

-At the same time, humor is encouraged. In fact, basic familiarity with Monty Python's
-Flying Circus is expected. We are not savages.
+At the same time, humor is encouraged. In fact, basic familiarity with
+Monty Python's Flying Circus is expected. We are not savages.

-And if you _really_ need to slap somebody, do it with a fish while dancing.
+And if you *really* need to slap somebody, do it with a fish while
+dancing.

.github/ISSUE_TEMPLATE.md vendored Normal file
@@ -0,0 +1,14 @@
Howdy! Sorry you're having trouble. To expedite your experience,
provide some basics for me:
Operating system:
Python version:
Black version:
Does also happen on master:
To answer the last question, follow these steps:
* create a new virtualenv (make sure it's the same Python version);
* clone this repository;
* run `pip install -e .`;
* make sure it's sane by running `python setup.py test`; and
* run `black` like you did last time.

.github/ISSUE_TEMPLATE/bug_report.md
@@ -1,66 +0,0 @@
---
name: Bug report
about: Create a report to help us improve Black's quality
title: ""
labels: "T: bug"
assignees: ""
---
<!--
Please make sure that the bug is not already fixed either in newer versions or the
current development version. To confirm this, you have three options:
1. Update Black's version if a newer release exists: `pip install -U black`
2. Use the online formatter at <https://black.vercel.app/?version=main>, which will use
the latest main branch. Note that the online formatter currently runs on
an older version of Python and may not support newer syntax, such as the
extended f-string syntax added in Python 3.12.
3. Or run _Black_ on your machine:
- create a new virtualenv (make sure it's the same Python version);
- clone this repository;
- run `pip install -e .[d]`;
- run `pip install -r test_requirements.txt`
- make sure it's sane by running `python -m pytest`; and
- run `black` like you did last time.
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
**To Reproduce**
<!--
Minimal steps to reproduce the behavior with source code and Black's configuration.
-->
For example, take this code:
```python
this = "code"
```
And run it with these arguments:
```sh
$ black file.py --target-version py39
```
The resulting error is:
> cannot format file.py: INTERNAL ERROR: ...
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**Environment**
<!-- Please complete the following information: -->
- Black's version: <!-- e.g. [main] -->
- OS and Python version: <!-- e.g. [Linux/Python 3.7.4rc1] -->
**Additional context**
<!-- Add any other context about the problem here. -->

.github/ISSUE_TEMPLATE/config.yml
@@ -1,12 +0,0 @@
# See also: https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/configuring-issue-templates-for-your-repository#configuring-the-template-chooser
# This is the default and blank issues are useful so let's keep 'em.
blank_issues_enabled: true
contact_links:
- name: Chat on Python Discord
url: https://discord.gg/RtVdv86PrH
about: |
User support, questions, and other lightweight requests can be
handled via the #black-formatter text channel we have on Python
Discord.

.github/ISSUE_TEMPLATE/docs-issue.md
@@ -1,27 +0,0 @@
---
name: Documentation
about: Report a problem with or suggest something for the documentation
title: ""
labels: "T: documentation"
assignees: ""
---
**Is this related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is.
e.g. I'm always frustrated when [...] / I wished that [...] -->
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to
happen or see changed. -->
**Describe alternatives you've considered**
<!-- A clear and concise description of any
alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the issue
here. -->

.github/ISSUE_TEMPLATE/feature_request.md
@@ -1,27 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ""
labels: "T: enhancement"
assignees: ""
---
**Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is.
e.g. I'm always frustrated when [...] -->
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to
happen. -->
**Describe alternatives you've considered**
<!-- A clear and concise description of any
alternative solutions or features you've considered. -->
**Additional context**
<!-- Add any other context or screenshots about the feature request
here. -->

.github/ISSUE_TEMPLATE/style_issue.md
@@ -1,37 +0,0 @@
---
name: Code style issue
about: Help us improve the Black code style
title: ""
labels: "T: style"
assignees: ""
---
**Describe the style change**
<!-- A clear and concise description of how the style can be
improved. -->
**Examples in the current _Black_ style**
<!-- Think of some short code snippets that show
how the current _Black_ style is not great: -->
```python
def f():
    "Make sure this code is blackened"""
    pass
```
**Desired style**
<!-- How do you think _Black_ should format the above snippets: -->
```python
def f(
):
    pass
```
**Additional context**
<!-- Add any other context about the problem here. -->

.github/PULL_REQUEST_TEMPLATE.md
@@ -1,36 +0,0 @@
<!-- Hello! Thanks for submitting a PR. To help make things go a bit more
smoothly we would appreciate that you go through this template. -->
### Description
<!-- Good things to put here include: reasoning for the change (please link
any relevant issues!), any noteworthy (or hacky) choices to be aware of,
or what the problem resolved here looked like ... we won't mind a ranty
story :) -->
### Checklist - did you ...
<!-- If any of the following items aren't relevant for your contribution
please still tick them so we know you've gone through the checklist.
All user-facing changes should get an entry. Otherwise, signal to us
this should get the magical label to silence the CHANGELOG entry check.
Tests are required for bugfixes and new features. Documentation changes
are necessary for formatting and most enhancement changes. -->
- [ ] Add an entry in `CHANGES.md` if necessary?
- [ ] Add / update tests if necessary?
- [ ] Add new / update outdated documentation?
<!-- Just as a reminder, everyone in all psf/black spaces including PRs
must follow the PSF Code of Conduct (link below).
Finally, once again thanks for your time and effort. If you have any
feedback in regards to your experience contributing here, please
let us know!
Helpful links:
PSF COC: https://www.python.org/psf/conduct/
Contributing docs: https://black.readthedocs.io/en/latest/contributing/index.html
Chat on Python Discord: https://discord.gg/RtVdv86PrH -->

.github/dependabot.yml
@@ -1,16 +0,0 @@
# https://docs.github.com/en/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically/configuration-options-for-dependency-updates
version: 2
updates:
- package-ecosystem: "github-actions"
# Workflow files in .github/workflows will be checked
directory: "/"
schedule:
interval: "weekly"
labels: ["skip news", "C: dependencies"]
- package-ecosystem: "pip"
directory: "docs/"
schedule:
interval: "weekly"
labels: ["skip news", "C: dependencies", "T: documentation"]

.github/workflows/changelog.yml
@@ -1,24 +0,0 @@
name: changelog
on:
pull_request:
types: [opened, synchronize, labeled, unlabeled, reopened]
permissions:
contents: read
jobs:
build:
name: Changelog Entry Check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Grep CHANGES.md for PR number
if: contains(github.event.pull_request.labels.*.name, 'skip news') != true
run: |
grep -Pz "\((\n\s*)?#${{ github.event.pull_request.number }}(\n\s*)?\)" CHANGES.md || \
(echo "Please add '(#${{ github.event.pull_request.number }})' change line to CHANGES.md (or if appropriate, ask a maintainer to add the 'skip news' label)" && \
exit 1)

.github/workflows/diff_shades.yml
@@ -1,155 +0,0 @@
name: diff-shades
on:
push:
branches: [main]
paths: ["src/**", "pyproject.toml", ".github/workflows/*"]
pull_request:
paths: ["src/**", "pyproject.toml", ".github/workflows/*"]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.run_id }}
cancel-in-progress: true
jobs:
configure:
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.set-config.outputs.matrix }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install diff-shades and support dependencies
run: |
python -m pip install 'click>=8.1.7' packaging urllib3
python -m pip install https://github.com/ichard26/diff-shades/archive/stable.zip
- name: Calculate run configuration & metadata
id: set-config
env:
GITHUB_TOKEN: ${{ github.token }}
run: >
python scripts/diff_shades_gha_helper.py config ${{ github.event_name }}
${{ matrix.mode }}
analysis:
name: analysis / ${{ matrix.mode }}
needs: configure
runs-on: ubuntu-latest
env:
HATCH_BUILD_HOOKS_ENABLE: "1"
# Clang is less picky with the C code it's given than gcc (and may
# generate faster binaries too).
CC: clang-18
strategy:
fail-fast: false
matrix:
include: ${{ fromJson(needs.configure.outputs.matrix) }}
steps:
- name: Checkout this repository (full clone)
uses: actions/checkout@v4
with:
# The baseline revision could be rather old so a full clone is ideal.
fetch-depth: 0
- uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install diff-shades and support dependencies
run: |
python -m pip install https://github.com/ichard26/diff-shades/archive/stable.zip
python -m pip install 'click>=8.1.7' packaging urllib3
# After checking out old revisions, this might not exist so we'll use a copy.
cat scripts/diff_shades_gha_helper.py > helper.py
git config user.name "diff-shades-gha"
git config user.email "diff-shades-gha@example.com"
- name: Attempt to use cached baseline analysis
id: baseline-cache
uses: actions/cache@v4
with:
path: ${{ matrix.baseline-analysis }}
key: ${{ matrix.baseline-cache-key }}
- name: Build and install baseline revision
if: steps.baseline-cache.outputs.cache-hit != 'true'
env:
GITHUB_TOKEN: ${{ github.token }}
run: >
${{ matrix.baseline-setup-cmd }}
&& python -m pip install .
- name: Analyze baseline revision
if: steps.baseline-cache.outputs.cache-hit != 'true'
run: >
diff-shades analyze -v --work-dir projects-cache/
${{ matrix.baseline-analysis }} ${{ matrix.force-flag }}
- name: Build and install target revision
env:
GITHUB_TOKEN: ${{ github.token }}
run: >
${{ matrix.target-setup-cmd }}
&& python -m pip install .
- name: Analyze target revision
run: >
diff-shades analyze -v --work-dir projects-cache/
${{ matrix.target-analysis }} --repeat-projects-from
${{ matrix.baseline-analysis }} ${{ matrix.force-flag }}
- name: Generate HTML diff report
run: >
diff-shades --dump-html diff.html compare --diff
${{ matrix.baseline-analysis }} ${{ matrix.target-analysis }}
- name: Upload diff report
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.mode }}-diff.html
path: diff.html
- name: Upload baseline analysis
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.baseline-analysis }}
path: ${{ matrix.baseline-analysis }}
- name: Upload target analysis
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.target-analysis }}
path: ${{ matrix.target-analysis }}
- name: Generate summary file (PR only)
if: github.event_name == 'pull_request' && matrix.mode == 'preview-changes'
run: >
python helper.py comment-body ${{ matrix.baseline-analysis }}
${{ matrix.target-analysis }} ${{ matrix.baseline-sha }}
${{ matrix.target-sha }} ${{ github.event.pull_request.number }}
- name: Upload summary file (PR only)
if: github.event_name == 'pull_request' && matrix.mode == 'preview-changes'
uses: actions/upload-artifact@v4
with:
name: .pr-comment.json
path: .pr-comment.json
- name: Verify zero changes (PR only)
if: matrix.mode == 'assert-no-changes'
run: >
diff-shades compare --check ${{ matrix.baseline-analysis }} ${{ matrix.target-analysis }}
|| (echo "Please verify you didn't change the stable code style unintentionally!" && exit 1)
- name: Check for failed files for target revision
# Even if the previous step failed, we should still check for failed files.
if: always()
run: >
diff-shades show-failed --check --show-log ${{ matrix.target-analysis }}

.github/workflows/diff_shades_comment.yml
@@ -1,49 +0,0 @@
name: diff-shades-comment
on:
workflow_run:
workflows: [diff-shades]
types: [completed]
permissions:
pull-requests: write
jobs:
comment:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "*"
- name: Install support dependencies
run: |
python -m pip install pip --upgrade
python -m pip install click packaging urllib3
- name: Get details from initial workflow run
id: metadata
env:
GITHUB_TOKEN: ${{ github.token }}
run: >
python scripts/diff_shades_gha_helper.py comment-details
${{github.event.workflow_run.id }}
- name: Try to find pre-existing PR comment
if: steps.metadata.outputs.needs-comment == 'true'
id: find-comment
uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e
with:
issue-number: ${{ steps.metadata.outputs.pr-number }}
comment-author: "github-actions[bot]"
body-includes: "diff-shades"
- name: Create or update PR comment
if: steps.metadata.outputs.needs-comment == 'true'
uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043
with:
comment-id: ${{ steps.find-comment.outputs.comment-id }}
issue-number: ${{ steps.metadata.outputs.pr-number }}
body: ${{ steps.metadata.outputs.comment-body }}
edit-mode: replace

.github/workflows/doc.yml
@@ -1,40 +0,0 @@
name: Documentation
on: [push, pull_request]
permissions:
contents: read
jobs:
build:
# We want to run on external PRs, but not on our own internal PRs as they'll be run
# by the push to the branch. Without this if check, checks are duplicated since
# internal PRs match both the push and pull_request events.
if:
github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
github.repository
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, windows-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v4
- name: Set up latest Python
uses: actions/setup-python@v5
with:
python-version: "3.13"
allow-prereleases: true
- name: Install dependencies
run: |
python -m pip install uv
python -m uv venv
python -m uv pip install -e ".[d]"
python -m uv pip install -r "docs/requirements.txt"
- name: Build documentation
run: sphinx-build -a -b html -W --keep-going docs/ docs/_build

.github/workflows/docker.yml
@@ -1,69 +0,0 @@
name: docker
on:
push:
branches:
- "main"
release:
types: [published]
permissions:
contents: read
jobs:
docker:
if: github.repository == 'psf/black'
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Check + set version tag
run:
echo "GIT_TAG=$(git describe --candidates=0 --tags 2> /dev/null || echo
latest_non_release)" >> $GITHUB_ENV
- name: Build and push
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: pyfound/black:latest,pyfound/black:${{ env.GIT_TAG }}
- name: Build and push latest_release tag
if:
${{ github.event_name == 'release' && github.event.action == 'published' &&
!github.event.release.prerelease }}
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: pyfound/black:latest_release
- name: Build and push latest_prerelease tag
if:
${{ github.event_name == 'release' && github.event.action == 'published' &&
github.event.release.prerelease }}
uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64,linux/arm64
push: true
tags: pyfound/black:latest_prerelease
- name: Image digest
run: echo ${{ steps.docker_build.outputs.digest }}

.github/workflows/fuzz.yml
@@ -1,43 +0,0 @@
name: Fuzz
on: [push, pull_request]
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}
cancel-in-progress: true
permissions:
contents: read
jobs:
build:
# We want to run on external PRs, but not on our own internal PRs as they'll be run
# by the push to the branch. Without this if check, checks are duplicated since
# internal PRs match both the push and pull_request events.
if:
github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
github.repository
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12.4", "3.13"]
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
allow-prereleases: true
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install --upgrade tox
- name: Run fuzz tests
run: |
tox -e fuzz

.github/workflows/lint.yml
@@ -1,48 +0,0 @@
name: Lint + format ourselves
on: [push, pull_request]
jobs:
build:
# We want to run on external PRs, but not on our own internal PRs as they'll be run
# by the push to the branch. Without this if check, checks are duplicated since
# internal PRs match both the push and pull_request events.
if:
github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
github.repository
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Assert PR target is main
if: github.event_name == 'pull_request' && github.repository == 'psf/black'
run: |
if [ "$GITHUB_BASE_REF" != "main" ]; then
echo "::error::PR targeting '$GITHUB_BASE_REF', please refile targeting 'main'." && exit 1
fi
- name: Set up latest Python
uses: actions/setup-python@v5
with:
python-version: "3.13"
allow-prereleases: true
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install -e '.'
python -m pip install tox
- name: Run pre-commit hooks
uses: pre-commit/action@v3.0.1
- name: Format ourselves
run: |
tox -e run_self
- name: Regenerate schema
run: |
tox -e generate_schema
git diff --exit-code

.github/workflows/pypi_upload.yml
@@ -1,130 +0,0 @@
name: Build and publish
on:
release:
types: [published]
pull_request:
push:
branches:
- main
permissions:
contents: read
jobs:
main:
name: sdist + pure wheel
runs-on: ubuntu-latest
if: github.event_name == 'release'
steps:
- uses: actions/checkout@v4
- name: Set up latest Python
uses: actions/setup-python@v5
with:
python-version: "3.13"
allow-prereleases: true
- name: Install latest pip, build, twine
run: |
python -m pip install --upgrade --disable-pip-version-check pip
python -m pip install --upgrade build twine
- name: Build wheel and source distributions
run: python -m build
- if: github.event_name == 'release'
name: Upload to PyPI via Twine
env:
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
run: twine upload --verbose -u '__token__' dist/*
generate_wheels_matrix:
name: generate wheels matrix
runs-on: ubuntu-latest
outputs:
include: ${{ steps.set-matrix.outputs.include }}
steps:
- uses: actions/checkout@v4
# Keep cibuildwheel version in sync with below
- name: Install cibuildwheel and pypyp
run: |
pipx install cibuildwheel==2.22.0
pipx install pypyp==1.3.0
- name: generate matrix
if: github.event_name != 'pull_request'
run: |
{
cibuildwheel --print-build-identifiers --platform linux \
| pyp 'json.dumps({"only": x, "os": "ubuntu-latest"})' \
&& cibuildwheel --print-build-identifiers --platform macos \
| pyp 'json.dumps({"only": x, "os": "macos-latest"})' \
&& cibuildwheel --print-build-identifiers --platform windows \
| pyp 'json.dumps({"only": x, "os": "windows-latest"})'
} | pyp 'json.dumps(list(map(json.loads, lines)))' > /tmp/matrix
env:
CIBW_ARCHS_LINUX: x86_64
CIBW_ARCHS_MACOS: x86_64 arm64
CIBW_ARCHS_WINDOWS: AMD64
- name: generate matrix (PR)
if: github.event_name == 'pull_request'
run: |
{
cibuildwheel --print-build-identifiers --platform linux \
| pyp 'json.dumps({"only": x, "os": "ubuntu-latest"})'
} | pyp 'json.dumps(list(map(json.loads, lines)))' > /tmp/matrix
env:
CIBW_BUILD: "cp39-* cp313-*"
CIBW_ARCHS_LINUX: x86_64
- id: set-matrix
run: echo "include=$(cat /tmp/matrix)" | tee -a $GITHUB_OUTPUT
mypyc:
name: mypyc wheels ${{ matrix.only }}
needs: generate_wheels_matrix
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
include: ${{ fromJson(needs.generate_wheels_matrix.outputs.include) }}
steps:
- uses: actions/checkout@v4
# Keep cibuildwheel version in sync with above
- uses: pypa/cibuildwheel@v2.23.3
with:
only: ${{ matrix.only }}
- name: Upload wheels as workflow artifacts
uses: actions/upload-artifact@v4
with:
name: ${{ matrix.only }}-mypyc-wheels
path: ./wheelhouse/*.whl
- if: github.event_name == 'release'
name: Upload wheels to PyPI via Twine
env:
TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
run: pipx run twine upload --verbose -u '__token__' wheelhouse/*.whl
update-stable-branch:
name: Update stable branch
needs: [main, mypyc]
runs-on: ubuntu-latest
if: github.event_name == 'release'
permissions:
contents: write
steps:
- name: Checkout stable branch
uses: actions/checkout@v4
with:
ref: stable
fetch-depth: 0
- if: github.event_name == 'release'
name: Update stable branch to release tag & push
run: |
git reset --hard ${{ github.event.release.tag_name }}
git push

.github/workflows/release_tests.yml
@@ -1,56 +0,0 @@
name: Release tool CI
on:
push:
paths:
- .github/workflows/release_tests.yml
- release.py
- release_tests.py
pull_request:
paths:
- .github/workflows/release_tests.yml
- release.py
- release_tests.py
jobs:
build:
# We want to run on external PRs, but not on our own internal PRs as they'll be run
# by the push to the branch. Without this if check, checks are duplicated since
# internal PRs match both the push and pull_request events.
if:
github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
github.repository
name: Running python ${{ matrix.python-version }} on ${{matrix.os}}
runs-on: ${{ matrix.os }}
strategy:
matrix:
python-version: ["3.13"]
os: [macOS-latest, ubuntu-latest, windows-latest]
steps:
- uses: actions/checkout@v4
with:
# Give us all history, branches and tags
fetch-depth: 0
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
allow-prereleases: true
- name: Print Python Version
run: python --version --version && which python
- name: Print Git Version
run: git --version && which git
- name: Update pip, setuptools + wheels
run: |
python -m pip install --upgrade pip setuptools wheel
- name: Run unit tests via coverage + print report
run: |
python -m pip install coverage
coverage run scripts/release_tests.py
coverage report --show-missing

.github/workflows/test.yml
@@ -1,110 +0,0 @@
name: Test
on:
push:
paths-ignore:
- "docs/**"
- "*.md"
pull_request:
paths-ignore:
- "docs/**"
- "*.md"
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.run_id }}
cancel-in-progress: true
jobs:
main:
# We want to run on external PRs, but not on our own internal PRs as they'll be run
# by the push to the branch. Without this if check, checks are duplicated since
# internal PRs match both the push and pull_request events.
if:
github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
github.repository
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12.4", "3.13", "pypy-3.9"]
os: [ubuntu-latest, macOS-latest, windows-latest]
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
allow-prereleases: true
- name: Install tox
run: |
python -m pip install --upgrade pip
python -m pip install --upgrade tox
- name: Unit tests
if: "!startsWith(matrix.python-version, 'pypy')"
run:
tox -e ci-py$(echo ${{ matrix.python-version }} | tr -d '.') -- -v --color=yes
- name: Unit tests (pypy)
if: "startsWith(matrix.python-version, 'pypy')"
run: tox -e ci-pypy3 -- -v --color=yes
- name: Upload coverage to Coveralls
# Upload coverage if we are on the main repository and
# we're running on Linux (this action only supports Linux)
if:
github.repository == 'psf/black' && matrix.os == 'ubuntu-latest' &&
!startsWith(matrix.python-version, 'pypy')
uses: AndreMiras/coveralls-python-action@ac868b9540fad490f7ca82b8ca00480fd751ed19
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
parallel: true
flag-name: py${{ matrix.python-version }}-${{ matrix.os }}
debug: true
coveralls-finish:
needs: main
if: github.repository == 'psf/black'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Send finished signal to Coveralls
uses: AndreMiras/coveralls-python-action@ac868b9540fad490f7ca82b8ca00480fd751ed19
with:
parallel-finished: true
debug: true
uvloop:
if:
github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
github.repository
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [ubuntu-latest, macOS-latest]
steps:
- uses: actions/checkout@v4
- name: Set up latest Python
uses: actions/setup-python@v5
with:
python-version: "3.12.4"
- name: Install black with uvloop
run: |
python -m pip install pip --upgrade --disable-pip-version-check
python -m pip install -e ".[uvloop]"
- name: Format ourselves
run: python -m black --check src/ tests/

.github/workflows/upload_binary.yml
@@ -1,63 +0,0 @@
name: Publish executables
on:
release:
types: [published]
permissions:
contents: write # actions/upload-release-asset needs this.
jobs:
build:
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
os: [windows-2019, ubuntu-22.04, macos-latest]
include:
- os: windows-2019
pathsep: ";"
asset_name: black_windows.exe
executable_mime: "application/vnd.microsoft.portable-executable"
- os: ubuntu-22.04
pathsep: ":"
asset_name: black_linux
executable_mime: "application/x-executable"
- os: macos-latest
pathsep: ":"
asset_name: black_macos
executable_mime: "application/x-mach-binary"
steps:
- uses: actions/checkout@v4
- name: Set up latest Python
uses: actions/setup-python@v5
with:
python-version: "3.12.4"
- name: Install Black and PyInstaller
run: |
python -m pip install --upgrade pip wheel
python -m pip install .[colorama]
python -m pip install pyinstaller
- name: Build executable with PyInstaller
run: >
python -m PyInstaller -F --name ${{ matrix.asset_name }} --add-data
'src/blib2to3${{ matrix.pathsep }}blib2to3' src/black/__main__.py
- name: Quickly test executable
run: |
./dist/${{ matrix.asset_name }} --version
./dist/${{ matrix.asset_name }} src --verbose
- name: Upload binary as release asset
uses: actions/upload-release-asset@v1
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
upload_url: ${{ github.event.release.upload_url }}
asset_path: dist/${{ matrix.asset_name }}
asset_name: ${{ matrix.asset_name }}
asset_content_type: ${{ matrix.executable_mime }}

.gitignore vendored
@@ -1,28 +1,8 @@
-.venv
 .coverage
-.coverage.*
 _build
 .DS_Store
 .vscode
-.python-version
 docs/_static/pypi.svg
 .tox
 __pycache__
-# Packaging artifacts
 black.egg-info
-black.dist-info
-build/
-dist/
-pip-wheel-metadata/
-.eggs
-src/_black_version.py
-.idea
-.dmypy.json
-*.swp
-.hypothesis/
-venv/
-.ipynb_checkpoints/
-node_modules/

.pre-commit-config.yaml
@@ -1,83 +1,19 @@
 # Note: don't use this config for your own repositories. Instead, see
-# "Version control integration" in docs/integrations/source_version_control.md
-exclude: ^(profiling/|tests/data/)
-repos:
-  - repo: local
-    hooks:
-      - id: check-pre-commit-rev-in-example
-        name: Check pre-commit rev in example
-        language: python
-        entry: python -m scripts.check_pre_commit_rev_in_example
-        files: '(CHANGES\.md|source_version_control\.md)$'
-        additional_dependencies:
-          &version_check_dependencies [
-            commonmark==0.9.1,
-            pyyaml==6.0.1,
-            beautifulsoup4==4.9.3,
-          ]
-
-      - id: check-version-in-the-basics-example
-        name: Check black version in the basics example
-        language: python
-        entry: python -m scripts.check_version_in_basics_example
-        files: '(CHANGES\.md|the_basics\.md)$'
-        additional_dependencies: *version_check_dependencies
-
-  - repo: https://github.com/pycqa/isort
-    rev: 6.0.1
-    hooks:
-      - id: isort
-
-  - repo: https://github.com/pycqa/flake8
-    rev: 7.2.0
-    hooks:
-      - id: flake8
-        additional_dependencies:
-          - flake8-bugbear==24.2.6
-          - flake8-comprehensions
-          - flake8-simplify
-        exclude: ^src/blib2to3/
-
-  - repo: https://github.com/pre-commit/mirrors-mypy
-    rev: v1.15.0
-    hooks:
-      - id: mypy
-        exclude: ^(docs/conf.py|scripts/generate_schema.py)$
-        args: []
-        additional_dependencies: &mypy_deps
-          - types-PyYAML
-          - types-atheris
-          - tomli >= 0.2.6, < 2.0.0
-          - click >= 8.2.0
-          # Click is intentionally out-of-sync with pyproject.toml
-          # v8.2 has breaking changes. We work around them at runtime, but we need the newer stubs.
-          - packaging >= 22.0
-          - platformdirs >= 2.1.0
-          - pytokens >= 0.1.10
-          - pytest
-          - hypothesis
-          - aiohttp >= 3.7.4
-          - types-commonmark
-          - urllib3
-          - hypothesmith
-      - id: mypy
-        name: mypy (Python 3.10)
-        files: scripts/generate_schema.py
-        args: ["--python-version=3.10"]
-        additional_dependencies: *mypy_deps
-
-  - repo: https://github.com/rbubley/mirrors-prettier
-    rev: v3.5.3
-    hooks:
-      - id: prettier
-        types_or: [markdown, yaml, json]
-        exclude: \.github/workflows/diff_shades\.yml
-
-  - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v5.0.0
-    hooks:
-      - id: end-of-file-fixer
-      - id: trailing-whitespace
-
-ci:
-  autoupdate_schedule: quarterly
+# "Version control integration" in README.md.
+- repo: local
+  hooks:
+  - id: black
+    name: black
+    language: system
+    entry: python3 -m black
+    files: ^(black|setup|tests/test_black)\.py$
+  - id: flake8
+    name: flake8
+    language: system
+    entry: flake8
+    files: ^(black|setup|tests/test_black)\.py$
+  - id: mypy
+    name: mypy
+    language: system
+    entry: mypy
+    files: ^(black|setup|tests/test_black)\.py$

.pre-commit-hooks.yaml
@@ -1,20 +1,7 @@
-# Note that we recommend using https://github.com/psf/black-pre-commit-mirror instead
-# This will work about 2x as fast as using the hooks in this repository
 - id: black
   name: black
-  description: "Black: The uncompromising Python code formatter"
+  description: 'Black: The uncompromising Python code formatter'
   entry: black
   language: python
-  minimum_pre_commit_version: 2.9.2
-  require_serial: true
-  types_or: [python, pyi]
-- id: black-jupyter
-  name: black-jupyter
-  description:
-    "Black: The uncompromising Python code formatter (with Jupyter Notebook support)"
-  entry: black
-  language: python
-  minimum_pre_commit_version: 2.9.2
-  require_serial: true
-  types_or: [python, pyi, jupyter]
-  additional_dependencies: [".[jupyter]"]
+  language_version: python3.6
+  types: [python]

.prettierrc.yaml
@@ -1,3 +0,0 @@
proseWrap: always
printWidth: 88
endOfLine: auto

.readthedocs.yaml
@@ -1,21 +0,0 @@
version: 2
formats:
- htmlzip
build:
os: ubuntu-22.04
tools:
python: "3.11"
python:
install:
- requirements: docs/requirements.txt
- method: pip
path: .
extra_requirements:
- d
sphinx:
configuration: docs/conf.py

.travis.yml Normal file
@@ -0,0 +1,24 @@
sudo: required
dist: xenial
language: python
cache: pip
before_install:
- if [[ $TRAVIS_PYTHON_VERSION == '3.7-dev' ]]; then sudo add-apt-repository ppa:deadsnakes/ppa -y; fi
- if [[ $TRAVIS_PYTHON_VERSION == '3.7-dev' ]]; then sudo sudo apt-get update; fi
install:
- pip install coverage coveralls flake8 flake8-bugbear mypy
- pip install -e .
script:
- coverage run tests/test_black.py
- if [[ $TRAVIS_PYTHON_VERSION == '3.6' ]]; then mypy black.py tests/test_black.py; fi
- if [[ $TRAVIS_PYTHON_VERSION == '3.6-dev' ]]; then flake8 black.py tests/test_black.py; fi
after_success:
- coveralls
notifications:
on_success: change
on_failure: always
matrix:
include:
- python: 3.6
- python: 3.6-dev
- python: 3.7-dev

AUTHORS.md
@@ -1,197 +0,0 @@
# Authors
Glued together by [Łukasz Langa](mailto:lukasz@langa.pl).
Maintained with:
- [Carol Willing](mailto:carolcode@willingconsulting.com)
- [Carl Meyer](mailto:carl@oddbird.net)
- [Jelle Zijlstra](mailto:jelle.zijlstra@gmail.com)
- [Mika Naylor](mailto:mail@autophagy.io)
- [Zsolt Dollenstein](mailto:zsol.zsol@gmail.com)
- [Cooper Lees](mailto:me@cooperlees.com)
- [Richard Si](mailto:sichard26@gmail.com)
- [Felix Hildén](mailto:felix.hilden@gmail.com)
- [Batuhan Taskaya](mailto:batuhan@python.org)
- [Shantanu Jain](mailto:hauntsaninja@gmail.com)
Multiple contributions by:
- [Abdur-Rahmaan Janhangeer](mailto:arj.python@gmail.com)
- [Adam Johnson](mailto:me@adamj.eu)
- [Adam Williamson](mailto:adamw@happyassassin.net)
- [Alexander Huynh](mailto:ahrex-gh-psf-black@e.sc)
- [Alexandr Artemyev](mailto:mogost@gmail.com)
- [Alex Vandiver](mailto:github@chmrr.net)
- [Allan Simon](mailto:allan.simon@supinfo.com)
- Anders-Petter Ljungquist
- [Amethyst Reese](mailto:amy@n7.gg)
- [Andrew Thorp](mailto:andrew.thorp.dev@gmail.com)
- [Andrew Zhou](mailto:andrewfzhou@gmail.com)
- [Andrey](mailto:dyuuus@yandex.ru)
- [Andy Freeland](mailto:andy@andyfreeland.net)
- [Anthony Sottile](mailto:asottile@umich.edu)
- [Antonio Ossa Guerra](mailto:aaossa+black@uc.cl)
- [Arjaan Buijk](mailto:arjaan.buijk@gmail.com)
- [Arnav Borbornah](mailto:arnavborborah11@gmail.com)
- [Artem Malyshev](mailto:proofit404@gmail.com)
- [Asger Hautop Drewsen](mailto:asgerdrewsen@gmail.com)
- [Augie Fackler](mailto:raf@durin42.com)
- [Aviskar KC](mailto:aviskarkc10@gmail.com)
- Batuhan Taşkaya
- [Benjamin Wohlwend](mailto:bw@piquadrat.ch)
- [Benjamin Woodruff](mailto:github@benjam.info)
- [Bharat Raghunathan](mailto:bharatraghunthan9767@gmail.com)
- [Brandt Bucher](mailto:brandtbucher@gmail.com)
- [Brett Cannon](mailto:brett@python.org)
- [Bryan Bugyi](mailto:bryan.bugyi@rutgers.edu)
- [Bryan Forbes](mailto:bryan@reigndropsfall.net)
- [Calum Lind](mailto:calumlind@gmail.com)
- [Charles](mailto:peacech@gmail.com)
- Charles Reid
- [Christian Clauss](mailto:cclauss@bluewin.ch)
- [Christian Heimes](mailto:christian@python.org)
- [Chuck Wooters](mailto:chuck.wooters@microsoft.com)
- [Chris Rose](mailto:offline@offby1.net)
- Codey Oxley
- [Cong](mailto:congusbongus@gmail.com)
- [Cooper Ry Lees](mailto:me@cooperlees.com)
- [Dan Davison](mailto:dandavison7@gmail.com)
- [Daniel Hahler](mailto:github@thequod.de)
- [Daniel M. Capella](mailto:polycitizen@gmail.com)
- Daniele Esposti
- [David Hotham](mailto:david.hotham@metaswitch.com)
- [David Lukes](mailto:dafydd.lukes@gmail.com)
- [David Szotten](mailto:davidszotten@gmail.com)
- [Denis Laxalde](mailto:denis@laxalde.org)
- [Douglas Thor](mailto:dthor@transphormusa.com)
- dylanjblack
- [Eli Treuherz](mailto:eli@treuherz.com)
- [Emil Hessman](mailto:emil@hessman.se)
- [Felix Kohlgrüber](mailto:felix.kohlgrueber@gmail.com)
- [Florent Thiery](mailto:fthiery@gmail.com)
- Francisco
- [Giacomo Tagliabue](mailto:giacomo.tag@gmail.com)
- [Greg Gandenberger](mailto:ggandenberger@shoprunner.com)
- [Gregory P. Smith](mailto:greg@krypto.org)
- Gustavo Camargo
- hauntsaninja
- [Hadi Alqattan](mailto:alqattanhadizaki@gmail.com)
- [Hassan Abouelela](mailto:hassan@hassanamr.com)
- [Heaford](mailto:dan@heaford.com)
- [Hugo Barrera](mailto::hugo@barrera.io)
- Hugo van Kemenade
- [Hynek Schlawack](mailto:hs@ox.cx)
- [Ionite](mailto:dev@ionite.io)
- [Ivan Katanić](mailto:ivan.katanic@gmail.com)
- [Jakub Kadlubiec](mailto:jakub.kadlubiec@skyscanner.net)
- [Jakub Warczarek](mailto:jakub.warczarek@gmail.com)
- [Jan Hnátek](mailto:jan.hnatek@gmail.com)
- [Jason Fried](mailto:me@jasonfried.info)
- [Jason Friedland](mailto:jason@friedland.id.au)
- [jgirardet](mailto:ijkl@netc.fr)
- Jim Brännlund
- [Jimmy Jia](mailto:tesrin@gmail.com)
- [Joe Antonakakis](mailto:jma353@cornell.edu)
- [Jon Dufresne](mailto:jon.dufresne@gmail.com)
- [Jonas Obrist](mailto:ojiidotch@gmail.com)
- [Jonty Wareing](mailto:jonty@jonty.co.uk)
- [Jose Nazario](mailto:jose.monkey.org@gmail.com)
- [Joseph Larson](mailto:larson.joseph@gmail.com)
- [Josh Bode](mailto:joshbode@fastmail.com)
- [Josh Holland](mailto:anowlcalledjosh@gmail.com)
- [Joshua Cannon](mailto:joshdcannon@gmail.com)
- [José Padilla](mailto:jpadilla@webapplicate.com)
- [Juan Luis Cano Rodríguez](mailto:hello@juanlu.space)
- [kaiix](mailto:kvn.hou@gmail.com)
- [Katie McLaughlin](mailto:katie@glasnt.com)
- Katrin Leinweber
- [Keith Smiley](mailto:keithbsmiley@gmail.com)
- [Kenyon Ralph](mailto:kenyon@kenyonralph.com)
- [Kevin Kirsche](mailto:Kev.Kirsche+GitHub@gmail.com)
- [Kyle Hausmann](mailto:kyle.hausmann@gmail.com)
- [Kyle Sunden](mailto:sunden@wisc.edu)
- Lawrence Chan
- [Linus Groh](mailto:mail@linusgroh.de)
- [Loren Carvalho](mailto:comradeloren@gmail.com)
- [Luka Sterbic](mailto:luka.sterbic@gmail.com)
- [LukasDrude](mailto:mail@lukas-drude.de)
- Mahmoud Hossam
- Mariatta
- [Matt VanEseltine](mailto:vaneseltine@gmail.com)
- [Matthew Clapp](mailto:itsayellow+dev@gmail.com)
- [Matthew Walster](mailto:matthew@walster.org)
- Max Smolens
- [Michael Aquilina](mailto:michaelaquilina@gmail.com)
- [Michael Flaxman](mailto:michael.flaxman@gmail.com)
- [Michael J. Sullivan](mailto:sully@msully.net)
- [Michael McClimon](mailto:michael@mcclimon.org)
- [Miguel Gaiowski](mailto:miggaiowski@gmail.com)
- [Mike](mailto:roshi@fedoraproject.org)
- [mikehoyio](mailto:mikehoy@gmail.com)
- [Min ho Kim](mailto:minho42@gmail.com)
- [Miroslav Shubernetskiy](mailto:miroslav@miki725.com)
- MomIsBestFriend
- [Nathan Goldbaum](mailto:ngoldbau@illinois.edu)
- [Nathan Hunt](mailto:neighthan.hunt@gmail.com)
- [Neraste](mailto:neraste.herr10@gmail.com)
- [Nikolaus Waxweiler](mailto:madigens@gmail.com)
- [Ofek Lev](mailto:ofekmeister@gmail.com)
- [Osaetin Daniel](mailto:osaetindaniel@gmail.com)
- [otstrel](mailto:otstrel@gmail.com)
- [Pablo Galindo](mailto:Pablogsal@gmail.com)
- [Paul Ganssle](mailto:p.ganssle@gmail.com)
- [Paul Meinhardt](mailto:mnhrdt@gmail.com)
- [Peter Bengtsson](mailto:mail@peterbe.com)
- [Peter Grayson](mailto:pete@jpgrayson.net)
- [Peter Stensmyr](mailto:peter.stensmyr@gmail.com)
- pmacosta
- [Quentin Pradet](mailto:quentin@pradet.me)
- [Ralf Schmitt](mailto:ralf@systemexit.de)
- [Ramón Valles](mailto:mroutis@protonmail.com)
- [Richard Fearn](mailto:richardfearn@gmail.com)
- [Rishikesh Jha](mailto:rishijha424@gmail.com)
- [Rupert Bedford](mailto:rupert@rupertb.com)
- Russell Davis
- [Sagi Shadur](mailto:saroad2@gmail.com)
- [Rémi Verschelde](mailto:rverschelde@gmail.com)
- [Sami Salonen](mailto:sakki@iki.fi)
- [Samuel Cormier-Iijima](mailto:samuel@cormier-iijima.com)
- [Sanket Dasgupta](mailto:sanketdasgupta@gmail.com)
- Sergi
- [Scott Stevenson](mailto:scott@stevenson.io)
- Shantanu
- [shaoran](mailto:shaoran@sakuranohana.org)
- [Shinya Fujino](mailto:shf0811@gmail.com)
- springstan
- [Stavros Korokithakis](mailto:hi@stavros.io)
- [Stephen Rosen](mailto:sirosen@globus.org)
- [Steven M. Vascellaro](mailto:S.Vascellaro@gmail.com)
- [Sunil Kapil](mailto:snlkapil@gmail.com)
- [Sébastien Eustace](mailto:sebastien.eustace@gmail.com)
- [Tal Amuyal](mailto:TalAmuyal@gmail.com)
- [Terrance](mailto:git@terrance.allofti.me)
- [Thom Lu](mailto:thomas.c.lu@gmail.com)
- [Thomas Grainger](mailto:tagrain@gmail.com)
- [Tim Gates](mailto:tim.gates@iress.com)
- [Tim Swast](mailto:swast@google.com)
- [Timo](mailto:timo_tk@hotmail.com)
- Toby Fleming
- [Tom Christie](mailto:tom@tomchristie.com)
- [Tony Narlock](mailto:tony@git-pull.com)
- [Tsuyoshi Hombashi](mailto:tsuyoshi.hombashi@gmail.com)
- [Tushar Chandra](mailto:tusharchandra2018@u.northwestern.edu)
- [Tushar Sadhwani](mailto:tushar.sadhwani000@gmail.com)
- [Tzu-ping Chung](mailto:uranusjr@gmail.com)
- [Utsav Shah](mailto:ukshah2@illinois.edu)
- utsav-dbx
- vezeli
- [Ville Skyttä](mailto:ville.skytta@iki.fi)
- [Vishwas B Sharma](mailto:sharma.vishwas88@gmail.com)
- [Vlad Emelianov](mailto:volshebnyi@gmail.com)
- [williamfzc](mailto:178894043@qq.com)
- [wouter bolsterlee](mailto:wouter@bolsterl.ee)
- Yazdan
- [Yngve Høiseth](mailto:yngve@hoiseth.net)
- [Yurii Karabas](mailto:1998uriyyo@gmail.com)
- [Zac Hatfield-Dodds](mailto:zac@zhd.dev)

CHANGES.md (1997 changes)

File diff suppressed because it is too large.

CITATION.cff
@@ -1,22 +0,0 @@
cff-version: 1.2.0
title: "Black: The uncompromising Python code formatter"
message: >-
If you use this software, please cite it using the metadata from this file.
type: software
authors:
- family-names: Langa
given-names: Łukasz
- name: "contributors to Black"
repository-code: "https://github.com/psf/black"
url: "https://black.readthedocs.io/en/stable/"
abstract: >-
Black is the uncompromising Python code formatter. By using it, you agree to cede
control over minutiae of hand-formatting. In return, Black gives you speed,
determinism, and freedom from pycodestyle nagging about formatting. You will save time
and mental energy for more important matters.
Blackened code looks the same regardless of the project you're reading. Formatting
becomes transparent after a while and you can focus on the content instead.
Black makes code review faster by producing the smallest diffs possible.
license: MIT

CONTRIBUTING.md
@@ -1,13 +1,59 @@
-# Contributing to _Black_
+# Contributing to Black

-Welcome future contributor! We're happy to see you willing to make the project better.
-
-If you aren't familiar with _Black_, or are looking for documentation on something
-specific, the [user documentation](https://black.readthedocs.io/en/latest/) is the best
-place to look.
-
-For getting started on contributing, please read the
-[contributing documentation](https://black.readthedocs.org/en/latest/contributing/) for
-all you need to know.
-
-Thank you, and we look forward to your contributions!
+Welcome! Happy to see you willing to make the project better. Have you
+read the entire [user documentation](http://black.readthedocs.io/en/latest/)
+yet?
+
+## Bird's eye view
+
+In terms of inspiration, *Black* is about as configurable as *gofmt*.
+This is deliberate.
+
+Bug reports and fixes are always welcome! Please follow the issue
+template on GitHub for best results.
+
+Before you suggest a new feature or configuration knob, ask yourself why
+you want it. If it enables better integration with some workflow, fixes
+an inconsistency, speeds things up, and so on - go for it! On the other
+hand, if your answer is "because I don't like a particular formatting"
+then you're not ready to embrace *Black* yet. Such changes are unlikely
+to get accepted. You can still try but prepare to be disappointed.
+
+## Technicalities
+
+Development on the latest version of Python is preferred. As of this
+writing it's 3.6.4. You can use any operating system. I am using macOS
+myself and CentOS at work.
+
+Install all development dependencies using:
+
+```
+$ pipenv install --dev
+$ pre-commit install
+```
+
+If you haven't used `pipenv` before but are comfortable with virtualenvs,
+just run `pip install pipenv` in the virtualenv you're already using and
+invoke the command above from the cloned Black repo. It will do the
+correct thing.
+
+Before submitting pull requests, run tests with:
+
+```
+$ python setup.py test
+```
+
+## Hygiene
+
+If you're fixing a bug, add a test. Run it first to confirm it fails,
+then fix the bug, run it again to confirm it's really fixed.
+
+If adding a new feature, add a test. In fact, always add a test. But
+wait, before adding any large feature, first open an issue for us to
+discuss the idea first.
+
+## Finally
+
+Thanks again for your interest in improving the project! You're taking
+action when most people decide to sit and watch.

Dockerfile
@@ -1,22 +0,0 @@
FROM python:3.12-slim AS builder
RUN mkdir /src
COPY . /src/
ENV VIRTUAL_ENV=/opt/venv
ENV HATCH_BUILD_HOOKS_ENABLE=1
# Install build tools to compile black + dependencies
RUN apt update && apt install -y build-essential git python3-dev
RUN python -m venv $VIRTUAL_ENV
RUN python -m pip install --no-cache-dir hatch hatch-fancy-pypi-readme hatch-vcs
RUN . /opt/venv/bin/activate && pip install --no-cache-dir --upgrade pip setuptools \
&& cd /src && hatch build -t wheel \
&& pip install --no-cache-dir dist/*-cp* \
&& pip install black[colorama,d,uvloop]
FROM python:3.12-slim
# copy only Python packages to limit the image size
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
CMD ["/opt/venv/bin/black"]

MANIFEST.in Normal file
@@ -0,0 +1,3 @@
include *.rst *.md LICENSE
recursive-include blib2to3 *.txt *.py
recursive-include tests *.txt *.out *.diff *.py

Pipfile Normal file
@@ -0,0 +1,21 @@
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"
[packages]
attrs = ">=17.4.0"
click = "*"
setuptools = ">=38.6.0"
appdirs = "*"
[dev-packages]
pre-commit = "*"
coverage = "*"
flake8 = "*"
flake8-bugbear = "*"
flake8-mypy = "*"
mypy = "*"
recommonmark = "*"
Sphinx = "*"
twine = ">=1.11.0rc1"

Pipfile.lock generated Normal file
@@ -0,0 +1,420 @@
{
"_meta": {
"hash": {
"sha256": "b6412a09cc7dd70b0dcd83aa9f1ab659f4c0e2ba413060ab03f7ba4b064bebce"
},
"pipfile-spec": 6,
"requires": {},
"sources": [
{
"name": "pypi",
"url": "https://pypi.python.org/simple",
"verify_ssl": true
}
]
},
"default": {
"appdirs": {
"hashes": [
"sha256:9e5896d1372858f8dd3344faf4e5014d21849c756c8d5701f78f8a103b372d92",
"sha256:d8b24664561d0d34ddfaec54636d502d7cea6e29c3eaf68f3df6180863e2166e"
],
"index": "pypi",
"version": "==1.4.3"
},
"attrs": {
"hashes": [
"sha256:4b90b09eeeb9b88c35bc642cbac057e45a5fd85367b985bd2809c62b7b939265",
"sha256:e0d0eb91441a3b53dab4d9b743eafc1ac44476296a2053b6ca3af0b139faf87b"
],
"index": "pypi",
"version": "==18.1.0"
},
"click": {
"hashes": [
"sha256:29f99fc6125fbc931b758dc053b3114e55c77a6e4c6c3a2674a2dc986016381d",
"sha256:f15516df478d5a56180fbf80e68f206010e6d160fc39fa508b65e035fd75130b"
],
"index": "pypi",
"version": "==6.7"
}
},
"develop": {
"alabaster": {
"hashes": [
"sha256:2eef172f44e8d301d25aff8068fddd65f767a3f04b5f15b0f4922f113aa1c732",
"sha256:37cdcb9e9954ed60912ebc1ca12a9d12178c26637abdf124e3cde2341c257fe0"
],
"version": "==0.7.10"
},
"aspy.yaml": {
"hashes": [
"sha256:c959530fab398e2391516bc8d5146489f9273b07d87dd8ba5e8b73406f7cc1fa",
"sha256:da95110d120a9168c9f43601b9cb732f006d8f193ee2c9b402c823026e4a9387"
],
"version": "==1.1.0"
},
"attrs": {
"hashes": [
"sha256:4b90b09eeeb9b88c35bc642cbac057e45a5fd85367b985bd2809c62b7b939265",
"sha256:e0d0eb91441a3b53dab4d9b743eafc1ac44476296a2053b6ca3af0b139faf87b"
],
"index": "pypi",
"version": "==18.1.0"
},
"babel": {
"hashes": [
"sha256:8ce4cb6fdd4393edd323227cba3a077bceb2a6ce5201c902c65e730046f41f14",
"sha256:ad209a68d7162c4cff4b29cdebe3dec4cef75492df501b0049a9433c96ce6f80"
],
"version": "==2.5.3"
},
"cached-property": {
"hashes": [
"sha256:67acb3ee8234245e8aea3784a492272239d9c4b487eba2fdcce9d75460d34520",
"sha256:bf093e640b7294303c7cc7ba3212f00b7a07d0416c1d923465995c9ef860a139"
],
"version": "==1.4.2"
},
"certifi": {
"hashes": [
"sha256:13e698f54293db9f89122b0581843a782ad0934a4fe0172d2a980ba77fc61bb7",
"sha256:9fa520c1bacfb634fa7af20a76bcbd3d5fb390481724c597da32c719a7dca4b0"
],
"version": "==2018.4.16"
},
"cfgv": {
"hashes": [
"sha256:2fbaf8d082456d8fff5a68163ff59c1025a52e906914fbc738be7d8ea5b7aa4b",
"sha256:733aa2f66b5106af32d271336a571610b9808e868de0ad5690d9d5155e5960c5"
],
"version": "==1.0.0"
},
"chardet": {
"hashes": [
"sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae",
"sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691"
],
"version": "==3.0.4"
},
"commonmark": {
"hashes": [
"sha256:34d73ec8085923c023930dfc0bcd1c4286e28a2a82de094bb72fabcc0281cbe5"
],
"version": "==0.5.4"
},
"coverage": {
"hashes": [
"sha256:03481e81d558d30d230bc12999e3edffe392d244349a90f4ef9b88425fac74ba",
"sha256:0b136648de27201056c1869a6c0d4e23f464750fd9a9ba9750b8336a244429ed",
"sha256:104ab3934abaf5be871a583541e8829d6c19ce7bde2923b2751e0d3ca44db60a",
"sha256:15b111b6a0f46ee1a485414a52a7ad1d703bdf984e9ed3c288a4414d3871dcbd",
"sha256:198626739a79b09fa0a2f06e083ffd12eb55449b5f8bfdbeed1df4910b2ca640",
"sha256:1c383d2ef13ade2acc636556fd544dba6e14fa30755f26812f54300e401f98f2",
"sha256:28b2191e7283f4f3568962e373b47ef7f0392993bb6660d079c62bd50fe9d162",
"sha256:2eb564bbf7816a9d68dd3369a510be3327f1c618d2357fa6b1216994c2e3d508",
"sha256:337ded681dd2ef9ca04ef5d93cfc87e52e09db2594c296b4a0a3662cb1b41249",
"sha256:3a2184c6d797a125dca8367878d3b9a178b6fdd05fdc2d35d758c3006a1cd694",
"sha256:3c79a6f7b95751cdebcd9037e4d06f8d5a9b60e4ed0cd231342aa8ad7124882a",
"sha256:3d72c20bd105022d29b14a7d628462ebdc61de2f303322c0212a054352f3b287",
"sha256:3eb42bf89a6be7deb64116dd1cc4b08171734d721e7a7e57ad64cc4ef29ed2f1",
"sha256:4635a184d0bbe537aa185a34193898eee409332a8ccb27eea36f262566585000",
"sha256:56e448f051a201c5ebbaa86a5efd0ca90d327204d8b059ab25ad0f35fbfd79f1",
"sha256:5a13ea7911ff5e1796b6d5e4fbbf6952381a611209b736d48e675c2756f3f74e",
"sha256:69bf008a06b76619d3c3f3b1983f5145c75a305a0fea513aca094cae5c40a8f5",
"sha256:6bc583dc18d5979dc0f6cec26a8603129de0304d5ae1f17e57a12834e7235062",
"sha256:701cd6093d63e6b8ad7009d8a92425428bc4d6e7ab8d75efbb665c806c1d79ba",
"sha256:7608a3dd5d73cb06c531b8925e0ef8d3de31fed2544a7de6c63960a1e73ea4bc",
"sha256:76ecd006d1d8f739430ec50cc872889af1f9c1b6b8f48e29941814b09b0fd3cc",
"sha256:7aa36d2b844a3e4a4b356708d79fd2c260281a7390d678a10b91ca595ddc9e99",
"sha256:7d3f553904b0c5c016d1dad058a7554c7ac4c91a789fca496e7d8347ad040653",
"sha256:7e1fe19bd6dce69d9fd159d8e4a80a8f52101380d5d3a4d374b6d3eae0e5de9c",
"sha256:8c3cb8c35ec4d9506979b4cf90ee9918bc2e49f84189d9bf5c36c0c1119c6558",
"sha256:9d6dd10d49e01571bf6e147d3b505141ffc093a06756c60b053a859cb2128b1f",
"sha256:9e112fcbe0148a6fa4f0a02e8d58e94470fc6cb82a5481618fea901699bf34c4",
"sha256:ac4fef68da01116a5c117eba4dd46f2e06847a497de5ed1d64bb99a5fda1ef91",
"sha256:b8815995e050764c8610dbc82641807d196927c3dbed207f0a079833ffcf588d",
"sha256:be6cfcd8053d13f5f5eeb284aa8a814220c3da1b0078fa859011c7fffd86dab9",
"sha256:c1bb572fab8208c400adaf06a8133ac0712179a334c09224fb11393e920abcdd",
"sha256:de4418dadaa1c01d497e539210cb6baa015965526ff5afc078c57ca69160108d",
"sha256:e05cb4d9aad6233d67e0541caa7e511fa4047ed7750ec2510d466e806e0255d6",
"sha256:e4d96c07229f58cb686120f168276e434660e4358cc9cf3b0464210b04913e77",
"sha256:f3f501f345f24383c0000395b26b726e46758b71393267aeae0bd36f8b3ade80",
"sha256:f8a923a85cb099422ad5a2e345fe877bbc89a8a8b23235824a93488150e45f6e"
],
"index": "pypi",
"version": "==4.5.1"
},
"docutils": {
"hashes": [
"sha256:02aec4bd92ab067f6ff27a38a38a41173bf01bed8f89157768c1573f53e474a6",
"sha256:51e64ef2ebfb29cae1faa133b3710143496eca21c530f3f71424d77687764274",
"sha256:7a4bd47eaf6596e1295ecb11361139febe29b084a87bf005bf899f9a42edc3c6"
],
"version": "==0.14"
},
"flake8": {
"hashes": [
"sha256:7253265f7abd8b313e3892944044a365e3f4ac3fcdcfb4298f55ee9ddf188ba0",
"sha256:c7841163e2b576d435799169b78703ad6ac1bbb0f199994fc05f700b2a90ea37"
],
"index": "pypi",
"version": "==3.5.0"
},
"flake8-bugbear": {
"hashes": [
"sha256:541746f0f3b2f1a8d7278e1d2d218df298996b60b02677708560db7c7e620e3b",
"sha256:5f14a99d458e29cb92be9079c970030e0dd398b2decb179d76d39a5266ea1578"
],
"index": "pypi",
"version": "==18.2.0"
},
"flake8-mypy": {
"hashes": [
"sha256:47120db63aff631ee1f84bac6fe8e64731dc66da3efc1c51f85e15ade4a3ba18",
"sha256:cff009f4250e8391bf48990093cff85802778c345c8449d6498b62efefeebcbc"
],
"index": "pypi",
"version": "==17.8.0"
},
"identify": {
"hashes": [
"sha256:8c127f455e8503eb3a5ed5388527719e1fef00a41b5e58dc036bc116f3bb8a16",
"sha256:bb5bdf324b4a24def86757c8dd8a4e91a9c28bbf1bf8505d702ce4b8d2508270"
],
"version": "==1.0.16"
},
"idna": {
"hashes": [
"sha256:2c6a5de3089009e3da7c5dde64a141dbc8551d5b7f6cf4ed7c2568d0cc520a8f",
"sha256:8c7309c718f94b3a625cb648ace320157ad16ff131ae0af362c9f21b80ef6ec4"
],
"version": "==2.6"
},
"imagesize": {
"hashes": [
"sha256:3620cc0cadba3f7475f9940d22431fc4d407269f1be59ec9b8edcca26440cf18",
"sha256:5b326e4678b6925158ccc66a9fa3122b6106d7c876ee32d7de6ce59385b96315"
],
"version": "==1.0.0"
},
"jinja2": {
"hashes": [
"sha256:74c935a1b8bb9a3947c50a54766a969d4846290e1e788ea44c1392163723c3bd",
"sha256:f84be1bb0040caca4cea721fcbbbbd61f9be9464ca236387158b0feea01914a4"
],
"version": "==2.10"
},
"markupsafe": {
"hashes": [
"sha256:a6be69091dac236ea9c6bc7d012beab42010fa914c459791d627dad4910eb665"
],
"version": "==1.0"
},
"mccabe": {
"hashes": [
"sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42",
"sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"
],
"version": "==0.6.1"
},
"mypy": {
"hashes": [
"sha256:01cf289838f266ae7c6550c813181ee77d21eac9459dbf067e7a95a0a2db9721",
"sha256:bc251cb31bc236d9fe4bcc442c994c45fff2541f7161ee52dc949741fe9ca3dd"
],
"index": "pypi",
"version": "==0.600"
},
"nodeenv": {
"hashes": [
"sha256:dd0a34001090ff042cfdb4b0c8d6a6f7ec9baa49733f00b695bb8a8b4700ba6c"
],
"version": "==1.3.0"
},
"packaging": {
"hashes": [
"sha256:e9215d2d2535d3ae866c3d6efc77d5b24a0192cce0ff20e42896cc0664f889c0",
"sha256:f019b770dd64e585a99714f1fd5e01c7a8f11b45635aa953fd41c689a657375b"
],
"version": "==17.1"
},
"pkginfo": {
"hashes": [
"sha256:5878d542a4b3f237e359926384f1dde4e099c9f5525d236b1840cf704fa8d474",
"sha256:a39076cb3eb34c333a0dd390b568e9e1e881c7bf2cc0aee12120636816f55aee"
],
"version": "==1.4.2"
},
"pre-commit": {
"hashes": [
"sha256:01bb5f44606735ca30c8be641fa24f5760fcc599a0260ead0067bcde2f0305f9",
"sha256:823452163aa9fb024a9ff30947ba7f5a2778708db7554a4d36438b9bbead6bbb"
],
"index": "pypi",
"version": "==1.8.2"
},
"pycodestyle": {
"hashes": [
"sha256:682256a5b318149ca0d2a9185d365d8864a768a28db66a84a2ea946bcc426766",
"sha256:6c4245ade1edfad79c3446fadfc96b0de2759662dc29d07d80a6f27ad1ca6ba9"
],
"version": "==2.3.1"
},
"pyflakes": {
"hashes": [
"sha256:08bd6a50edf8cffa9fa09a463063c425ecaaf10d1eb0335a7e8b1401aef89e6f",
"sha256:8d616a382f243dbf19b54743f280b80198be0bca3a5396f1d2e1fca6223e8805"
],
"version": "==1.6.0"
},
"pygments": {
"hashes": [
"sha256:78f3f434bcc5d6ee09020f92ba487f95ba50f1e3ef83ae96b9d5ffa1bab25c5d",
"sha256:dbae1046def0efb574852fab9e90209b23f556367b5a320c0bcb871c77c3e8cc"
],
"version": "==2.2.0"
},
"pyparsing": {
"hashes": [
"sha256:0832bcf47acd283788593e7a0f542407bd9550a55a8a8435214a1960e04bcb04",
"sha256:281683241b25fe9b80ec9d66017485f6deff1af5cde372469134b56ca8447a07",
"sha256:8f1e18d3fd36c6795bb7e02a39fd05c611ffc2596c1e0d995d34d67630426c18",
"sha256:9e8143a3e15c13713506886badd96ca4b579a87fbdf49e550dbfc057d6cb218e",
"sha256:b8b3117ed9bdf45e14dcc89345ce638ec7e0e29b2b579fa1ecf32ce45ebac8a5",
"sha256:e4d45427c6e20a59bf4f88c639dcc03ce30d193112047f94012102f235853a58",
"sha256:fee43f17a9c4087e7ed1605bd6df994c6173c1e977d7ade7b651292fab2bd010"
],
"version": "==2.2.0"
},
"pytz": {
"hashes": [
"sha256:65ae0c8101309c45772196b21b74c46b2e5d11b6275c45d251b150d5da334555",
"sha256:c06425302f2cf668f1bba7a0a03f3c1d34d4ebeef2c72003da308b3947c7f749"
],
"version": "==2018.4"
},
"pyyaml": {
"hashes": [
"sha256:0c507b7f74b3d2dd4d1322ec8a94794927305ab4cebbe89cc47fe5e81541e6e8",
"sha256:16b20e970597e051997d90dc2cddc713a2876c47e3d92d59ee198700c5427736",
"sha256:3262c96a1ca437e7e4763e2843746588a965426550f3797a79fca9c6199c431f",
"sha256:326420cbb492172dec84b0f65c80942de6cedb5233c413dd824483989c000608",
"sha256:4474f8ea030b5127225b8894d626bb66c01cda098d47a2b0d3429b6700af9fd8",
"sha256:592766c6303207a20efc445587778322d7f73b161bd994f227adaa341ba212ab",
"sha256:5ac82e411044fb129bae5cfbeb3ba626acb2af31a8d17d175004b70862a741a7",
"sha256:5f84523c076ad14ff5e6c037fe1c89a7f73a3e04cf0377cb4d017014976433f3",
"sha256:827dc04b8fa7d07c44de11fabbc888e627fa8293b695e0f99cb544fdfa1bf0d1",
"sha256:b4c423ab23291d3945ac61346feeb9a0dc4184999ede5e7c43e1ffb975130ae6",
"sha256:bc6bced57f826ca7cb5125a10b23fd0f2fff3b7c4701d64c439a300ce665fff8",
"sha256:c01b880ec30b5a6e6aa67b09a2fe3fb30473008c85cd6a67359a1b15ed6d83a4",
"sha256:ca233c64c6e40eaa6c66ef97058cdc80e8d0157a443655baa1b2966e812807ca",
"sha256:e863072cdf4c72eebf179342c94e6989c67185842d9997960b3e69290b2fa269"
],
"version": "==3.12"
},
"recommonmark": {
"hashes": [
"sha256:6e29c723abcf5533842376d87c4589e62923ecb6002a8e059eb608345ddaff9d",
"sha256:cd8bf902e469dae94d00367a8197fb7b81fcabc9cfb79d520e0d22d0fbeaa8b7"
],
"index": "pypi",
"version": "==0.4.0"
},
"requests": {
"hashes": [
"sha256:6a1b267aa90cac58ac3a765d067950e7dbbf75b1da07e895d1f594193a40a38b",
"sha256:9c443e7324ba5b85070c4a818ade28bfabedf16ea10206da1132edaa6dda237e"
],
"version": "==2.18.4"
},
"requests-toolbelt": {
"hashes": [
"sha256:42c9c170abc2cacb78b8ab23ac957945c7716249206f90874651971a4acff237",
"sha256:f6a531936c6fa4c6cfce1b9c10d5c4f498d16528d2a54a22ca00011205a187b5"
],
"version": "==0.8.0"
},
"six": {
"hashes": [
"sha256:70e8a77beed4562e7f14fe23a786b54f6296e34344c23bc42f07b15018ff98e9",
"sha256:832dc0e10feb1aa2c68dcc57dbb658f1c7e65b9b61af69048abc87a2db00a0eb"
],
"version": "==1.11.0"
},
"snowballstemmer": {
"hashes": [
"sha256:919f26a68b2c17a7634da993d91339e288964f93c274f1343e3bbbe2096e1128",
"sha256:9f3bcd3c401c3e862ec0ebe6d2c069ebc012ce142cce209c098ccb5b09136e89"
],
"version": "==1.2.1"
},
"sphinx": {
"hashes": [
"sha256:2e7ad92e96eff1b2006cf9f0cdb2743dacbae63755458594e9e8238b0c3dc60b",
"sha256:e9b1a75a3eae05dded19c80eb17325be675e0698975baae976df603b6ed1eb10"
],
"index": "pypi",
"version": "==1.7.4"
},
"sphinxcontrib-websupport": {
"hashes": [
"sha256:7a85961326aa3a400cd4ad3c816d70ed6f7c740acd7ce5d78cd0a67825072eb9",
"sha256:f4932e95869599b89bf4f80fc3989132d83c9faa5bf633e7b5e0c25dffb75da2"
],
"version": "==1.0.1"
},
"tqdm": {
"hashes": [
"sha256:9fc19da10d7c962613cbcb9cdced41230deb31d9e20332da84c96917ff534281",
"sha256:ce205451a27b6050faed0bb2bcbea96c6a550f8c27cd2b5441d72e948113ad18"
],
"version": "==4.23.3"
},
"twine": {
"hashes": [
"sha256:08eb132bbaec40c6d25b358f546ec1dc96ebd2638a86eea68769d9e67fe2b129",
"sha256:2fd9a4d9ff0bcacf41fdc40c8cb0cfaef1f1859457c9653fd1b92237cc4e9f25"
],
"index": "pypi",
"version": "==1.11.0"
},
"typed-ast": {
"hashes": [
"sha256:0948004fa228ae071054f5208840a1e88747a357ec1101c17217bfe99b299d58",
"sha256:25d8feefe27eb0303b73545416b13d108c6067b846b543738a25ff304824ed9a",
"sha256:29464a177d56e4e055b5f7b629935af7f49c196be47528cc94e0a7bf83fbc2b9",
"sha256:2e214b72168ea0275efd6c884b114ab42e316de3ffa125b267e732ed2abda892",
"sha256:3e0d5e48e3a23e9a4d1a9f698e32a542a4a288c871d33ed8df1b092a40f3a0f9",
"sha256:519425deca5c2b2bdac49f77b2c5625781abbaf9a809d727d3a5596b30bb4ded",
"sha256:57fe287f0cdd9ceaf69e7b71a2e94a24b5d268b35df251a88fef5cc241bf73aa",
"sha256:668d0cec391d9aed1c6a388b0d5b97cd22e6073eaa5fbaa6d2946603b4871efe",
"sha256:68ba70684990f59497680ff90d18e756a47bf4863c604098f10de9716b2c0bdd",
"sha256:6de012d2b166fe7a4cdf505eee3aaa12192f7ba365beeefaca4ec10e31241a85",
"sha256:79b91ebe5a28d349b6d0d323023350133e927b4de5b651a8aa2db69c761420c6",
"sha256:8550177fa5d4c1f09b5e5f524411c44633c80ec69b24e0e98906dd761941ca46",
"sha256:a8034021801bc0440f2e027c354b4eafd95891b573e12ff0418dec385c76785c",
"sha256:bc978ac17468fe868ee589c795d06777f75496b1ed576d308002c8a5756fb9ea",
"sha256:c05b41bc1deade9f90ddc5d988fe506208019ebba9f2578c622516fd201f5863",
"sha256:c9b060bd1e5a26ab6e8267fd46fc9e02b54eb15fffb16d112d4c7b1c12987559",
"sha256:edb04bdd45bfd76c8292c4d9654568efaedf76fe78eb246dde69bdb13b2dad87",
"sha256:f19f2a4f547505fe9072e15f6f4ae714af51b5a681a97f187971f50c283193b6"
],
"version": "==1.1.0"
},
"urllib3": {
"hashes": [
"sha256:06330f386d6e4b195fbfc736b297f58c5a892e4440e54d294d7004e3a9bbea1b",
"sha256:cc44da8e1145637334317feebd728bd869a35285b93cbb4cca2577da7e62db4f"
],
"version": "==1.22"
},
"virtualenv": {
"hashes": [
"sha256:1d7e241b431e7afce47e77f8843a276f652699d1fa4f93b9d8ce0076fd7b0b54",
"sha256:e8e05d4714a1c51a2f5921e62f547fcb0f713ebbe959e0a7f585cc8bef71d11f"
],
"version": "==15.2.0"
}
}
}

941
README.md

File diff suppressed because it is too large

View File

@ -1,11 +0,0 @@
# Security Policy
## Supported Versions
Only the latest non-prerelease version is supported.
## Security contact information
To report a security vulnerability, please use the
[Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the
fix and disclosure.

View File

@ -1,79 +0,0 @@
name: "Black"
description: "The uncompromising Python code formatter."
author: "Łukasz Langa and contributors to Black"
inputs:
options:
description:
"Options passed to Black. Use `black --help` to see available options. Default:
'--check --diff'"
required: false
default: "--check --diff"
src:
description: "Source to run Black. Default: '.'"
required: false
default: "."
jupyter:
description:
"Set this option to true to include Jupyter Notebook files. Default: false"
required: false
default: false
black_args:
description: "[DEPRECATED] Black input arguments."
required: false
default: ""
deprecationMessage:
"Input `with.black_args` is deprecated. Use `with.options` and `with.src` instead."
version:
description: 'Python Version specifier (PEP440) - e.g. "21.5b1"'
required: false
default: ""
use_pyproject:
description: Read Black version specifier from pyproject.toml if `true`.
required: false
default: "false"
summary:
description: "Whether to add the output to the workflow summary"
required: false
default: true
branding:
color: "black"
icon: "check-circle"
runs:
using: composite
steps:
- name: black
run: |
# Even when black fails, do not close the shell
set +e
if [ "$RUNNER_OS" == "Windows" ]; then
runner="python"
else
runner="python3"
fi
out=$(${runner} $GITHUB_ACTION_PATH/action/main.py)
exit_code=$?
# Display the raw output in the step
echo "${out}"
if [ "${{ inputs.summary }}" == "true" ]; then
# Display the Markdown output in the job summary
echo "\`\`\`python" >> $GITHUB_STEP_SUMMARY
echo "${out}" >> $GITHUB_STEP_SUMMARY
echo "\`\`\`" >> $GITHUB_STEP_SUMMARY
fi
# Exit with the exit-code returned by Black
exit ${exit_code}
env:
# TODO: Remove once https://github.com/actions/runner/issues/665 is fixed.
INPUT_OPTIONS: ${{ inputs.options }}
INPUT_SRC: ${{ inputs.src }}
INPUT_JUPYTER: ${{ inputs.jupyter }}
INPUT_BLACK_ARGS: ${{ inputs.black_args }}
INPUT_VERSION: ${{ inputs.version }}
INPUT_USE_PYPROJECT: ${{ inputs.use_pyproject }}
pythonioencoding: utf-8
shell: bash

View File

@ -1,182 +0,0 @@
import os
import re
import shlex
import shutil
import sys
from pathlib import Path
from subprocess import PIPE, STDOUT, run
from typing import Union
ACTION_PATH = Path(os.environ["GITHUB_ACTION_PATH"])
ENV_PATH = ACTION_PATH / ".black-env"
ENV_BIN = ENV_PATH / ("Scripts" if sys.platform == "win32" else "bin")
OPTIONS = os.getenv("INPUT_OPTIONS", default="")
SRC = os.getenv("INPUT_SRC", default="")
JUPYTER = os.getenv("INPUT_JUPYTER") == "true"
BLACK_ARGS = os.getenv("INPUT_BLACK_ARGS", default="")
VERSION = os.getenv("INPUT_VERSION", default="")
USE_PYPROJECT = os.getenv("INPUT_USE_PYPROJECT") == "true"
BLACK_VERSION_RE = re.compile(r"^black([^A-Z0-9._-]+.*)$", re.IGNORECASE)
EXTRAS_RE = re.compile(r"\[.*\]")
EXPORT_SUBST_FAIL_RE = re.compile(r"\$Format:.*\$")
def determine_version_specifier() -> str:
"""Determine the version of Black to install.
The version can be specified either via the `with.version` input or via the
pyproject.toml file if `with.use_pyproject` is set to `true`.
"""
if USE_PYPROJECT and VERSION:
print(
"::error::'with.version' and 'with.use_pyproject' inputs are "
"mutually exclusive.",
file=sys.stderr,
flush=True,
)
sys.exit(1)
if USE_PYPROJECT:
return read_version_specifier_from_pyproject()
elif VERSION and VERSION[0] in "0123456789":
return f"=={VERSION}"
else:
return VERSION
def read_version_specifier_from_pyproject() -> str:
if sys.version_info < (3, 11):
print(
"::error::'with.use_pyproject' input requires Python 3.11 or later.",
file=sys.stderr,
flush=True,
)
sys.exit(1)
import tomllib # type: ignore[import-not-found,unreachable]
try:
with Path("pyproject.toml").open("rb") as fp:
pyproject = tomllib.load(fp)
except FileNotFoundError:
print(
"::error::'with.use_pyproject' input requires a pyproject.toml file.",
file=sys.stderr,
flush=True,
)
sys.exit(1)
version = pyproject.get("tool", {}).get("black", {}).get("required-version")
if version is not None:
return f"=={version}"
arrays = [
*pyproject.get("dependency-groups", {}).values(),
pyproject.get("project", {}).get("dependencies"),
*pyproject.get("project", {}).get("optional-dependencies", {}).values(),
]
for array in arrays:
version = find_black_version_in_array(array)
if version is not None:
break
if version is None:
print(
"::error::'black' dependency missing from pyproject.toml.",
file=sys.stderr,
flush=True,
)
sys.exit(1)
return version
def find_black_version_in_array(array: object) -> Union[str, None]:
if not isinstance(array, list):
return None
try:
for item in array:
# Rudimentary PEP 508 parsing.
item = item.split(";")[0]
item = EXTRAS_RE.sub("", item).strip()
if item == "black":
print(
"::error::Version specifier missing for 'black' dependency in "
"pyproject.toml.",
file=sys.stderr,
flush=True,
)
sys.exit(1)
elif m := BLACK_VERSION_RE.match(item):
return m.group(1).strip()
except TypeError:
pass
return None
run([sys.executable, "-m", "venv", str(ENV_PATH)], check=True)
version_specifier = determine_version_specifier()
if JUPYTER:
extra_deps = "[colorama,jupyter]"
else:
extra_deps = "[colorama]"
if version_specifier:
req = f"black{extra_deps}{version_specifier}"
else:
describe_name = ""
with open(ACTION_PATH / ".git_archival.txt", encoding="utf-8") as fp:
for line in fp:
if line.startswith("describe-name: "):
describe_name = line[len("describe-name: ") :].rstrip()
break
if not describe_name:
print("::error::Failed to detect action version.", file=sys.stderr, flush=True)
sys.exit(1)
# expected format is one of:
# - 23.1.0
# - 23.1.0-51-g448bba7
# - $Format:%(describe:tags=true,match=*[0-9]*)$ (if export-subst fails)
if (
describe_name.count("-") < 2
and EXPORT_SUBST_FAIL_RE.match(describe_name) is None
):
# the action's commit matches a tag exactly, install exact version from PyPI
req = f"black{extra_deps}=={describe_name}"
else:
# the action's commit does not match any tag, install from the local git repo
req = f".{extra_deps}"
print(f"Installing {req}...", flush=True)
pip_proc = run(
[str(ENV_BIN / "python"), "-m", "pip", "install", req],
stdout=PIPE,
stderr=STDOUT,
encoding="utf-8",
cwd=ACTION_PATH,
)
if pip_proc.returncode:
print(pip_proc.stdout)
print("::error::Failed to install Black.", file=sys.stderr, flush=True)
sys.exit(pip_proc.returncode)
base_cmd = [str(ENV_BIN / "black")]
if BLACK_ARGS:
# TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.
proc = run(
[*base_cmd, *shlex.split(BLACK_ARGS)],
stdout=PIPE,
stderr=STDOUT,
encoding="utf-8",
)
else:
proc = run(
[*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)],
stdout=PIPE,
stderr=STDOUT,
encoding="utf-8",
)
shutil.rmtree(ENV_PATH, ignore_errors=True)
print(proc.stdout)
sys.exit(proc.returncode)
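
For reference, the version-resolution rules above can be exercised in isolation: a bare version is pinned with `==`, any other PEP 440 specifier passes through unchanged, and dependency strings get the rudimentary PEP 508 treatment from find_black_version_in_array(). A minimal self-contained sketch (all input values hypothetical):

import re

BLACK_VERSION_RE = re.compile(r"^black([^A-Z0-9._-]+.*)$", re.IGNORECASE)
EXTRAS_RE = re.compile(r"\[.*\]")

def resolve(version):
    # Bare versions ("24.1.0") are pinned; PEP 440 specifiers pass through.
    return f"=={version}" if version and version[0] in "0123456789" else version

assert resolve("24.1.0") == "==24.1.0"
assert resolve(">=23.1,<25") == ">=23.1,<25"

# Rudimentary PEP 508 parsing, mirroring find_black_version_in_array():
item = "black[jupyter]==24.1.0; python_version >= '3.8'"  # hypothetical entry
item = EXTRAS_RE.sub("", item.split(";")[0]).strip()
m = BLACK_VERSION_RE.match(item)
assert m is not None and m.group(1).strip() == "==24.1.0"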

View File

@ -1,243 +0,0 @@
python3 << EndPython3
import collections
import os
import sys
import vim
def strtobool(text):
if text.lower() in ['y', 'yes', 't', 'true', 'on', '1']:
return True
if text.lower() in ['n', 'no', 'f', 'false', 'off', '0']:
return False
raise ValueError(f"{text} is not convertible to boolean")
class Flag(collections.namedtuple("FlagBase", "name, cast")):
@property
def var_name(self):
return self.name.replace("-", "_")
@property
def vim_rc_name(self):
name = self.var_name
if name == "line_length":
name = name.replace("_", "")
return "g:black_" + name
FLAGS = [
Flag(name="line_length", cast=int),
Flag(name="fast", cast=strtobool),
Flag(name="skip_string_normalization", cast=strtobool),
Flag(name="quiet", cast=strtobool),
Flag(name="skip_magic_trailing_comma", cast=strtobool),
Flag(name="preview", cast=strtobool),
]
def _get_python_binary(exec_prefix, pyver):
try:
default = vim.eval("g:pymode_python").strip()
except vim.error:
default = ""
if default and os.path.exists(default):
return default
if sys.platform[:3] == "win":
return exec_prefix / 'python.exe'
bin_path = exec_prefix / "bin"
exec_path = (bin_path / f"python{pyver[0]}.{pyver[1]}").resolve()
if exec_path.exists():
return exec_path
# It is possible that some environments may only have python3
exec_path = (bin_path / f"python3").resolve()
if exec_path.exists():
return exec_path
raise ValueError("python executable not found")
def _get_pip(venv_path):
if sys.platform[:3] == "win":
return venv_path / 'Scripts' / 'pip.exe'
return venv_path / 'bin' / 'pip'
def _get_virtualenv_site_packages(venv_path, pyver):
if sys.platform[:3] == "win":
return venv_path / 'Lib' / 'site-packages'
return venv_path / 'lib' / f'python{pyver[0]}.{pyver[1]}' / 'site-packages'
def _initialize_black_env(upgrade=False):
if vim.eval("g:black_use_virtualenv ? 'true' : 'false'") == "false":
if upgrade:
print("Upgrade disabled due to g:black_use_virtualenv being disabled.")
print("Either use your system package manager (or pip) to upgrade black separately,")
print("or modify your vimrc to have 'let g:black_use_virtualenv = 1'.")
return False
else:
# Nothing needed to be done.
return True
pyver = sys.version_info[:3]
if pyver < (3, 9):
print("Sorry, Black requires Python 3.9+ to run.")
return False
from pathlib import Path
import subprocess
import venv
virtualenv_path = Path(vim.eval("g:black_virtualenv")).expanduser()
virtualenv_site_packages = str(_get_virtualenv_site_packages(virtualenv_path, pyver))
first_install = False
if not virtualenv_path.is_dir():
print('Please wait, one time setup for Black.')
_executable = sys.executable
_base_executable = getattr(sys, "_base_executable", _executable)
try:
executable = str(_get_python_binary(Path(sys.exec_prefix), pyver))
sys.executable = executable
sys._base_executable = executable
print(f'Creating a virtualenv in {virtualenv_path}...')
print('(this path can be customized in .vimrc by setting g:black_virtualenv)')
venv.create(virtualenv_path, with_pip=True)
except Exception:
print('Encountered exception while creating virtualenv (see traceback below).')
print(f'Removing {virtualenv_path}...')
import shutil
shutil.rmtree(virtualenv_path)
raise
finally:
sys.executable = _executable
sys._base_executable = _base_executable
first_install = True
if first_install:
print('Installing Black with pip...')
if upgrade:
print('Upgrading Black with pip...')
if first_install or upgrade:
subprocess.run([str(_get_pip(virtualenv_path)), 'install', '-U', 'black'], stdout=subprocess.PIPE)
print('DONE! You are all set, thanks for waiting ✨ 🍰 ✨')
if first_install:
print('Pro-tip: to upgrade Black in the future, use the :BlackUpgrade command and restart Vim.\n')
if virtualenv_site_packages not in sys.path:
sys.path.insert(0, virtualenv_site_packages)
return True
if _initialize_black_env():
import black
import time
def get_target_version(tv):
if isinstance(tv, black.TargetVersion):
return tv
ret = None
try:
ret = black.TargetVersion[tv.upper()]
except KeyError:
print(f"WARNING: Target version {tv!r} not recognized by Black, using default target")
return ret
def Black(**kwargs):
"""
kwargs allows you to override ``target_versions`` argument of
``black.FileMode``.
``target_version`` needs to be cleaned because ``black.FileMode``
expects the ``target_versions`` argument to be a set of TargetVersion enums.
Allow kwargs["target_version"] to be a string to allow
to type it more quickly.
Using also target_version instead of target_versions to remain
consistent to Black's documentation of the structure of pyproject.toml.
"""
start = time.time()
configs = get_configs()
black_kwargs = {}
if "target_version" in kwargs:
target_version = kwargs["target_version"]
if not isinstance(target_version, (list, set)):
target_version = [target_version]
target_version = set(filter(lambda x: x, map(lambda tv: get_target_version(tv), target_version)))
black_kwargs["target_versions"] = target_version
mode = black.FileMode(
line_length=configs["line_length"],
string_normalization=not configs["skip_string_normalization"],
is_pyi=vim.current.buffer.name.endswith('.pyi'),
magic_trailing_comma=not configs["skip_magic_trailing_comma"],
preview=configs["preview"],
**black_kwargs,
)
quiet = configs["quiet"]
buffer_str = '\n'.join(vim.current.buffer) + '\n'
try:
new_buffer_str = black.format_file_contents(
buffer_str,
fast=configs["fast"],
mode=mode,
)
except black.NothingChanged:
if not quiet:
print(f'Black: already well formatted, good job. (took {time.time() - start:.4f}s)')
except Exception as exc:
print(f'Black: {exc}')
else:
current_buffer = vim.current.window.buffer
cursors = []
for i, tabpage in enumerate(vim.tabpages):
if tabpage.valid:
for j, window in enumerate(tabpage.windows):
if window.valid and window.buffer == current_buffer:
cursors.append((i, j, window.cursor))
vim.current.buffer[:] = new_buffer_str.split('\n')[:-1]
for i, j, cursor in cursors:
window = vim.tabpages[i].windows[j]
try:
window.cursor = cursor
except vim.error:
window.cursor = (len(window.buffer), 0)
if not quiet:
print(f'Black: reformatted in {time.time() - start:.4f}s.')
def get_configs():
filename = vim.eval("@%")
path_pyproject_toml = black.find_pyproject_toml((filename,))
if path_pyproject_toml:
toml_config = black.parse_pyproject_toml(path_pyproject_toml)
else:
toml_config = {}
return {
flag.var_name: toml_config.get(flag.name, flag.cast(vim.eval(flag.vim_rc_name)))
for flag in FLAGS
}
def BlackUpgrade():
_initialize_black_env(upgrade=True)
def BlackVersion():
print(f'Black, version {black.__version__} on Python {sys.version}.')
EndPython3
function black#Black(...)
let kwargs = {}
for arg in a:000
let arg_list = split(arg, '=')
let kwargs[arg_list[0]] = arg_list[1]
endfor
python3 << EOF
import vim
kwargs = vim.eval("kwargs")
EOF
:py3 Black(**kwargs)
endfunction
function black#BlackUpgrade()
:py3 BlackUpgrade()
endfunction
function black#BlackVersion()
:py3 BlackVersion()
endfunction
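
The precedence that get_configs() implements above is worth spelling out: a key found in pyproject.toml wins outright; otherwise the g:black_* vimrc variable is evaluated and cast. A minimal sketch with hypothetical values, using plain dicts in place of vim.eval() and the parsed TOML:

def strtobool(text):
    if text.lower() in ("y", "yes", "t", "true", "on", "1"):
        return True
    if text.lower() in ("n", "no", "f", "false", "off", "0"):
        return False
    raise ValueError(f"{text} is not convertible to boolean")

toml_config = {"line_length": 100}               # stand-in for parse_pyproject_toml()
vim_values = {"line_length": "88", "fast": "0"}  # stand-in for vim.eval("g:black_*")

configs = {
    "line_length": toml_config.get("line_length", int(vim_values["line_length"])),
    "fast": toml_config.get("fast", strtobool(vim_values["fast"])),
}
assert configs == {"line_length": 100, "fast": False}  # pyproject.toml wins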

3117
black.py Normal file

File diff suppressed because it is too large

173
blib2to3/Grammar.txt Normal file
View File

@ -0,0 +1,173 @@
# Grammar for 2to3. This grammar supports Python 2.x and 3.x.
# NOTE WELL: You should also follow all the steps listed at
# https://devguide.python.org/grammar/
# Start symbols for the grammar:
# file_input is a module or sequence of commands read from an input file;
# single_input is a single interactive statement;
# eval_input is the input for the eval() and input() functions.
# NB: compound_stmt in single_input is followed by extra NEWLINE!
file_input: (NEWLINE | stmt)* ENDMARKER
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
eval_input: testlist NEWLINE* ENDMARKER
decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef | async_funcdef)
async_funcdef: ASYNC funcdef
funcdef: 'def' NAME parameters ['->' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: ((tfpdef ['=' test] ',')*
('*' [tname] (',' tname ['=' test])* [',' ['**' tname [',']]] | '**' tname [','])
| tfpdef ['=' test] (',' tfpdef ['=' test])* [','])
tname: NAME [':' test]
tfpdef: tname | '(' tfplist ')'
tfplist: tfpdef (',' tfpdef)* [',']
varargslist: ((vfpdef ['=' test] ',')*
('*' [vname] (',' vname ['=' test])* [',' ['**' vname [',']]] | '**' vname [','])
| vfpdef ['=' test] (',' vfpdef ['=' test])* [','])
vname: NAME
vfpdef: vname | '(' vfplist ')'
vfplist: vfpdef (',' vfpdef)* [',']
stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | print_stmt | del_stmt | pass_stmt | flow_stmt |
import_stmt | global_stmt | exec_stmt | assert_stmt)
expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |
('=' (yield_expr|testlist_star_expr))*)
annassign: ':' test ['=' test]
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |
'<<=' | '>>=' | '**=' | '//=')
# For normal and annotated assignments, additional restrictions enforced by the interpreter
print_stmt: 'print' ( [ test (',' test)* [','] ] |
'>>' test [ (',' test)+ [','] ] )
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test | ',' test [',' test]]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
import_from: ('from' ('.'* dotted_name | '.'+)
'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: ('global' | 'nonlocal') NAME (',' NAME)*
exec_stmt: 'exec' expr ['in' test [',' test]]
assert_stmt: 'assert' test [',' test]
compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt
async_stmt: ASYNC (funcdef | with_stmt | for_stmt)
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
((except_clause ':' suite)+
['else' ':' suite]
['finally' ':' suite] |
'finally' ':' suite))
with_stmt: 'with' with_item (',' with_item)* ':' suite
with_item: test ['as' expr]
with_var: 'as' expr
# NB compile.c makes sure that the default except clause is last
except_clause: 'except' [test [(',' | 'as') test]]
suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT
# Backward compatibility cruft to support:
# [ x for x in lambda: True, lambda: False if x() ]
# even while also allowing:
# lambda x: 5 if x else 2
# (But not a mix of the two)
testlist_safe: old_test [(',' old_test)+ [',']]
old_test: or_test | old_lambdef
old_lambdef: 'lambda' [varargslist] ':' old_test
test: or_test ['if' or_test 'else' test] | lambdef
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: expr (comp_op expr)*
comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: '*' expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'@'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: [AWAIT] atom trailer* ['**' factor]
atom: ('(' [yield_expr|testlist_gexp] ')' |
'[' [listmaker] ']' |
'{' [dictsetmaker] '}' |
'`' testlist1 '`' |
NAME | NUMBER | STRING+ | '.' '.' '.')
listmaker: (test|star_expr) ( old_comp_for | (',' (test|star_expr))* [','] )
testlist_gexp: (test|star_expr) ( old_comp_for | (',' (test|star_expr))* [','] )
lambdef: 'lambda' [varargslist] ':' test
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
dictsetmaker: ( ((test ':' test | '**' expr)
(comp_for | (',' (test ':' test | '**' expr))* [','])) |
((test | star_expr)
(comp_for | (',' (test | star_expr))* [','])) )
classdef: 'class' NAME ['(' [arglist] ')'] ':' suite
arglist: argument (',' argument)* [',']
# "test '=' test" is really "keyword '=' test", but we have no such token.
# These need to be in a single rule to avoid grammar that is ambiguous
# to our LL(1) parser. Even though 'test' includes '*expr' in star_expr,
# we explicitly match '*' here, too, to give it proper precedence.
# Illegal combinations and orderings are blocked in ast.c:
# multiple (test comp_for) arguments are blocked; keyword unpackings
# that precede iterable unpackings are blocked; etc.
argument: ( test [comp_for] |
test '=' test |
'**' test |
'*' test )
comp_iter: comp_for | comp_if
comp_for: [ASYNC] 'for' exprlist 'in' or_test [comp_iter]
comp_if: 'if' old_test [comp_iter]
# As noted above, testlist_safe extends the syntax allowed in list
# comprehensions and generators. We can't use it indiscriminately in all
# derivations using a comp_for-like pattern because the testlist_safe derivation
# contains comma which clashes with trailing comma in arglist.
#
# This was an issue because the parser would not follow the correct derivation
# when parsing syntactically valid Python code. Since testlist_safe was created
# specifically to handle list comprehensions and generator expressions enclosed
# with parentheses, it's safe to only use it in those. That avoids the issue; we
# can parse code like set(x for x in [],).
#
# The syntax supported by this set of rules is not a valid Python 3 syntax,
# hence the prefix "old".
#
# See https://bugs.python.org/issue27494
old_comp_iter: old_comp_for | old_comp_if
old_comp_for: [ASYNC] 'for' exprlist 'in' testlist_safe [old_comp_iter]
old_comp_if: 'if' old_test [old_comp_iter]
testlist1: test (',' test)*
# not used in grammar, but may appear in "node" passed from Parser to Compiler
encoding_decl: NAME
yield_expr: 'yield' [yield_arg]
yield_arg: 'from' test | testlist
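
Since the grammar above covers both Python 2 and 3, it accepts constructs that CPython 3 rejects outright. A quick demonstration using the stdlib lib2to3 driver (available through Python 3.12), whose interface blib2to3 mirrors:

from lib2to3 import pygram, pytree
from lib2to3.pgen2 import driver

d = driver.Driver(pygram.python_grammar, convert=pytree.convert)
# print_stmt with the '>>' chevron is Python 2 only, yet it parses fine:
tree = d.parse_string('print >> sys.stderr, "hello"\n')
print(type(tree))  # a pytree Node rooted at file_input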

Binary file not shown.

Binary file not shown.

Binary file not shown.

View File

@ -2,12 +2,12 @@ A. HISTORY OF THE SOFTWARE
 ==========================
 Python was created in the early 1990s by Guido van Rossum at Stichting
-Mathematisch Centrum (CWI, see https://www.cwi.nl) in the Netherlands
+Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands
 as a successor of a language called ABC. Guido remains Python's
 principal author, although it includes many contributions from others.
 In 1995, Guido continued his work on Python at the Corporation for
-National Research Initiatives (CNRI, see https://www.cnri.reston.va.us)
+National Research Initiatives (CNRI, see http://www.cnri.reston.va.us)
 in Reston, Virginia where he released several versions of the
 software.
@ -19,7 +19,7 @@ https://www.python.org/psf/) was formed, a non-profit organization
 created specifically to own Python-related Intellectual Property.
 Zope Corporation was a sponsoring member of the PSF.
-All Python releases are Open Source (see https://opensource.org for
+All Python releases are Open Source (see http://www.opensource.org for
 the Open Source Definition). Historically, most, but not all, Python
 releases have also been GPL-compatible; the table below summarizes
 the various releases.

Binary file not shown.

Binary file not shown.

Binary file not shown.

13
blib2to3/README Normal file
View File

@ -0,0 +1,13 @@
A subset of lib2to3 taken from Python 3.7.0b2.
Commit hash: 9c17e3a1987004b8bcfbe423953aad84493a7984
Reasons for forking:
- consistent handling of f-strings for users of Python < 3.6.2
- backport of BPO-33064 that fixes parsing files with trailing commas after
*args and **kwargs
- backport of GH-6143 that restores the ability to reformat legacy usage of
`async`
- support all types of string literals
- better ability to debug (better reprs)
- INDENT and DEDENT don't hold whitespace and comment prefixes
- ability to Cythonize

1
blib2to3/__init__.py Normal file
View File

@ -0,0 +1 @@
#empty

1
blib2to3/__init__.pyi Normal file
View File

@ -0,0 +1 @@
# Stubs for lib2to3 (Python 3.6)

View File

@ -0,0 +1,10 @@
# Stubs for lib2to3.pgen2 (Python 3.6)
import os
import sys
from typing import Text, Union
if sys.version_info >= (3, 6):
_Path = Union[Text, os.PathLike]
else:
_Path = Text

View File

@ -1,8 +1,6 @@
 # Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
 # Licensed to PSF under a Contributor Agreement.
-# mypy: ignore-errors
 """Convert graminit.[ch] spit out by pgen to Python code.
 Pgen is the Python parser generator. It is useful to quickly create a
@ -63,7 +61,7 @@ def parse_graminit_h(self, filename):
         try:
             f = open(filename)
         except OSError as err:
-            print(f"Can't open {filename}: {err}")
+            print("Can't open %s: %s" % (filename, err))
             return False
         self.symbol2number = {}
         self.number2symbol = {}
@ -72,7 +70,8 @@ def parse_graminit_h(self, filename):
             lineno += 1
             mo = re.match(r"^#define\s+(\w+)\s+(\d+)$", line)
             if not mo and line.strip():
-                print(f"{filename}({lineno}): can't parse {line.strip()}")
+                print("%s(%s): can't parse %s" % (filename, lineno,
+                                                  line.strip()))
             else:
                 symbol, number = mo.groups()
                 number = int(number)
@ -113,44 +112,45 @@ def parse_graminit_c(self, filename):
         try:
             f = open(filename)
         except OSError as err:
-            print(f"Can't open {filename}: {err}")
+            print("Can't open %s: %s" % (filename, err))
             return False
         # The code below essentially uses f's iterator-ness!
         lineno = 0
         # Expect the two #include lines
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         assert line == '#include "pgenheaders.h"\n', (lineno, line)
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         assert line == '#include "grammar.h"\n', (lineno, line)
         # Parse the state definitions
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         allarcs = {}
         states = []
         while line.startswith("static arc "):
             while line.startswith("static arc "):
-                mo = re.match(r"static arc arcs_(\d+)_(\d+)\[(\d+)\] = {$", line)
+                mo = re.match(r"static arc arcs_(\d+)_(\d+)\[(\d+)\] = {$",
+                              line)
                 assert mo, (lineno, line)
                 n, m, k = list(map(int, mo.groups()))
                 arcs = []
                 for _ in range(k):
-                    lineno, line = lineno + 1, next(f)
+                    lineno, line = lineno+1, next(f)
                     mo = re.match(r"\s+{(\d+), (\d+)},$", line)
                     assert mo, (lineno, line)
                     i, j = list(map(int, mo.groups()))
                     arcs.append((i, j))
-                lineno, line = lineno + 1, next(f)
+                lineno, line = lineno+1, next(f)
                 assert line == "};\n", (lineno, line)
                 allarcs[(n, m)] = arcs
-                lineno, line = lineno + 1, next(f)
+                lineno, line = lineno+1, next(f)
             mo = re.match(r"static state states_(\d+)\[(\d+)\] = {$", line)
             assert mo, (lineno, line)
             s, t = list(map(int, mo.groups()))
             assert s == len(states), (lineno, line)
             state = []
             for _ in range(t):
-                lineno, line = lineno + 1, next(f)
+                lineno, line = lineno+1, next(f)
                 mo = re.match(r"\s+{(\d+), arcs_(\d+)_(\d+)},$", line)
                 assert mo, (lineno, line)
                 k, n, m = list(map(int, mo.groups()))
@ -158,9 +158,9 @@ def parse_graminit_c(self, filename):
                 assert k == len(arcs), (lineno, line)
                 state.append(arcs)
             states.append(state)
-            lineno, line = lineno + 1, next(f)
+            lineno, line = lineno+1, next(f)
             assert line == "};\n", (lineno, line)
-            lineno, line = lineno + 1, next(f)
+            lineno, line = lineno+1, next(f)
         self.states = states
         # Parse the dfas
@ -169,8 +169,9 @@ def parse_graminit_c(self, filename):
         assert mo, (lineno, line)
         ndfas = int(mo.group(1))
         for i in range(ndfas):
-            lineno, line = lineno + 1, next(f)
-            mo = re.match(r'\s+{(\d+), "(\w+)", (\d+), (\d+), states_(\d+),$', line)
+            lineno, line = lineno+1, next(f)
+            mo = re.match(r'\s+{(\d+), "(\w+)", (\d+), (\d+), states_(\d+),$',
+                          line)
             assert mo, (lineno, line)
             symbol = mo.group(2)
             number, x, y, z = list(map(int, mo.group(1, 3, 4, 5)))
@ -179,7 +180,7 @@ def parse_graminit_c(self, filename):
             assert x == 0, (lineno, line)
             state = states[z]
             assert y == len(state), (lineno, line)
-            lineno, line = lineno + 1, next(f)
+            lineno, line = lineno+1, next(f)
             mo = re.match(r'\s+("(?:\\\d\d\d)*")},$', line)
             assert mo, (lineno, line)
             first = {}
@ -187,21 +188,21 @@ def parse_graminit_c(self, filename):
             for i, c in enumerate(rawbitset):
                 byte = ord(c)
                 for j in range(8):
-                    if byte & (1 << j):
-                        first[i * 8 + j] = 1
+                    if byte & (1<<j):
+                        first[i*8 + j] = 1
             dfas[number] = (state, first)
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         assert line == "};\n", (lineno, line)
         self.dfas = dfas
         # Parse the labels
         labels = []
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         mo = re.match(r"static label labels\[(\d+)\] = {$", line)
         assert mo, (lineno, line)
         nlabels = int(mo.group(1))
         for i in range(nlabels):
-            lineno, line = lineno + 1, next(f)
+            lineno, line = lineno+1, next(f)
             mo = re.match(r'\s+{(\d+), (0|"\w+")},$', line)
             assert mo, (lineno, line)
             x, y = mo.groups()
@ -211,35 +212,35 @@ def parse_graminit_c(self, filename):
             else:
                 y = eval(y)
             labels.append((x, y))
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         assert line == "};\n", (lineno, line)
         self.labels = labels
         # Parse the grammar struct
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         assert line == "grammar _PyParser_Grammar = {\n", (lineno, line)
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         mo = re.match(r"\s+(\d+),$", line)
         assert mo, (lineno, line)
         ndfas = int(mo.group(1))
         assert ndfas == len(self.dfas)
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         assert line == "\tdfas,\n", (lineno, line)
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         mo = re.match(r"\s+{(\d+), labels},$", line)
         assert mo, (lineno, line)
         nlabels = int(mo.group(1))
         assert nlabels == len(self.labels), (lineno, line)
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         mo = re.match(r"\s+(\d+)$", line)
         assert mo, (lineno, line)
         start = int(mo.group(1))
         assert start in self.number2symbol, (lineno, line)
         self.start = start
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         assert line == "};\n", (lineno, line)
         try:
-            lineno, line = lineno + 1, next(f)
+            lineno, line = lineno+1, next(f)
         except StopIteration:
             pass
         else:
@ -247,8 +248,8 @@ def parse_graminit_c(self, filename):
     def finish_off(self):
         """Create additional useful structures. (Internal)."""
-        self.keywords = {}  # map from keyword strings to arc labels
-        self.tokens = {}  # map from numeric token values to arc labels
+        self.keywords = {} # map from keyword strings to arc labels
+        self.tokens = {} # map from numeric token values to arc labels
         for ilabel, (type, value) in enumerate(self.labels):
             if type == token.NAME and value is not None:
                 self.keywords[value] = ilabel

223
blib2to3/pgen2/driver.py Normal file
View File

@ -0,0 +1,223 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
# Modifications:
# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
"""Parser driver.
This provides a high-level interface to parse a file into a syntax tree.
"""
__author__ = "Guido van Rossum <guido@python.org>"
__all__ = ["Driver", "load_grammar"]
# Python imports
import codecs
import io
import os
import logging
import pkgutil
import sys
# Pgen imports
from . import grammar, parse, token, tokenize, pgen
class Driver(object):
def __init__(self, grammar, convert=None, logger=None):
self.grammar = grammar
if logger is None:
logger = logging.getLogger()
self.logger = logger
self.convert = convert
def parse_tokens(self, tokens, debug=False):
"""Parse a series of tokens and return the syntax tree."""
# XXX Move the prefix computation into a wrapper around tokenize.
p = parse.Parser(self.grammar, self.convert)
p.setup()
lineno = 1
column = 0
indent_columns = []
type = value = start = end = line_text = None
prefix = ""
for quintuple in tokens:
type, value, start, end, line_text = quintuple
if start != (lineno, column):
assert (lineno, column) <= start, ((lineno, column), start)
s_lineno, s_column = start
if lineno < s_lineno:
prefix += "\n" * (s_lineno - lineno)
lineno = s_lineno
column = 0
if column < s_column:
prefix += line_text[column:s_column]
column = s_column
if type in (tokenize.COMMENT, tokenize.NL):
prefix += value
lineno, column = end
if value.endswith("\n"):
lineno += 1
column = 0
continue
if type == token.OP:
type = grammar.opmap[value]
if debug:
self.logger.debug("%s %r (prefix=%r)",
token.tok_name[type], value, prefix)
if type in {token.INDENT, token.DEDENT}:
_prefix = prefix
prefix = ""
if type == token.DEDENT:
_indent_col = indent_columns.pop()
prefix, _prefix = self._partially_consume_prefix(_prefix, _indent_col)
if p.addtoken(type, value, (prefix, start)):
if debug:
self.logger.debug("Stop.")
break
prefix = ""
if type == token.INDENT:
indent_columns.append(len(value))
if _prefix.startswith(value):
# Don't double-indent. Since we're delaying the prefix that
# would normally belong to INDENT, we need to put the value
# at the end versus at the beginning.
_prefix = _prefix[len(value):] + value
if type in {token.INDENT, token.DEDENT}:
prefix = _prefix
lineno, column = end
if value.endswith("\n"):
lineno += 1
column = 0
else:
# We never broke out -- EOF is too soon (how can this happen???)
raise parse.ParseError("incomplete input",
type, value, (prefix, start))
return p.rootnode
def parse_stream_raw(self, stream, debug=False):
"""Parse a stream and return the syntax tree."""
tokens = tokenize.generate_tokens(stream.readline)
return self.parse_tokens(tokens, debug)
def parse_stream(self, stream, debug=False):
"""Parse a stream and return the syntax tree."""
return self.parse_stream_raw(stream, debug)
def parse_file(self, filename, encoding=None, debug=False):
"""Parse a file and return the syntax tree."""
with io.open(filename, "r", encoding=encoding) as stream:
return self.parse_stream(stream, debug)
def parse_string(self, text, debug=False):
"""Parse a string and return the syntax tree."""
tokens = tokenize.generate_tokens(io.StringIO(text).readline)
return self.parse_tokens(tokens, debug)
def _partially_consume_prefix(self, prefix, column):
lines = []
current_line = ""
current_column = 0
wait_for_nl = False
for char in prefix:
current_line += char
if wait_for_nl:
if char == '\n':
if current_line.strip() and current_column < column:
res = ''.join(lines)
return res, prefix[len(res):]
lines.append(current_line)
current_line = ""
current_column = 0
wait_for_nl = False
elif char == ' ':
current_column += 1
elif char == '\t':
current_column += 4
elif char == '\n':
# unexpected empty line
current_column = 0
else:
# indent is finished
wait_for_nl = True
return ''.join(lines), current_line
def _generate_pickle_name(gt):
head, tail = os.path.splitext(gt)
if tail == ".txt":
tail = ""
return head + tail + ".".join(map(str, sys.version_info)) + ".pickle"
def load_grammar(gt="Grammar.txt", gp=None,
save=True, force=False, logger=None):
"""Load the grammar (maybe from a pickle)."""
if logger is None:
logger = logging.getLogger()
gp = _generate_pickle_name(gt) if gp is None else gp
if force or not _newer(gp, gt):
logger.info("Generating grammar tables from %s", gt)
g = pgen.generate_grammar(gt)
if save:
logger.info("Writing grammar tables to %s", gp)
try:
g.dump(gp)
except OSError as e:
logger.info("Writing failed: %s", e)
else:
g = grammar.Grammar()
g.load(gp)
return g
def _newer(a, b):
"""Inquire whether file a was written since file b."""
if not os.path.exists(a):
return False
if not os.path.exists(b):
return True
return os.path.getmtime(a) >= os.path.getmtime(b)
def load_packaged_grammar(package, grammar_source):
"""Normally, loads a pickled grammar by doing
pkgutil.get_data(package, pickled_grammar)
where *pickled_grammar* is computed from *grammar_source* by adding the
Python version and using a ``.pickle`` extension.
However, if *grammar_source* is an extant file, load_grammar(grammar_source)
is called instead. This facilitates using a packaged grammar file when needed
but preserves load_grammar's automatic regeneration behavior when possible.
"""
if os.path.isfile(grammar_source):
return load_grammar(grammar_source)
pickled_name = _generate_pickle_name(os.path.basename(grammar_source))
data = pkgutil.get_data(package, pickled_name)
g = grammar.Grammar()
g.loads(data)
return g
def main(*args):
"""Main program, when run as a script: produce grammar pickle files.
Calls load_grammar for each argument, a path to a grammar text file.
"""
if not args:
args = sys.argv[1:]
logging.basicConfig(level=logging.INFO, stream=sys.stdout,
format='%(message)s')
for gt in args:
load_grammar(gt, save=True, force=True)
return True
if __name__ == "__main__":
sys.exit(int(not main()))
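
load_grammar() above amounts to a small build cache: tables are regenerated from the grammar text only when the adjacent pickle is missing or older (per _newer()); otherwise the pickle is loaded directly. A sketch of that flow (the path is hypothetical and assumed to sit in the working directory):

import logging
from blib2to3.pgen2 import driver

logging.basicConfig(level=logging.INFO)
g1 = driver.load_grammar("Grammar.txt")  # first run: generates tables, dumps pickle
g2 = driver.load_grammar("Grammar.txt")  # pickle now newer than the text: just loads
assert g1.symbol2number == g2.symbol2number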

24
blib2to3/pgen2/driver.pyi Normal file
View File

@ -0,0 +1,24 @@
# Stubs for lib2to3.pgen2.driver (Python 3.6)
import os
import sys
from typing import Any, Callable, IO, Iterable, List, Optional, Text, Tuple, Union
from logging import Logger
from blib2to3.pytree import _Convert, _NL
from blib2to3.pgen2 import _Path
from blib2to3.pgen2.grammar import Grammar
class Driver:
grammar: Grammar
logger: Logger
convert: _Convert
def __init__(self, grammar: Grammar, convert: Optional[_Convert] = ..., logger: Optional[Logger] = ...) -> None: ...
def parse_tokens(self, tokens: Iterable[Any], debug: bool = ...) -> _NL: ...
def parse_stream_raw(self, stream: IO[Text], debug: bool = ...) -> _NL: ...
def parse_stream(self, stream: IO[Text], debug: bool = ...) -> _NL: ...
def parse_file(self, filename: _Path, encoding: Optional[Text] = ..., debug: bool = ...) -> _NL: ...
def parse_string(self, text: Text, debug: bool = ...) -> _NL: ...
def load_grammar(gt: Text = ..., gp: Optional[Text] = ..., save: bool = ..., force: bool = ..., logger: Optional[Logger] = ...) -> Grammar: ...

View File

@ -13,22 +13,13 @@
 """
 # Python imports
-import os
 import pickle
-import tempfile
-from typing import Any, Optional, TypeVar, Union
 # Local imports
 from . import token
-_P = TypeVar("_P", bound="Grammar")
-Label = tuple[int, Optional[str]]
-DFA = list[list[tuple[int, int]]]
-DFAS = tuple[DFA, dict[int, int]]
-Path = Union[str, "os.PathLike[str]"]
-class Grammar:
+class Grammar(object):
     """Pgen parsing tables conversion class.
     Once initialized, this class supplies the grammar tables for the
@ -82,78 +73,48 @@ class Grammar:
     """
-    def __init__(self) -> None:
-        self.symbol2number: dict[str, int] = {}
-        self.number2symbol: dict[int, str] = {}
-        self.states: list[DFA] = []
-        self.dfas: dict[int, DFAS] = {}
-        self.labels: list[Label] = [(0, "EMPTY")]
-        self.keywords: dict[str, int] = {}
-        self.soft_keywords: dict[str, int] = {}
-        self.tokens: dict[int, int] = {}
-        self.symbol2label: dict[str, int] = {}
-        self.version: tuple[int, int] = (0, 0)
+    def __init__(self):
+        self.symbol2number = {}
+        self.number2symbol = {}
+        self.states = []
+        self.dfas = {}
+        self.labels = [(0, "EMPTY")]
+        self.keywords = {}
+        self.tokens = {}
+        self.symbol2label = {}
         self.start = 256
-        # Python 3.7+ parses async as a keyword, not an identifier
-        self.async_keywords = False
-    def dump(self, filename: Path) -> None:
+    def dump(self, filename):
         """Dump the grammar tables to a pickle file."""
-        # mypyc generates objects that don't have a __dict__, but they
-        # do have __getstate__ methods that will return an equivalent
-        # dictionary
-        if hasattr(self, "__dict__"):
-            d = self.__dict__
-        else:
-            d = self.__getstate__()  # type: ignore
-        with tempfile.NamedTemporaryFile(
-            dir=os.path.dirname(filename), delete=False
-        ) as f:
-            pickle.dump(d, f, pickle.HIGHEST_PROTOCOL)
-        os.replace(f.name, filename)
-    def _update(self, attrs: dict[str, Any]) -> None:
-        for k, v in attrs.items():
-            setattr(self, k, v)
-    def load(self, filename: Path) -> None:
+        with open(filename, "wb") as f:
+            pickle.dump(self.__dict__, f, pickle.HIGHEST_PROTOCOL)
+    def load(self, filename):
         """Load the grammar tables from a pickle file."""
         with open(filename, "rb") as f:
             d = pickle.load(f)
-        self._update(d)
-    def loads(self, pkl: bytes) -> None:
+        self.__dict__.update(d)
+    def loads(self, pkl):
         """Load the grammar tables from a pickle bytes object."""
-        self._update(pickle.loads(pkl))
-    def copy(self: _P) -> _P:
+        self.__dict__.update(pickle.loads(pkl))
+    def copy(self):
         """
         Copy the grammar.
         """
         new = self.__class__()
-        for dict_attr in (
-            "symbol2number",
-            "number2symbol",
-            "dfas",
-            "keywords",
-            "soft_keywords",
-            "tokens",
-            "symbol2label",
-        ):
+        for dict_attr in ("symbol2number", "number2symbol", "dfas", "keywords",
+                          "tokens", "symbol2label"):
             setattr(new, dict_attr, getattr(self, dict_attr).copy())
         new.labels = self.labels[:]
         new.states = self.states[:]
         new.start = self.start
-        new.version = self.version
-        new.async_keywords = self.async_keywords
         return new
-    def report(self) -> None:
+    def report(self):
         """Dump the grammar tables to standard output, for debugging."""
         from pprint import pprint
         print("s2n")
         pprint(self.symbol2number)
         print("n2s")
@ -217,8 +178,6 @@ def report(self) -> None:
 // DOUBLESLASH
 //= DOUBLESLASHEQUAL
 -> RARROW
-:= COLONEQUAL
-! BANG
 """
 opmap = {}
View File

@ -0,0 +1,29 @@
# Stubs for lib2to3.pgen2.grammar (Python 3.6)
from blib2to3.pgen2 import _Path
from typing import Any, Dict, List, Optional, Text, Tuple, TypeVar
_P = TypeVar('_P')
_Label = Tuple[int, Optional[Text]]
_DFA = List[List[Tuple[int, int]]]
_DFAS = Tuple[_DFA, Dict[int, int]]
class Grammar:
symbol2number: Dict[Text, int]
number2symbol: Dict[int, Text]
states: List[_DFA]
dfas: Dict[int, _DFAS]
labels: List[_Label]
keywords: Dict[Text, int]
tokens: Dict[int, int]
symbol2label: Dict[Text, int]
start: int
def __init__(self) -> None: ...
def dump(self, filename: _Path) -> None: ...
def load(self, filename: _Path) -> None: ...
def copy(self: _P) -> _P: ...
def report(self) -> None: ...
opmap_raw: Text
opmap: Dict[Text, Text]

View File

@ -5,21 +5,18 @@
 import re
-simple_escapes: dict[str, str] = {
-    "a": "\a",
-    "b": "\b",
-    "f": "\f",
-    "n": "\n",
-    "r": "\r",
-    "t": "\t",
-    "v": "\v",
-    "'": "'",
-    '"': '"',
-    "\\": "\\",
-}
+simple_escapes = {"a": "\a",
+                  "b": "\b",
+                  "f": "\f",
+                  "n": "\n",
+                  "r": "\r",
+                  "t": "\t",
+                  "v": "\v",
+                  "'": "'",
+                  '"': '"',
+                  "\\": "\\"}
-def escape(m: re.Match[str]) -> str:
+def escape(m):
     all, tail = m.group(0, 1)
     assert all.startswith("\\")
     esc = simple_escapes.get(tail)
@ -28,31 +25,29 @@ def escape(m: re.Match[str]) -> str:
     if tail.startswith("x"):
         hexes = tail[1:]
         if len(hexes) < 2:
-            raise ValueError(f"invalid hex string escape ('\\{tail}')")
+            raise ValueError("invalid hex string escape ('\\%s')" % tail)
         try:
             i = int(hexes, 16)
         except ValueError:
-            raise ValueError(f"invalid hex string escape ('\\{tail}')") from None
+            raise ValueError("invalid hex string escape ('\\%s')" % tail) from None
     else:
         try:
             i = int(tail, 8)
         except ValueError:
-            raise ValueError(f"invalid octal string escape ('\\{tail}')") from None
+            raise ValueError("invalid octal string escape ('\\%s')" % tail) from None
     return chr(i)
-def evalString(s: str) -> str:
+def evalString(s):
     assert s.startswith("'") or s.startswith('"'), repr(s[:1])
     q = s[0]
-    if s[:3] == q * 3:
-        q = q * 3
-    assert s.endswith(q), repr(s[-len(q) :])
-    assert len(s) >= 2 * len(q)
-    s = s[len(q) : -len(q)]
+    if s[:3] == q*3:
+        q = q*3
+    assert s.endswith(q), repr(s[-len(q):])
+    assert len(s) >= 2*len(q)
+    s = s[len(q):-len(q)]
     return re.sub(r"\\(\'|\"|\\|[abfnrtv]|x.{0,2}|[0-7]{1,3})", escape, s)
-def test() -> None:
+def test():
     for i in range(256):
         c = chr(i)
         s = repr(c)
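
evalString() above strips the (possibly tripled) quotes and rewrites each escape through the regex and escape(). A quick sanity check against the stdlib lib2to3 twin of this module:

from lib2to3.pgen2.literals import evalString

assert evalString(r"'\x41\101\n'") == "AA\n"  # hex, octal and a simple escape
assert evalString('"""abc"""') == "abc"       # triple-quote handling (q = q*3)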

View File

@ -0,0 +1,9 @@
# Stubs for lib2to3.pgen2.literals (Python 3.6)
from typing import Dict, Match, Text
simple_escapes: Dict[Text, Text]
def escape(m: Match) -> Text: ...
def evalString(s: Text) -> Text: ...
def test() -> None: ...

201
blib2to3/pgen2/parse.py Normal file
View File

@ -0,0 +1,201 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
"""Parser engine for the grammar tables generated by pgen.
The grammar table must be loaded first.
See Parser/parser.c in the Python distribution for additional info on
how this parsing engine works.
"""
# Local imports
from . import token
class ParseError(Exception):
"""Exception to signal the parser is stuck."""
def __init__(self, msg, type, value, context):
Exception.__init__(self, "%s: type=%r, value=%r, context=%r" %
(msg, type, value, context))
self.msg = msg
self.type = type
self.value = value
self.context = context
class Parser(object):
"""Parser engine.
The proper usage sequence is:
p = Parser(grammar, [converter]) # create instance
p.setup([start]) # prepare for parsing
<for each input token>:
if p.addtoken(...): # parse a token; may raise ParseError
break
root = p.rootnode # root of abstract syntax tree
A Parser instance may be reused by calling setup() repeatedly.
A Parser instance contains state pertaining to the current token
sequence, and should not be used concurrently by different threads
to parse separate token sequences.
See driver.py for how to get input tokens by tokenizing a file or
string.
Parsing is complete when addtoken() returns True; the root of the
abstract syntax tree can then be retrieved from the rootnode
instance variable. When a syntax error occurs, addtoken() raises
the ParseError exception. There is no error recovery; the parser
cannot be used after a syntax error was reported (but it can be
reinitialized by calling setup()).
"""
def __init__(self, grammar, convert=None):
"""Constructor.
The grammar argument is a grammar.Grammar instance; see the
grammar module for more information.
The parser is not ready yet for parsing; you must call the
setup() method to get it started.
The optional convert argument is a function mapping concrete
syntax tree nodes to abstract syntax tree nodes. If not
given, no conversion is done and the syntax tree produced is
the concrete syntax tree. If given, it must be a function of
two arguments, the first being the grammar (a grammar.Grammar
instance), and the second being the concrete syntax tree node
to be converted. The syntax tree is converted from the bottom
up.
A concrete syntax tree node is a (type, value, context, nodes)
tuple, where type is the node type (a token or symbol number),
value is None for symbols and a string for tokens, context is
None or an opaque value used for error reporting (typically a
(lineno, offset) pair), and nodes is a list of children for
symbols, and None for tokens.
An abstract syntax tree node may be anything; this is entirely
up to the converter function.
"""
self.grammar = grammar
self.convert = convert or (lambda grammar, node: node)
def setup(self, start=None):
"""Prepare for parsing.
This *must* be called before starting to parse.
The optional argument is an alternative start symbol; it
defaults to the grammar's start symbol.
You can use a Parser instance to parse any number of programs;
each time you call setup() the parser is reset to an initial
state determined by the (implicit or explicit) start symbol.
"""
if start is None:
start = self.grammar.start
# Each stack entry is a tuple: (dfa, state, node).
# A node is a tuple: (type, value, context, children),
# where children is a list of nodes or None, and context may be None.
newnode = (start, None, None, [])
stackentry = (self.grammar.dfas[start], 0, newnode)
self.stack = [stackentry]
self.rootnode = None
self.used_names = set() # Aliased to self.rootnode.used_names in pop()
def addtoken(self, type, value, context):
"""Add a token; return True iff this is the end of the program."""
# Map from token to label
ilabel = self.classify(type, value, context)
# Loop until the token is shifted; may raise exceptions
while True:
dfa, state, node = self.stack[-1]
states, first = dfa
arcs = states[state]
# Look for a state with this label
for i, newstate in arcs:
t, v = self.grammar.labels[i]
if ilabel == i:
# Look it up in the list of labels
assert t < 256
# Shift a token; we're done with it
self.shift(type, value, newstate, context)
# Pop while we are in an accept-only state
state = newstate
while states[state] == [(0, state)]:
self.pop()
if not self.stack:
# Done parsing!
return True
dfa, state, node = self.stack[-1]
states, first = dfa
# Done with this token
return False
elif t >= 256:
# See if it's a symbol and if we're in its first set
itsdfa = self.grammar.dfas[t]
itsstates, itsfirst = itsdfa
if ilabel in itsfirst:
# Push a symbol
self.push(t, self.grammar.dfas[t], newstate, context)
break # To continue the outer while loop
else:
if (0, state) in arcs:
# An accepting state, pop it and try something else
self.pop()
if not self.stack:
# Done parsing, but another token is input
raise ParseError("too much input",
type, value, context)
else:
# No success finding a transition
raise ParseError("bad input", type, value, context)
def classify(self, type, value, context):
"""Turn a token into a label. (Internal)"""
if type == token.NAME:
# Keep a listing of all used names
self.used_names.add(value)
# Check for reserved words
ilabel = self.grammar.keywords.get(value)
if ilabel is not None:
return ilabel
ilabel = self.grammar.tokens.get(type)
if ilabel is None:
raise ParseError("bad token", type, value, context)
return ilabel
def shift(self, type, value, newstate, context):
"""Shift a token. (Internal)"""
dfa, state, node = self.stack[-1]
newnode = (type, value, context, None)
newnode = self.convert(self.grammar, newnode)
if newnode is not None:
node[-1].append(newnode)
self.stack[-1] = (dfa, newstate, node)
def push(self, type, newdfa, newstate, context):
"""Push a nonterminal. (Internal)"""
dfa, state, node = self.stack[-1]
newnode = (type, None, context, [])
self.stack[-1] = (dfa, newstate, node)
self.stack.append((newdfa, 0, newnode))
def pop(self):
"""Pop a nonterminal. (Internal)"""
popdfa, popstate, popnode = self.stack.pop()
newnode = self.convert(self.grammar, popnode)
if newnode is not None:
if self.stack:
dfa, state, node = self.stack[-1]
node[-1].append(newnode)
else:
self.rootnode = newnode
self.rootnode.used_names = self.used_names
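The usage sequence in the class docstring maps onto a short driving loop. Below is a minimal sketch of that loop, assuming a loaded grammar.Grammar instance and an iterable of pgen2-style 5-tuples; the names grammar and tokens are illustrative, not part of this file (driver.py wraps exactly this kind of loop).

# Hedged sketch of the documented Parser protocol; `grammar` and `tokens`
# are assumed inputs, normally supplied by pgen and the tokenizer.
p = Parser(grammar)
p.setup()
for type, value, start, end, line in tokens:
    # driver.py filters out COMMENT and NL tokens before calling addtoken()
    if p.addtoken(type, value, start):  # context is opaque; only used in errors
        break  # True means the input was reduced to the start symbol
root = p.rootnode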

blib2to3/pgen2/parse.pyi Normal file
@@ -0,0 +1,29 @@
# Stubs for lib2to3.pgen2.parse (Python 3.6)
from typing import Any, Dict, List, Optional, Sequence, Set, Text, Tuple
from blib2to3.pgen2.grammar import Grammar, _DFAS
from blib2to3.pytree import _NL, _Convert, _RawNode
_Context = Sequence[Any]
class ParseError(Exception):
msg: Text
type: int
value: Optional[Text]
context: _Context
def __init__(self, msg: Text, type: int, value: Optional[Text], context: _Context) -> None: ...
class Parser:
grammar: Grammar
convert: _Convert
stack: List[Tuple[_DFAS, int, _RawNode]]
rootnode: Optional[_NL]
used_names: Set[Text]
def __init__(self, grammar: Grammar, convert: Optional[_Convert] = ...) -> None: ...
def setup(self, start: Optional[int] = ...) -> None: ...
def addtoken(self, type: int, value: Optional[Text], context: _Context) -> bool: ...
def classify(self, type: int, value: Optional[Text], context: _Context) -> int: ...
def shift(self, type: int, value: Optional[Text], newstate: int, context: _Context) -> None: ...
def push(self, type: int, newdfa: _DFAS, newstate: int, context: _Context) -> None: ...
def pop(self) -> None: ...

blib2to3/pgen2/pgen.py
@@ -1,41 +1,30 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

-import os
-from collections.abc import Iterator, Sequence
-from typing import IO, Any, NoReturn, Optional, Union
-
-from blib2to3.pgen2 import grammar, token, tokenize
-from blib2to3.pgen2.tokenize import TokenInfo
-
-Path = Union[str, "os.PathLike[str]"]
+# Pgen imports
+from . import grammar, token, tokenize


class PgenGrammar(grammar.Grammar):
    pass


-class ParserGenerator:
-    filename: Path
-    stream: IO[str]
-    generator: Iterator[TokenInfo]
-    first: dict[str, Optional[dict[str, int]]]
-
-    def __init__(self, filename: Path, stream: Optional[IO[str]] = None) -> None:
+class ParserGenerator(object):
+
+    def __init__(self, filename, stream=None):
        close_stream = None
        if stream is None:
-            stream = open(filename, encoding="utf-8")
+            stream = open(filename)
            close_stream = stream.close
        self.filename = filename
-        self.generator = tokenize.tokenize(stream.read())
+        self.stream = stream
+        self.generator = tokenize.generate_tokens(stream.readline)
        self.gettoken()  # Initialize lookahead
        self.dfas, self.startsymbol = self.parse()
        if close_stream is not None:
            close_stream()
        self.first = {}  # map from symbol name to set of tokens
        self.addfirstsets()

-    def make_grammar(self) -> PgenGrammar:
+    def make_grammar(self):
        c = PgenGrammar()
        names = list(self.dfas.keys())
        names.sort()
@@ -60,9 +49,8 @@ def make_grammar(self) -> PgenGrammar:
        c.start = c.symbol2number[self.startsymbol]
        return c

-    def make_first(self, c: PgenGrammar, name: str) -> dict[int, int]:
+    def make_first(self, c, name):
        rawfirst = self.first[name]
-        assert rawfirst is not None
        first = {}
        for label in sorted(rawfirst):
            ilabel = self.make_label(c, label)
@@ -70,7 +58,7 @@ def make_first(self, c: PgenGrammar, name: str) -> dict[int, int]:
                first[ilabel] = 1
        return first

-    def make_label(self, c: PgenGrammar, label: str) -> int:
+    def make_label(self, c, label):
        # XXX Maybe this should be a method on a subclass of converter?
        ilabel = len(c.labels)
        if label[0].isalpha():
@@ -99,21 +87,16 @@ def make_label(self, c: PgenGrammar, label: str) -> int:
            assert label[0] in ('"', "'"), label
            value = eval(label)
            if value[0].isalpha():
-                if label[0] == '"':
-                    keywords = c.soft_keywords
-                else:
-                    keywords = c.keywords
-
                # A keyword
-                if value in keywords:
-                    return keywords[value]
+                if value in c.keywords:
+                    return c.keywords[value]
                else:
                    c.labels.append((token.NAME, value))
-                    keywords[value] = ilabel
+                    c.keywords[value] = ilabel
                    return ilabel
            else:
                # An operator (any non-numeric token)
                itoken = grammar.opmap[value]  # Fails if unknown token
                if itoken in c.tokens:
                    return c.tokens[itoken]
                else:
@@ -121,49 +104,47 @@ def make_label(self, c: PgenGrammar, label: str) -> int:
                    c.tokens[itoken] = ilabel
                    return ilabel

-    def addfirstsets(self) -> None:
+    def addfirstsets(self):
        names = list(self.dfas.keys())
        names.sort()
        for name in names:
            if name not in self.first:
                self.calcfirst(name)
-            # print name, self.first[name].keys()
+            #print name, self.first[name].keys()

-    def calcfirst(self, name: str) -> None:
+    def calcfirst(self, name):
        dfa = self.dfas[name]
        self.first[name] = None  # dummy to detect left recursion
        state = dfa[0]
-        totalset: dict[str, int] = {}
+        totalset = {}
        overlapcheck = {}
-        for label in state.arcs:
+        for label, next in state.arcs.items():
            if label in self.dfas:
                if label in self.first:
                    fset = self.first[label]
                    if fset is None:
-                        raise ValueError(f"recursion for rule {name!r}")
+                        raise ValueError("recursion for rule %r" % name)
                else:
                    self.calcfirst(label)
                    fset = self.first[label]
-                    assert fset is not None
                totalset.update(fset)
                overlapcheck[label] = fset
            else:
                totalset[label] = 1
                overlapcheck[label] = {label: 1}
-        inverse: dict[str, str] = {}
+        inverse = {}
        for label, itsfirst in overlapcheck.items():
            for symbol in itsfirst:
                if symbol in inverse:
-                    raise ValueError(
-                        f"rule {name} is ambiguous; {symbol} is in the first sets of"
-                        f" {label} as well as {inverse[symbol]}"
-                    )
+                    raise ValueError("rule %s is ambiguous; %s is in the"
+                                     " first sets of %s as well as %s" %
+                                     (name, symbol, label, inverse[symbol]))
                inverse[symbol] = label
        self.first[name] = totalset

-    def parse(self) -> tuple[dict[str, list["DFAState"]], str]:
+    def parse(self):
        dfas = {}
-        startsymbol: Optional[str] = None
+        startsymbol = None
        # MSTART: (NEWLINE | RULE)* ENDMARKER
        while self.type != token.ENDMARKER:
            while self.type == token.NEWLINE:
@@ -173,33 +154,30 @@ def parse(self) -> tuple[dict[str, list["DFAState"]], str]:
            self.expect(token.OP, ":")
            a, z = self.parse_rhs()
            self.expect(token.NEWLINE)
-            # self.dump_nfa(name, a, z)
+            #self.dump_nfa(name, a, z)
            dfa = self.make_dfa(a, z)
-            # self.dump_dfa(name, dfa)
-            # oldlen = len(dfa)
+            #self.dump_dfa(name, dfa)
+            oldlen = len(dfa)
            self.simplify_dfa(dfa)
-            # newlen = len(dfa)
+            newlen = len(dfa)
            dfas[name] = dfa
-            # print name, oldlen, newlen
+            #print name, oldlen, newlen
            if startsymbol is None:
                startsymbol = name
-        assert startsymbol is not None
        return dfas, startsymbol

-    def make_dfa(self, start: "NFAState", finish: "NFAState") -> list["DFAState"]:
+    def make_dfa(self, start, finish):
        # To turn an NFA into a DFA, we define the states of the DFA
        # to correspond to *sets* of states of the NFA.  Then do some
        # state reduction.  Let's represent sets as dicts with 1 for
        # values.
        assert isinstance(start, NFAState)
        assert isinstance(finish, NFAState)

-        def closure(state: NFAState) -> dict[NFAState, int]:
-            base: dict[NFAState, int] = {}
+        def closure(state):
+            base = {}
            addclosure(state, base)
            return base

-        def addclosure(state: NFAState, base: dict[NFAState, int]) -> None:
+        def addclosure(state, base):
            assert isinstance(state, NFAState)
            if state in base:
                return
@@ -207,10 +185,9 @@ def addclosure(state: NFAState, base: dict[NFAState, int]) -> None:
            for label, next in state.arcs:
                if label is None:
                    addclosure(next, base)

        states = [DFAState(closure(start), finish)]
        for state in states:  # NB states grows while we're iterating
-            arcs: dict[str, dict[NFAState, int]] = {}
+            arcs = {}
            for nfastate in state.nfaset:
                for label, next in nfastate.arcs:
                    if label is not None:
@@ -223,9 +200,9 @@ def addclosure(state: NFAState, base: dict[NFAState, int]) -> None:
                    st = DFAState(nfaset, finish)
                    states.append(st)
                state.addarc(st, label)
        return states  # List of DFAState instances; first one is start

-    def dump_nfa(self, name: str, start: "NFAState", finish: "NFAState") -> None:
+    def dump_nfa(self, name, start, finish):
        print("Dump of NFA for", name)
        todo = [start]
        for i, state in enumerate(todo):
@@ -237,18 +214,18 @@ def dump_nfa(self, name: str, start: "NFAState", finish: "NFAState") -> None:
                    j = len(todo)
                    todo.append(next)
                if label is None:
-                    print(f"    -> {j}")
+                    print("    -> %d" % j)
                else:
-                    print(f"    {label} -> {j}")
+                    print("    %s -> %d" % (label, j))

-    def dump_dfa(self, name: str, dfa: Sequence["DFAState"]) -> None:
+    def dump_dfa(self, name, dfa):
        print("Dump of DFA for", name)
        for i, state in enumerate(dfa):
            print("  State", i, state.isfinal and "(final)" or "")
            for label, next in sorted(state.arcs.items()):
-                print(f"    {label} -> {dfa.index(next)}")
+                print("    %s -> %d" % (label, dfa.index(next)))

-    def simplify_dfa(self, dfa: list["DFAState"]) -> None:
+    def simplify_dfa(self, dfa):
        # This is not theoretically optimal, but works well enough.
        # Algorithm: repeatedly look for two states that have the same
        # set of arcs (same labels pointing to the same nodes) and
@@ -259,17 +236,17 @@ def simplify_dfa(self, dfa: list["DFAState"]) -> None:
        while changes:
            changes = False
            for i, state_i in enumerate(dfa):
-                for j in range(i + 1, len(dfa)):
+                for j in range(i+1, len(dfa)):
                    state_j = dfa[j]
                    if state_i == state_j:
-                        # print "  unify", i, j
+                        #print "  unify", i, j
                        del dfa[j]
                        for state in dfa:
                            state.unifystate(state_j, state_i)
                        changes = True
                        break

-    def parse_rhs(self) -> tuple["NFAState", "NFAState"]:
+    def parse_rhs(self):
        # RHS: ALT ('|' ALT)*
        a, z = self.parse_alt()
        if self.value != "|":
@@ -286,16 +263,17 @@ def parse_rhs(self) -> tuple["NFAState", "NFAState"]:
            z.addarc(zz)
            return aa, zz

-    def parse_alt(self) -> tuple["NFAState", "NFAState"]:
+    def parse_alt(self):
        # ALT: ITEM+
        a, b = self.parse_item()
-        while self.value in ("(", "[") or self.type in (token.NAME, token.STRING):
+        while (self.value in ("(", "[") or
+               self.type in (token.NAME, token.STRING)):
            c, d = self.parse_item()
            b.addarc(c)
            b = d
        return a, b

-    def parse_item(self) -> tuple["NFAState", "NFAState"]:
+    def parse_item(self):
        # ITEM: '[' RHS ']' | ATOM ['+' | '*']
        if self.value == "[":
            self.gettoken()
@@ -315,7 +293,7 @@ def parse_item(self) -> tuple["NFAState", "NFAState"]:
        else:
            return a, a

-    def parse_atom(self) -> tuple["NFAState", "NFAState"]:
+    def parse_atom(self):
        # ATOM: '(' RHS ')' | NAME | STRING
        if self.value == "(":
            self.gettoken()
@@ -329,67 +307,65 @@ def parse_atom(self) -> tuple["NFAState", "NFAState"]:
            self.gettoken()
            return a, z
        else:
-            self.raise_error(
-                f"expected (...) or NAME or STRING, got {self.type}/{self.value}"
-            )
+            self.raise_error("expected (...) or NAME or STRING, got %s/%s",
+                             self.type, self.value)

-    def expect(self, type: int, value: Optional[Any] = None) -> str:
+    def expect(self, type, value=None):
        if self.type != type or (value is not None and self.value != value):
-            self.raise_error(f"expected {type}/{value}, got {self.type}/{self.value}")
+            self.raise_error("expected %s/%s, got %s/%s",
+                             type, value, self.type, self.value)
        value = self.value
        self.gettoken()
        return value

-    def gettoken(self) -> None:
+    def gettoken(self):
        tup = next(self.generator)
        while tup[0] in (tokenize.COMMENT, tokenize.NL):
            tup = next(self.generator)
        self.type, self.value, self.begin, self.end, self.line = tup
-        # print token.tok_name[self.type], repr(self.value)
+        #print token.tok_name[self.type], repr(self.value)

-    def raise_error(self, msg: str) -> NoReturn:
-        raise SyntaxError(
-            msg, (str(self.filename), self.end[0], self.end[1], self.line)
-        )
+    def raise_error(self, msg, *args):
+        if args:
+            try:
+                msg = msg % args
+            except:
+                msg = " ".join([msg] + list(map(str, args)))
+        raise SyntaxError(msg, (self.filename, self.end[0],
+                                self.end[1], self.line))


-class NFAState:
-    arcs: list[tuple[Optional[str], "NFAState"]]
-
-    def __init__(self) -> None:
+class NFAState(object):
+
+    def __init__(self):
        self.arcs = []  # list of (label, NFAState) pairs

-    def addarc(self, next: "NFAState", label: Optional[str] = None) -> None:
+    def addarc(self, next, label=None):
        assert label is None or isinstance(label, str)
        assert isinstance(next, NFAState)
        self.arcs.append((label, next))


-class DFAState:
-    nfaset: dict[NFAState, Any]
-    isfinal: bool
-    arcs: dict[str, "DFAState"]
-
-    def __init__(self, nfaset: dict[NFAState, Any], final: NFAState) -> None:
+class DFAState(object):
+
+    def __init__(self, nfaset, final):
        assert isinstance(nfaset, dict)
        assert isinstance(next(iter(nfaset)), NFAState)
        assert isinstance(final, NFAState)
        self.nfaset = nfaset
        self.isfinal = final in nfaset
        self.arcs = {}  # map from label to DFAState

-    def addarc(self, next: "DFAState", label: str) -> None:
+    def addarc(self, next, label):
        assert isinstance(label, str)
        assert label not in self.arcs
        assert isinstance(next, DFAState)
        self.arcs[label] = next

-    def unifystate(self, old: "DFAState", new: "DFAState") -> None:
+    def unifystate(self, old, new):
        for label, next in self.arcs.items():
            if next is old:
                self.arcs[label] = new

-    def __eq__(self, other: Any) -> bool:
+    def __eq__(self, other):
        # Equality test -- ignore the nfaset instance variable
        assert isinstance(other, DFAState)
        if self.isfinal != other.isfinal:
@@ -403,9 +379,8 @@ def __eq__(self, other: Any) -> bool:
            return False
        return True

-    __hash__: Any = None  # For Py3 compatibility.
+    __hash__ = None  # For Py3 compatibility.


-def generate_grammar(filename: Path = "Grammar.txt") -> PgenGrammar:
+def generate_grammar(filename="Grammar.txt"):
    p = ParserGenerator(filename)
    return p.make_grammar()
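On both sides of the diff above, generate_grammar() remains the module entry point: it parses the EBNF rules in Grammar.txt, turns each rule's NFA into a DFA via the subset construction in make_dfa(), and simplifies the result. A hedged usage sketch follows; the file path is illustrative, and real callers normally go through the pgen2 driver instead.

# Illustrative only: build a PgenGrammar directly from a grammar file.
from blib2to3.pgen2 import pgen

g = pgen.generate_grammar("Grammar.txt")  # parse rules, build and simplify DFAs
print(len(g.dfas), "nonterminals; start symbol number:", g.start)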

blib2to3/pgen2/pgen.pyi Normal file
@@ -0,0 +1,49 @@
# Stubs for lib2to3.pgen2.pgen (Python 3.6)
from typing import Any, Dict, IO, Iterable, Iterator, List, Optional, Text, Tuple
from mypy_extensions import NoReturn
from blib2to3.pgen2 import _Path, grammar
from blib2to3.pgen2.tokenize import _TokenInfo
class PgenGrammar(grammar.Grammar): ...
class ParserGenerator:
filename: _Path
stream: IO[Text]
generator: Iterator[_TokenInfo]
first: Dict[Text, Dict[Text, int]]
def __init__(self, filename: _Path, stream: Optional[IO[Text]] = ...) -> None: ...
def make_grammar(self) -> PgenGrammar: ...
def make_first(self, c: PgenGrammar, name: Text) -> Dict[int, int]: ...
def make_label(self, c: PgenGrammar, label: Text) -> int: ...
def addfirstsets(self) -> None: ...
def calcfirst(self, name: Text) -> None: ...
def parse(self) -> Tuple[Dict[Text, List[DFAState]], Text]: ...
def make_dfa(self, start: NFAState, finish: NFAState) -> List[DFAState]: ...
def dump_nfa(self, name: Text, start: NFAState, finish: NFAState) -> List[DFAState]: ...
def dump_dfa(self, name: Text, dfa: Iterable[DFAState]) -> None: ...
def simplify_dfa(self, dfa: List[DFAState]) -> None: ...
def parse_rhs(self) -> Tuple[NFAState, NFAState]: ...
def parse_alt(self) -> Tuple[NFAState, NFAState]: ...
def parse_item(self) -> Tuple[NFAState, NFAState]: ...
def parse_atom(self) -> Tuple[NFAState, NFAState]: ...
def expect(self, type: int, value: Optional[Any] = ...) -> Text: ...
def gettoken(self) -> None: ...
def raise_error(self, msg: str, *args: Any) -> NoReturn: ...
class NFAState:
arcs: List[Tuple[Optional[Text], NFAState]]
def __init__(self) -> None: ...
def addarc(self, next: NFAState, label: Optional[Text] = ...) -> None: ...
class DFAState:
nfaset: Dict[NFAState, Any]
isfinal: bool
arcs: Dict[Text, DFAState]
def __init__(self, nfaset: Dict[NFAState, Any], final: NFAState) -> None: ...
def addarc(self, next: DFAState, label: Text) -> None: ...
def unifystate(self, old: DFAState, new: DFAState) -> None: ...
def __eq__(self, other: Any) -> bool: ...
def generate_grammar(filename: _Path = ...) -> PgenGrammar: ...

blib2to3/pgen2/token.py Normal file
@@ -0,0 +1,83 @@
"""Token constants (from "token.h")."""
# Taken from Python (r53757) and modified to include some tokens
# originally monkeypatched in by pgen2.tokenize
#--start constants--
ENDMARKER = 0
NAME = 1
NUMBER = 2
STRING = 3
NEWLINE = 4
INDENT = 5
DEDENT = 6
LPAR = 7
RPAR = 8
LSQB = 9
RSQB = 10
COLON = 11
COMMA = 12
SEMI = 13
PLUS = 14
MINUS = 15
STAR = 16
SLASH = 17
VBAR = 18
AMPER = 19
LESS = 20
GREATER = 21
EQUAL = 22
DOT = 23
PERCENT = 24
BACKQUOTE = 25
LBRACE = 26
RBRACE = 27
EQEQUAL = 28
NOTEQUAL = 29
LESSEQUAL = 30
GREATEREQUAL = 31
TILDE = 32
CIRCUMFLEX = 33
LEFTSHIFT = 34
RIGHTSHIFT = 35
DOUBLESTAR = 36
PLUSEQUAL = 37
MINEQUAL = 38
STAREQUAL = 39
SLASHEQUAL = 40
PERCENTEQUAL = 41
AMPEREQUAL = 42
VBAREQUAL = 43
CIRCUMFLEXEQUAL = 44
LEFTSHIFTEQUAL = 45
RIGHTSHIFTEQUAL = 46
DOUBLESTAREQUAL = 47
DOUBLESLASH = 48
DOUBLESLASHEQUAL = 49
AT = 50
ATEQUAL = 51
OP = 52
COMMENT = 53
NL = 54
RARROW = 55
AWAIT = 56
ASYNC = 57
ERRORTOKEN = 58
N_TOKENS = 59
NT_OFFSET = 256
#--end constants--
tok_name = {}
for _name, _value in list(globals().items()):
if type(_value) is type(0):
tok_name[_value] = _name
def ISTERMINAL(x):
return x < NT_OFFSET
def ISNONTERMINAL(x):
return x >= NT_OFFSET
def ISEOF(x):
return x == ENDMARKER
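The generated tok_name table plus the three predicates are this module's whole public surface; a quick illustration:

# Quick illustration of the token helpers above.
from blib2to3.pgen2 import token

print(token.tok_name[token.NAME])     # "NAME"
print(token.ISTERMINAL(token.NAME))   # True: token numbers sit below NT_OFFSET
print(token.ISNONTERMINAL(256))       # True: grammar symbols start at NT_OFFSET
print(token.ISEOF(token.ENDMARKER))   # True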

blib2to3/pgen2/token.pyi Normal file
@@ -0,0 +1,73 @@
# Stubs for lib2to3.pgen2.token (Python 3.6)
import sys
from typing import Dict, Text
ENDMARKER: int
NAME: int
NUMBER: int
STRING: int
NEWLINE: int
INDENT: int
DEDENT: int
LPAR: int
RPAR: int
LSQB: int
RSQB: int
COLON: int
COMMA: int
SEMI: int
PLUS: int
MINUS: int
STAR: int
SLASH: int
VBAR: int
AMPER: int
LESS: int
GREATER: int
EQUAL: int
DOT: int
PERCENT: int
BACKQUOTE: int
LBRACE: int
RBRACE: int
EQEQUAL: int
NOTEQUAL: int
LESSEQUAL: int
GREATEREQUAL: int
TILDE: int
CIRCUMFLEX: int
LEFTSHIFT: int
RIGHTSHIFT: int
DOUBLESTAR: int
PLUSEQUAL: int
MINEQUAL: int
STAREQUAL: int
SLASHEQUAL: int
PERCENTEQUAL: int
AMPEREQUAL: int
VBAREQUAL: int
CIRCUMFLEXEQUAL: int
LEFTSHIFTEQUAL: int
RIGHTSHIFTEQUAL: int
DOUBLESTAREQUAL: int
DOUBLESLASH: int
DOUBLESLASHEQUAL: int
OP: int
COMMENT: int
NL: int
if sys.version_info >= (3,):
RARROW: int
if sys.version_info >= (3, 5):
AT: int
ATEQUAL: int
AWAIT: int
ASYNC: int
ERRORTOKEN: int
N_TOKENS: int
NT_OFFSET: int
tok_name: Dict[int, Text]
def ISTERMINAL(x: int) -> bool: ...
def ISNONTERMINAL(x: int) -> bool: ...
def ISEOF(x: int) -> bool: ...

blib2to3/pgen2/tokenize.py Normal file
@@ -0,0 +1,567 @@
# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation.
# All rights reserved.
"""Tokenization help for Python programs.
generate_tokens(readline) is a generator that breaks a stream of
text into Python tokens. It accepts a readline-like method which is called
repeatedly to get the next line of input (or "" for EOF). It generates
5-tuples with these members:
the token type (see token.py)
the token (a string)
the starting (row, column) indices of the token (a 2-tuple of ints)
the ending (row, column) indices of the token (a 2-tuple of ints)
the original line (string)
It is designed to match the working of the Python tokenizer exactly, except
that it produces COMMENT tokens for comments and gives type OP for all
operators
Older entry points
tokenize_loop(readline, tokeneater)
tokenize(readline, tokeneater=printtoken)
are the same, except instead of generating tokens, tokeneater is a callback
function to which the 5 fields described above are passed as 5 arguments,
each time a new token is found."""
__author__ = 'Ka-Ping Yee <ping@lfw.org>'
__credits__ = \
'GvR, ESR, Tim Peters, Thomas Wouters, Fred Drake, Skip Montanaro'
import re
from codecs import BOM_UTF8, lookup
from blib2to3.pgen2.token import *
from . import token
__all__ = [x for x in dir(token) if x[0] != '_'] + ["tokenize",
"generate_tokens", "untokenize"]
del token
try:
bytes
except NameError:
# Support bytes type in Python <= 2.5, so 2to3 turns itself into
# valid Python 3 code.
bytes = str
def group(*choices): return '(' + '|'.join(choices) + ')'
def any(*choices): return group(*choices) + '*'
def maybe(*choices): return group(*choices) + '?'
def _combinations(*l):
return set(
x + y for x in l for y in l + ("",) if x.casefold() != y.casefold()
)
Whitespace = r'[ \f\t]*'
Comment = r'#[^\r\n]*'
Ignore = Whitespace + any(r'\\\r?\n' + Whitespace) + maybe(Comment)
Name = r'\w+' # this is invalid but it's fine because Name comes after Number in all groups
Binnumber = r'0[bB]_?[01]+(?:_[01]+)*'
Hexnumber = r'0[xX]_?[\da-fA-F]+(?:_[\da-fA-F]+)*[lL]?'
Octnumber = r'0[oO]?_?[0-7]+(?:_[0-7]+)*[lL]?'
Decnumber = group(r'[1-9]\d*(?:_\d+)*[lL]?', '0[lL]?')
Intnumber = group(Binnumber, Hexnumber, Octnumber, Decnumber)
Exponent = r'[eE][-+]?\d+(?:_\d+)*'
Pointfloat = group(r'\d+(?:_\d+)*\.(?:\d+(?:_\d+)*)?', r'\.\d+(?:_\d+)*') + maybe(Exponent)
Expfloat = r'\d+(?:_\d+)*' + Exponent
Floatnumber = group(Pointfloat, Expfloat)
Imagnumber = group(r'\d+(?:_\d+)*[jJ]', Floatnumber + r'[jJ]')
Number = group(Imagnumber, Floatnumber, Intnumber)
# Tail end of ' string.
Single = r"[^'\\]*(?:\\.[^'\\]*)*'"
# Tail end of " string.
Double = r'[^"\\]*(?:\\.[^"\\]*)*"'
# Tail end of ''' string.
Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''"
# Tail end of """ string.
Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""'
_litprefix = r"(?:[uUrRbBfF]|[rR][fFbB]|[fFbBuU][rR])?"
Triple = group(_litprefix + "'''", _litprefix + '"""')
# Single-line ' or " string.
String = group(_litprefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*'",
_litprefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*"')
# Because of leftmost-then-longest match semantics, be sure to put the
# longest operators first (e.g., if = came before ==, == would get
# recognized as two instances of =).
Operator = group(r"\*\*=?", r">>=?", r"<<=?", r"<>", r"!=",
r"//=?", r"->",
r"[+\-*/%&@|^=<>]=?",
r"~")
Bracket = '[][(){}]'
Special = group(r'\r?\n', r'[:;.,`@]')
Funny = group(Operator, Bracket, Special)
PlainToken = group(Number, Funny, String, Name)
Token = Ignore + PlainToken
# First (or only) line of ' or " string.
ContStr = group(_litprefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*" +
group("'", r'\\\r?\n'),
_litprefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*' +
group('"', r'\\\r?\n'))
PseudoExtras = group(r'\\\r?\n', Comment, Triple)
PseudoToken = Whitespace + group(PseudoExtras, Number, Funny, ContStr, Name)
tokenprog = re.compile(Token, re.UNICODE)
pseudoprog = re.compile(PseudoToken, re.UNICODE)
single3prog = re.compile(Single3)
double3prog = re.compile(Double3)
_strprefixes = (
_combinations('r', 'R', 'f', 'F') |
_combinations('r', 'R', 'b', 'B') |
{'u', 'U', 'ur', 'uR', 'Ur', 'UR'}
)
endprogs = {"'": re.compile(Single), '"': re.compile(Double),
"'''": single3prog, '"""': double3prog,
**{f"{prefix}'''": single3prog for prefix in _strprefixes},
**{f'{prefix}"""': double3prog for prefix in _strprefixes},
**{prefix: None for prefix in _strprefixes}}
triple_quoted = (
{"'''", '"""'} |
{f"{prefix}'''" for prefix in _strprefixes} |
{f'{prefix}"""' for prefix in _strprefixes}
)
single_quoted = (
{"'", '"'} |
{f"{prefix}'" for prefix in _strprefixes} |
{f'{prefix}"' for prefix in _strprefixes}
)
tabsize = 8
class TokenError(Exception): pass
class StopTokenizing(Exception): pass
def printtoken(type, token, xxx_todo_changeme, xxx_todo_changeme1, line): # for testing
(srow, scol) = xxx_todo_changeme
(erow, ecol) = xxx_todo_changeme1
print("%d,%d-%d,%d:\t%s\t%s" % \
(srow, scol, erow, ecol, tok_name[type], repr(token)))
def tokenize(readline, tokeneater=printtoken):
"""
The tokenize() function accepts two parameters: one representing the
input stream, and one providing an output mechanism for tokenize().
The first parameter, readline, must be a callable object which provides
the same interface as the readline() method of built-in file objects.
Each call to the function should return one line of input as a string.
The second parameter, tokeneater, must also be a callable object. It is
called once for each token, with five arguments, corresponding to the
tuples generated by generate_tokens().
"""
try:
tokenize_loop(readline, tokeneater)
except StopTokenizing:
pass
# backwards compatible interface
def tokenize_loop(readline, tokeneater):
for token_info in generate_tokens(readline):
tokeneater(*token_info)
class Untokenizer:
def __init__(self):
self.tokens = []
self.prev_row = 1
self.prev_col = 0
def add_whitespace(self, start):
row, col = start
assert row <= self.prev_row
col_offset = col - self.prev_col
if col_offset:
self.tokens.append(" " * col_offset)
def untokenize(self, iterable):
for t in iterable:
if len(t) == 2:
self.compat(t, iterable)
break
tok_type, token, start, end, line = t
self.add_whitespace(start)
self.tokens.append(token)
self.prev_row, self.prev_col = end
if tok_type in (NEWLINE, NL):
self.prev_row += 1
self.prev_col = 0
return "".join(self.tokens)
def compat(self, token, iterable):
startline = False
indents = []
toks_append = self.tokens.append
toknum, tokval = token
if toknum in (NAME, NUMBER):
tokval += ' '
if toknum in (NEWLINE, NL):
startline = True
for tok in iterable:
toknum, tokval = tok[:2]
if toknum in (NAME, NUMBER, ASYNC, AWAIT):
tokval += ' '
if toknum == INDENT:
indents.append(tokval)
continue
elif toknum == DEDENT:
indents.pop()
continue
elif toknum in (NEWLINE, NL):
startline = True
elif startline and indents:
toks_append(indents[-1])
startline = False
toks_append(tokval)
cookie_re = re.compile(r'^[ \t\f]*#.*?coding[:=][ \t]*([-\w.]+)', re.ASCII)
blank_re = re.compile(br'^[ \t\f]*(?:[#\r\n]|$)', re.ASCII)
def _get_normal_name(orig_enc):
"""Imitates get_normal_name in tokenizer.c."""
# Only care about the first 12 characters.
enc = orig_enc[:12].lower().replace("_", "-")
if enc == "utf-8" or enc.startswith("utf-8-"):
return "utf-8"
if enc in ("latin-1", "iso-8859-1", "iso-latin-1") or \
enc.startswith(("latin-1-", "iso-8859-1-", "iso-latin-1-")):
return "iso-8859-1"
return orig_enc
def detect_encoding(readline):
"""
The detect_encoding() function is used to detect the encoding that should
be used to decode a Python source file. It requires one argument, readline,
in the same way as the tokenize() generator.
It will call readline a maximum of twice, and return the encoding used
(as a string) and a list of any lines (left as bytes) it has read
in.
It detects the encoding from the presence of a utf-8 bom or an encoding
cookie as specified in pep-0263. If both a bom and a cookie are present, but
disagree, a SyntaxError will be raised. If the encoding cookie is an invalid
charset, raise a SyntaxError. Note that if a utf-8 bom is found,
'utf-8-sig' is returned.
If no encoding is specified, then the default of 'utf-8' will be returned.
"""
bom_found = False
encoding = None
default = 'utf-8'
def read_or_stop():
try:
return readline()
except StopIteration:
return bytes()
def find_cookie(line):
try:
line_string = line.decode('ascii')
except UnicodeDecodeError:
return None
match = cookie_re.match(line_string)
if not match:
return None
encoding = _get_normal_name(match.group(1))
try:
codec = lookup(encoding)
except LookupError:
# This behaviour mimics the Python interpreter
raise SyntaxError("unknown encoding: " + encoding)
if bom_found:
if codec.name != 'utf-8':
# This behaviour mimics the Python interpreter
raise SyntaxError('encoding problem: utf-8')
encoding += '-sig'
return encoding
first = read_or_stop()
if first.startswith(BOM_UTF8):
bom_found = True
first = first[3:]
default = 'utf-8-sig'
if not first:
return default, []
encoding = find_cookie(first)
if encoding:
return encoding, [first]
if not blank_re.match(first):
return default, [first]
second = read_or_stop()
if not second:
return default, [first]
encoding = find_cookie(second)
if encoding:
return encoding, [first, second]
return default, [first, second]
def untokenize(iterable):
"""Transform tokens back into Python source code.
Each element returned by the iterable must be a token sequence
with at least two elements, a token number and token value. If
only two tokens are passed, the resulting output is poor.
Round-trip invariant for full input:
Untokenized source will match input source exactly
Round-trip invariant for limited input:
# Output text will tokenize back to the input
t1 = [tok[:2] for tok in generate_tokens(f.readline)]
newcode = untokenize(t1)
readline = iter(newcode.splitlines(1)).next
t2 = [tok[:2] for tok in generate_tokens(readline)]
assert t1 == t2
"""
ut = Untokenizer()
return ut.untokenize(iterable)
def generate_tokens(readline):
"""
The generate_tokens() generator requires one argument, readline, which
must be a callable object which provides the same interface as the
readline() method of built-in file objects. Each call to the function
should return one line of input as a string. Alternately, readline
can be a callable function terminating with StopIteration:
readline = open(myfile).next # Example of alternate readline
The generator produces 5-tuples with these members: the token type; the
token string; a 2-tuple (srow, scol) of ints specifying the row and
column where the token begins in the source; a 2-tuple (erow, ecol) of
ints specifying the row and column where the token ends in the source;
and the line on which the token was found. The line passed is the
logical line; continuation lines are included.
"""
lnum = parenlev = continued = 0
numchars = '0123456789'
contstr, needcont = '', 0
contline = None
indents = [0]
# 'stashed' and 'async_*' are used for async/await parsing
stashed = None
async_def = False
async_def_indent = 0
async_def_nl = False
while 1: # loop over lines in stream
try:
line = readline()
except StopIteration:
line = ''
lnum = lnum + 1
pos, max = 0, len(line)
if contstr: # continued string
if not line:
raise TokenError("EOF in multi-line string", strstart)
endmatch = endprog.match(line)
if endmatch:
pos = end = endmatch.end(0)
yield (STRING, contstr + line[:end],
strstart, (lnum, end), contline + line)
contstr, needcont = '', 0
contline = None
elif needcont and line[-2:] != '\\\n' and line[-3:] != '\\\r\n':
yield (ERRORTOKEN, contstr + line,
strstart, (lnum, len(line)), contline)
contstr = ''
contline = None
continue
else:
contstr = contstr + line
contline = contline + line
continue
elif parenlev == 0 and not continued: # new statement
if not line: break
column = 0
while pos < max: # measure leading whitespace
if line[pos] == ' ': column = column + 1
elif line[pos] == '\t': column = (column//tabsize + 1)*tabsize
elif line[pos] == '\f': column = 0
else: break
pos = pos + 1
if pos == max: break
if stashed:
yield stashed
stashed = None
if line[pos] in '\r\n': # skip blank lines
yield (NL, line[pos:], (lnum, pos), (lnum, len(line)), line)
continue
if line[pos] == '#': # skip comments
comment_token = line[pos:].rstrip('\r\n')
nl_pos = pos + len(comment_token)
yield (COMMENT, comment_token,
(lnum, pos), (lnum, pos + len(comment_token)), line)
yield (NL, line[nl_pos:],
(lnum, nl_pos), (lnum, len(line)), line)
continue
if column > indents[-1]: # count indents
indents.append(column)
yield (INDENT, line[:pos], (lnum, 0), (lnum, pos), line)
while column < indents[-1]: # count dedents
if column not in indents:
raise IndentationError(
"unindent does not match any outer indentation level",
("<tokenize>", lnum, pos, line))
indents = indents[:-1]
if async_def and async_def_indent >= indents[-1]:
async_def = False
async_def_nl = False
async_def_indent = 0
yield (DEDENT, '', (lnum, pos), (lnum, pos), line)
if async_def and async_def_nl and async_def_indent >= indents[-1]:
async_def = False
async_def_nl = False
async_def_indent = 0
else: # continued statement
if not line:
raise TokenError("EOF in multi-line statement", (lnum, 0))
continued = 0
while pos < max:
pseudomatch = pseudoprog.match(line, pos)
if pseudomatch: # scan for tokens
start, end = pseudomatch.span(1)
spos, epos, pos = (lnum, start), (lnum, end), end
token, initial = line[start:end], line[start]
if initial in numchars or \
(initial == '.' and token != '.'): # ordinary number
yield (NUMBER, token, spos, epos, line)
elif initial in '\r\n':
newline = NEWLINE
if parenlev > 0:
newline = NL
elif async_def:
async_def_nl = True
if stashed:
yield stashed
stashed = None
yield (newline, token, spos, epos, line)
elif initial == '#':
assert not token.endswith("\n")
if stashed:
yield stashed
stashed = None
yield (COMMENT, token, spos, epos, line)
elif token in triple_quoted:
endprog = endprogs[token]
endmatch = endprog.match(line, pos)
if endmatch: # all on one line
pos = endmatch.end(0)
token = line[start:pos]
if stashed:
yield stashed
stashed = None
yield (STRING, token, spos, (lnum, pos), line)
else:
strstart = (lnum, start) # multiple lines
contstr = line[start:]
contline = line
break
elif initial in single_quoted or \
token[:2] in single_quoted or \
token[:3] in single_quoted:
if token[-1] == '\n': # continued string
strstart = (lnum, start)
endprog = (endprogs[initial] or endprogs[token[1]] or
endprogs[token[2]])
contstr, needcont = line[start:], 1
contline = line
break
else: # ordinary string
if stashed:
yield stashed
stashed = None
yield (STRING, token, spos, epos, line)
elif initial.isidentifier(): # ordinary name
if token in ('async', 'await'):
if async_def:
yield (ASYNC if token == 'async' else AWAIT,
token, spos, epos, line)
continue
tok = (NAME, token, spos, epos, line)
if token == 'async' and not stashed:
stashed = tok
continue
if token == 'def':
if (stashed
and stashed[0] == NAME
and stashed[1] == 'async'):
async_def = True
async_def_indent = indents[-1]
yield (ASYNC, stashed[1],
stashed[2], stashed[3],
stashed[4])
stashed = None
if stashed:
yield stashed
stashed = None
yield tok
elif initial == '\\': # continued stmt
# This yield is new; needed for better idempotency:
if stashed:
yield stashed
stashed = None
yield (NL, token, spos, (lnum, pos), line)
continued = 1
else:
if initial in '([{': parenlev = parenlev + 1
elif initial in ')]}': parenlev = parenlev - 1
if stashed:
yield stashed
stashed = None
yield (OP, token, spos, epos, line)
else:
yield (ERRORTOKEN, line[pos],
(lnum, pos), (lnum, pos+1), line)
pos = pos + 1
if stashed:
yield stashed
stashed = None
for indent in indents[1:]: # pop remaining indent levels
yield (DEDENT, '', (lnum, 0), (lnum, 0), '')
yield (ENDMARKER, '', (lnum, 0), (lnum, 0), '')
if __name__ == '__main__': # testing
import sys
if len(sys.argv) > 1: tokenize(open(sys.argv[1]).readline)
else: tokenize(sys.stdin.readline)
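Because generate_tokens() only needs a readline callable, an in-memory StringIO is enough to experiment with it, including the ASYNC/AWAIT stashing above. A hedged sketch, not part of the module itself:

# Hedged sketch: tokenize a string in memory and name each token type.
import io
from blib2to3.pgen2 import tokenize

src = "async def f():\n    await g()\n"
readline = io.StringIO(src).readline
for tok_type, tok_str, start, end, line in tokenize.generate_tokens(readline):
    print(tokenize.tok_name[tok_type], repr(tok_str), start, end)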

blib2to3/pgen2/tokenize.pyi Normal file
@@ -0,0 +1,30 @@
# Stubs for lib2to3.pgen2.tokenize (Python 3.6)
# NOTE: Only elements from __all__ are present.
from typing import Callable, Iterable, Iterator, List, Text, Tuple
from blib2to3.pgen2.token import * # noqa
_Coord = Tuple[int, int]
_TokenEater = Callable[[int, Text, _Coord, _Coord, Text], None]
_TokenInfo = Tuple[int, Text, _Coord, _Coord, Text]
class TokenError(Exception): ...
class StopTokenizing(Exception): ...
def tokenize(readline: Callable[[], Text], tokeneater: _TokenEater = ...) -> None: ...
class Untokenizer:
tokens: List[Text]
prev_row: int
prev_col: int
def __init__(self) -> None: ...
def add_whitespace(self, start: _Coord) -> None: ...
def untokenize(self, iterable: Iterable[_TokenInfo]) -> Text: ...
def compat(self, token: Tuple[int, Text], iterable: Iterable[_TokenInfo]) -> None: ...
def untokenize(iterable: Iterable[_TokenInfo]) -> Text: ...
def generate_tokens(
readline: Callable[[], Text]
) -> Iterator[_TokenInfo]: ...

blib2to3/pygram.py Normal file
@@ -0,0 +1,47 @@
# Copyright 2006 Google, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.
"""Export the Python grammar and symbols."""
# Python imports
import os
# Local imports
from .pgen2 import token
from .pgen2 import driver
from . import pytree
# The grammar file
_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), "Grammar.txt")
_PATTERN_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__),
"PatternGrammar.txt")
class Symbols(object):
def __init__(self, grammar):
"""Initializer.
Creates an attribute for each grammar symbol (nonterminal),
whose value is the symbol's type (an int >= 256).
"""
for name, symbol in grammar.symbol2number.items():
setattr(self, name, symbol)
# Python 2
python_grammar = driver.load_packaged_grammar("blib2to3", _GRAMMAR_FILE)
python_symbols = Symbols(python_grammar)
# Python 2 + from __future__ import print_function
python_grammar_no_print_statement = python_grammar.copy()
del python_grammar_no_print_statement.keywords["print"]
# Python 3
python_grammar_no_print_statement_no_exec_statement = python_grammar.copy()
del python_grammar_no_print_statement_no_exec_statement.keywords["print"]
del python_grammar_no_print_statement_no_exec_statement.keywords["exec"]
pattern_grammar = driver.load_packaged_grammar("blib2to3", _PATTERN_GRAMMAR_FILE)
pattern_symbols = Symbols(pattern_grammar)
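Every nonterminal in Grammar.txt becomes an integer attribute on python_symbols, and the grammar copies above differ only in which keywords were deleted. A small illustration:

# Illustration: symbol numbering and the keyword-stripped grammar copies.
from blib2to3 import pygram

print(pygram.python_symbols.funcdef >= 256)                          # symbols sit above the token range
print("print" in pygram.python_grammar.keywords)                     # True on the base grammar
print("print" in pygram.python_grammar_no_print_statement.keywords)  # False: deleted above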

blib2to3/pygram.pyi Normal file
@@ -0,0 +1,121 @@
# Stubs for lib2to3.pygram (Python 3.6)
from typing import Any
from blib2to3.pgen2.grammar import Grammar
class Symbols:
def __init__(self, grammar: Grammar) -> None: ...
class python_symbols(Symbols):
and_expr: int
and_test: int
annassign: int
arglist: int
argument: int
arith_expr: int
assert_stmt: int
async_funcdef: int
async_stmt: int
atom: int
augassign: int
break_stmt: int
classdef: int
comp_for: int
comp_if: int
comp_iter: int
comp_op: int
comparison: int
compound_stmt: int
continue_stmt: int
decorated: int
decorator: int
decorators: int
del_stmt: int
dictsetmaker: int
dotted_as_name: int
dotted_as_names: int
dotted_name: int
encoding_decl: int
eval_input: int
except_clause: int
exec_stmt: int
expr: int
expr_stmt: int
exprlist: int
factor: int
file_input: int
flow_stmt: int
for_stmt: int
funcdef: int
global_stmt: int
if_stmt: int
import_as_name: int
import_as_names: int
import_from: int
import_name: int
import_stmt: int
lambdef: int
listmaker: int
not_test: int
old_comp_for: int
old_comp_if: int
old_comp_iter: int
old_lambdef: int
old_test: int
or_test: int
parameters: int
pass_stmt: int
power: int
print_stmt: int
raise_stmt: int
return_stmt: int
shift_expr: int
simple_stmt: int
single_input: int
sliceop: int
small_stmt: int
star_expr: int
stmt: int
subscript: int
subscriptlist: int
suite: int
term: int
test: int
testlist: int
testlist1: int
testlist_gexp: int
testlist_safe: int
testlist_star_expr: int
tfpdef: int
tfplist: int
tname: int
trailer: int
try_stmt: int
typedargslist: int
varargslist: int
vfpdef: int
vfplist: int
vname: int
while_stmt: int
with_item: int
with_stmt: int
with_var: int
xor_expr: int
yield_arg: int
yield_expr: int
yield_stmt: int
class pattern_symbols(Symbols):
Alternative: int
Alternatives: int
Details: int
Matcher: int
NegatedUnit: int
Repeater: int
Unit: int
python_grammar: Grammar
python_grammar_no_print_statement: Grammar
python_grammar_no_print_statement_no_exec_statement: Grammar
python_grammar_no_exec_statement: Grammar
pattern_grammar: Grammar

blib2to3/pytree.py
@@ -10,48 +10,26 @@
There's also a pattern matching implementation here. There's also a pattern matching implementation here.
""" """
# mypy: allow-untyped-defs, allow-incomplete-defs
from collections.abc import Iterable, Iterator
from typing import Any, Optional, TypeVar, Union
from blib2to3.pgen2.grammar import Grammar
__author__ = "Guido van Rossum <guido@python.org>" __author__ = "Guido van Rossum <guido@python.org>"
import sys import sys
from io import StringIO from io import StringIO
HUGE: int = 0x7FFFFFFF # maximum repeat count, default max HUGE = 0x7FFFFFFF # maximum repeat count, default max
_type_reprs: dict[int, Union[str, int]] = {} _type_reprs = {}
def type_repr(type_num):
def type_repr(type_num: int) -> Union[str, int]:
global _type_reprs global _type_reprs
if not _type_reprs: if not _type_reprs:
from . import pygram from .pygram import python_symbols
if not hasattr(pygram, "python_symbols"):
pygram.initialize(cache_dir=None)
# printing tokens is possible but not as useful # printing tokens is possible but not as useful
# from .pgen2 import token // token.__dict__.items(): # from .pgen2 import token // token.__dict__.items():
for name in dir(pygram.python_symbols): for name, val in python_symbols.__dict__.items():
val = getattr(pygram.python_symbols, name) if type(val) == int: _type_reprs[val] = name
if type(val) == int:
_type_reprs[val] = name
return _type_reprs.setdefault(type_num, type_num) return _type_reprs.setdefault(type_num, type_num)
class Base(object):
_P = TypeVar("_P", bound="Base")
NL = Union["Node", "Leaf"]
Context = tuple[str, tuple[int, int]]
RawNode = tuple[int, Optional[str], Optional[Context], Optional[list[NL]]]
class Base:
""" """
Abstract base class for Node and Leaf. Abstract base class for Node and Leaf.
@ -62,18 +40,18 @@ class Base:
""" """
# Default values for instance variables # Default values for instance variables
type: int # int: token number (< 256) or symbol number (>= 256) type = None # int: token number (< 256) or symbol number (>= 256)
parent: Optional["Node"] = None # Parent node pointer, or None parent = None # Parent node pointer, or None
children: list[NL] # List of subnodes children = () # Tuple of subnodes
was_changed: bool = False was_changed = False
was_checked: bool = False was_checked = False
def __new__(cls, *args, **kwds): def __new__(cls, *args, **kwds):
"""Constructor that prevents Base from being instantiated.""" """Constructor that prevents Base from being instantiated."""
assert cls is not Base, "Cannot instantiate Base" assert cls is not Base, "Cannot instantiate Base"
return object.__new__(cls) return object.__new__(cls)
def __eq__(self, other: Any) -> bool: def __eq__(self, other):
""" """
Compare two nodes for equality. Compare two nodes for equality.
@ -83,11 +61,9 @@ def __eq__(self, other: Any) -> bool:
return NotImplemented return NotImplemented
return self._eq(other) return self._eq(other)
@property __hash__ = None # For Py3 compatibility.
def prefix(self) -> str:
raise NotImplementedError
def _eq(self: _P, other: _P) -> bool: def _eq(self, other):
""" """
Compare two nodes for equality. Compare two nodes for equality.
@ -98,10 +74,7 @@ def _eq(self: _P, other: _P) -> bool:
""" """
raise NotImplementedError raise NotImplementedError
def __deepcopy__(self: _P, memo: Any) -> _P: def clone(self):
return self.clone()
def clone(self: _P) -> _P:
""" """
Return a cloned (deep) copy of self. Return a cloned (deep) copy of self.
@ -109,7 +82,7 @@ def clone(self: _P) -> _P:
""" """
raise NotImplementedError raise NotImplementedError
def post_order(self) -> Iterator[NL]: def post_order(self):
""" """
Return a post-order iterator for the tree. Return a post-order iterator for the tree.
@ -117,7 +90,7 @@ def post_order(self) -> Iterator[NL]:
""" """
raise NotImplementedError raise NotImplementedError
def pre_order(self) -> Iterator[NL]: def pre_order(self):
""" """
Return a pre-order iterator for the tree. Return a pre-order iterator for the tree.
@ -125,7 +98,7 @@ def pre_order(self) -> Iterator[NL]:
""" """
raise NotImplementedError raise NotImplementedError
def replace(self, new: Union[NL, list[NL]]) -> None: def replace(self, new):
"""Replace this node with a new one in the parent.""" """Replace this node with a new one in the parent."""
assert self.parent is not None, str(self) assert self.parent is not None, str(self)
assert new is not None assert new is not None
@ -142,30 +115,27 @@ def replace(self, new: Union[NL, list[NL]]) -> None:
else: else:
l_children.append(ch) l_children.append(ch)
assert found, (self.children, self, new) assert found, (self.children, self, new)
self.parent.children = l_children
self.parent.changed() self.parent.changed()
self.parent.invalidate_sibling_maps() self.parent.children = l_children
for x in new: for x in new:
x.parent = self.parent x.parent = self.parent
self.parent = None self.parent = None
def get_lineno(self) -> Optional[int]: def get_lineno(self):
"""Return the line number which generated the invocant node.""" """Return the line number which generated the invocant node."""
node = self node = self
while not isinstance(node, Leaf): while not isinstance(node, Leaf):
if not node.children: if not node.children:
return None return
node = node.children[0] node = node.children[0]
return node.lineno return node.lineno
def changed(self) -> None: def changed(self):
if self.was_changed:
return
if self.parent: if self.parent:
self.parent.changed() self.parent.changed()
self.was_changed = True self.was_changed = True
def remove(self) -> Optional[int]: def remove(self):
""" """
Remove the node from the tree. Returns the position of the node in its Remove the node from the tree. Returns the position of the node in its
parent's children before it was removed. parent's children before it was removed.
@ -173,15 +143,13 @@ def remove(self) -> Optional[int]:
if self.parent: if self.parent:
for i, node in enumerate(self.parent.children): for i, node in enumerate(self.parent.children):
if node is self: if node is self:
del self.parent.children[i]
self.parent.changed() self.parent.changed()
self.parent.invalidate_sibling_maps() del self.parent.children[i]
self.parent = None self.parent = None
return i return i
return None
@property @property
def next_sibling(self) -> Optional[NL]: def next_sibling(self):
""" """
The node immediately following the invocant in their parent's children The node immediately following the invocant in their parent's children
list. If the invocant does not have a next sibling, it is None list. If the invocant does not have a next sibling, it is None
@ -189,13 +157,16 @@ def next_sibling(self) -> Optional[NL]:
if self.parent is None: if self.parent is None:
return None return None
if self.parent.next_sibling_map is None: # Can't use index(); we need to test by identity
self.parent.update_sibling_maps() for i, child in enumerate(self.parent.children):
assert self.parent.next_sibling_map is not None if child is self:
return self.parent.next_sibling_map[id(self)] try:
return self.parent.children[i+1]
except IndexError:
return None
@property @property
def prev_sibling(self) -> Optional[NL]: def prev_sibling(self):
""" """
The node immediately preceding the invocant in their parent's children The node immediately preceding the invocant in their parent's children
list. If the invocant does not have a previous sibling, it is None. list. If the invocant does not have a previous sibling, it is None.
@ -203,21 +174,23 @@ def prev_sibling(self) -> Optional[NL]:
if self.parent is None: if self.parent is None:
return None return None
if self.parent.prev_sibling_map is None: # Can't use index(); we need to test by identity
self.parent.update_sibling_maps() for i, child in enumerate(self.parent.children):
assert self.parent.prev_sibling_map is not None if child is self:
return self.parent.prev_sibling_map[id(self)] if i == 0:
return None
return self.parent.children[i-1]
def leaves(self) -> Iterator["Leaf"]: def leaves(self):
for child in self.children: for child in self.children:
yield from child.leaves() yield from child.leaves()
def depth(self) -> int: def depth(self):
if self.parent is None: if self.parent is None:
return 0 return 0
return 1 + self.parent.depth() return 1 + self.parent.depth()
def get_suffix(self) -> str: def get_suffix(self):
""" """
Return the string immediately following the invocant node. This is Return the string immediately following the invocant node. This is
effectively equivalent to node.next_sibling.prefix effectively equivalent to node.next_sibling.prefix
@ -225,24 +198,20 @@ def get_suffix(self) -> str:
next_sib = self.next_sibling next_sib = self.next_sibling
if next_sib is None: if next_sib is None:
return "" return ""
prefix = next_sib.prefix return next_sib.prefix
return prefix
if sys.version_info < (3, 0):
def __str__(self):
return str(self).encode("ascii")
class Node(Base): class Node(Base):
"""Concrete implementation for interior nodes.""" """Concrete implementation for interior nodes."""
fixers_applied: Optional[list[Any]] def __init__(self,type, children,
used_names: Optional[set[str]] context=None,
prefix=None,
def __init__( fixers_applied=None):
self,
type: int,
children: list[NL],
context: Optional[Any] = None,
prefix: Optional[str] = None,
fixers_applied: Optional[list[Any]] = None,
) -> None:
""" """
Initializer. Initializer.
@ -257,7 +226,6 @@ def __init__(
for ch in self.children: for ch in self.children:
assert ch.parent is None, repr(ch) assert ch.parent is None, repr(ch)
ch.parent = self ch.parent = self
self.invalidate_sibling_maps()
if prefix is not None: if prefix is not None:
self.prefix = prefix self.prefix = prefix
if fixers_applied: if fixers_applied:
@ -265,12 +233,13 @@ def __init__(
else: else:
self.fixers_applied = None self.fixers_applied = None
def __repr__(self) -> str: def __repr__(self):
"""Return a canonical string representation.""" """Return a canonical string representation."""
assert self.type is not None return "%s(%s, %r)" % (self.__class__.__name__,
return f"{self.__class__.__name__}({type_repr(self.type)}, {self.children!r})" type_repr(self.type),
self.children)
def __str__(self) -> str: def __unicode__(self):
""" """
Return a pretty string representation. Return a pretty string representation.
@ -278,33 +247,32 @@ def __str__(self) -> str:
""" """
return "".join(map(str, self.children)) return "".join(map(str, self.children))
def _eq(self, other: Base) -> bool: if sys.version_info > (3, 0):
__str__ = __unicode__
def _eq(self, other):
"""Compare two nodes for equality.""" """Compare two nodes for equality."""
return (self.type, self.children) == (other.type, other.children) return (self.type, self.children) == (other.type, other.children)
def clone(self) -> "Node": def clone(self):
assert self.type is not None
"""Return a cloned (deep) copy of self.""" """Return a cloned (deep) copy of self."""
return Node( return Node(self.type, [ch.clone() for ch in self.children],
self.type, fixers_applied=self.fixers_applied)
[ch.clone() for ch in self.children],
fixers_applied=self.fixers_applied,
)
def post_order(self) -> Iterator[NL]: def post_order(self):
"""Return a post-order iterator for the tree.""" """Return a post-order iterator for the tree."""
for child in self.children: for child in self.children:
yield from child.post_order() yield from child.post_order()
yield self yield self
def pre_order(self) -> Iterator[NL]: def pre_order(self):
"""Return a pre-order iterator for the tree.""" """Return a pre-order iterator for the tree."""
yield self yield self
for child in self.children: for child in self.children:
yield from child.pre_order() yield from child.pre_order()
     @property
-    def prefix(self) -> str:
+    def prefix(self):
         """
         The whitespace and comments preceding this node in the input.
         """
@@ -313,11 +281,11 @@ def prefix(self) -> str:
         return self.children[0].prefix

     @prefix.setter
-    def prefix(self, prefix: str) -> None:
+    def prefix(self, prefix):
         if self.children:
             self.children[0].prefix = prefix

-    def set_child(self, i: int, child: NL) -> None:
+    def set_child(self, i, child):
         """
         Equivalent to 'node.children[i] = child'. This method also sets the
         child's parent attribute appropriately.
@@ -326,9 +294,8 @@ def set_child(self, i: int, child: NL) -> None:
         self.children[i].parent = None
         self.children[i] = child
         self.changed()
-        self.invalidate_sibling_maps()

-    def insert_child(self, i: int, child: NL) -> None:
+    def insert_child(self, i, child):
         """
         Equivalent to 'node.children.insert(i, child)'. This method also sets
         the child's parent attribute appropriately.
@@ -336,9 +303,8 @@ def insert_child(self, i: int, child: NL) -> None:
         child.parent = self
         self.children.insert(i, child)
         self.changed()
-        self.invalidate_sibling_maps()

-    def append_child(self, child: NL) -> None:
+    def append_child(self, child):
         """
         Equivalent to 'node.children.append(child)'. This method also sets the
         child's parent attribute appropriately.
@@ -346,60 +312,27 @@ def append_child(self, child: NL) -> None:
         child.parent = self
         self.children.append(child)
         self.changed()
-        self.invalidate_sibling_maps()
-
-    def invalidate_sibling_maps(self) -> None:
-        self.prev_sibling_map: Optional[dict[int, Optional[NL]]] = None
-        self.next_sibling_map: Optional[dict[int, Optional[NL]]] = None
-
-    def update_sibling_maps(self) -> None:
-        _prev: dict[int, Optional[NL]] = {}
-        _next: dict[int, Optional[NL]] = {}
-        self.prev_sibling_map = _prev
-        self.next_sibling_map = _next
-        previous: Optional[NL] = None
-        for current in self.children:
-            _prev[id(current)] = previous
-            _next[id(previous)] = current
-            previous = current
-        _next[id(current)] = None
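On the main side, every tree mutation above ends by invalidating the parent's sibling maps, which next_sibling/prev_sibling then rebuild lazily and consult as a dict keyed by id(child), turning repeated sibling lookups into O(1) dictionary hits instead of linear scans over children. A minimal self-contained sketch of that caching idea (MiniNode is a hypothetical stand-in, not the blib2to3 class):

from typing import Dict, List, Optional

class MiniNode:
    """Simplified stand-in illustrating lazily cached sibling lookups."""

    def __init__(self) -> None:
        self.children: List["MiniNode"] = []
        self.next_sibling_map: Optional[Dict[int, Optional["MiniNode"]]] = None

    def append_child(self, child: "MiniNode") -> None:
        self.children.append(child)
        self.next_sibling_map = None  # any mutation invalidates the cache

    def next_sibling_of(self, child: "MiniNode") -> Optional["MiniNode"]:
        if self.next_sibling_map is None:
            # Rebuild once; afterwards every lookup is a single dict access
            # instead of a scan through self.children.
            self.next_sibling_map = {}
            previous: Optional[MiniNode] = None
            for current in self.children:
                if previous is not None:
                    self.next_sibling_map[id(previous)] = current
                previous = current
            if previous is not None:
                self.next_sibling_map[id(previous)] = None
        return self.next_sibling_map[id(child)]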
 class Leaf(Base):

     """Concrete implementation for leaf nodes."""

     # Default values for instance variables
-    value: str
-    fixers_applied: list[Any]
-    bracket_depth: int
-    # Changed later in brackets.py
-    opening_bracket: Optional["Leaf"] = None
-    used_names: Optional[set[str]]
     _prefix = ""  # Whitespace and comments preceding this token in the input
-    lineno: int = 0  # Line where this token starts in the input
-    column: int = 0  # Column where this token starts in the input
-    # If not None, this Leaf is created by converting a block of fmt off/skip
-    # code, and `fmt_pass_converted_first_leaf` points to the first Leaf in the
-    # converted code.
-    fmt_pass_converted_first_leaf: Optional["Leaf"] = None
+    lineno = 0  # Line where this token starts in the input
+    column = 0  # Column where this token starts in the input

-    def __init__(
-        self,
-        type: int,
-        value: str,
-        context: Optional[Context] = None,
-        prefix: Optional[str] = None,
-        fixers_applied: list[Any] = [],
-        opening_bracket: Optional["Leaf"] = None,
-        fmt_pass_converted_first_leaf: Optional["Leaf"] = None,
-    ) -> None:
+    def __init__(self, type, value,
+                 context=None,
+                 prefix=None,
+                 fixers_applied=[]):
         """
         Initializer.

         Takes a type constant (a token number < 256), a string value, and an
         optional context keyword argument.
         """
         assert 0 <= type < 256, type
         if context is not None:
             self._prefix, (self.lineno, self.column) = context
@@ -407,68 +340,60 @@ def __init__(
         self.value = value
         if prefix is not None:
             self._prefix = prefix
-        self.fixers_applied: Optional[list[Any]] = fixers_applied[:]
-        self.children = []
-        self.opening_bracket = opening_bracket
-        self.fmt_pass_converted_first_leaf = fmt_pass_converted_first_leaf
+        self.fixers_applied = fixers_applied[:]
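Both versions use the mutable default `fixers_applied=[]`, which is why the body immediately copies it with `fixers_applied[:]`: without the copy, one Leaf could mutate the single default list shared by every later call. A quick illustration of the failure mode the copy avoids (buggy/safe are hypothetical names, not from pytree):

def buggy(item, acc=[]):       # one list object shared across every call
    acc.append(item)
    return acc

def safe(item, acc=[]):        # copy first, as Leaf.__init__ does
    acc = acc[:]
    acc.append(item)
    return acc

assert buggy(1) == [1]
assert buggy(2) == [1, 2]      # surprise: state leaked between calls
assert safe(1) == [1]
assert safe(2) == [2]          # each call starts fresh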
-    def __repr__(self) -> str:
+    def __repr__(self):
         """Return a canonical string representation."""
         from .pgen2.token import tok_name
+        return "%s(%s, %r)" % (self.__class__.__name__,
+                               tok_name.get(self.type, self.type),
+                               self.value)

-        assert self.type is not None
-        return (
-            f"{self.__class__.__name__}({tok_name.get(self.type, self.type)},"
-            f" {self.value!r})"
-        )
-
-    def __str__(self) -> str:
+    def __unicode__(self):
         """
         Return a pretty string representation.

         This reproduces the input source exactly.
         """
-        return self._prefix + str(self.value)
+        return self.prefix + str(self.value)

-    def _eq(self, other: "Leaf") -> bool:
+    if sys.version_info > (3, 0):
+        __str__ = __unicode__
+
+    def _eq(self, other):
         """Compare two nodes for equality."""
         return (self.type, self.value) == (other.type, other.value)

-    def clone(self) -> "Leaf":
-        assert self.type is not None
+    def clone(self):
         """Return a cloned (deep) copy of self."""
-        return Leaf(
-            self.type,
-            self.value,
-            (self.prefix, (self.lineno, self.column)),
-            fixers_applied=self.fixers_applied,
-        )
+        return Leaf(self.type, self.value,
+                    (self.prefix, (self.lineno, self.column)),
+                    fixers_applied=self.fixers_applied)

-    def leaves(self) -> Iterator["Leaf"]:
+    def leaves(self):
         yield self

-    def post_order(self) -> Iterator["Leaf"]:
+    def post_order(self):
         """Return a post-order iterator for the tree."""
         yield self

-    def pre_order(self) -> Iterator["Leaf"]:
+    def pre_order(self):
         """Return a pre-order iterator for the tree."""
         yield self

     @property
-    def prefix(self) -> str:
+    def prefix(self):
         """
         The whitespace and comments preceding this token in the input.
         """
         return self._prefix

     @prefix.setter
-    def prefix(self, prefix: str) -> None:
+    def prefix(self, prefix):
         self.changed()
         self._prefix = prefix
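Because __str__ (via __unicode__ on the 18.5b0 side) returns the prefix joined to the value, concatenating the leaves of a parse tree reproduces the original source exactly, whitespace and comments included. A tiny demonstration against the stdlib lib2to3 twin of this class (an assumption about that module, which ships with CPython up to 3.12, not about blib2to3 itself):

from lib2to3.pgen2 import token
from lib2to3.pytree import Leaf

leaf = Leaf(token.NAME, "spam", prefix="    # fmt kept\n    ")
print(repr(str(leaf)))  # '    # fmt kept\n    spam' -- the prefix survives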
-def convert(gr: Grammar, raw_node: RawNode) -> NL:
+def convert(gr, raw_node):
     """
     Convert raw node information to a Node or Leaf instance.

@@ -480,18 +405,15 @@ def convert(gr: Grammar, raw_node: RawNode) -> NL:
     if children or type in gr.number2symbol:
         # If there's exactly one child, return that child instead of
         # creating a new node.
-        assert children is not None
         if len(children) == 1:
             return children[0]
         return Node(type, children, context=context)
     else:
-        return Leaf(type, value or "", context=context)
+        return Leaf(type, value, context=context)
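convert() collapses any interior node with exactly one child into that child, so trivial single-production chains in the grammar never appear in the tree. A rough standalone sketch of just that collapsing rule (toy tuples, not the real Grammar/raw-node types):

# Toy raw nodes: (type, value, context, children). Types >= 256 are symbols.
def toy_convert(raw_node):
    type_, value, context, children = raw_node
    if children:
        if len(children) == 1:
            return children[0]          # collapse single-child productions
        return ("node", type_, children)
    return ("leaf", type_, value)

leaf = ("leaf", 1, "x")
assert toy_convert((256, None, None, [leaf])) == leaf  # collapsed away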
-_Results = dict[str, NL]
-
-
-class BasePattern:
+class BasePattern(object):

     """
     A pattern is a tree matching pattern.

@@ -507,27 +429,22 @@ class BasePattern:
     """

     # Defaults for instance variables
-    type: Optional[int]
-    type = None  # Node type (token if < 256, symbol if >= 256)
-    content: Any = None  # Optional content matching pattern
-    name: Optional[str] = None  # Optional name used to store match in results dict
+    type = None  # Node type (token if < 256, symbol if >= 256)
+    content = None  # Optional content matching pattern
+    name = None  # Optional name used to store match in results dict

     def __new__(cls, *args, **kwds):
         """Constructor that prevents BasePattern from being instantiated."""
         assert cls is not BasePattern, "Cannot instantiate BasePattern"
         return object.__new__(cls)

-    def __repr__(self) -> str:
-        assert self.type is not None
+    def __repr__(self):
         args = [type_repr(self.type), self.content, self.name]
         while args and args[-1] is None:
             del args[-1]
-        return f"{self.__class__.__name__}({', '.join(map(repr, args))})"
+        return "%s(%s)" % (self.__class__.__name__, ", ".join(map(repr, args)))

-    def _submatch(self, node, results=None) -> bool:
-        raise NotImplementedError
-
-    def optimize(self) -> "BasePattern":
+    def optimize(self):
         """
         A subclass can define this as a hook for optimizations.

@@ -535,7 +452,7 @@ def optimize(self) -> "BasePattern":
         """
         return self

-    def match(self, node: NL, results: Optional[_Results] = None) -> bool:
+    def match(self, node, results=None):
         """
         Does this pattern exactly match a node?

@@ -549,19 +466,18 @@ def match(self, node: NL, results: Optional[_Results] = None) -> bool:
         if self.type is not None and node.type != self.type:
             return False
         if self.content is not None:
-            r: Optional[_Results] = None
+            r = None
             if results is not None:
                 r = {}
             if not self._submatch(node, r):
                 return False
             if r:
-                assert results is not None
                 results.update(r)
         if results is not None and self.name:
             results[self.name] = node
         return True

-    def match_seq(self, nodes: list[NL], results: Optional[_Results] = None) -> bool:
+    def match_seq(self, nodes, results=None):
         """
         Does this pattern exactly match a sequence of nodes?

@@ -571,24 +487,20 @@ def match_seq(self, nodes: list[NL], results: Optional[_Results] = None) -> bool
             return False
         return self.match(nodes[0], results)

-    def generate_matches(self, nodes: list[NL]) -> Iterator[tuple[int, _Results]]:
+    def generate_matches(self, nodes):
         """
         Generator yielding all matches for this pattern.

         Default implementation for non-wildcard patterns.
         """
-        r: _Results = {}
+        r = {}
         if nodes and self.match(nodes[0], r):
             yield 1, r
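match() checks the type, delegates content checking to _submatch(), and finally records the node in the results dict under self.name, which is how named subpatterns hand matches back to callers. A hedged usage sketch against the stdlib lib2to3, which shares this API (assuming lib2to3 is importable, i.e. CPython 3.12 or earlier):

from lib2to3.pgen2 import token
from lib2to3.pytree import Leaf, LeafPattern

pattern = LeafPattern(token.NAME, content="print", name="callee")
results = {}
if pattern.match(Leaf(token.NAME, "print"), results):
    print(results["callee"])   # the matched Leaf is stored under the name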
 class LeafPattern(BasePattern):

-    def __init__(
-        self,
-        type: Optional[int] = None,
-        content: Optional[str] = None,
-        name: Optional[str] = None,
-    ) -> None:
+    def __init__(self, type=None, content=None, name=None):
         """
         Initializer. Takes optional type, content, and name.

@@ -608,7 +520,7 @@ def __init__(
         self.content = content
         self.name = name

-    def match(self, node: NL, results=None) -> bool:
+    def match(self, node, results=None):
         """Override match() to insist on a leaf node."""
         if not isinstance(node, Leaf):
             return False
@@ -631,14 +543,10 @@ def _submatch(self, node, results=None):

 class NodePattern(BasePattern):

-    wildcards: bool = False
-
-    def __init__(
-        self,
-        type: Optional[int] = None,
-        content: Optional[Iterable[str]] = None,
-        name: Optional[str] = None,
-    ) -> None:
+    wildcards = False
+
+    def __init__(self, type=None, content=None, name=None):
         """
         Initializer. Takes optional type, content, and name.

@@ -658,19 +566,16 @@ def __init__(
         assert type >= 256, type
         if content is not None:
             assert not isinstance(content, str), repr(content)
-            newcontent = list(content)
-            for i, item in enumerate(newcontent):
+            content = list(content)
+            for i, item in enumerate(content):
                 assert isinstance(item, BasePattern), (i, item)
-                # I don't even think this code is used anywhere, but it does cause
-                # unreachable errors from mypy. This function's signature does look
-                # odd though *shrug*.
-                if isinstance(item, WildcardPattern):  # type: ignore[unreachable]
-                    self.wildcards = True  # type: ignore[unreachable]
+                if isinstance(item, WildcardPattern):
+                    self.wildcards = True
         self.type = type
-        self.content = newcontent  # TODO: this is unbound when content is None
+        self.content = content
         self.name = name

-    def _submatch(self, node, results=None) -> bool:
+    def _submatch(self, node, results=None):
         """
         Match the pattern's content to the node's children.

@@ -699,6 +604,7 @@ def _submatch(self, node, results=None) -> bool:
 class WildcardPattern(BasePattern):

     """
     A wildcard pattern can match zero or more nodes.

@@ -711,16 +617,7 @@ class WildcardPattern(BasePattern):
     except it always uses non-greedy matching.
     """

-    min: int
-    max: int
-
-    def __init__(
-        self,
-        content: Optional[str] = None,
-        min: int = 0,
-        max: int = HUGE,
-        name: Optional[str] = None,
-    ) -> None:
+    def __init__(self, content=None, min=0, max=HUGE, name=None):
         """
         Initializer.

@@ -745,52 +642,40 @@ def __init__(
         """
         assert 0 <= min <= max <= HUGE, (min, max)
         if content is not None:
-            f = lambda s: tuple(s)
-            wrapped_content = tuple(map(f, content))  # Protect against alterations
+            content = tuple(map(tuple, content))  # Protect against alterations
             # Check sanity of alternatives
-            assert len(wrapped_content), repr(
-                wrapped_content
-            )  # Can't have zero alternatives
-            for alt in wrapped_content:
+            assert len(content), repr(content)  # Can't have zero alternatives
+            for alt in content:
                 assert len(alt), repr(alt)  # Can have empty alternatives
-            self.content = wrapped_content
+            self.content = content
         self.min = min
         self.max = max
         self.name = name

-    def optimize(self) -> Any:
+    def optimize(self):
         """Optimize certain stacked wildcard patterns."""
         subpattern = None
-        if (
-            self.content is not None
-            and len(self.content) == 1
-            and len(self.content[0]) == 1
-        ):
+        if (self.content is not None and
+                len(self.content) == 1 and len(self.content[0]) == 1):
             subpattern = self.content[0][0]
         if self.min == 1 and self.max == 1:
             if self.content is None:
                 return NodePattern(name=self.name)
             if subpattern is not None and self.name == subpattern.name:
                 return subpattern.optimize()
-        if (
-            self.min <= 1
-            and isinstance(subpattern, WildcardPattern)
-            and subpattern.min <= 1
-            and self.name == subpattern.name
-        ):
-            return WildcardPattern(
-                subpattern.content,
-                self.min * subpattern.min,
-                self.max * subpattern.max,
-                subpattern.name,
-            )
+        if (self.min <= 1 and isinstance(subpattern, WildcardPattern) and
+                subpattern.min <= 1 and self.name == subpattern.name):
+            return WildcardPattern(subpattern.content,
+                                   self.min*subpattern.min,
+                                   self.max*subpattern.max,
+                                   subpattern.name)
         return self

-    def match(self, node, results=None) -> bool:
+    def match(self, node, results=None):
         """Does this pattern exactly match a node?"""
         return self.match_seq([node], results)

-    def match_seq(self, nodes, results=None) -> bool:
+    def match_seq(self, nodes, results=None):
         """Does this pattern exactly match a sequence of nodes?"""
         for c, r in self.generate_matches(nodes):
             if c == len(nodes):
@@ -801,7 +686,7 @@ def match_seq(self, nodes, results=None) -> bool:
                 return True
         return False

-    def generate_matches(self, nodes) -> Iterator[tuple[int, _Results]]:
+    def generate_matches(self, nodes):
         """
         Generator yielding matches for a sequence of nodes.

@@ -846,7 +731,7 @@ def generate_matches(self, nodes) -> Iterator[tuple[int, _Results]]:
             if hasattr(sys, "getrefcount"):
                 sys.stderr = save_stderr

-    def _iterative_matches(self, nodes) -> Iterator[tuple[int, _Results]]:
+    def _iterative_matches(self, nodes):
         """Helper to iteratively yield the matches."""
         nodelen = len(nodes)
         if 0 >= self.min:
@@ -875,10 +760,10 @@ def _iterative_matches(self, nodes) -> Iterator[tuple[int, _Results]]:
                     new_results.append((c0 + c1, r))
             results = new_results

-    def _bare_name_matches(self, nodes) -> tuple[int, _Results]:
+    def _bare_name_matches(self, nodes):
         """Special optimized matcher for bare_name."""
         count = 0
-        r = {}  # type: _Results
+        r = {}
         done = False
         max = len(nodes)
         while not done and count < max:
@@ -888,11 +773,10 @@ def _bare_name_matches(self, nodes) -> tuple[int, _Results]:
                     count += 1
                     done = False
                     break
-        assert self.name is not None
         r[self.name] = nodes[:count]
         return count, r

-    def _recursive_matches(self, nodes, count) -> Iterator[tuple[int, _Results]]:
+    def _recursive_matches(self, nodes, count):
         """Helper to recursively yield the matches."""
         assert self.content is not None
         if count >= self.min:
@@ -900,7 +784,7 @@ def _recursive_matches(self, nodes, count) -> Iterator[tuple[int, _Results]]:
         if count < self.max:
             for alt in self.content:
                 for c0, r0 in generate_matches(alt, nodes):
-                    for c1, r1 in self._recursive_matches(nodes[c0:], count + 1):
+                    for c1, r1 in self._recursive_matches(nodes[c0:], count+1):
                         r = {}
                         r.update(r0)
                         r.update(r1)
@@ -908,7 +792,8 @@ def _recursive_matches(self, nodes, count) -> Iterator[tuple[int, _Results]]:
 class NegatedPattern(BasePattern):

-    def __init__(self, content: Optional[BasePattern] = None) -> None:
+    def __init__(self, content=None):
         """
         Initializer.

@@ -921,15 +806,15 @@ def __init__(self, content: Optional[BasePattern] = None) -> None:
             assert isinstance(content, BasePattern), repr(content)
         self.content = content

-    def match(self, node, results=None) -> bool:
+    def match(self, node):
         # We never match a node in its entirety
         return False

-    def match_seq(self, nodes, results=None) -> bool:
+    def match_seq(self, nodes):
         # We only match an empty sequence of nodes in its entirety
         return len(nodes) == 0

-    def generate_matches(self, nodes: list[NL]) -> Iterator[tuple[int, _Results]]:
+    def generate_matches(self, nodes):
         if self.content is None:
             # Return a match if there is an empty sequence
             if len(nodes) == 0:
@@ -941,9 +826,7 @@ def generate_matches(self, nodes: list[NL]) -> Iterator[tuple[int, _Results]]:
                 yield 0, {}

-def generate_matches(
-    patterns: list[BasePattern], nodes: list[NL]
-) -> Iterator[tuple[int, _Results]]:
+def generate_matches(patterns, nodes):
     """
     Generator yielding matches for a sequence of patterns and nodes.

@@ -955,7 +838,7 @@ def generate_matches(
     (count, results) tuples where:
         count: the entire sequence of patterns matches nodes[:count];
         results: dict containing named submatches.
     """
     if not patterns:
         yield 0, {}
     else:
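A detail worth calling out from the WildcardPattern code above: _recursive_matches yields as soon as count >= min and only then tries to extend toward max, which is what the class docstring means by non-greedy matching; the shortest matches come out first. A self-contained sketch of that ordering over plain strings (repeat_matches is a toy, not the real method):

from typing import Iterator, List

def repeat_matches(alt: str, items: List[str], min_: int, max_: int,
                   count: int = 0) -> Iterator[int]:
    """Yield how many leading items a repeated `alt` can consume,
    shortest first, mirroring WildcardPattern._recursive_matches."""
    if count >= min_:
        yield 0                      # the non-greedy "stop here" answer
    if count < max_ and items and items[0] == alt:
        for consumed in repeat_matches(alt, items[1:], min_, max_, count + 1):
            yield 1 + consumed

print(list(repeat_matches("a", ["a", "a", "b"], 0, 10)))  # [0, 1, 2]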

89
blib2to3/pytree.pyi Normal file
View File

@@ -0,0 +1,89 @@
# Stubs for lib2to3.pytree (Python 3.6)
import sys
from typing import Any, Callable, Dict, Iterator, List, Optional, Text, Tuple, TypeVar, Union
from blib2to3.pgen2.grammar import Grammar
_P = TypeVar('_P')
_NL = Union[Node, Leaf]
_Context = Tuple[Text, int, int]
_Results = Dict[Text, _NL]
_RawNode = Tuple[int, Text, _Context, Optional[List[_NL]]]
_Convert = Callable[[Grammar, _RawNode], Any]
HUGE: int
def type_repr(type_num: int) -> Text: ...
class Base:
type: int
parent: Optional[Node]
prefix: Text
children: List[_NL]
was_changed: bool
was_checked: bool
def __eq__(self, other: Any) -> bool: ...
def _eq(self: _P, other: _P) -> bool: ...
def clone(self: _P) -> _P: ...
def post_order(self) -> Iterator[_NL]: ...
def pre_order(self) -> Iterator[_NL]: ...
def replace(self, new: Union[_NL, List[_NL]]) -> None: ...
def get_lineno(self) -> int: ...
def changed(self) -> None: ...
def remove(self) -> Optional[int]: ...
@property
def next_sibling(self) -> Optional[_NL]: ...
@property
def prev_sibling(self) -> Optional[_NL]: ...
def leaves(self) -> Iterator[Leaf]: ...
def depth(self) -> int: ...
def get_suffix(self) -> Text: ...
if sys.version_info < (3,):
def get_prefix(self) -> Text: ...
def set_prefix(self, prefix: Text) -> None: ...
class Node(Base):
fixers_applied: List[Any]
def __init__(self, type: int, children: List[_NL], context: Optional[Any] = ..., prefix: Optional[Text] = ..., fixers_applied: Optional[List[Any]] = ...) -> None: ...
def set_child(self, i: int, child: _NL) -> None: ...
def insert_child(self, i: int, child: _NL) -> None: ...
def append_child(self, child: _NL) -> None: ...
class Leaf(Base):
lineno: int
column: int
value: Text
fixers_applied: List[Any]
def __init__(self, type: int, value: Text, context: Optional[_Context] = ..., prefix: Optional[Text] = ..., fixers_applied: List[Any] = ...) -> None: ...
# bolted on attributes by Black
bracket_depth: int
opening_bracket: Leaf
def convert(gr: Grammar, raw_node: _RawNode) -> _NL: ...
class BasePattern:
type: int
content: Optional[Text]
name: Optional[Text]
def optimize(self) -> BasePattern: ... # sic, subclasses are free to optimize themselves into different patterns
def match(self, node: _NL, results: Optional[_Results] = ...) -> bool: ...
def match_seq(self, nodes: List[_NL], results: Optional[_Results] = ...) -> bool: ...
def generate_matches(self, nodes: List[_NL]) -> Iterator[Tuple[int, _Results]]: ...
class LeafPattern(BasePattern):
def __init__(self, type: Optional[int] = ..., content: Optional[Text] = ..., name: Optional[Text] = ...) -> None: ...
class NodePattern(BasePattern):
wildcards: bool
def __init__(self, type: Optional[int] = ..., content: Optional[Text] = ..., name: Optional[Text] = ...) -> None: ...
class WildcardPattern(BasePattern):
min: int
max: int
def __init__(self, content: Optional[Text] = ..., min: int = ..., max: int = ..., name: Optional[Text] = ...) -> None: ...
class NegatedPattern(BasePattern):
def __init__(self, content: Optional[Text] = ...) -> None: ...
def generate_matches(patterns: List[BasePattern], nodes: List[_NL]) -> Iterator[Tuple[int, _Results]]: ...
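One detail worth noting in this stub: clone() is annotated `def clone(self: _P) -> _P`, the self-type idiom, so `Leaf(...).clone()` type-checks as a Leaf rather than a plain Base. A minimal sketch of the same idiom (Shape/Circle are hypothetical classes, not part of the stub):

from typing import TypeVar

_P = TypeVar("_P", bound="Shape")

class Shape:
    def clone(self: _P) -> _P:
        # Each subclass's clone() is inferred to return that subclass.
        return type(self)()

class Circle(Shape):
    def radius(self) -> int:
        return 1

c: Circle = Circle().clone()   # OK: clone() narrows to Circle
print(c.radius())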

View File

@@ -17,4 +17,4 @@ help:
 # Catch-all target: route all unknown targets to Sphinx using the new
 # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
 %: Makefile
 	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

View File

@@ -1 +1 @@
-<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="78" height="20"><linearGradient id="b" x2="0" y2="100%"><stop offset="0" stop-color="#bbb" stop-opacity=".1"/><stop offset="1" stop-opacity=".1"/></linearGradient><clipPath id="a"><rect width="78" height="20" rx="3" fill="#fff"/></clipPath><g clip-path="url(#a)"><path fill="#555" d="M0 0h47v20H0z"/><path fill="#7900CA" d="M47 0h31v20H47z"/><path fill="url(#b)" d="M0 0h78v20H0z"/></g><g fill="#fff" text-anchor="middle" font-family="DejaVu Sans,Verdana,Geneva,sans-serif" font-size="110"><text x="245" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="370">license</text><text x="245" y="140" transform="scale(.1)" textLength="370">license</text><text x="615" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="210">MIT</text><text x="615" y="140" transform="scale(.1)" textLength="210">MIT</text></g> </svg>
+<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="78" height="20"><linearGradient id="b" x2="0" y2="100%"><stop offset="0" stop-color="#bbb" stop-opacity=".1"/><stop offset="1" stop-opacity=".1"/></linearGradient><clipPath id="a"><rect width="78" height="20" rx="3" fill="#fff"/></clipPath><g clip-path="url(#a)"><path fill="#555" d="M0 0h47v20H0z"/><path fill="#7900CA" d="M47 0h31v20H47z"/><path fill="url(#b)" d="M0 0h78v20H0z"/></g><g fill="#fff" text-anchor="middle" font-family="DejaVu Sans,Verdana,Geneva,sans-serif" font-size="110"><text x="245" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="370">license</text><text x="245" y="140" transform="scale(.1)" textLength="370">license</text><text x="615" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="210">MIT</text><text x="615" y="140" transform="scale(.1)" textLength="210">MIT</text></g> </svg>

[SVG badge changed: 950 B before, 949 B after]

Binary file not shown.

[Binary image changed: 97 KiB before, 79 KiB after]

View File

@@ -1,3 +0,0 @@
```{include} ../AUTHORS.md
```

1
docs/authors.md Symbolic link
View File

@@ -0,0 +1 @@
_build/generated/authors.md

View File

@@ -1,3 +0,0 @@
```{include} ../CHANGES.md
```

1
docs/change_log.md Symbolic link
View File

@@ -0,0 +1 @@
_build/generated/change_log.md

View File

@@ -1,3 +0,0 @@
[flake8]
max-line-length = 88
extend-ignore = E203,E701

View File

@@ -1,3 +0,0 @@
[flake8]
max-line-length = 88
extend-ignore = E203,E701

View File

@@ -1,3 +0,0 @@
[flake8]
max-line-length = 88
extend-ignore = E203,E701

View File

@@ -1,2 +0,0 @@
[*.py]
profile = black

View File

@@ -1,2 +0,0 @@
[settings]
profile = black

View File

@@ -1,2 +0,0 @@
[tool.isort]
profile = 'black'

View File

@@ -1,2 +0,0 @@
[isort]
profile = black

View File

@@ -1,3 +0,0 @@
[pycodestyle]
max-line-length = 88
ignore = E203,E701

View File

@@ -1,3 +0,0 @@
[pycodestyle]
max-line-length = 88
ignore = E203,E701

View File

@@ -1,3 +0,0 @@
[pycodestyle]
max-line-length = 88
ignore = E203,E701

View File

@@ -1,2 +0,0 @@
[format]
max-line-length = 88

View File

@@ -1,2 +0,0 @@
[tool.pylint.format]
max-line-length = "88"

View File

@@ -1,2 +0,0 @@
[pylint]
max-line-length = 88

View File

@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
 #
 # Configuration file for the Sphinx documentation builder.
 #
@@ -11,143 +12,174 @@
 # add these directories to sys.path here. If the directory is relative to the
 # documentation root, use os.path.abspath to make it absolute, like shown here.
 #
-import os
-import re
-import string
-from importlib.metadata import version
+import ast
 from pathlib import Path
+import re
+import shutil
+import string

-from sphinx.application import Sphinx
+from recommonmark.parser import CommonMarkParser

 CURRENT_DIR = Path(__file__).parent


-def make_pypi_svg(version: str) -> None:
-    template: Path = CURRENT_DIR / "_static" / "pypi_template.svg"
-    target: Path = CURRENT_DIR / "_static" / "pypi.svg"
-    with open(str(template), encoding="utf8") as f:
-        svg: str = string.Template(f.read()).substitute(version=version)
-    with open(str(target), "w", encoding="utf8") as f:
+def get_version():
+    black_py = CURRENT_DIR / '..' / 'black.py'
+    _version_re = re.compile(r'__version__\s+=\s+(?P<version>.*)')
+    with open(str(black_py), 'r', encoding='utf8') as f:
+        version = _version_re.search(f.read()).group('version')
+    return str(ast.literal_eval(version))
+
+
+def make_pypi_svg(version):
+    template = CURRENT_DIR / '_static' / 'pypi_template.svg'
+    target = CURRENT_DIR / '_static' / 'pypi.svg'
+    with open(str(template), 'r', encoding='utf8') as f:
+        svg = string.Template(f.read()).substitute(version=version)
+    with open(str(target), 'w', encoding='utf8') as f:
         f.write(svg)
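On the 18.5b0 side, get_version() scrapes `__version__` out of black.py with a regex and uses ast.literal_eval to turn the quoted source text into the actual string, while main reads the installed package metadata instead. The regex/literal_eval step, traced on a sample line (the sample string is illustrative, not the real file contents):

import ast
import re

_version_re = re.compile(r"__version__\s+=\s+(?P<version>.*)")

sample = "__version__ = '18.5b0'\n"
match = _version_re.search(sample)
assert match is not None
version = str(ast.literal_eval(match.group("version")))
print(version)  # 18.5b0 -- literal_eval strips the quoting from the source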
-def replace_pr_numbers_with_links(content: str) -> str:
-    """Replaces all PR numbers with the corresponding GitHub link."""
-    return re.sub(r"#(\d+)", r"[#\1](https://github.com/psf/black/pull/\1)", content)
+def make_filename(line):
+    non_letters = re.compile(r'[^a-z]+')
+    filename = line[3:].rstrip().lower()
+    filename = non_letters.sub('_', filename)
+    if filename.startswith('_'):
+        filename = filename[1:]
+    if filename.endswith('_'):
+        filename = filename[:-1]
+    return filename + '.md'


-def handle_include_read(
-    app: Sphinx,
-    relative_path: Path,
-    parent_docname: str,
-    content: list[str],
-) -> None:
-    """Handler for the include-read sphinx event."""
-    if parent_docname == "change_log":
-        content[0] = replace_pr_numbers_with_links(content[0])
-
-
-def setup(app: Sphinx) -> None:
-    """Sets up a minimal sphinx extension."""
-    app.connect("include-read", handle_include_read)
-
-
-# Necessary so Click doesn't hit an encode error when called by
-# sphinxcontrib-programoutput on Windows.
-os.putenv("pythonioencoding", "utf-8")
+def generate_sections_from_readme():
+    target_dir = CURRENT_DIR / '_build' / 'generated'
+    readme = CURRENT_DIR / '..' / 'README.md'
+    shutil.rmtree(str(target_dir), ignore_errors=True)
+    target_dir.mkdir(parents=True)
+
+    output = None
+    target_dir = target_dir.relative_to(CURRENT_DIR)
+    with open(str(readme), 'r', encoding='utf8') as f:
+        for line in f:
+            if line.startswith('## '):
+                if output is not None:
+                    output.close()
+                filename = make_filename(line)
+                output_path = CURRENT_DIR / filename
+                if output_path.is_symlink() or output_path.is_file():
+                    output_path.unlink()
+                output_path.symlink_to(target_dir / filename)
+                output = open(str(output_path), 'w', encoding='utf8')
+                output.write(
+                    '[//]: # (NOTE: THIS FILE IS AUTOGENERATED FROM README.md)\n\n'
+                )
+
+            if output is None:
+                continue
+
+            if line.startswith('##'):
+                line = line[1:]
+            output.write(line)
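generate_sections_from_readme() splits every second-level README heading into its own page, with make_filename deriving the page name from the heading text. Tracing that helper on one plausible heading shows the transformation (the input line is an example, not necessarily a real README heading):

import re

def make_filename(line):
    non_letters = re.compile(r"[^a-z]+")
    filename = line[3:].rstrip().lower()
    filename = non_letters.sub("_", filename)
    if filename.startswith("_"):
        filename = filename[1:]
    if filename.endswith("_"):
        filename = filename[:-1]
    return filename + ".md"

print(make_filename("## The Black code style\n"))  # the_black_code_style.md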
 # -- Project information -----------------------------------------------------

-project = "Black"
-copyright = "2018-Present, Łukasz Langa and contributors to Black"
-author = "Łukasz Langa and contributors to Black"
+project = 'Black'
+copyright = '2018, Łukasz Langa and contributors to Black'
+author = 'Łukasz Langa and contributors to Black'

 # Autopopulate version
-# The version, including alpha/beta/rc tags, but not commit hash and datestamps
-release = version("black").split("+")[0]
+# The full version, including alpha/beta/rc tags.
+release = get_version()
 # The short X.Y version.
 version = release
-for sp in "abcfr":
+for sp in 'abcfr':
     version = version.split(sp)[0]
 make_pypi_svg(release)
+generate_sections_from_readme()
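The `for sp in 'abcfr'` loop derives the short X.Y version by cutting the release string at the first alpha/beta/candidate/final/release marker. Worked on this very tag:

release = "18.5b0"
version = release
for sp in "abcfr":
    version = version.split(sp)[0]
print(version)  # 18.5 -- the b0 pre-release suffix is dropped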
 # -- General configuration ---------------------------------------------------

 # If your documentation needs a minimal Sphinx version, state it here.
-needs_sphinx = "4.4"
+#
+# needs_sphinx = '1.0'

 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
 extensions = [
-    "sphinx.ext.autodoc",
-    "sphinx.ext.intersphinx",
-    "sphinx.ext.napoleon",
-    "myst_parser",
-    "sphinxcontrib.programoutput",
-    "sphinx_copybutton",
+    'sphinx.ext.autodoc',
+    'sphinx.ext.intersphinx',
+    'sphinx.ext.napoleon',
 ]

-# If you need extensions of a certain version or higher, list them here.
-needs_extensions = {"myst_parser": "0.13.7"}
-
 # Add any paths that contain templates here, relative to this directory.
-templates_path = ["_templates"]
+templates_path = ['_templates']
+
+source_parsers = {
+    '.md': CommonMarkParser,
+}

 # The suffix(es) of source filenames.
 # You can specify multiple suffix as a list of string:
-source_suffix = [".rst", ".md"]
+source_suffix = ['.rst', '.md']

 # The master toctree document.
-master_doc = "index"
+master_doc = 'index'

 # The language for content autogenerated by Sphinx. Refer to documentation
 # for a list of supported languages.
 #
 # This is also used if you do content translation via gettext catalogs.
 # Usually you set "language" from the command line for these cases.
-language = "en"
+language = None

 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
 # This pattern also affects html_static_path and html_extra_path .
-exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
+exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

 # The name of the Pygments (syntax highlighting) style to use.
-pygments_style = "sphinx"
-
-# We need headers to be linkable to so ask MyST-Parser to autogenerate anchor IDs for
-# headers up to and including level 3.
-myst_heading_anchors = 3
-
-# Prettier support formatting some MyST syntax but not all, so let's disable the
-# unsupported yet still enabled by default ones.
-myst_disable_syntax = [
-    "colon_fence",
-    "myst_block_break",
-    "myst_line_comment",
-    "math_block",
-]
-
-# Optional MyST Syntaxes
-myst_enable_extensions = []
+pygments_style = 'sphinx'
 # -- Options for HTML output -------------------------------------------------

 # The theme to use for HTML and HTML Help pages. See the documentation for
 # a list of builtin themes.
 #
-html_theme = "furo"
-html_logo = "_static/logo2-readme.png"
+html_theme = 'alabaster'
+
+html_sidebars = {
+    '**': [
+        'about.html',
+        'navigation.html',
+        'relations.html',
+        'sourcelink.html',
+        'searchbox.html'
+    ]
+}
+
+html_theme_options = {
+    'show_related': False,
+    'description': '“Any color you like.”',
+    'github_button': True,
+    'github_user': 'ambv',
+    'github_repo': 'black',
+    'github_type': 'star',
+    'show_powered_by': True,
+    'fixed_sidebar': True,
+    'logo': 'logo2.png',
+}

 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ["_static"]
+html_static_path = ['_static']

 # Custom sidebar templates, must be a dictionary that maps document names
 # to template names.

@@ -163,28 +195,46 @@ def setup(app: Sphinx) -> None:
 # -- Options for HTMLHelp output ---------------------------------------------

 # Output file base name for HTML help builder.
-htmlhelp_basename = "blackdoc"
+htmlhelp_basename = 'blackdoc'


 # -- Options for LaTeX output ------------------------------------------------

+latex_elements = {
+    # The paper size ('letterpaper' or 'a4paper').
+    #
+    # 'papersize': 'letterpaper',
+
+    # The font size ('10pt', '11pt' or '12pt').
+    #
+    # 'pointsize': '10pt',
+
+    # Additional stuff for the LaTeX preamble.
+    #
+    # 'preamble': '',
+
+    # Latex figure (float) alignment
+    #
+    # 'figure_align': 'htbp',
+}
+
 # Grouping the document tree into LaTeX files. List of tuples
 # (source start file, target name, title,
 #  author, documentclass [howto, manual, or own class]).
-latex_documents = [(
-    master_doc,
-    "black.tex",
-    "Documentation for Black",
-    "Łukasz Langa and contributors to Black",
-    "manual",
-)]
+latex_documents = [
+    (master_doc, 'black.tex', 'Documentation for Black',
+     'Łukasz Langa and contributors to Black', 'manual'),
+]


 # -- Options for manual page output ------------------------------------------

 # One entry per manual page. List of tuples
 # (source start file, name, description, authors, manual section).
-man_pages = [(master_doc, "black", "Documentation for Black", [author], 1)]
+man_pages = [
+    (master_doc, 'black', 'Documentation for Black',
+     [author], 1)
+]
 # -- Options for Texinfo output ----------------------------------------------

@@ -192,15 +242,11 @@ def setup(app: Sphinx) -> None:
 # Grouping the document tree into Texinfo files. List of tuples
 # (source start file, target name, title, author,
 #  dir menu entry, description, category)
-texinfo_documents = [(
-    master_doc,
-    "Black",
-    "Documentation for Black",
-    author,
-    "Black",
-    "The uncompromising Python code formatter",
-    "Miscellaneous",
-)]
+texinfo_documents = [
+    (master_doc, 'Black', 'Documentation for Black',
+     author, 'Black', 'The uncompromising Python code formatter',
+     'Miscellaneous'),
+]


 # -- Options for Epub output -------------------------------------------------

@@ -221,21 +267,14 @@ def setup(app: Sphinx) -> None:
 # epub_uid = ''

 # A list of files that should not be packed into the epub file.
-epub_exclude_files = ["search.html"]
+epub_exclude_files = ['search.html']


 # -- Extension configuration -------------------------------------------------

-autodoc_member_order = "bysource"
-
-# -- sphinx-copybutton configuration ----------------------------------------
-copybutton_prompt_text = (
-    r">>> |\.\.\. |> |\$ |\# | In \[\d*\]: | {2,5}\.\.\.: | {5,8}: "
-)
-copybutton_prompt_is_regexp = True
-copybutton_remove_prompts = True
+autodoc_member_order = 'bysource'

 # -- Options for intersphinx extension ---------------------------------------

 # Example configuration for intersphinx: refer to the Python standard library.
-intersphinx_mapping = {"<name>": ("https://docs.python.org/3/", None)}
+intersphinx_mapping = {'https://docs.python.org/3/': None}

1
docs/contributing.md Symbolic link
View File

@@ -0,0 +1 @@
../CONTRIBUTING.md

Some files were not shown because too many files have changed in this diff.