Compare commits
No commits in common. "main" and "18.6b3" have entirely different histories.
10 .appveyor.yml Normal file
@@ -0,0 +1,10 @@
install:
  - C:\Python36\python.exe -m pip install mypy
  - C:\Python36\python.exe -m pip install -e .

# Not a C# project
build: off

test_script:
  - C:\Python36\python.exe tests/test_black.py
  - C:\Python36\python.exe -m mypy black.py tests/test_black.py

4 .coveragerc Normal file
@@ -0,0 +1,4 @@
[report]
omit =
  blib2to3/*
  */site-packages/*

8 .flake8
@@ -1,8 +1,8 @@
# This is an example .flake8 config, used when developing *Black* itself.
# Keep in sync with setup.cfg which is used for source packages.

[flake8]
# B905 should be enabled when we drop support for 3.9
ignore = E203, E266, E501, E701, E704, W503, B905, B907
# line length is intentionally set to 80 here because black uses Bugbear
# See https://black.readthedocs.io/en/stable/guides/using_black_with_other_tools.html#bugbear for more details
ignore = E203, E266, E501, W503
max-line-length = 80
max-complexity = 18
select = B,C,E,F,W,T4,B9

.git_archival.txt
@@ -1,3 +0,0 @@
node: $Format:%H$
node-date: $Format:%cI$
describe-name: $Format:%(describe:tags=true,match=[0-9]*)$

2 .gitattributes vendored
@@ -1,2 +0,0 @@
.git_archival.txt export-subst
*.py diff=python

16 .github/CODE_OF_CONDUCT.md vendored
@@ -1,11 +1,13 @@
# Treat each other well

Everyone participating in the _Black_ project, and in particular in the issue tracker,
pull requests, and social media activity, is expected to treat other people with respect
and more generally to follow the guidelines articulated in the
[Python Community Code of Conduct](https://www.python.org/psf/codeofconduct/).
Everyone participating in the *Black* project, and in particular in the
issue tracker, pull requests, and social media activity, is expected
to treat other people with respect and more generally to follow the
guidelines articulated in the [Python Community Code of
Conduct](https://www.python.org/psf/codeofconduct/).

At the same time, humor is encouraged. In fact, basic familiarity with Monty Python's
Flying Circus is expected. We are not savages.
At the same time, humor is encouraged. In fact, basic familiarity with
Monty Python's Flying Circus is expected. We are not savages.

And if you _really_ need to slap somebody, do it with a fish while dancing.
And if you *really* need to slap somebody, do it with a fish while
dancing.

14 .github/ISSUE_TEMPLATE.md vendored Normal file
@@ -0,0 +1,14 @@
Howdy! Sorry you're having trouble. To expedite your experience,
provide some basics for me:

Operating system:
Python version:
*Black* version:
Does also happen on master:

To answer the last question, follow these steps:
* create a new virtualenv (make sure it's the same Python version);
* clone this repository;
* run `pip install -e .`;
* make sure it's sane by running `python setup.py test`; and
* run `black` like you did last time.

66 .github/ISSUE_TEMPLATE/bug_report.md vendored
@@ -1,66 +0,0 @@
---
name: Bug report
about: Create a report to help us improve Black's quality
title: ""
labels: "T: bug"
assignees: ""
---

<!--
Please make sure that the bug is not already fixed either in newer versions or the
current development version. To confirm this, you have three options:

1. Update Black's version if a newer release exists: `pip install -U black`
2. Use the online formatter at <https://black.vercel.app/?version=main>, which will use
   the latest main branch. Note that the online formatter currently runs on
   an older version of Python and may not support newer syntax, such as the
   extended f-string syntax added in Python 3.12.
3. Or run _Black_ on your machine:
   - create a new virtualenv (make sure it's the same Python version);
   - clone this repository;
   - run `pip install -e .[d]`;
   - run `pip install -r test_requirements.txt`
   - make sure it's sane by running `python -m pytest`; and
   - run `black` like you did last time.
-->

**Describe the bug**

<!-- A clear and concise description of what the bug is. -->

**To Reproduce**

<!--
Minimal steps to reproduce the behavior with source code and Black's configuration.
-->

For example, take this code:

```python
this = "code"
```

And run it with these arguments:

```sh
$ black file.py --target-version py39
```

The resulting error is:

> cannot format file.py: INTERNAL ERROR: ...

**Expected behavior**

<!-- A clear and concise description of what you expected to happen. -->

**Environment**

<!-- Please complete the following information: -->

- Black's version: <!-- e.g. [main] -->
- OS and Python version: <!-- e.g. [Linux/Python 3.7.4rc1] -->

**Additional context**

<!-- Add any other context about the problem here. -->

12 .github/ISSUE_TEMPLATE/config.yml vendored
@@ -1,12 +0,0 @@
# See also: https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/configuring-issue-templates-for-your-repository#configuring-the-template-chooser

# This is the default and blank issues are useful so let's keep 'em.
blank_issues_enabled: true

contact_links:
  - name: Chat on Python Discord
    url: https://discord.gg/RtVdv86PrH
    about: |
      User support, questions, and other lightweight requests can be
      handled via the #black-formatter text channel we have on Python
      Discord.

27 .github/ISSUE_TEMPLATE/docs-issue.md vendored
@@ -1,27 +0,0 @@
---
name: Documentation
about: Report a problem with or suggest something for the documentation
title: ""
labels: "T: documentation"
assignees: ""
---

**Is this related to a problem? Please describe.**

<!-- A clear and concise description of what the problem is.
e.g. I'm always frustrated when [...] / I wished that [...] -->

**Describe the solution you'd like**

<!-- A clear and concise description of what you want to
happen or see changed. -->

**Describe alternatives you've considered**

<!-- A clear and concise description of any
alternative solutions or features you've considered. -->

**Additional context**

<!-- Add any other context or screenshots about the issue
here. -->

27 .github/ISSUE_TEMPLATE/feature_request.md vendored
@@ -1,27 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ""
labels: "T: enhancement"
assignees: ""
---

**Is your feature request related to a problem? Please describe.**

<!-- A clear and concise description of what the problem is.
e.g. I'm always frustrated when [...] -->

**Describe the solution you'd like**

<!-- A clear and concise description of what you want to
happen. -->

**Describe alternatives you've considered**

<!-- A clear and concise description of any
alternative solutions or features you've considered. -->

**Additional context**

<!-- Add any other context or screenshots about the feature request
here. -->

37 .github/ISSUE_TEMPLATE/style_issue.md vendored
@@ -1,37 +0,0 @@
---
name: Code style issue
about: Help us improve the Black code style
title: ""
labels: "T: style"
assignees: ""
---

**Describe the style change**

<!-- A clear and concise description of how the style can be
improved. -->

**Examples in the current _Black_ style**

<!-- Think of some short code snippets that show
how the current _Black_ style is not great: -->

```python
def f():
    "Make sure this code is blackened"""
    pass
```

**Desired style**

<!-- How do you think _Black_ should format the above snippets: -->

```python
def f(
):
    pass
```

**Additional context**

<!-- Add any other context about the problem here. -->

36 .github/PULL_REQUEST_TEMPLATE.md vendored
@@ -1,36 +0,0 @@
<!-- Hello! Thanks for submitting a PR. To help make things go a bit more
smoothly we would appreciate that you go through this template. -->

### Description

<!-- Good things to put here include: reasoning for the change (please link
any relevant issues!), any noteworthy (or hacky) choices to be aware of,
or what the problem resolved here looked like ... we won't mind a ranty
story :) -->

### Checklist - did you ...

<!-- If any of the following items aren't relevant for your contribution
please still tick them so we know you've gone through the checklist.

All user-facing changes should get an entry. Otherwise, signal to us
this should get the magical label to silence the CHANGELOG entry check.
Tests are required for bugfixes and new features. Documentation changes
are necessary for formatting and most enhancement changes. -->

- [ ] Add an entry in `CHANGES.md` if necessary?
- [ ] Add / update tests if necessary?
- [ ] Add new / update outdated documentation?

<!-- Just as a reminder, everyone in all psf/black spaces including PRs
must follow the PSF Code of Conduct (link below).

Finally, once again thanks for your time and effort. If you have any
feedback in regards to your experience contributing here, please
let us know!

Helpful links:

PSF COC: https://www.python.org/psf/conduct/
Contributing docs: https://black.readthedocs.io/en/latest/contributing/index.html
Chat on Python Discord: https://discord.gg/RtVdv86PrH -->

16 .github/dependabot.yml vendored
@@ -1,16 +0,0 @@
# https://docs.github.com/en/code-security/supply-chain-security/keeping-your-dependencies-updated-automatically/configuration-options-for-dependency-updates

version: 2
updates:
  - package-ecosystem: "github-actions"
    # Workflow files in .github/workflows will be checked
    directory: "/"
    schedule:
      interval: "weekly"
    labels: ["skip news", "C: dependencies"]

  - package-ecosystem: "pip"
    directory: "docs/"
    schedule:
      interval: "weekly"
    labels: ["skip news", "C: dependencies", "T: documentation"]

24 .github/workflows/changelog.yml vendored
@@ -1,24 +0,0 @@
name: changelog

on:
  pull_request:
    types: [opened, synchronize, labeled, unlabeled, reopened]

permissions:
  contents: read

jobs:
  build:
    name: Changelog Entry Check

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Grep CHANGES.md for PR number
        if: contains(github.event.pull_request.labels.*.name, 'skip news') != true
        run: |
          grep -Pz "\((\n\s*)?#${{ github.event.pull_request.number }}(\n\s*)?\)" CHANGES.md || \
          (echo "Please add '(#${{ github.event.pull_request.number }})' change line to CHANGES.md (or if appropriate, ask a maintainer to add the 'skip news' label)" && \
          exit 1)

155 .github/workflows/diff_shades.yml vendored
@@ -1,155 +0,0 @@
name: diff-shades

on:
  push:
    branches: [main]
    paths: ["src/**", "pyproject.toml", ".github/workflows/*"]

  pull_request:
    paths: ["src/**", "pyproject.toml", ".github/workflows/*"]

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.run_id }}
  cancel-in-progress: true

jobs:
  configure:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.set-config.outputs.matrix }}

    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install diff-shades and support dependencies
        run: |
          python -m pip install 'click>=8.1.7' packaging urllib3
          python -m pip install https://github.com/ichard26/diff-shades/archive/stable.zip

      - name: Calculate run configuration & metadata
        id: set-config
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >
          python scripts/diff_shades_gha_helper.py config ${{ github.event_name }}
          ${{ matrix.mode }}

  analysis:
    name: analysis / ${{ matrix.mode }}
    needs: configure
    runs-on: ubuntu-latest
    env:
      HATCH_BUILD_HOOKS_ENABLE: "1"
      # Clang is less picky with the C code it's given than gcc (and may
      # generate faster binaries too).
      CC: clang-18
    strategy:
      fail-fast: false
      matrix:
        include: ${{ fromJson(needs.configure.outputs.matrix) }}

    steps:
      - name: Checkout this repository (full clone)
        uses: actions/checkout@v4
        with:
          # The baseline revision could be rather old so a full clone is ideal.
          fetch-depth: 0

      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install diff-shades and support dependencies
        run: |
          python -m pip install https://github.com/ichard26/diff-shades/archive/stable.zip
          python -m pip install 'click>=8.1.7' packaging urllib3
          # After checking out old revisions, this might not exist so we'll use a copy.
          cat scripts/diff_shades_gha_helper.py > helper.py
          git config user.name "diff-shades-gha"
          git config user.email "diff-shades-gha@example.com"

      - name: Attempt to use cached baseline analysis
        id: baseline-cache
        uses: actions/cache@v4
        with:
          path: ${{ matrix.baseline-analysis }}
          key: ${{ matrix.baseline-cache-key }}

      - name: Build and install baseline revision
        if: steps.baseline-cache.outputs.cache-hit != 'true'
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >
          ${{ matrix.baseline-setup-cmd }}
          && python -m pip install .

      - name: Analyze baseline revision
        if: steps.baseline-cache.outputs.cache-hit != 'true'
        run: >
          diff-shades analyze -v --work-dir projects-cache/
          ${{ matrix.baseline-analysis }} ${{ matrix.force-flag }}

      - name: Build and install target revision
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >
          ${{ matrix.target-setup-cmd }}
          && python -m pip install .

      - name: Analyze target revision
        run: >
          diff-shades analyze -v --work-dir projects-cache/
          ${{ matrix.target-analysis }} --repeat-projects-from
          ${{ matrix.baseline-analysis }} ${{ matrix.force-flag }}

      - name: Generate HTML diff report
        run: >
          diff-shades --dump-html diff.html compare --diff
          ${{ matrix.baseline-analysis }} ${{ matrix.target-analysis }}

      - name: Upload diff report
        uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.mode }}-diff.html
          path: diff.html

      - name: Upload baseline analysis
        uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.baseline-analysis }}
          path: ${{ matrix.baseline-analysis }}

      - name: Upload target analysis
        uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.target-analysis }}
          path: ${{ matrix.target-analysis }}

      - name: Generate summary file (PR only)
        if: github.event_name == 'pull_request' && matrix.mode == 'preview-changes'
        run: >
          python helper.py comment-body ${{ matrix.baseline-analysis }}
          ${{ matrix.target-analysis }} ${{ matrix.baseline-sha }}
          ${{ matrix.target-sha }} ${{ github.event.pull_request.number }}

      - name: Upload summary file (PR only)
        if: github.event_name == 'pull_request' && matrix.mode == 'preview-changes'
        uses: actions/upload-artifact@v4
        with:
          name: .pr-comment.json
          path: .pr-comment.json

      - name: Verify zero changes (PR only)
        if: matrix.mode == 'assert-no-changes'
        run: >
          diff-shades compare --check ${{ matrix.baseline-analysis }} ${{ matrix.target-analysis }}
          || (echo "Please verify you didn't change the stable code style unintentionally!" && exit 1)

      - name: Check for failed files for target revision
        # Even if the previous step failed, we should still check for failed files.
        if: always()
        run: >
          diff-shades show-failed --check --show-log ${{ matrix.target-analysis }}

49 .github/workflows/diff_shades_comment.yml vendored
@@ -1,49 +0,0 @@
name: diff-shades-comment

on:
  workflow_run:
    workflows: [diff-shades]
    types: [completed]

permissions:
  pull-requests: write

jobs:
  comment:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "*"

      - name: Install support dependencies
        run: |
          python -m pip install pip --upgrade
          python -m pip install click packaging urllib3

      - name: Get details from initial workflow run
        id: metadata
        env:
          GITHUB_TOKEN: ${{ github.token }}
        run: >
          python scripts/diff_shades_gha_helper.py comment-details
          ${{github.event.workflow_run.id }}

      - name: Try to find pre-existing PR comment
        if: steps.metadata.outputs.needs-comment == 'true'
        id: find-comment
        uses: peter-evans/find-comment@3eae4d37986fb5a8592848f6a574fdf654e61f9e
        with:
          issue-number: ${{ steps.metadata.outputs.pr-number }}
          comment-author: "github-actions[bot]"
          body-includes: "diff-shades"

      - name: Create or update PR comment
        if: steps.metadata.outputs.needs-comment == 'true'
        uses: peter-evans/create-or-update-comment@71345be0265236311c031f5c7866368bd1eff043
        with:
          comment-id: ${{ steps.find-comment.outputs.comment-id }}
          issue-number: ${{ steps.metadata.outputs.pr-number }}
          body: ${{ steps.metadata.outputs.comment-body }}
          edit-mode: replace

40 .github/workflows/doc.yml vendored
@@ -1,40 +0,0 @@
name: Documentation

on: [push, pull_request]

permissions:
  contents: read

jobs:
  build:
    # We want to run on external PRs, but not on our own internal PRs as they'll be run
    # by the push to the branch. Without this if check, checks are duplicated since
    # internal PRs match both the push and pull_request events.
    if:
      github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
      github.repository

    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, windows-latest]

    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4

      - name: Set up latest Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.13"
          allow-prereleases: true

      - name: Install dependencies
        run: |
          python -m pip install uv
          python -m uv venv
          python -m uv pip install -e ".[d]"
          python -m uv pip install -r "docs/requirements.txt"

      - name: Build documentation
        run: sphinx-build -a -b html -W --keep-going docs/ docs/_build

69 .github/workflows/docker.yml vendored
@@ -1,69 +0,0 @@
name: docker

on:
  push:
    branches:
      - "main"
  release:
    types: [published]

permissions:
  contents: read

jobs:
  docker:
    if: github.repository == 'psf/black'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to DockerHub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Check + set version tag
        run:
          echo "GIT_TAG=$(git describe --candidates=0 --tags 2> /dev/null || echo
          latest_non_release)" >> $GITHUB_ENV

      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: pyfound/black:latest,pyfound/black:${{ env.GIT_TAG }}

      - name: Build and push latest_release tag
        if:
          ${{ github.event_name == 'release' && github.event.action == 'published' &&
          !github.event.release.prerelease }}
        uses: docker/build-push-action@v6
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: pyfound/black:latest_release

      - name: Build and push latest_prerelease tag
        if:
          ${{ github.event_name == 'release' && github.event.action == 'published' &&
          github.event.release.prerelease }}
        uses: docker/build-push-action@v6
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: true
          tags: pyfound/black:latest_prerelease

      - name: Image digest
        run: echo ${{ steps.docker_build.outputs.digest }}

43 .github/workflows/fuzz.yml vendored
@@ -1,43 +0,0 @@
name: Fuzz

on: [push, pull_request]

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}
  cancel-in-progress: true

permissions:
  contents: read

jobs:
  build:
    # We want to run on external PRs, but not on our own internal PRs as they'll be run
    # by the push to the branch. Without this if check, checks are duplicated since
    # internal PRs match both the push and pull_request events.
    if:
      github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
      github.repository

    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.9", "3.10", "3.11", "3.12.4", "3.13"]

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          allow-prereleases: true

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install --upgrade tox

      - name: Run fuzz tests
        run: |
          tox -e fuzz

48 .github/workflows/lint.yml vendored
@@ -1,48 +0,0 @@
name: Lint + format ourselves

on: [push, pull_request]

jobs:
  build:
    # We want to run on external PRs, but not on our own internal PRs as they'll be run
    # by the push to the branch. Without this if check, checks are duplicated since
    # internal PRs match both the push and pull_request events.
    if:
      github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
      github.repository

    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Assert PR target is main
        if: github.event_name == 'pull_request' && github.repository == 'psf/black'
        run: |
          if [ "$GITHUB_BASE_REF" != "main" ]; then
            echo "::error::PR targeting '$GITHUB_BASE_REF', please refile targeting 'main'." && exit 1
          fi

      - name: Set up latest Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.13"
          allow-prereleases: true

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          python -m pip install -e '.'
          python -m pip install tox

      - name: Run pre-commit hooks
        uses: pre-commit/action@v3.0.1

      - name: Format ourselves
        run: |
          tox -e run_self

      - name: Regenerate schema
        run: |
          tox -e generate_schema
          git diff --exit-code

130 .github/workflows/pypi_upload.yml vendored
@@ -1,130 +0,0 @@
name: Build and publish

on:
  release:
    types: [published]
  pull_request:
  push:
    branches:
      - main

permissions:
  contents: read

jobs:
  main:
    name: sdist + pure wheel
    runs-on: ubuntu-latest
    if: github.event_name == 'release'

    steps:
      - uses: actions/checkout@v4

      - name: Set up latest Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.13"
          allow-prereleases: true

      - name: Install latest pip, build, twine
        run: |
          python -m pip install --upgrade --disable-pip-version-check pip
          python -m pip install --upgrade build twine

      - name: Build wheel and source distributions
        run: python -m build

      - if: github.event_name == 'release'
        name: Upload to PyPI via Twine
        env:
          TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
        run: twine upload --verbose -u '__token__' dist/*

  generate_wheels_matrix:
    name: generate wheels matrix
    runs-on: ubuntu-latest
    outputs:
      include: ${{ steps.set-matrix.outputs.include }}
    steps:
      - uses: actions/checkout@v4
      # Keep cibuildwheel version in sync with below
      - name: Install cibuildwheel and pypyp
        run: |
          pipx install cibuildwheel==2.22.0
          pipx install pypyp==1.3.0
      - name: generate matrix
        if: github.event_name != 'pull_request'
        run: |
          {
            cibuildwheel --print-build-identifiers --platform linux \
            | pyp 'json.dumps({"only": x, "os": "ubuntu-latest"})' \
            && cibuildwheel --print-build-identifiers --platform macos \
            | pyp 'json.dumps({"only": x, "os": "macos-latest"})' \
            && cibuildwheel --print-build-identifiers --platform windows \
            | pyp 'json.dumps({"only": x, "os": "windows-latest"})'
          } | pyp 'json.dumps(list(map(json.loads, lines)))' > /tmp/matrix
        env:
          CIBW_ARCHS_LINUX: x86_64
          CIBW_ARCHS_MACOS: x86_64 arm64
          CIBW_ARCHS_WINDOWS: AMD64
      - name: generate matrix (PR)
        if: github.event_name == 'pull_request'
        run: |
          {
            cibuildwheel --print-build-identifiers --platform linux \
            | pyp 'json.dumps({"only": x, "os": "ubuntu-latest"})'
          } | pyp 'json.dumps(list(map(json.loads, lines)))' > /tmp/matrix
        env:
          CIBW_BUILD: "cp39-* cp313-*"
          CIBW_ARCHS_LINUX: x86_64
      - id: set-matrix
        run: echo "include=$(cat /tmp/matrix)" | tee -a $GITHUB_OUTPUT

  mypyc:
    name: mypyc wheels ${{ matrix.only }}
    needs: generate_wheels_matrix
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        include: ${{ fromJson(needs.generate_wheels_matrix.outputs.include) }}

    steps:
      - uses: actions/checkout@v4
      # Keep cibuildwheel version in sync with above
      - uses: pypa/cibuildwheel@v2.23.3
        with:
          only: ${{ matrix.only }}

      - name: Upload wheels as workflow artifacts
        uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.only }}-mypyc-wheels
          path: ./wheelhouse/*.whl

      - if: github.event_name == 'release'
        name: Upload wheels to PyPI via Twine
        env:
          TWINE_PASSWORD: ${{ secrets.PYPI_TOKEN }}
        run: pipx run twine upload --verbose -u '__token__' wheelhouse/*.whl

  update-stable-branch:
    name: Update stable branch
    needs: [main, mypyc]
    runs-on: ubuntu-latest
    if: github.event_name == 'release'
    permissions:
      contents: write

    steps:
      - name: Checkout stable branch
        uses: actions/checkout@v4
        with:
          ref: stable
          fetch-depth: 0

      - if: github.event_name == 'release'
        name: Update stable branch to release tag & push
        run: |
          git reset --hard ${{ github.event.release.tag_name }}
          git push

56 .github/workflows/release_tests.yml vendored
@@ -1,56 +0,0 @@
name: Release tool CI

on:
  push:
    paths:
      - .github/workflows/release_tests.yml
      - release.py
      - release_tests.py
  pull_request:
    paths:
      - .github/workflows/release_tests.yml
      - release.py
      - release_tests.py

jobs:
  build:
    # We want to run on external PRs, but not on our own internal PRs as they'll be run
    # by the push to the branch. Without this if check, checks are duplicated since
    # internal PRs match both the push and pull_request events.
    if:
      github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
      github.repository

    name: Running python ${{ matrix.python-version }} on ${{matrix.os}}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        python-version: ["3.13"]
        os: [macOS-latest, ubuntu-latest, windows-latest]

    steps:
      - uses: actions/checkout@v4
        with:
          # Give us all history, branches and tags
          fetch-depth: 0
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          allow-prereleases: true

      - name: Print Python Version
        run: python --version --version && which python

      - name: Print Git Version
        run: git --version && which git

      - name: Update pip, setuptools + wheels
        run: |
          python -m pip install --upgrade pip setuptools wheel

      - name: Run unit tests via coverage + print report
        run: |
          python -m pip install coverage
          coverage run scripts/release_tests.py
          coverage report --show-missing

110 .github/workflows/test.yml vendored
@@ -1,110 +0,0 @@
name: Test

on:
  push:
    paths-ignore:
      - "docs/**"
      - "*.md"

  pull_request:
    paths-ignore:
      - "docs/**"
      - "*.md"

permissions:
  contents: read

concurrency:
  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.run_id }}
  cancel-in-progress: true

jobs:
  main:
    # We want to run on external PRs, but not on our own internal PRs as they'll be run
    # by the push to the branch. Without this if check, checks are duplicated since
    # internal PRs match both the push and pull_request events.
    if:
      github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
      github.repository

    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.9", "3.10", "3.11", "3.12.4", "3.13", "pypy-3.9"]
        os: [ubuntu-latest, macOS-latest, windows-latest]

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          allow-prereleases: true

      - name: Install tox
        run: |
          python -m pip install --upgrade pip
          python -m pip install --upgrade tox

      - name: Unit tests
        if: "!startsWith(matrix.python-version, 'pypy')"
        run:
          tox -e ci-py$(echo ${{ matrix.python-version }} | tr -d '.') -- -v --color=yes

      - name: Unit tests (pypy)
        if: "startsWith(matrix.python-version, 'pypy')"
        run: tox -e ci-pypy3 -- -v --color=yes

      - name: Upload coverage to Coveralls
        # Upload coverage if we are on the main repository and
        # we're running on Linux (this action only supports Linux)
        if:
          github.repository == 'psf/black' && matrix.os == 'ubuntu-latest' &&
          !startsWith(matrix.python-version, 'pypy')
        uses: AndreMiras/coveralls-python-action@ac868b9540fad490f7ca82b8ca00480fd751ed19
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
          parallel: true
          flag-name: py${{ matrix.python-version }}-${{ matrix.os }}
          debug: true

  coveralls-finish:
    needs: main
    if: github.repository == 'psf/black'

    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Send finished signal to Coveralls
        uses: AndreMiras/coveralls-python-action@ac868b9540fad490f7ca82b8ca00480fd751ed19
        with:
          parallel-finished: true
          debug: true

  uvloop:
    if:
      github.event_name == 'push' || github.event.pull_request.head.repo.full_name !=
      github.repository
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macOS-latest]

    steps:
      - uses: actions/checkout@v4

      - name: Set up latest Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12.4"

      - name: Install black with uvloop
        run: |
          python -m pip install pip --upgrade --disable-pip-version-check
          python -m pip install -e ".[uvloop]"

      - name: Format ourselves
        run: python -m black --check src/ tests/

63 .github/workflows/upload_binary.yml vendored
@@ -1,63 +0,0 @@
name: Publish executables

on:
  release:
    types: [published]

permissions:
  contents: write # actions/upload-release-asset needs this.

jobs:
  build:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [windows-2019, ubuntu-22.04, macos-latest]
        include:
          - os: windows-2019
            pathsep: ";"
            asset_name: black_windows.exe
            executable_mime: "application/vnd.microsoft.portable-executable"
          - os: ubuntu-22.04
            pathsep: ":"
            asset_name: black_linux
            executable_mime: "application/x-executable"
          - os: macos-latest
            pathsep: ":"
            asset_name: black_macos
            executable_mime: "application/x-mach-binary"

    steps:
      - uses: actions/checkout@v4

      - name: Set up latest Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12.4"

      - name: Install Black and PyInstaller
        run: |
          python -m pip install --upgrade pip wheel
          python -m pip install .[colorama]
          python -m pip install pyinstaller

      - name: Build executable with PyInstaller
        run: >
          python -m PyInstaller -F --name ${{ matrix.asset_name }} --add-data
          'src/blib2to3${{ matrix.pathsep }}blib2to3' src/black/__main__.py

      - name: Quickly test executable
        run: |
          ./dist/${{ matrix.asset_name }} --version
          ./dist/${{ matrix.asset_name }} src --verbose

      - name: Upload binary as release asset
        uses: actions/upload-release-asset@v1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          upload_url: ${{ github.event.release.upload_url }}
          asset_path: dist/${{ matrix.asset_name }}
          asset_name: ${{ matrix.asset_name }}
          asset_content_type: ${{ matrix.executable_mime }}

20 .gitignore vendored
@@ -1,28 +1,8 @@
.venv
.coverage
.coverage.*
_build
.DS_Store
.vscode
.python-version
docs/_static/pypi.svg
.tox
__pycache__

# Packaging artifacts
black.egg-info
black.dist-info
build/
dist/
pip-wheel-metadata/
.eggs

src/_black_version.py
.idea

.dmypy.json
*.swp
.hypothesis/
venv/
.ipynb_checkpoints/
node_modules/

.pre-commit-config.yaml
@@ -1,83 +1,19 @@
# Note: don't use this config for your own repositories. Instead, see
# "Version control integration" in docs/integrations/source_version_control.md
exclude: ^(profiling/|tests/data/)
repos:
  - repo: local
# "Version control integration" in README.md.
- repo: local
  hooks:
    - id: check-pre-commit-rev-in-example
      name: Check pre-commit rev in example
      language: python
      entry: python -m scripts.check_pre_commit_rev_in_example
      files: '(CHANGES\.md|source_version_control\.md)$'
      additional_dependencies:
        &version_check_dependencies [
          commonmark==0.9.1,
          pyyaml==6.0.1,
          beautifulsoup4==4.9.3,
        ]

    - id: check-version-in-the-basics-example
      name: Check black version in the basics example
      language: python
      entry: python -m scripts.check_version_in_basics_example
      files: '(CHANGES\.md|the_basics\.md)$'
      additional_dependencies: *version_check_dependencies

  - repo: https://github.com/pycqa/isort
    rev: 6.0.1
    hooks:
      - id: isort

  - repo: https://github.com/pycqa/flake8
    rev: 7.2.0
    hooks:
      - id: flake8
        additional_dependencies:
          - flake8-bugbear==24.2.6
          - flake8-comprehensions
          - flake8-simplify
        exclude: ^src/blib2to3/

  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.15.0
    hooks:
      - id: mypy
        exclude: ^(docs/conf.py|scripts/generate_schema.py)$
        args: []
        additional_dependencies: &mypy_deps
          - types-PyYAML
          - types-atheris
          - tomli >= 0.2.6, < 2.0.0
          - click >= 8.2.0
          # Click is intentionally out-of-sync with pyproject.toml
          # v8.2 has breaking changes. We work around them at runtime, but we need the newer stubs.
          - packaging >= 22.0
          - platformdirs >= 2.1.0
          - pytokens >= 0.1.10
          - pytest
          - hypothesis
          - aiohttp >= 3.7.4
          - types-commonmark
          - urllib3
          - hypothesmith
      - id: mypy
        name: mypy (Python 3.10)
        files: scripts/generate_schema.py
        args: ["--python-version=3.10"]
        additional_dependencies: *mypy_deps

  - repo: https://github.com/rbubley/mirrors-prettier
    rev: v3.5.3
    hooks:
      - id: prettier
        types_or: [markdown, yaml, json]
        exclude: \.github/workflows/diff_shades\.yml

  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0
    hooks:
      - id: end-of-file-fixer
      - id: trailing-whitespace

ci:
  autoupdate_schedule: quarterly
  - id: black
    name: black
    language: system
    entry: python3 -m black
    files: ^(black|setup|tests/test_black|docs/conf)\.py$
  - id: flake8
    name: flake8
    language: system
    entry: flake8
    files: ^(black|setup|tests/test_black)\.py$
  - id: mypy
    name: mypy
    language: system
    entry: mypy
    files: ^(black|setup|tests/test_black)\.py$

.pre-commit-hooks.yaml
@@ -1,20 +1,7 @@
# Note that we recommend using https://github.com/psf/black-pre-commit-mirror instead
# This will work about 2x as fast as using the hooks in this repository
- id: black
  name: black
  description: "Black: The uncompromising Python code formatter"
  entry: black
  language: python
  minimum_pre_commit_version: 2.9.2
  require_serial: true
  types_or: [python, pyi]
- id: black-jupyter
  name: black-jupyter
  description:
    "Black: The uncompromising Python code formatter (with Jupyter Notebook support)"
  entry: black
  language: python
  minimum_pre_commit_version: 2.9.2
  require_serial: true
  types_or: [python, pyi, jupyter]
  additional_dependencies: [".[jupyter]"]
- id: black
  name: black
  description: 'Black: The uncompromising Python code formatter'
  entry: black
  language: python
  language_version: python3.6
  types: [python]

.prettierrc.yaml
@@ -1,3 +0,0 @@
proseWrap: always
printWidth: 88
endOfLine: auto

.readthedocs.yaml
@@ -1,21 +0,0 @@
version: 2

formats:
  - htmlzip

build:
  os: ubuntu-22.04
  tools:
    python: "3.11"

python:
  install:
    - requirements: docs/requirements.txt

    - method: pip
      path: .
      extra_requirements:
        - d

sphinx:
  configuration: docs/conf.py

25 .travis.yml Normal file
@@ -0,0 +1,25 @@
sudo: required
dist: xenial
language: python
cache: pip
before_install:
- if [[ $TRAVIS_PYTHON_VERSION == '3.7-dev' ]]; then sudo add-apt-repository ppa:deadsnakes/ppa -y; fi
- if [[ $TRAVIS_PYTHON_VERSION == '3.7-dev' ]]; then sudo sudo apt-get update; fi
install:
- pip install coverage coveralls flake8 flake8-bugbear mypy
- pip install -e .
script:
- coverage run tests/test_black.py
- if [[ $TRAVIS_PYTHON_VERSION == '3.6' ]]; then mypy black.py tests/test_black.py; fi
- if [[ $TRAVIS_PYTHON_VERSION == '3.6-dev' ]]; then flake8 black.py tests/test_black.py; fi
- if [[ $TRAVIS_PYTHON_VERSION == '3.7-dev' ]]; then black --check --verbose .; fi
after_success:
- coveralls
notifications:
  on_success: change
  on_failure: always
matrix:
  include:
    - python: 3.6
    - python: 3.6-dev
    - python: 3.7-dev

197 AUTHORS.md
@@ -1,197 +0,0 @@
# Authors

Glued together by [Łukasz Langa](mailto:lukasz@langa.pl).

Maintained with:

- [Carol Willing](mailto:carolcode@willingconsulting.com)
- [Carl Meyer](mailto:carl@oddbird.net)
- [Jelle Zijlstra](mailto:jelle.zijlstra@gmail.com)
- [Mika Naylor](mailto:mail@autophagy.io)
- [Zsolt Dollenstein](mailto:zsol.zsol@gmail.com)
- [Cooper Lees](mailto:me@cooperlees.com)
- [Richard Si](mailto:sichard26@gmail.com)
- [Felix Hildén](mailto:felix.hilden@gmail.com)
- [Batuhan Taskaya](mailto:batuhan@python.org)
- [Shantanu Jain](mailto:hauntsaninja@gmail.com)

Multiple contributions by:

- [Abdur-Rahmaan Janhangeer](mailto:arj.python@gmail.com)
- [Adam Johnson](mailto:me@adamj.eu)
- [Adam Williamson](mailto:adamw@happyassassin.net)
- [Alexander Huynh](mailto:ahrex-gh-psf-black@e.sc)
- [Alexandr Artemyev](mailto:mogost@gmail.com)
- [Alex Vandiver](mailto:github@chmrr.net)
- [Allan Simon](mailto:allan.simon@supinfo.com)
- Anders-Petter Ljungquist
- [Amethyst Reese](mailto:amy@n7.gg)
- [Andrew Thorp](mailto:andrew.thorp.dev@gmail.com)
- [Andrew Zhou](mailto:andrewfzhou@gmail.com)
- [Andrey](mailto:dyuuus@yandex.ru)
- [Andy Freeland](mailto:andy@andyfreeland.net)
- [Anthony Sottile](mailto:asottile@umich.edu)
- [Antonio Ossa Guerra](mailto:aaossa+black@uc.cl)
- [Arjaan Buijk](mailto:arjaan.buijk@gmail.com)
- [Arnav Borbornah](mailto:arnavborborah11@gmail.com)
- [Artem Malyshev](mailto:proofit404@gmail.com)
- [Asger Hautop Drewsen](mailto:asgerdrewsen@gmail.com)
- [Augie Fackler](mailto:raf@durin42.com)
- [Aviskar KC](mailto:aviskarkc10@gmail.com)
- Batuhan Taşkaya
- [Benjamin Wohlwend](mailto:bw@piquadrat.ch)
- [Benjamin Woodruff](mailto:github@benjam.info)
- [Bharat Raghunathan](mailto:bharatraghunthan9767@gmail.com)
- [Brandt Bucher](mailto:brandtbucher@gmail.com)
- [Brett Cannon](mailto:brett@python.org)
- [Bryan Bugyi](mailto:bryan.bugyi@rutgers.edu)
- [Bryan Forbes](mailto:bryan@reigndropsfall.net)
- [Calum Lind](mailto:calumlind@gmail.com)
- [Charles](mailto:peacech@gmail.com)
- Charles Reid
- [Christian Clauss](mailto:cclauss@bluewin.ch)
- [Christian Heimes](mailto:christian@python.org)
- [Chuck Wooters](mailto:chuck.wooters@microsoft.com)
- [Chris Rose](mailto:offline@offby1.net)
- Codey Oxley
- [Cong](mailto:congusbongus@gmail.com)
- [Cooper Ry Lees](mailto:me@cooperlees.com)
- [Dan Davison](mailto:dandavison7@gmail.com)
- [Daniel Hahler](mailto:github@thequod.de)
- [Daniel M. Capella](mailto:polycitizen@gmail.com)
- Daniele Esposti
- [David Hotham](mailto:david.hotham@metaswitch.com)
- [David Lukes](mailto:dafydd.lukes@gmail.com)
- [David Szotten](mailto:davidszotten@gmail.com)
- [Denis Laxalde](mailto:denis@laxalde.org)
- [Douglas Thor](mailto:dthor@transphormusa.com)
- dylanjblack
- [Eli Treuherz](mailto:eli@treuherz.com)
- [Emil Hessman](mailto:emil@hessman.se)
- [Felix Kohlgrüber](mailto:felix.kohlgrueber@gmail.com)
- [Florent Thiery](mailto:fthiery@gmail.com)
- Francisco
- [Giacomo Tagliabue](mailto:giacomo.tag@gmail.com)
- [Greg Gandenberger](mailto:ggandenberger@shoprunner.com)
- [Gregory P. Smith](mailto:greg@krypto.org)
- Gustavo Camargo
- hauntsaninja
- [Hadi Alqattan](mailto:alqattanhadizaki@gmail.com)
- [Hassan Abouelela](mailto:hassan@hassanamr.com)
- [Heaford](mailto:dan@heaford.com)
- [Hugo Barrera](mailto::hugo@barrera.io)
- Hugo van Kemenade
- [Hynek Schlawack](mailto:hs@ox.cx)
- [Ionite](mailto:dev@ionite.io)
- [Ivan Katanić](mailto:ivan.katanic@gmail.com)
- [Jakub Kadlubiec](mailto:jakub.kadlubiec@skyscanner.net)
- [Jakub Warczarek](mailto:jakub.warczarek@gmail.com)
- [Jan Hnátek](mailto:jan.hnatek@gmail.com)
- [Jason Fried](mailto:me@jasonfried.info)
- [Jason Friedland](mailto:jason@friedland.id.au)
- [jgirardet](mailto:ijkl@netc.fr)
- Jim Brännlund
- [Jimmy Jia](mailto:tesrin@gmail.com)
- [Joe Antonakakis](mailto:jma353@cornell.edu)
- [Jon Dufresne](mailto:jon.dufresne@gmail.com)
- [Jonas Obrist](mailto:ojiidotch@gmail.com)
- [Jonty Wareing](mailto:jonty@jonty.co.uk)
- [Jose Nazario](mailto:jose.monkey.org@gmail.com)
- [Joseph Larson](mailto:larson.joseph@gmail.com)
- [Josh Bode](mailto:joshbode@fastmail.com)
- [Josh Holland](mailto:anowlcalledjosh@gmail.com)
- [Joshua Cannon](mailto:joshdcannon@gmail.com)
- [José Padilla](mailto:jpadilla@webapplicate.com)
- [Juan Luis Cano Rodríguez](mailto:hello@juanlu.space)
- [kaiix](mailto:kvn.hou@gmail.com)
- [Katie McLaughlin](mailto:katie@glasnt.com)
- Katrin Leinweber
- [Keith Smiley](mailto:keithbsmiley@gmail.com)
- [Kenyon Ralph](mailto:kenyon@kenyonralph.com)
- [Kevin Kirsche](mailto:Kev.Kirsche+GitHub@gmail.com)
- [Kyle Hausmann](mailto:kyle.hausmann@gmail.com)
- [Kyle Sunden](mailto:sunden@wisc.edu)
- Lawrence Chan
- [Linus Groh](mailto:mail@linusgroh.de)
- [Loren Carvalho](mailto:comradeloren@gmail.com)
- [Luka Sterbic](mailto:luka.sterbic@gmail.com)
- [LukasDrude](mailto:mail@lukas-drude.de)
- Mahmoud Hossam
- Mariatta
- [Matt VanEseltine](mailto:vaneseltine@gmail.com)
- [Matthew Clapp](mailto:itsayellow+dev@gmail.com)
- [Matthew Walster](mailto:matthew@walster.org)
- Max Smolens
- [Michael Aquilina](mailto:michaelaquilina@gmail.com)
- [Michael Flaxman](mailto:michael.flaxman@gmail.com)
- [Michael J. Sullivan](mailto:sully@msully.net)
- [Michael McClimon](mailto:michael@mcclimon.org)
- [Miguel Gaiowski](mailto:miggaiowski@gmail.com)
- [Mike](mailto:roshi@fedoraproject.org)
- [mikehoyio](mailto:mikehoy@gmail.com)
- [Min ho Kim](mailto:minho42@gmail.com)
- [Miroslav Shubernetskiy](mailto:miroslav@miki725.com)
- MomIsBestFriend
- [Nathan Goldbaum](mailto:ngoldbau@illinois.edu)
- [Nathan Hunt](mailto:neighthan.hunt@gmail.com)
- [Neraste](mailto:neraste.herr10@gmail.com)
- [Nikolaus Waxweiler](mailto:madigens@gmail.com)
- [Ofek Lev](mailto:ofekmeister@gmail.com)
- [Osaetin Daniel](mailto:osaetindaniel@gmail.com)
- [otstrel](mailto:otstrel@gmail.com)
- [Pablo Galindo](mailto:Pablogsal@gmail.com)
- [Paul Ganssle](mailto:p.ganssle@gmail.com)
- [Paul Meinhardt](mailto:mnhrdt@gmail.com)
- [Peter Bengtsson](mailto:mail@peterbe.com)
- [Peter Grayson](mailto:pete@jpgrayson.net)
- [Peter Stensmyr](mailto:peter.stensmyr@gmail.com)
- pmacosta
- [Quentin Pradet](mailto:quentin@pradet.me)
- [Ralf Schmitt](mailto:ralf@systemexit.de)
- [Ramón Valles](mailto:mroutis@protonmail.com)
- [Richard Fearn](mailto:richardfearn@gmail.com)
- [Rishikesh Jha](mailto:rishijha424@gmail.com)
- [Rupert Bedford](mailto:rupert@rupertb.com)
- Russell Davis
- [Sagi Shadur](mailto:saroad2@gmail.com)
- [Rémi Verschelde](mailto:rverschelde@gmail.com)
- [Sami Salonen](mailto:sakki@iki.fi)
- [Samuel Cormier-Iijima](mailto:samuel@cormier-iijima.com)
- [Sanket Dasgupta](mailto:sanketdasgupta@gmail.com)
- Sergi
- [Scott Stevenson](mailto:scott@stevenson.io)
- Shantanu
- [shaoran](mailto:shaoran@sakuranohana.org)
- [Shinya Fujino](mailto:shf0811@gmail.com)
- springstan
- [Stavros Korokithakis](mailto:hi@stavros.io)
- [Stephen Rosen](mailto:sirosen@globus.org)
- [Steven M. Vascellaro](mailto:S.Vascellaro@gmail.com)
- [Sunil Kapil](mailto:snlkapil@gmail.com)
- [Sébastien Eustace](mailto:sebastien.eustace@gmail.com)
- [Tal Amuyal](mailto:TalAmuyal@gmail.com)
- [Terrance](mailto:git@terrance.allofti.me)
- [Thom Lu](mailto:thomas.c.lu@gmail.com)
- [Thomas Grainger](mailto:tagrain@gmail.com)
- [Tim Gates](mailto:tim.gates@iress.com)
- [Tim Swast](mailto:swast@google.com)
- [Timo](mailto:timo_tk@hotmail.com)
- Toby Fleming
- [Tom Christie](mailto:tom@tomchristie.com)
- [Tony Narlock](mailto:tony@git-pull.com)
- [Tsuyoshi Hombashi](mailto:tsuyoshi.hombashi@gmail.com)
- [Tushar Chandra](mailto:tusharchandra2018@u.northwestern.edu)
- [Tushar Sadhwani](mailto:tushar.sadhwani000@gmail.com)
- [Tzu-ping Chung](mailto:uranusjr@gmail.com)
- [Utsav Shah](mailto:ukshah2@illinois.edu)
- utsav-dbx
- vezeli
- [Ville Skyttä](mailto:ville.skytta@iki.fi)
- [Vishwas B Sharma](mailto:sharma.vishwas88@gmail.com)
- [Vlad Emelianov](mailto:volshebnyi@gmail.com)
- [williamfzc](mailto:178894043@qq.com)
- [wouter bolsterlee](mailto:wouter@bolsterl.ee)
- Yazdan
- [Yngve Høiseth](mailto:yngve@hoiseth.net)
- [Yurii Karabas](mailto:1998uriyyo@gmail.com)
- [Zac Hatfield-Dodds](mailto:zac@zhd.dev)

1997 CHANGES.md
File diff suppressed because it is too large

22 CITATION.cff
@@ -1,22 +0,0 @@
cff-version: 1.2.0
title: "Black: The uncompromising Python code formatter"
message: >-
  If you use this software, please cite it using the metadata from this file.
type: software
authors:
  - family-names: Langa
    given-names: Łukasz
  - name: "contributors to Black"
repository-code: "https://github.com/psf/black"
url: "https://black.readthedocs.io/en/stable/"
abstract: >-
  Black is the uncompromising Python code formatter. By using it, you agree to cede
  control over minutiae of hand-formatting. In return, Black gives you speed,
  determinism, and freedom from pycodestyle nagging about formatting. You will save time
  and mental energy for more important matters.

  Blackened code looks the same regardless of the project you're reading. Formatting
  becomes transparent after a while and you can focus on the content instead.

  Black makes code review faster by producing the smallest diffs possible.
license: MIT

CONTRIBUTING.md
@@ -1,13 +1,60 @@
# Contributing to _Black_
# Contributing to *Black*

Welcome future contributor! We're happy to see you willing to make the project better.
Welcome! Happy to see you willing to make the project better. Have you
read the entire [user documentation](https://black.readthedocs.io/en/latest/)
yet?

If you aren't familiar with _Black_, or are looking for documentation on something
specific, the [user documentation](https://black.readthedocs.io/en/latest/) is the best
place to look.

For getting started on contributing, please read the
[contributing documentation](https://black.readthedocs.org/en/latest/contributing/) for
all you need to know.
## Bird's eye view

Thank you, and we look forward to your contributions!
In terms of inspiration, *Black* is about as configurable as *gofmt*.
This is deliberate.

Bug reports and fixes are always welcome! Please follow the [issue
template on GitHub](https://github.com/ambv/black/issues/new) for best
results.

Before you suggest a new feature or configuration knob, ask yourself why
you want it. If it enables better integration with some workflow, fixes
an inconsistency, speeds things up, and so on - go for it! On the other
hand, if your answer is "because I don't like a particular formatting"
then you're not ready to embrace *Black* yet. Such changes are unlikely
to get accepted. You can still try but prepare to be disappointed.


## Technicalities

Development on the latest version of Python is preferred. As of this
writing it's 3.6.5. You can use any operating system. I am using macOS
myself and CentOS at work.

Install all development dependencies using:
```
$ pipenv install --dev
$ pre-commit install
```
If you haven't used `pipenv` before but are comfortable with virtualenvs,
just run `pip install pipenv` in the virtualenv you're already using and
invoke the command above from the cloned *Black* repo. It will do the
correct thing.

Before submitting pull requests, run tests with:
```
$ python setup.py test
```


## Hygiene

If you're fixing a bug, add a test. Run it first to confirm it fails,
then fix the bug, run it again to confirm it's really fixed.

If adding a new feature, add a test. In fact, always add a test. But
wait, before adding any large feature, first open an issue for us to
discuss the idea first.


## Finally

Thanks again for your interest in improving the project! You're taking
action when most people decide to sit and watch.

22
Dockerfile
@@ -1,22 +0,0 @@
FROM python:3.12-slim AS builder

RUN mkdir /src
COPY . /src/
ENV VIRTUAL_ENV=/opt/venv
ENV HATCH_BUILD_HOOKS_ENABLE=1
# Install build tools to compile black + dependencies
RUN apt update && apt install -y build-essential git python3-dev
RUN python -m venv $VIRTUAL_ENV
RUN python -m pip install --no-cache-dir hatch hatch-fancy-pypi-readme hatch-vcs
RUN . /opt/venv/bin/activate && pip install --no-cache-dir --upgrade pip setuptools \
  && cd /src && hatch build -t wheel \
  && pip install --no-cache-dir dist/*-cp* \
  && pip install black[colorama,d,uvloop]

FROM python:3.12-slim

# copy only Python packages to limit the image size
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

CMD ["/opt/venv/bin/black"]
3
MANIFEST.in
Normal file
@@ -0,0 +1,3 @@
include *.rst *.md LICENSE
recursive-include blib2to3 *.txt *.py LICENSE
recursive-include tests *.txt *.out *.diff *.py *.pyi *.pie *.toml
24
Pipfile
Normal file
@@ -0,0 +1,24 @@
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"

[packages]
attrs = ">=17.4.0"
click = ">=6.5"
appdirs = "*"
toml = ">=0.9.4"

[dev-packages]
pre-commit = "*"
coverage = "*"
flake8 = "*"
flake8-bugbear = "*"
flake8-mypy = "*"
mypy = "*"
readme_renderer = "*"
recommonmark = "*"
Sphinx = "*"
setuptools = ">=39.2.0"
twine = ">=1.11.0"
wheel = ">=0.31.1"
543
Pipfile.lock
generated
Normal file
@@ -0,0 +1,543 @@
{
    "_meta": {
        "hash": {
            "sha256": "ac93441465d67f28d5888486c43ced0b49c4a42c315a8d453064a6441fbf3de0"
        },
        "pipfile-spec": 6,
        "requires": {},
        "sources": [
            {
                "name": "pypi",
                "url": "https://pypi.python.org/simple",
                "verify_ssl": true
            }
        ]
    },
    "default": {
        "appdirs": {
            "hashes": [
                "sha256:9e5896d1372858f8dd3344faf4e5014d21849c756c8d5701f78f8a103b372d92",
                "sha256:d8b24664561d0d34ddfaec54636d502d7cea6e29c3eaf68f3df6180863e2166e"
            ],
            "index": "pypi",
            "version": "==1.4.3"
        },
        "attrs": {
            "hashes": [
                "sha256:4b90b09eeeb9b88c35bc642cbac057e45a5fd85367b985bd2809c62b7b939265",
                "sha256:e0d0eb91441a3b53dab4d9b743eafc1ac44476296a2053b6ca3af0b139faf87b"
            ],
            "index": "pypi",
            "version": "==18.1.0"
        },
        "click": {
            "hashes": [
                "sha256:29f99fc6125fbc931b758dc053b3114e55c77a6e4c6c3a2674a2dc986016381d",
                "sha256:f15516df478d5a56180fbf80e68f206010e6d160fc39fa508b65e035fd75130b"
            ],
            "index": "pypi",
            "version": "==6.7"
        },
        "toml": {
            "hashes": [
                "sha256:8e86bd6ce8cc11b9620cb637466453d94f5d57ad86f17e98a98d1f73e3baab2d"
            ],
            "index": "pypi",
            "version": "==0.9.4"
        }
    },
    "develop": {
        "alabaster": {
            "hashes": [
                "sha256:2eef172f44e8d301d25aff8068fddd65f767a3f04b5f15b0f4922f113aa1c732",
                "sha256:37cdcb9e9954ed60912ebc1ca12a9d12178c26637abdf124e3cde2341c257fe0"
            ],
            "version": "==0.7.10"
        },
        "aspy.yaml": {
            "hashes": [
                "sha256:04d26279513618f1024e1aba46471db870b3b33aef204c2d09bcf93bea9ba13f",
                "sha256:0a77e23fafe7b242068ffc0252cee130d3e509040908fc678d9d1060e7494baa"
            ],
            "version": "==1.1.1"
        },
        "attrs": {
            "hashes": [
                "sha256:4b90b09eeeb9b88c35bc642cbac057e45a5fd85367b985bd2809c62b7b939265",
                "sha256:e0d0eb91441a3b53dab4d9b743eafc1ac44476296a2053b6ca3af0b139faf87b"
            ],
            "index": "pypi",
            "version": "==18.1.0"
        },
        "babel": {
            "hashes": [
                "sha256:6778d85147d5d85345c14a26aada5e478ab04e39b078b0745ee6870c2b5cf669",
                "sha256:8cba50f48c529ca3fa18cf81fa9403be176d374ac4d60738b839122dfaaa3d23"
            ],
            "version": "==2.6.0"
        },
        "bleach": {
            "hashes": [
                "sha256:b8fa79e91f96c2c2cd9fd1f9eda906efb1b88b483048978ba62fef680e962b34",
                "sha256:eb7386f632349d10d9ce9d4a838b134d4731571851149f9cc2c05a9a837a9a44"
            ],
            "version": "==2.1.3"
        },
        "cached-property": {
            "hashes": [
                "sha256:67acb3ee8234245e8aea3784a492272239d9c4b487eba2fdcce9d75460d34520",
                "sha256:bf093e640b7294303c7cc7ba3212f00b7a07d0416c1d923465995c9ef860a139"
            ],
            "version": "==1.4.2"
        },
        "certifi": {
            "hashes": [
                "sha256:13e698f54293db9f89122b0581843a782ad0934a4fe0172d2a980ba77fc61bb7",
                "sha256:9fa520c1bacfb634fa7af20a76bcbd3d5fb390481724c597da32c719a7dca4b0"
            ],
            "version": "==2018.4.16"
        },
        "cffi": {
            "hashes": [
                "sha256:151b7eefd035c56b2b2e1eb9963c90c6302dc15fbd8c1c0a83a163ff2c7d7743",
                "sha256:1553d1e99f035ace1c0544050622b7bc963374a00c467edafac50ad7bd276aef",
                "sha256:1b0493c091a1898f1136e3f4f991a784437fac3673780ff9de3bcf46c80b6b50",
                "sha256:2ba8a45822b7aee805ab49abfe7eec16b90587f7f26df20c71dd89e45a97076f",
                "sha256:3c85641778460581c42924384f5e68076d724ceac0f267d66c757f7535069c93",
                "sha256:3eb6434197633b7748cea30bf0ba9f66727cdce45117a712b29a443943733257",
                "sha256:4c91af6e967c2015729d3e69c2e51d92f9898c330d6a851bf8f121236f3defd3",
                "sha256:770f3782b31f50b68627e22f91cb182c48c47c02eb405fd689472aa7b7aa16dc",
                "sha256:79f9b6f7c46ae1f8ded75f68cf8ad50e5729ed4d590c74840471fc2823457d04",
                "sha256:7a33145e04d44ce95bcd71e522b478d282ad0eafaf34fe1ec5bbd73e662f22b6",
                "sha256:857959354ae3a6fa3da6651b966d13b0a8bed6bbc87a0de7b38a549db1d2a359",
                "sha256:87f37fe5130574ff76c17cab61e7d2538a16f843bb7bca8ebbc4b12de3078596",
                "sha256:95d5251e4b5ca00061f9d9f3d6fe537247e145a8524ae9fd30a2f8fbce993b5b",
                "sha256:9d1d3e63a4afdc29bd76ce6aa9d58c771cd1599fbba8cf5057e7860b203710dd",
                "sha256:a36c5c154f9d42ec176e6e620cb0dd275744aa1d804786a71ac37dc3661a5e95",
                "sha256:ae5e35a2c189d397b91034642cb0eab0e346f776ec2eb44a49a459e6615d6e2e",
                "sha256:b0f7d4a3df8f06cf49f9f121bead236e328074de6449866515cea4907bbc63d6",
                "sha256:b75110fb114fa366b29a027d0c9be3709579602ae111ff61674d28c93606acca",
                "sha256:ba5e697569f84b13640c9e193170e89c13c6244c24400fc57e88724ef610cd31",
                "sha256:be2a9b390f77fd7676d80bc3cdc4f8edb940d8c198ed2d8c0be1319018c778e1",
                "sha256:d5d8555d9bfc3f02385c1c37e9f998e2011f0db4f90e250e5bc0c0a85a813085",
                "sha256:e55e22ac0a30023426564b1059b035973ec82186ddddbac867078435801c7801",
                "sha256:e90f17980e6ab0f3c2f3730e56d1fe9bcba1891eeea58966e89d352492cc74f4",
                "sha256:ecbb7b01409e9b782df5ded849c178a0aa7c906cf8c5a67368047daab282b184",
                "sha256:ed01918d545a38998bfa5902c7c00e0fee90e957ce036a4000a88e3fe2264917",
                "sha256:edabd457cd23a02965166026fd9bfd196f4324fe6032e866d0f3bd0301cd486f",
                "sha256:fdf1c1dc5bafc32bc5d08b054f94d659422b05aba244d6be4ddc1c72d9aa70fb"
            ],
            "version": "==1.11.5"
        },
        "cfgv": {
            "hashes": [
                "sha256:73f48a752bd7aab103c4b882d6596c6360b7aa63b34073dd2c35c7b4b8f93010",
                "sha256:d1791caa9ff5c0c7bce80e7ecc1921752a2eb7c2463a08ed9b6c96b85a2f75aa"
            ],
            "version": "==1.1.0"
        },
        "chardet": {
            "hashes": [
                "sha256:84ab92ed1c4d4f16916e05906b6b75a6c0fb5db821cc65e70cbd64a3e2a5eaae",
                "sha256:fc323ffcaeaed0e0a02bf4d117757b98aed530d9ed4531e3e15460124c106691"
            ],
            "version": "==3.0.4"
        },
        "cmarkgfm": {
            "hashes": [
                "sha256:0186dccca79483e3405217993b83b914ba4559fe9a8396efc4eea56561b74061",
                "sha256:1a625afc6f62da428df96ec325dc30866cc5781520cbd904ff4ec44cf018171c",
                "sha256:275905bb371a99285c74931700db3f0c078e7603bed383e8cf1a09f3ee05a3de",
                "sha256:50098f1c4950722521f0671e54139e0edc1837d63c990cf0f3d2c49607bb51a2",
                "sha256:50ed116d0b60a07df0dc7b180c28569064b9d37d1578d4c9021cff04d725cb63",
                "sha256:61a72def110eed903cd1848245897bcb80d295cd9d13944d4f9f30cba5b76655",
                "sha256:64186fb75d973a06df0e6ea12879533b71f6e7ba1ab01ffee7fc3e7534758889",
                "sha256:665303d34d7f14f10d7b0651082f25ebf7107f29ef3d699490cac16cdc0fc8ce",
                "sha256:70b18f843aec58e4e64aadce48a897fe7c50426718b7753aaee399e72df64190",
                "sha256:761ee7b04d1caee2931344ac6bfebf37102ffb203b136b676b0a71a3f0ea3c87",
                "sha256:811527e9b7280b136734ed6cb6845e5fbccaeaa132ddf45f0246cbe544016957",
                "sha256:987b0e157f70c72a84f3c2f9ef2d7ab0f26c08f2bf326c12c087ff9eebcb3ff5",
                "sha256:9fc6a2183d0a9b0974ec7cdcdad42bd78a3be674cc3e65f87dd694419b3b0ab7",
                "sha256:c573ea89dd95d41b6d8cf36799c34b6d5b1eac4aed0212dee0f0a11fb7b01e8f",
                "sha256:c5f1b9e8592d2c448c44e6bc0d91224b16ea5f8293908b1561de1f6d2d0658b1",
                "sha256:cbe581456357d8f0674d6a590b1aaf46c11d01dd0a23af147a51a798c3818034",
                "sha256:cf219bec69e601fe27e3974b7307d2f06082ab385d42752738ad2eb630a47d65",
                "sha256:d08bad67fa18f7e8ff738c090628ee0cbf0505d74a991c848d6d04abfe67b697",
                "sha256:d6f716d7b1182bf35862b5065112f933f43dd1aa4f8097c9bcfb246f71528a34",
                "sha256:e08e479102627641c7cb4ece421c6ed4124820b1758765db32201136762282d9",
                "sha256:e20ac21418af0298437d29599f7851915497ce9f2866bc8e86b084d8911ee061",
                "sha256:e25f53c37e319241b9a412382140dffac98ca756ba8f360ac7ab5e30cad9670a",
                "sha256:f20900f16377f2109783ae9348d34bc80530808439591c3d3df73d5c7ef1a00c"
            ],
            "version": "==0.4.2"
        },
        "commonmark": {
            "hashes": [
                "sha256:34d73ec8085923c023930dfc0bcd1c4286e28a2a82de094bb72fabcc0281cbe5"
            ],
            "version": "==0.5.4"
        },
        "coverage": {
            "hashes": [
                "sha256:03481e81d558d30d230bc12999e3edffe392d244349a90f4ef9b88425fac74ba",
                "sha256:0b136648de27201056c1869a6c0d4e23f464750fd9a9ba9750b8336a244429ed",
                "sha256:104ab3934abaf5be871a583541e8829d6c19ce7bde2923b2751e0d3ca44db60a",
                "sha256:15b111b6a0f46ee1a485414a52a7ad1d703bdf984e9ed3c288a4414d3871dcbd",
                "sha256:198626739a79b09fa0a2f06e083ffd12eb55449b5f8bfdbeed1df4910b2ca640",
                "sha256:1c383d2ef13ade2acc636556fd544dba6e14fa30755f26812f54300e401f98f2",
                "sha256:28b2191e7283f4f3568962e373b47ef7f0392993bb6660d079c62bd50fe9d162",
                "sha256:2eb564bbf7816a9d68dd3369a510be3327f1c618d2357fa6b1216994c2e3d508",
                "sha256:337ded681dd2ef9ca04ef5d93cfc87e52e09db2594c296b4a0a3662cb1b41249",
                "sha256:3a2184c6d797a125dca8367878d3b9a178b6fdd05fdc2d35d758c3006a1cd694",
                "sha256:3c79a6f7b95751cdebcd9037e4d06f8d5a9b60e4ed0cd231342aa8ad7124882a",
                "sha256:3d72c20bd105022d29b14a7d628462ebdc61de2f303322c0212a054352f3b287",
                "sha256:3eb42bf89a6be7deb64116dd1cc4b08171734d721e7a7e57ad64cc4ef29ed2f1",
                "sha256:4635a184d0bbe537aa185a34193898eee409332a8ccb27eea36f262566585000",
                "sha256:56e448f051a201c5ebbaa86a5efd0ca90d327204d8b059ab25ad0f35fbfd79f1",
                "sha256:5a13ea7911ff5e1796b6d5e4fbbf6952381a611209b736d48e675c2756f3f74e",
                "sha256:69bf008a06b76619d3c3f3b1983f5145c75a305a0fea513aca094cae5c40a8f5",
                "sha256:6bc583dc18d5979dc0f6cec26a8603129de0304d5ae1f17e57a12834e7235062",
                "sha256:701cd6093d63e6b8ad7009d8a92425428bc4d6e7ab8d75efbb665c806c1d79ba",
                "sha256:7608a3dd5d73cb06c531b8925e0ef8d3de31fed2544a7de6c63960a1e73ea4bc",
                "sha256:76ecd006d1d8f739430ec50cc872889af1f9c1b6b8f48e29941814b09b0fd3cc",
                "sha256:7aa36d2b844a3e4a4b356708d79fd2c260281a7390d678a10b91ca595ddc9e99",
                "sha256:7d3f553904b0c5c016d1dad058a7554c7ac4c91a789fca496e7d8347ad040653",
                "sha256:7e1fe19bd6dce69d9fd159d8e4a80a8f52101380d5d3a4d374b6d3eae0e5de9c",
                "sha256:8c3cb8c35ec4d9506979b4cf90ee9918bc2e49f84189d9bf5c36c0c1119c6558",
                "sha256:9d6dd10d49e01571bf6e147d3b505141ffc093a06756c60b053a859cb2128b1f",
                "sha256:9e112fcbe0148a6fa4f0a02e8d58e94470fc6cb82a5481618fea901699bf34c4",
                "sha256:ac4fef68da01116a5c117eba4dd46f2e06847a497de5ed1d64bb99a5fda1ef91",
                "sha256:b8815995e050764c8610dbc82641807d196927c3dbed207f0a079833ffcf588d",
                "sha256:be6cfcd8053d13f5f5eeb284aa8a814220c3da1b0078fa859011c7fffd86dab9",
                "sha256:c1bb572fab8208c400adaf06a8133ac0712179a334c09224fb11393e920abcdd",
                "sha256:de4418dadaa1c01d497e539210cb6baa015965526ff5afc078c57ca69160108d",
                "sha256:e05cb4d9aad6233d67e0541caa7e511fa4047ed7750ec2510d466e806e0255d6",
                "sha256:e4d96c07229f58cb686120f168276e434660e4358cc9cf3b0464210b04913e77",
                "sha256:f3f501f345f24383c0000395b26b726e46758b71393267aeae0bd36f8b3ade80",
                "sha256:f8a923a85cb099422ad5a2e345fe877bbc89a8a8b23235824a93488150e45f6e"
            ],
            "index": "pypi",
            "version": "==4.5.1"
        },
        "docutils": {
            "hashes": [
                "sha256:02aec4bd92ab067f6ff27a38a38a41173bf01bed8f89157768c1573f53e474a6",
                "sha256:51e64ef2ebfb29cae1faa133b3710143496eca21c530f3f71424d77687764274",
                "sha256:7a4bd47eaf6596e1295ecb11361139febe29b084a87bf005bf899f9a42edc3c6"
            ],
            "version": "==0.14"
        },
        "flake8": {
            "hashes": [
                "sha256:7253265f7abd8b313e3892944044a365e3f4ac3fcdcfb4298f55ee9ddf188ba0",
                "sha256:c7841163e2b576d435799169b78703ad6ac1bbb0f199994fc05f700b2a90ea37"
            ],
            "index": "pypi",
            "version": "==3.5.0"
        },
        "flake8-bugbear": {
            "hashes": [
                "sha256:541746f0f3b2f1a8d7278e1d2d218df298996b60b02677708560db7c7e620e3b",
                "sha256:5f14a99d458e29cb92be9079c970030e0dd398b2decb179d76d39a5266ea1578"
            ],
            "index": "pypi",
            "version": "==18.2.0"
        },
        "flake8-mypy": {
            "hashes": [
                "sha256:47120db63aff631ee1f84bac6fe8e64731dc66da3efc1c51f85e15ade4a3ba18",
                "sha256:cff009f4250e8391bf48990093cff85802778c345c8449d6498b62efefeebcbc"
            ],
            "index": "pypi",
            "version": "==17.8.0"
        },
        "future": {
            "hashes": [
                "sha256:e39ced1ab767b5936646cedba8bcce582398233d6a627067d4c6a454c90cfedb"
            ],
            "version": "==0.16.0"
        },
        "html5lib": {
            "hashes": [
                "sha256:20b159aa3badc9d5ee8f5c647e5efd02ed2a66ab8d354930bd9ff139fc1dc0a3",
                "sha256:66cb0dcfdbbc4f9c3ba1a63fdb511ffdbd4f513b2b6d81b80cd26ce6b3fb3736"
            ],
            "version": "==1.0.1"
        },
        "identify": {
            "hashes": [
                "sha256:067c206bb7a6926d30de0e77d6297729a176c0aa8b2d810a5be809cb46b045b2",
                "sha256:5eae91e34881bed02ea4f8c3886df8bd1232536d6f0dbf0405ff734268b7f425"
            ],
            "version": "==1.0.18"
        },
        "idna": {
            "hashes": [
                "sha256:2c6a5de3089009e3da7c5dde64a141dbc8551d5b7f6cf4ed7c2568d0cc520a8f",
                "sha256:8c7309c718f94b3a625cb648ace320157ad16ff131ae0af362c9f21b80ef6ec4"
            ],
            "version": "==2.6"
        },
        "imagesize": {
            "hashes": [
                "sha256:3620cc0cadba3f7475f9940d22431fc4d407269f1be59ec9b8edcca26440cf18",
                "sha256:5b326e4678b6925158ccc66a9fa3122b6106d7c876ee32d7de6ce59385b96315"
            ],
            "version": "==1.0.0"
        },
        "jinja2": {
            "hashes": [
                "sha256:74c935a1b8bb9a3947c50a54766a969d4846290e1e788ea44c1392163723c3bd",
                "sha256:f84be1bb0040caca4cea721fcbbbbd61f9be9464ca236387158b0feea01914a4"
            ],
            "version": "==2.10"
        },
        "markupsafe": {
            "hashes": [
                "sha256:a6be69091dac236ea9c6bc7d012beab42010fa914c459791d627dad4910eb665"
            ],
            "version": "==1.0"
        },
        "mccabe": {
            "hashes": [
                "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42",
                "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"
            ],
            "version": "==0.6.1"
        },
        "mypy": {
            "hashes": [
                "sha256:01cf289838f266ae7c6550c813181ee77d21eac9459dbf067e7a95a0a2db9721",
                "sha256:bc251cb31bc236d9fe4bcc442c994c45fff2541f7161ee52dc949741fe9ca3dd"
            ],
            "index": "pypi",
            "version": "==0.600"
        },
        "nodeenv": {
            "hashes": [
                "sha256:dd0a34001090ff042cfdb4b0c8d6a6f7ec9baa49733f00b695bb8a8b4700ba6c"
            ],
            "version": "==1.3.0"
        },
        "packaging": {
            "hashes": [
                "sha256:e9215d2d2535d3ae866c3d6efc77d5b24a0192cce0ff20e42896cc0664f889c0",
                "sha256:f019b770dd64e585a99714f1fd5e01c7a8f11b45635aa953fd41c689a657375b"
            ],
            "version": "==17.1"
        },
        "pkginfo": {
            "hashes": [
                "sha256:5878d542a4b3f237e359926384f1dde4e099c9f5525d236b1840cf704fa8d474",
                "sha256:a39076cb3eb34c333a0dd390b568e9e1e881c7bf2cc0aee12120636816f55aee"
            ],
            "version": "==1.4.2"
        },
        "pre-commit": {
            "hashes": [
                "sha256:4e7d6fde2d22abe6823e44a447197f9f2ff25822675f5245827e15be7ea1aba7",
                "sha256:dc8dc3f293d384fcd787cfb76f142c02540741eeef0822831983c827beff87ab"
            ],
            "index": "pypi",
            "version": "==1.10.1"
        },
        "pycodestyle": {
            "hashes": [
                "sha256:682256a5b318149ca0d2a9185d365d8864a768a28db66a84a2ea946bcc426766",
                "sha256:6c4245ade1edfad79c3446fadfc96b0de2759662dc29d07d80a6f27ad1ca6ba9"
            ],
            "version": "==2.3.1"
        },
        "pycparser": {
            "hashes": [
                "sha256:99a8ca03e29851d96616ad0404b4aad7d9ee16f25c9f9708a11faf2810f7b226"
            ],
            "version": "==2.18"
        },
        "pyflakes": {
            "hashes": [
                "sha256:08bd6a50edf8cffa9fa09a463063c425ecaaf10d1eb0335a7e8b1401aef89e6f",
                "sha256:8d616a382f243dbf19b54743f280b80198be0bca3a5396f1d2e1fca6223e8805"
            ],
            "version": "==1.6.0"
        },
        "pygments": {
            "hashes": [
                "sha256:78f3f434bcc5d6ee09020f92ba487f95ba50f1e3ef83ae96b9d5ffa1bab25c5d",
                "sha256:dbae1046def0efb574852fab9e90209b23f556367b5a320c0bcb871c77c3e8cc"
            ],
            "version": "==2.2.0"
        },
        "pyparsing": {
            "hashes": [
                "sha256:0832bcf47acd283788593e7a0f542407bd9550a55a8a8435214a1960e04bcb04",
                "sha256:281683241b25fe9b80ec9d66017485f6deff1af5cde372469134b56ca8447a07",
                "sha256:8f1e18d3fd36c6795bb7e02a39fd05c611ffc2596c1e0d995d34d67630426c18",
                "sha256:9e8143a3e15c13713506886badd96ca4b579a87fbdf49e550dbfc057d6cb218e",
                "sha256:b8b3117ed9bdf45e14dcc89345ce638ec7e0e29b2b579fa1ecf32ce45ebac8a5",
                "sha256:e4d45427c6e20a59bf4f88c639dcc03ce30d193112047f94012102f235853a58",
                "sha256:fee43f17a9c4087e7ed1605bd6df994c6173c1e977d7ade7b651292fab2bd010"
            ],
            "version": "==2.2.0"
        },
        "pytz": {
            "hashes": [
                "sha256:65ae0c8101309c45772196b21b74c46b2e5d11b6275c45d251b150d5da334555",
                "sha256:c06425302f2cf668f1bba7a0a03f3c1d34d4ebeef2c72003da308b3947c7f749"
            ],
            "version": "==2018.4"
        },
        "pyyaml": {
            "hashes": [
                "sha256:0c507b7f74b3d2dd4d1322ec8a94794927305ab4cebbe89cc47fe5e81541e6e8",
                "sha256:16b20e970597e051997d90dc2cddc713a2876c47e3d92d59ee198700c5427736",
                "sha256:3262c96a1ca437e7e4763e2843746588a965426550f3797a79fca9c6199c431f",
                "sha256:326420cbb492172dec84b0f65c80942de6cedb5233c413dd824483989c000608",
                "sha256:4474f8ea030b5127225b8894d626bb66c01cda098d47a2b0d3429b6700af9fd8",
                "sha256:592766c6303207a20efc445587778322d7f73b161bd994f227adaa341ba212ab",
                "sha256:5ac82e411044fb129bae5cfbeb3ba626acb2af31a8d17d175004b70862a741a7",
                "sha256:5f84523c076ad14ff5e6c037fe1c89a7f73a3e04cf0377cb4d017014976433f3",
                "sha256:827dc04b8fa7d07c44de11fabbc888e627fa8293b695e0f99cb544fdfa1bf0d1",
                "sha256:b4c423ab23291d3945ac61346feeb9a0dc4184999ede5e7c43e1ffb975130ae6",
                "sha256:bc6bced57f826ca7cb5125a10b23fd0f2fff3b7c4701d64c439a300ce665fff8",
                "sha256:c01b880ec30b5a6e6aa67b09a2fe3fb30473008c85cd6a67359a1b15ed6d83a4",
                "sha256:ca233c64c6e40eaa6c66ef97058cdc80e8d0157a443655baa1b2966e812807ca",
                "sha256:e863072cdf4c72eebf179342c94e6989c67185842d9997960b3e69290b2fa269"
            ],
            "version": "==3.12"
        },
        "readme-renderer": {
            "hashes": [
                "sha256:422404013378f0267ee128956021a47457db8eb487908b70b8a7de5fa935781a",
                "sha256:4547549521518be153ec428e86c0ee7c41ebd24c26b948e1d5627c94ad470808"
            ],
            "index": "pypi",
            "version": "==21.0"
        },
        "recommonmark": {
            "hashes": [
                "sha256:6e29c723abcf5533842376d87c4589e62923ecb6002a8e059eb608345ddaff9d",
                "sha256:cd8bf902e469dae94d00367a8197fb7b81fcabc9cfb79d520e0d22d0fbeaa8b7"
            ],
            "index": "pypi",
            "version": "==0.4.0"
        },
        "requests": {
            "hashes": [
                "sha256:6a1b267aa90cac58ac3a765d067950e7dbbf75b1da07e895d1f594193a40a38b",
                "sha256:9c443e7324ba5b85070c4a818ade28bfabedf16ea10206da1132edaa6dda237e"
            ],
            "version": "==2.18.4"
        },
        "requests-toolbelt": {
            "hashes": [
                "sha256:42c9c170abc2cacb78b8ab23ac957945c7716249206f90874651971a4acff237",
                "sha256:f6a531936c6fa4c6cfce1b9c10d5c4f498d16528d2a54a22ca00011205a187b5"
            ],
            "version": "==0.8.0"
        },
        "six": {
            "hashes": [
                "sha256:70e8a77beed4562e7f14fe23a786b54f6296e34344c23bc42f07b15018ff98e9",
                "sha256:832dc0e10feb1aa2c68dcc57dbb658f1c7e65b9b61af69048abc87a2db00a0eb"
            ],
            "version": "==1.11.0"
        },
        "snowballstemmer": {
            "hashes": [
                "sha256:919f26a68b2c17a7634da993d91339e288964f93c274f1343e3bbbe2096e1128",
                "sha256:9f3bcd3c401c3e862ec0ebe6d2c069ebc012ce142cce209c098ccb5b09136e89"
            ],
            "version": "==1.2.1"
        },
        "sphinx": {
            "hashes": [
                "sha256:85f7e32c8ef07f4ba5aeca728e0f7717bef0789fba8458b8d9c5c294cad134f3",
                "sha256:d45480a229edf70d84ca9fae3784162b1bc75ee47e480ffe04a4b7f21a95d76d"
            ],
            "index": "pypi",
            "version": "==1.7.5"
        },
        "sphinxcontrib-websupport": {
            "hashes": [
                "sha256:68ca7ff70785cbe1e7bccc71a48b5b6d965d79ca50629606c7861a21b206d9dd",
                "sha256:9de47f375baf1ea07cdb3436ff39d7a9c76042c10a769c52353ec46e4e8fc3b9"
            ],
            "version": "==1.1.0"
        },
        "toml": {
            "hashes": [
                "sha256:8e86bd6ce8cc11b9620cb637466453d94f5d57ad86f17e98a98d1f73e3baab2d"
            ],
            "index": "pypi",
            "version": "==0.9.4"
        },
        "tqdm": {
            "hashes": [
                "sha256:224291ee0d8c52d91b037fd90806f48c79bcd9994d3b0abc9e44b946a908fccd",
                "sha256:77b8424d41b31e68f437c6dd9cd567aebc9a860507cb42fbd880a5f822d966fe"
            ],
            "version": "==4.23.4"
        },
        "twine": {
            "hashes": [
                "sha256:08eb132bbaec40c6d25b358f546ec1dc96ebd2638a86eea68769d9e67fe2b129",
                "sha256:2fd9a4d9ff0bcacf41fdc40c8cb0cfaef1f1859457c9653fd1b92237cc4e9f25"
            ],
            "index": "pypi",
            "version": "==1.11.0"
        },
        "typed-ast": {
            "hashes": [
                "sha256:0948004fa228ae071054f5208840a1e88747a357ec1101c17217bfe99b299d58",
                "sha256:25d8feefe27eb0303b73545416b13d108c6067b846b543738a25ff304824ed9a",
                "sha256:29464a177d56e4e055b5f7b629935af7f49c196be47528cc94e0a7bf83fbc2b9",
                "sha256:2e214b72168ea0275efd6c884b114ab42e316de3ffa125b267e732ed2abda892",
                "sha256:3e0d5e48e3a23e9a4d1a9f698e32a542a4a288c871d33ed8df1b092a40f3a0f9",
                "sha256:519425deca5c2b2bdac49f77b2c5625781abbaf9a809d727d3a5596b30bb4ded",
                "sha256:57fe287f0cdd9ceaf69e7b71a2e94a24b5d268b35df251a88fef5cc241bf73aa",
                "sha256:668d0cec391d9aed1c6a388b0d5b97cd22e6073eaa5fbaa6d2946603b4871efe",
                "sha256:68ba70684990f59497680ff90d18e756a47bf4863c604098f10de9716b2c0bdd",
                "sha256:6de012d2b166fe7a4cdf505eee3aaa12192f7ba365beeefaca4ec10e31241a85",
                "sha256:79b91ebe5a28d349b6d0d323023350133e927b4de5b651a8aa2db69c761420c6",
                "sha256:8550177fa5d4c1f09b5e5f524411c44633c80ec69b24e0e98906dd761941ca46",
                "sha256:a8034021801bc0440f2e027c354b4eafd95891b573e12ff0418dec385c76785c",
                "sha256:bc978ac17468fe868ee589c795d06777f75496b1ed576d308002c8a5756fb9ea",
                "sha256:c05b41bc1deade9f90ddc5d988fe506208019ebba9f2578c622516fd201f5863",
                "sha256:c9b060bd1e5a26ab6e8267fd46fc9e02b54eb15fffb16d112d4c7b1c12987559",
                "sha256:edb04bdd45bfd76c8292c4d9654568efaedf76fe78eb246dde69bdb13b2dad87",
                "sha256:f19f2a4f547505fe9072e15f6f4ae714af51b5a681a97f187971f50c283193b6"
            ],
            "version": "==1.1.0"
        },
        "urllib3": {
            "hashes": [
                "sha256:06330f386d6e4b195fbfc736b297f58c5a892e4440e54d294d7004e3a9bbea1b",
                "sha256:cc44da8e1145637334317feebd728bd869a35285b93cbb4cca2577da7e62db4f"
            ],
            "version": "==1.22"
        },
        "virtualenv": {
            "hashes": [
                "sha256:2ce32cd126117ce2c539f0134eb89de91a8413a29baac49cbab3eb50e2026669",
                "sha256:ca07b4c0b54e14a91af9f34d0919790b016923d157afda5efdde55c96718f752"
            ],
            "version": "==16.0.0"
        },
        "webencodings": {
            "hashes": [
                "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78",
                "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"
            ],
            "version": "==0.5.1"
        },
        "wheel": {
            "hashes": [
                "sha256:0a2e54558a0628f2145d2fc822137e322412115173e8a2ddbe1c9024338ae83c",
                "sha256:80044e51ec5bbf6c894ba0bc48d26a8c20a9ba629f4ca19ea26ecfcf87685f5f"
            ],
            "index": "pypi",
            "version": "==0.31.1"
        }
    }
}
11
SECURITY.md
@@ -1,11 +0,0 @@
# Security Policy

## Supported Versions

Only the latest non-prerelease version is supported.

## Security contact information

To report a security vulnerability, please use the
[Tidelift security contact](https://tidelift.com/security). Tidelift will coordinate the
fix and disclosure.
79
action.yml
@@ -1,79 +0,0 @@
name: "Black"
description: "The uncompromising Python code formatter."
author: "Łukasz Langa and contributors to Black"
inputs:
  options:
    description:
      "Options passed to Black. Use `black --help` to see available options. Default:
      '--check --diff'"
    required: false
    default: "--check --diff"
  src:
    description: "Source to run Black. Default: '.'"
    required: false
    default: "."
  jupyter:
    description:
      "Set this option to true to include Jupyter Notebook files. Default: false"
    required: false
    default: false
  black_args:
    description: "[DEPRECATED] Black input arguments."
    required: false
    default: ""
    deprecationMessage:
      "Input `with.black_args` is deprecated. Use `with.options` and `with.src` instead."
  version:
    description: 'Python Version specifier (PEP440) - e.g. "21.5b1"'
    required: false
    default: ""
  use_pyproject:
    description: Read Black version specifier from pyproject.toml if `true`.
    required: false
    default: "false"
  summary:
    description: "Whether to add the output to the workflow summary"
    required: false
    default: true
branding:
  color: "black"
  icon: "check-circle"
runs:
  using: composite
  steps:
    - name: black
      run: |
        # Even when black fails, do not close the shell
        set +e

        if [ "$RUNNER_OS" == "Windows" ]; then
          runner="python"
        else
          runner="python3"
        fi

        out=$(${runner} $GITHUB_ACTION_PATH/action/main.py)
        exit_code=$?

        # Display the raw output in the step
        echo "${out}"

        if [ "${{ inputs.summary }}" == "true" ]; then
          # Display the Markdown output in the job summary
          echo "\`\`\`python" >> $GITHUB_STEP_SUMMARY
          echo "${out}" >> $GITHUB_STEP_SUMMARY
          echo "\`\`\`" >> $GITHUB_STEP_SUMMARY
        fi

        # Exit with the exit-code returned by Black
        exit ${exit_code}
      env:
        # TODO: Remove once https://github.com/actions/runner/issues/665 is fixed.
        INPUT_OPTIONS: ${{ inputs.options }}
        INPUT_SRC: ${{ inputs.src }}
        INPUT_JUPYTER: ${{ inputs.jupyter }}
        INPUT_BLACK_ARGS: ${{ inputs.black_args }}
        INPUT_VERSION: ${{ inputs.version }}
        INPUT_USE_PYPROJECT: ${{ inputs.use_pyproject }}
        pythonioencoding: utf-8
      shell: bash
182
action/main.py
@@ -1,182 +0,0 @@
import os
import re
import shlex
import shutil
import sys
from pathlib import Path
from subprocess import PIPE, STDOUT, run
from typing import Union

ACTION_PATH = Path(os.environ["GITHUB_ACTION_PATH"])
ENV_PATH = ACTION_PATH / ".black-env"
ENV_BIN = ENV_PATH / ("Scripts" if sys.platform == "win32" else "bin")
OPTIONS = os.getenv("INPUT_OPTIONS", default="")
SRC = os.getenv("INPUT_SRC", default="")
JUPYTER = os.getenv("INPUT_JUPYTER") == "true"
BLACK_ARGS = os.getenv("INPUT_BLACK_ARGS", default="")
VERSION = os.getenv("INPUT_VERSION", default="")
USE_PYPROJECT = os.getenv("INPUT_USE_PYPROJECT") == "true"

BLACK_VERSION_RE = re.compile(r"^black([^A-Z0-9._-]+.*)$", re.IGNORECASE)
EXTRAS_RE = re.compile(r"\[.*\]")
EXPORT_SUBST_FAIL_RE = re.compile(r"\$Format:.*\$")


def determine_version_specifier() -> str:
    """Determine the version of Black to install.

    The version can be specified either via the `with.version` input or via the
    pyproject.toml file if `with.use_pyproject` is set to `true`.
    """
    if USE_PYPROJECT and VERSION:
        print(
            "::error::'with.version' and 'with.use_pyproject' inputs are "
            "mutually exclusive.",
            file=sys.stderr,
            flush=True,
        )
        sys.exit(1)
    if USE_PYPROJECT:
        return read_version_specifier_from_pyproject()
    elif VERSION and VERSION[0] in "0123456789":
        return f"=={VERSION}"
    else:
        return VERSION


def read_version_specifier_from_pyproject() -> str:
    if sys.version_info < (3, 11):
        print(
            "::error::'with.use_pyproject' input requires Python 3.11 or later.",
            file=sys.stderr,
            flush=True,
        )
        sys.exit(1)

    import tomllib  # type: ignore[import-not-found,unreachable]

    try:
        with Path("pyproject.toml").open("rb") as fp:
            pyproject = tomllib.load(fp)
    except FileNotFoundError:
        print(
            "::error::'with.use_pyproject' input requires a pyproject.toml file.",
            file=sys.stderr,
            flush=True,
        )
        sys.exit(1)

    version = pyproject.get("tool", {}).get("black", {}).get("required-version")
    if version is not None:
        return f"=={version}"

    arrays = [
        *pyproject.get("dependency-groups", {}).values(),
        pyproject.get("project", {}).get("dependencies"),
        *pyproject.get("project", {}).get("optional-dependencies", {}).values(),
    ]
    for array in arrays:
        version = find_black_version_in_array(array)
        if version is not None:
            break

    if version is None:
        print(
            "::error::'black' dependency missing from pyproject.toml.",
            file=sys.stderr,
            flush=True,
        )
        sys.exit(1)

    return version


def find_black_version_in_array(array: object) -> Union[str, None]:
    if not isinstance(array, list):
        return None
    try:
        for item in array:
            # Rudimentary PEP 508 parsing.
            item = item.split(";")[0]
            item = EXTRAS_RE.sub("", item).strip()
            if item == "black":
                print(
                    "::error::Version specifier missing for 'black' dependency in "
                    "pyproject.toml.",
                    file=sys.stderr,
                    flush=True,
                )
                sys.exit(1)
            elif m := BLACK_VERSION_RE.match(item):
                return m.group(1).strip()
    except TypeError:
        pass

    return None


run([sys.executable, "-m", "venv", str(ENV_PATH)], check=True)

version_specifier = determine_version_specifier()
if JUPYTER:
    extra_deps = "[colorama,jupyter]"
else:
    extra_deps = "[colorama]"
if version_specifier:
    req = f"black{extra_deps}{version_specifier}"
else:
    describe_name = ""
    with open(ACTION_PATH / ".git_archival.txt", encoding="utf-8") as fp:
        for line in fp:
            if line.startswith("describe-name: "):
                describe_name = line[len("describe-name: ") :].rstrip()
                break
    if not describe_name:
        print("::error::Failed to detect action version.", file=sys.stderr, flush=True)
        sys.exit(1)
    # expected format is one of:
    # - 23.1.0
    # - 23.1.0-51-g448bba7
    # - $Format:%(describe:tags=true,match=*[0-9]*)$ (if export-subst fails)
    if (
        describe_name.count("-") < 2
        and EXPORT_SUBST_FAIL_RE.match(describe_name) is None
    ):
        # the action's commit matches a tag exactly, install exact version from PyPI
        req = f"black{extra_deps}=={describe_name}"
    else:
        # the action's commit does not match any tag, install from the local git repo
        req = f".{extra_deps}"
print(f"Installing {req}...", flush=True)
pip_proc = run(
    [str(ENV_BIN / "python"), "-m", "pip", "install", req],
    stdout=PIPE,
    stderr=STDOUT,
    encoding="utf-8",
    cwd=ACTION_PATH,
)
if pip_proc.returncode:
    print(pip_proc.stdout)
    print("::error::Failed to install Black.", file=sys.stderr, flush=True)
    sys.exit(pip_proc.returncode)


base_cmd = [str(ENV_BIN / "black")]
if BLACK_ARGS:
    # TODO: remove after a while since this is deprecated in favour of SRC + OPTIONS.
    proc = run(
        [*base_cmd, *shlex.split(BLACK_ARGS)],
        stdout=PIPE,
        stderr=STDOUT,
        encoding="utf-8",
    )
else:
    proc = run(
        [*base_cmd, *shlex.split(OPTIONS), *shlex.split(SRC)],
        stdout=PIPE,
        stderr=STDOUT,
        encoding="utf-8",
    )
shutil.rmtree(ENV_PATH, ignore_errors=True)
print(proc.stdout)
sys.exit(proc.returncode)
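The version-resolution rules above are compact; here is a minimal sketch of their decision table. The helper name `resolve` is hypothetical and not part of the action; it only mirrors `determine_version_specifier()` for the non-pyproject path:

```
# Hypothetical helper mirroring determine_version_specifier() above:
# a bare version is pinned exactly, an operator-prefixed PEP 440 specifier
# is passed through, and an empty input falls back to .git_archival.txt.
def resolve(version: str) -> str:
    if version and version[0] in "0123456789":
        return f"=={version}"
    return version

assert resolve("24.1.0") == "==24.1.0"  # bare version -> exact pin
assert resolve("~=23.1") == "~=23.1"    # specifier -> unchanged
assert resolve("") == ""                # empty -> git archival fallback path
```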
@@ -1,243 +0,0 @@
python3 << EndPython3
import collections
import os
import sys
import vim

def strtobool(text):
  if text.lower() in ['y', 'yes', 't', 'true', 'on', '1']:
    return True
  if text.lower() in ['n', 'no', 'f', 'false', 'off', '0']:
    return False
  raise ValueError(f"{text} is not convertible to boolean")

class Flag(collections.namedtuple("FlagBase", "name, cast")):
  @property
  def var_name(self):
    return self.name.replace("-", "_")

  @property
  def vim_rc_name(self):
    name = self.var_name
    if name == "line_length":
      name = name.replace("_", "")
    return "g:black_" + name


FLAGS = [
  Flag(name="line_length", cast=int),
  Flag(name="fast", cast=strtobool),
  Flag(name="skip_string_normalization", cast=strtobool),
  Flag(name="quiet", cast=strtobool),
  Flag(name="skip_magic_trailing_comma", cast=strtobool),
  Flag(name="preview", cast=strtobool),
]


def _get_python_binary(exec_prefix, pyver):
  try:
    default = vim.eval("g:pymode_python").strip()
  except vim.error:
    default = ""
  if default and os.path.exists(default):
    return default
  if sys.platform[:3] == "win":
    return exec_prefix / 'python.exe'
  bin_path = exec_prefix / "bin"
  exec_path = (bin_path / f"python{pyver[0]}.{pyver[1]}").resolve()
  if exec_path.exists():
    return exec_path
  # It is possible that some environments may only have python3
  exec_path = (bin_path / f"python3").resolve()
  if exec_path.exists():
    return exec_path
  raise ValueError("python executable not found")

def _get_pip(venv_path):
  if sys.platform[:3] == "win":
    return venv_path / 'Scripts' / 'pip.exe'
  return venv_path / 'bin' / 'pip'

def _get_virtualenv_site_packages(venv_path, pyver):
  if sys.platform[:3] == "win":
    return venv_path / 'Lib' / 'site-packages'
  return venv_path / 'lib' / f'python{pyver[0]}.{pyver[1]}' / 'site-packages'

def _initialize_black_env(upgrade=False):
  if vim.eval("g:black_use_virtualenv ? 'true' : 'false'") == "false":
    if upgrade:
      print("Upgrade disabled due to g:black_use_virtualenv being disabled.")
      print("Either use your system package manager (or pip) to upgrade black separately,")
      print("or modify your vimrc to have 'let g:black_use_virtualenv = 1'.")
      return False
    else:
      # Nothing needed to be done.
      return True

  pyver = sys.version_info[:3]
  if pyver < (3, 9):
    print("Sorry, Black requires Python 3.9+ to run.")
    return False

  from pathlib import Path
  import subprocess
  import venv
  virtualenv_path = Path(vim.eval("g:black_virtualenv")).expanduser()
  virtualenv_site_packages = str(_get_virtualenv_site_packages(virtualenv_path, pyver))
  first_install = False
  if not virtualenv_path.is_dir():
    print('Please wait, one time setup for Black.')
    _executable = sys.executable
    _base_executable = getattr(sys, "_base_executable", _executable)
    try:
      executable = str(_get_python_binary(Path(sys.exec_prefix), pyver))
      sys.executable = executable
      sys._base_executable = executable
      print(f'Creating a virtualenv in {virtualenv_path}...')
      print('(this path can be customized in .vimrc by setting g:black_virtualenv)')
      venv.create(virtualenv_path, with_pip=True)
    except Exception:
      print('Encountered exception while creating virtualenv (see traceback below).')
      print(f'Removing {virtualenv_path}...')
      import shutil
      shutil.rmtree(virtualenv_path)
      raise
    finally:
      sys.executable = _executable
      sys._base_executable = _base_executable
    first_install = True
  if first_install:
    print('Installing Black with pip...')
  if upgrade:
    print('Upgrading Black with pip...')
  if first_install or upgrade:
    subprocess.run([str(_get_pip(virtualenv_path)), 'install', '-U', 'black'], stdout=subprocess.PIPE)
    print('DONE! You are all set, thanks for waiting ✨ 🍰 ✨')
  if first_install:
    print('Pro-tip: to upgrade Black in the future, use the :BlackUpgrade command and restart Vim.\n')
  if virtualenv_site_packages not in sys.path:
    sys.path.insert(0, virtualenv_site_packages)
  return True

if _initialize_black_env():
  import black
  import time

  def get_target_version(tv):
    if isinstance(tv, black.TargetVersion):
      return tv
    ret = None
    try:
      ret = black.TargetVersion[tv.upper()]
    except KeyError:
      print(f"WARNING: Target version {tv!r} not recognized by Black, using default target")
    return ret

  def Black(**kwargs):
    """
    kwargs allows you to override ``target_versions`` argument of
    ``black.FileMode``.

    ``target_version`` needs to be cleaned because ``black.FileMode``
    expects the ``target_versions`` argument to be a set of TargetVersion enums.

    Allow kwargs["target_version"] to be a string to allow
    to type it more quickly.

    Using also target_version instead of target_versions to remain
    consistent to Black's documentation of the structure of pyproject.toml.
    """
    start = time.time()
    configs = get_configs()

    black_kwargs = {}
    if "target_version" in kwargs:
      target_version = kwargs["target_version"]

      if not isinstance(target_version, (list, set)):
        target_version = [target_version]
      target_version = set(filter(lambda x: x, map(lambda tv: get_target_version(tv), target_version)))
      black_kwargs["target_versions"] = target_version

    mode = black.FileMode(
      line_length=configs["line_length"],
      string_normalization=not configs["skip_string_normalization"],
      is_pyi=vim.current.buffer.name.endswith('.pyi'),
      magic_trailing_comma=not configs["skip_magic_trailing_comma"],
      preview=configs["preview"],
      **black_kwargs,
    )
    quiet = configs["quiet"]

    buffer_str = '\n'.join(vim.current.buffer) + '\n'
    try:
      new_buffer_str = black.format_file_contents(
        buffer_str,
        fast=configs["fast"],
        mode=mode,
      )
    except black.NothingChanged:
      if not quiet:
        print(f'Black: already well formatted, good job. (took {time.time() - start:.4f}s)')
    except Exception as exc:
      print(f'Black: {exc}')
    else:
      current_buffer = vim.current.window.buffer
      cursors = []
      for i, tabpage in enumerate(vim.tabpages):
        if tabpage.valid:
          for j, window in enumerate(tabpage.windows):
            if window.valid and window.buffer == current_buffer:
              cursors.append((i, j, window.cursor))
      vim.current.buffer[:] = new_buffer_str.split('\n')[:-1]
      for i, j, cursor in cursors:
        window = vim.tabpages[i].windows[j]
        try:
          window.cursor = cursor
        except vim.error:
          window.cursor = (len(window.buffer), 0)
      if not quiet:
        print(f'Black: reformatted in {time.time() - start:.4f}s.')

  def get_configs():
    filename = vim.eval("@%")
    path_pyproject_toml = black.find_pyproject_toml((filename,))
    if path_pyproject_toml:
      toml_config = black.parse_pyproject_toml(path_pyproject_toml)
    else:
      toml_config = {}

    return {
      flag.var_name: toml_config.get(flag.name, flag.cast(vim.eval(flag.vim_rc_name)))
      for flag in FLAGS
    }


  def BlackUpgrade():
    _initialize_black_env(upgrade=True)

  def BlackVersion():
    print(f'Black, version {black.__version__} on Python {sys.version}.')

EndPython3

function black#Black(...)
  let kwargs = {}
  for arg in a:000
    let arg_list = split(arg, '=')
    let kwargs[arg_list[0]] = arg_list[1]
  endfor
python3 << EOF
import vim
kwargs = vim.eval("kwargs")
EOF
  :py3 Black(**kwargs)
endfunction

function black#BlackUpgrade()
  :py3 BlackUpgrade()
endfunction

function black#BlackVersion()
  :py3 BlackVersion()
endfunction
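For orientation, the core formatting call that the `Black()` function above wraps boils down to the following minimal, standalone sketch (assumes `black` is importable; the sample source string is arbitrary):

```
# Standalone sketch of the plugin's central call; not part of the plugin.
import black

src = "x = { 'a':1 }\n"
mode = black.FileMode(line_length=88)  # the knob behind g:black_linelength
try:
    print(black.format_file_contents(src, fast=False, mode=mode), end="")
except black.NothingChanged:
    print(src, end="")  # buffer was already formatted
```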
173
blib2to3/Grammar.txt
Normal file
@@ -0,0 +1,173 @@
# Grammar for 2to3. This grammar supports Python 2.x and 3.x.

# NOTE WELL: You should also follow all the steps listed at
# https://devguide.python.org/grammar/

# Start symbols for the grammar:
#   file_input is a module or sequence of commands read from an input file;
#   single_input is a single interactive statement;
#   eval_input is the input for the eval() and input() functions.
# NB: compound_stmt in single_input is followed by extra NEWLINE!
file_input: (NEWLINE | stmt)* ENDMARKER
single_input: NEWLINE | simple_stmt | compound_stmt NEWLINE
eval_input: testlist NEWLINE* ENDMARKER

decorator: '@' dotted_name [ '(' [arglist] ')' ] NEWLINE
decorators: decorator+
decorated: decorators (classdef | funcdef | async_funcdef)
async_funcdef: ASYNC funcdef
funcdef: 'def' NAME parameters ['->' test] ':' suite
parameters: '(' [typedargslist] ')'
typedargslist: ((tfpdef ['=' test] ',')*
                ('*' [tname] (',' tname ['=' test])* [',' ['**' tname [',']]] | '**' tname [','])
                | tfpdef ['=' test] (',' tfpdef ['=' test])* [','])
tname: NAME [':' test]
tfpdef: tname | '(' tfplist ')'
tfplist: tfpdef (',' tfpdef)* [',']
varargslist: ((vfpdef ['=' test] ',')*
              ('*' [vname] (',' vname ['=' test])* [',' ['**' vname [',']]] | '**' vname [','])
              | vfpdef ['=' test] (',' vfpdef ['=' test])* [','])
vname: NAME
vfpdef: vname | '(' vfplist ')'
vfplist: vfpdef (',' vfpdef)* [',']

stmt: simple_stmt | compound_stmt
simple_stmt: small_stmt (';' small_stmt)* [';'] NEWLINE
small_stmt: (expr_stmt | print_stmt | del_stmt | pass_stmt | flow_stmt |
             import_stmt | global_stmt | exec_stmt | assert_stmt)
expr_stmt: testlist_star_expr (annassign | augassign (yield_expr|testlist) |
           ('=' (yield_expr|testlist_star_expr))*)
annassign: ':' test ['=' test]
testlist_star_expr: (test|star_expr) (',' (test|star_expr))* [',']
augassign: ('+=' | '-=' | '*=' | '@=' | '/=' | '%=' | '&=' | '|=' | '^=' |
            '<<=' | '>>=' | '**=' | '//=')
# For normal and annotated assignments, additional restrictions enforced by the interpreter
print_stmt: 'print' ( [ test (',' test)* [','] ] |
                      '>>' test [ (',' test)+ [','] ] )
del_stmt: 'del' exprlist
pass_stmt: 'pass'
flow_stmt: break_stmt | continue_stmt | return_stmt | raise_stmt | yield_stmt
break_stmt: 'break'
continue_stmt: 'continue'
return_stmt: 'return' [testlist]
yield_stmt: yield_expr
raise_stmt: 'raise' [test ['from' test | ',' test [',' test]]]
import_stmt: import_name | import_from
import_name: 'import' dotted_as_names
import_from: ('from' ('.'* dotted_name | '.'+)
              'import' ('*' | '(' import_as_names ')' | import_as_names))
import_as_name: NAME ['as' NAME]
dotted_as_name: dotted_name ['as' NAME]
import_as_names: import_as_name (',' import_as_name)* [',']
dotted_as_names: dotted_as_name (',' dotted_as_name)*
dotted_name: NAME ('.' NAME)*
global_stmt: ('global' | 'nonlocal') NAME (',' NAME)*
exec_stmt: 'exec' expr ['in' test [',' test]]
assert_stmt: 'assert' test [',' test]

compound_stmt: if_stmt | while_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated | async_stmt
async_stmt: ASYNC (funcdef | with_stmt | for_stmt)
if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite]
while_stmt: 'while' test ':' suite ['else' ':' suite]
for_stmt: 'for' exprlist 'in' testlist ':' suite ['else' ':' suite]
try_stmt: ('try' ':' suite
           ((except_clause ':' suite)+
            ['else' ':' suite]
            ['finally' ':' suite] |
            'finally' ':' suite))
with_stmt: 'with' with_item (',' with_item)* ':' suite
with_item: test ['as' expr]
with_var: 'as' expr
# NB compile.c makes sure that the default except clause is last
except_clause: 'except' [test [(',' | 'as') test]]
suite: simple_stmt | NEWLINE INDENT stmt+ DEDENT

# Backward compatibility cruft to support:
# [ x for x in lambda: True, lambda: False if x() ]
# even while also allowing:
# lambda x: 5 if x else 2
# (But not a mix of the two)
testlist_safe: old_test [(',' old_test)+ [',']]
old_test: or_test | old_lambdef
old_lambdef: 'lambda' [varargslist] ':' old_test

test: or_test ['if' or_test 'else' test] | lambdef
or_test: and_test ('or' and_test)*
and_test: not_test ('and' not_test)*
not_test: 'not' not_test | comparison
comparison: expr (comp_op expr)*
comp_op: '<'|'>'|'=='|'>='|'<='|'<>'|'!='|'in'|'not' 'in'|'is'|'is' 'not'
star_expr: '*' expr
expr: xor_expr ('|' xor_expr)*
xor_expr: and_expr ('^' and_expr)*
and_expr: shift_expr ('&' shift_expr)*
shift_expr: arith_expr (('<<'|'>>') arith_expr)*
arith_expr: term (('+'|'-') term)*
term: factor (('*'|'@'|'/'|'%'|'//') factor)*
factor: ('+'|'-'|'~') factor | power
power: [AWAIT] atom trailer* ['**' factor]
atom: ('(' [yield_expr|testlist_gexp] ')' |
       '[' [listmaker] ']' |
       '{' [dictsetmaker] '}' |
       '`' testlist1 '`' |
       NAME | NUMBER | STRING+ | '.' '.' '.')
listmaker: (test|star_expr) ( old_comp_for | (',' (test|star_expr))* [','] )
testlist_gexp: (test|star_expr) ( old_comp_for | (',' (test|star_expr))* [','] )
lambdef: 'lambda' [varargslist] ':' test
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: subscript (',' subscript)* [',']
subscript: test | [test] ':' [test] [sliceop]
sliceop: ':' [test]
exprlist: (expr|star_expr) (',' (expr|star_expr))* [',']
testlist: test (',' test)* [',']
dictsetmaker: ( ((test ':' test | '**' expr)
                 (comp_for | (',' (test ':' test | '**' expr))* [','])) |
                ((test | star_expr)
                 (comp_for | (',' (test | star_expr))* [','])) )

classdef: 'class' NAME ['(' [arglist] ')'] ':' suite

arglist: argument (',' argument)* [',']

# "test '=' test" is really "keyword '=' test", but we have no such token.
# These need to be in a single rule to avoid grammar that is ambiguous
# to our LL(1) parser. Even though 'test' includes '*expr' in star_expr,
# we explicitly match '*' here, too, to give it proper precedence.
# Illegal combinations and orderings are blocked in ast.c:
# multiple (test comp_for) arguments are blocked; keyword unpackings
# that precede iterable unpackings are blocked; etc.
argument: ( test [comp_for] |
            test '=' test |
            '**' test |
            '*' test )

comp_iter: comp_for | comp_if
comp_for: [ASYNC] 'for' exprlist 'in' or_test [comp_iter]
comp_if: 'if' old_test [comp_iter]

# As noted above, testlist_safe extends the syntax allowed in list
# comprehensions and generators. We can't use it indiscriminately in all
# derivations using a comp_for-like pattern because the testlist_safe derivation
# contains comma which clashes with trailing comma in arglist.
#
# This was an issue because the parser would not follow the correct derivation
# when parsing syntactically valid Python code. Since testlist_safe was created
# specifically to handle list comprehensions and generator expressions enclosed
# with parentheses, it's safe to only use it in those. That avoids the issue; we
# can parse code like set(x for x in [],).
#
# The syntax supported by this set of rules is not a valid Python 3 syntax,
# hence the prefix "old".
#
# See https://bugs.python.org/issue27494
old_comp_iter: old_comp_for | old_comp_if
old_comp_for: [ASYNC] 'for' exprlist 'in' testlist_safe [old_comp_iter]
old_comp_if: 'if' old_test [old_comp_iter]

testlist1: test (',' test)*

# not used in grammar, but may appear in "node" passed from Parser to Compiler
encoding_decl: NAME

yield_expr: 'yield' [yield_arg]
yield_arg: 'from' test | testlist
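The `testlist_safe` comment in the grammar is easiest to see on a concrete input. A minimal sketch follows; the driver invocation in the comments is an assumption based on lib2to3's API, which blib2to3 mirrors:

```
# A legacy call with a bare generator expression plus trailing comma.
# Modern CPython rejects this ("Generator expression must be parenthesized"),
# but the old_comp_for rules above keep it parseable:
legacy_source = "set(x for x in [],)\n"

# Assumed invocation, mirroring lib2to3's driver API:
# from blib2to3 import pygram, pytree
# from blib2to3.pgen2 import driver
# d = driver.Driver(pygram.python_grammar, convert=pytree.convert)
# tree = d.parse_string(legacy_source)  # parses without a SyntaxError
```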
13
blib2to3/README
Normal file
@@ -0,0 +1,13 @@
A subset of lib2to3 taken from Python 3.7.0b2.
Commit hash: 9c17e3a1987004b8bcfbe423953aad84493a7984

Reasons for forking:
- consistent handling of f-strings for users of Python < 3.6.2
- backport of BPO-33064 that fixes parsing files with trailing commas after
  *args and **kwargs
- backport of GH-6143 that restores the ability to reformat legacy usage of
  `async`
- support all types of string literals
- better ability to debug (better reprs)
- INDENT and DEDENT don't hold whitespace and comment prefixes
- ability to Cythonize
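As a concrete illustration of the BPO-33064 item above, the snippet below is valid Python 3.6+ that the stock lib2to3 of the time failed to parse; the function names are arbitrary:

```
# Trailing commas after *args and **kwargs must parse (BPO-33064 backport):
def concat(*args,):
    return "".join(args)

def render(template, **kwargs,):
    return template.format(**kwargs)

assert concat("a", "b") == "ab"
assert render("hello {who}", who="world") == "hello world"

# f-strings are tokenized consistently regardless of host Python (first bullet):
name = "blib2to3"
assert f"fork of lib2to3: {name!r}" == "fork of lib2to3: 'blib2to3'"
```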
1
blib2to3/__init__.py
Normal file
@@ -0,0 +1 @@
#empty
1
blib2to3/__init__.pyi
Normal file
@@ -0,0 +1 @@
# Stubs for lib2to3 (Python 3.6)
10
blib2to3/pgen2/__init__.pyi
Normal file
@@ -0,0 +1,10 @@
# Stubs for lib2to3.pgen2 (Python 3.6)

import os
import sys
from typing import Text, Union

if sys.version_info >= (3, 6):
    _Path = Union[Text, os.PathLike]
else:
    _Path = Text
blib2to3/pgen2/conv.py
@@ -1,8 +1,6 @@
 # Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
 # Licensed to PSF under a Contributor Agreement.
 
-# mypy: ignore-errors
-
 """Convert graminit.[ch] spit out by pgen to Python code.
 
 Pgen is the Python parser generator. It is useful to quickly create a
@@ -63,7 +61,7 @@ def parse_graminit_h(self, filename):
         try:
             f = open(filename)
         except OSError as err:
-            print(f"Can't open {filename}: {err}")
+            print("Can't open %s: %s" % (filename, err))
             return False
         self.symbol2number = {}
         self.number2symbol = {}
@@ -72,7 +70,8 @@ def parse_graminit_h(self, filename):
             lineno += 1
             mo = re.match(r"^#define\s+(\w+)\s+(\d+)$", line)
             if not mo and line.strip():
-                print(f"{filename}({lineno}): can't parse {line.strip()}")
+                print("%s(%s): can't parse %s" % (filename, lineno,
+                                                  line.strip()))
             else:
                 symbol, number = mo.groups()
                 number = int(number)
@@ -113,44 +112,45 @@ def parse_graminit_c(self, filename):
         try:
             f = open(filename)
         except OSError as err:
-            print(f"Can't open {filename}: {err}")
+            print("Can't open %s: %s" % (filename, err))
             return False
         # The code below essentially uses f's iterator-ness!
         lineno = 0
 
         # Expect the two #include lines
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         assert line == '#include "pgenheaders.h"\n', (lineno, line)
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         assert line == '#include "grammar.h"\n', (lineno, line)
 
         # Parse the state definitions
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         allarcs = {}
         states = []
         while line.startswith("static arc "):
             while line.startswith("static arc "):
-                mo = re.match(r"static arc arcs_(\d+)_(\d+)\[(\d+)\] = {$", line)
+                mo = re.match(r"static arc arcs_(\d+)_(\d+)\[(\d+)\] = {$",
+                              line)
                 assert mo, (lineno, line)
                 n, m, k = list(map(int, mo.groups()))
                 arcs = []
                 for _ in range(k):
-                    lineno, line = lineno + 1, next(f)
+                    lineno, line = lineno+1, next(f)
                     mo = re.match(r"\s+{(\d+), (\d+)},$", line)
                     assert mo, (lineno, line)
                     i, j = list(map(int, mo.groups()))
                     arcs.append((i, j))
-                lineno, line = lineno + 1, next(f)
+                lineno, line = lineno+1, next(f)
                 assert line == "};\n", (lineno, line)
                 allarcs[(n, m)] = arcs
-                lineno, line = lineno + 1, next(f)
+                lineno, line = lineno+1, next(f)
             mo = re.match(r"static state states_(\d+)\[(\d+)\] = {$", line)
             assert mo, (lineno, line)
             s, t = list(map(int, mo.groups()))
             assert s == len(states), (lineno, line)
             state = []
             for _ in range(t):
-                lineno, line = lineno + 1, next(f)
+                lineno, line = lineno+1, next(f)
                 mo = re.match(r"\s+{(\d+), arcs_(\d+)_(\d+)},$", line)
                 assert mo, (lineno, line)
                 k, n, m = list(map(int, mo.groups()))
@@ -158,9 +158,9 @@ def parse_graminit_c(self, filename):
                 assert k == len(arcs), (lineno, line)
                 state.append(arcs)
             states.append(state)
-            lineno, line = lineno + 1, next(f)
+            lineno, line = lineno+1, next(f)
             assert line == "};\n", (lineno, line)
-            lineno, line = lineno + 1, next(f)
+            lineno, line = lineno+1, next(f)
         self.states = states
 
         # Parse the dfas
@@ -169,8 +169,9 @@ def parse_graminit_c(self, filename):
         assert mo, (lineno, line)
         ndfas = int(mo.group(1))
         for i in range(ndfas):
-            lineno, line = lineno + 1, next(f)
-            mo = re.match(r'\s+{(\d+), "(\w+)", (\d+), (\d+), states_(\d+),$', line)
+            lineno, line = lineno+1, next(f)
+            mo = re.match(r'\s+{(\d+), "(\w+)", (\d+), (\d+), states_(\d+),$',
+                          line)
             assert mo, (lineno, line)
             symbol = mo.group(2)
             number, x, y, z = list(map(int, mo.group(1, 3, 4, 5)))
@@ -179,7 +180,7 @@ def parse_graminit_c(self, filename):
             assert x == 0, (lineno, line)
             state = states[z]
             assert y == len(state), (lineno, line)
-            lineno, line = lineno + 1, next(f)
+            lineno, line = lineno+1, next(f)
             mo = re.match(r'\s+("(?:\\\d\d\d)*")},$', line)
             assert mo, (lineno, line)
             first = {}
@@ -187,21 +188,21 @@ def parse_graminit_c(self, filename):
             for i, c in enumerate(rawbitset):
                 byte = ord(c)
                 for j in range(8):
-                    if byte & (1 << j):
-                        first[i * 8 + j] = 1
+                    if byte & (1<<j):
+                        first[i*8 + j] = 1
             dfas[number] = (state, first)
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         assert line == "};\n", (lineno, line)
         self.dfas = dfas
 
         # Parse the labels
         labels = []
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         mo = re.match(r"static label labels\[(\d+)\] = {$", line)
         assert mo, (lineno, line)
         nlabels = int(mo.group(1))
         for i in range(nlabels):
-            lineno, line = lineno + 1, next(f)
+            lineno, line = lineno+1, next(f)
             mo = re.match(r'\s+{(\d+), (0|"\w+")},$', line)
             assert mo, (lineno, line)
             x, y = mo.groups()
@@ -211,35 +212,35 @@ def parse_graminit_c(self, filename):
             else:
                 y = eval(y)
             labels.append((x, y))
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         assert line == "};\n", (lineno, line)
         self.labels = labels
 
         # Parse the grammar struct
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         assert line == "grammar _PyParser_Grammar = {\n", (lineno, line)
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         mo = re.match(r"\s+(\d+),$", line)
         assert mo, (lineno, line)
         ndfas = int(mo.group(1))
         assert ndfas == len(self.dfas)
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         assert line == "\tdfas,\n", (lineno, line)
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         mo = re.match(r"\s+{(\d+), labels},$", line)
         assert mo, (lineno, line)
         nlabels = int(mo.group(1))
         assert nlabels == len(self.labels), (lineno, line)
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         mo = re.match(r"\s+(\d+)$", line)
         assert mo, (lineno, line)
         start = int(mo.group(1))
         assert start in self.number2symbol, (lineno, line)
         self.start = start
-        lineno, line = lineno + 1, next(f)
+        lineno, line = lineno+1, next(f)
         assert line == "};\n", (lineno, line)
         try:
-            lineno, line = lineno + 1, next(f)
+            lineno, line = lineno+1, next(f)
         except StopIteration:
             pass
         else:
@@ -247,8 +248,8 @@ def parse_graminit_c(self, filename):
 
     def finish_off(self):
         """Create additional useful structures. (Internal)."""
-        self.keywords = {}  # map from keyword strings to arc labels
-        self.tokens = {}  # map from numeric token values to arc labels
+        self.keywords = {} # map from keyword strings to arc labels
+        self.tokens = {} # map from numeric token values to arc labels
         for ilabel, (type, value) in enumerate(self.labels):
             if type == token.NAME and value is not None:
                 self.keywords[value] = ilabel
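
The converter above is entirely regex-driven: each line of pgen's generated C source is matched against a fixed pattern and the captured groups become Python data. A minimal sketch of the graminit.h step, reusing the exact regex from the diff (the sample input line is illustrative):

    import re

    # A '#define' line as pgen emits into graminit.h (sample input).
    line = "#define single_input 256"

    mo = re.match(r"^#define\s+(\w+)\s+(\d+)$", line)
    assert mo, "line did not match the expected pattern"
    symbol, number = mo.group(1), int(mo.group(2))
    # parse_graminit_h builds both directions of this mapping.
    symbol2number = {symbol: number}
    number2symbol = {number: symbol}
    print(symbol2number, number2symbol)  # {'single_input': 256} {256: 'single_input'}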
blib2to3/pgen2/driver.py
@@ -16,117 +16,37 @@
 __all__ = ["Driver", "load_grammar"]
 
 # Python imports
+import codecs
 import io
-import logging
 import os
+import logging
 import pkgutil
 import sys
-from collections.abc import Iterable, Iterator
-from contextlib import contextmanager
-from dataclasses import dataclass, field
-from logging import Logger
-from typing import IO, Any, Optional, Union, cast
-
-from blib2to3.pgen2.grammar import Grammar
-from blib2to3.pgen2.tokenize import TokenInfo
-from blib2to3.pytree import NL
 
 # Pgen imports
-from . import grammar, parse, pgen, token, tokenize
-
-Path = Union[str, "os.PathLike[str]"]
-
-
-@dataclass
-class ReleaseRange:
-    start: int
-    end: Optional[int] = None
-    tokens: list[Any] = field(default_factory=list)
-
-    def lock(self) -> None:
-        total_eaten = len(self.tokens)
-        self.end = self.start + total_eaten
-
-
-class TokenProxy:
-    def __init__(self, generator: Any) -> None:
-        self._tokens = generator
-        self._counter = 0
-        self._release_ranges: list[ReleaseRange] = []
-
-    @contextmanager
-    def release(self) -> Iterator["TokenProxy"]:
-        release_range = ReleaseRange(self._counter)
-        self._release_ranges.append(release_range)
-        try:
-            yield self
-        finally:
-            # Lock the last release range to the final position that
-            # has been eaten.
-            release_range.lock()
-
-    def eat(self, point: int) -> Any:
-        eaten_tokens = self._release_ranges[-1].tokens
-        if point < len(eaten_tokens):
-            return eaten_tokens[point]
-        else:
-            while point >= len(eaten_tokens):
-                token = next(self._tokens)
-                eaten_tokens.append(token)
-            return token
-
-    def __iter__(self) -> "TokenProxy":
-        return self
-
-    def __next__(self) -> Any:
-        # If the current position is already compromised (looked up)
-        # return the eaten token, if not just go further on the given
-        # token producer.
-        for release_range in self._release_ranges:
-            assert release_range.end is not None
-
-            start, end = release_range.start, release_range.end
-            if start <= self._counter < end:
-                token = release_range.tokens[self._counter - start]
-                break
-        else:
-            token = next(self._tokens)
-        self._counter += 1
-        return token
-
-    def can_advance(self, to: int) -> bool:
-        # Try to eat, fail if it can't. The eat operation is cached
-        # so there won't be any additional cost of eating here
-        try:
-            self.eat(to)
-        except StopIteration:
-            return False
-        else:
-            return True
-
-
-class Driver:
-    def __init__(self, grammar: Grammar, logger: Optional[Logger] = None) -> None:
+from . import grammar, parse, token, tokenize, pgen
+
+
+class Driver(object):
+
+    def __init__(self, grammar, convert=None, logger=None):
         self.grammar = grammar
         if logger is None:
-            logger = logging.getLogger(__name__)
+            logger = logging.getLogger()
         self.logger = logger
+        self.convert = convert
 
-    def parse_tokens(self, tokens: Iterable[TokenInfo], debug: bool = False) -> NL:
+    def parse_tokens(self, tokens, debug=False):
         """Parse a series of tokens and return the syntax tree."""
         # XXX Move the prefix computation into a wrapper around tokenize.
-        proxy = TokenProxy(tokens)
-
-        p = parse.Parser(self.grammar)
-        p.setup(proxy=proxy)
-
+        p = parse.Parser(self.grammar, self.convert)
+        p.setup()
         lineno = 1
         column = 0
-        indent_columns: list[int] = []
+        indent_columns = []
         type = value = start = end = line_text = None
         prefix = ""
-
-        for quintuple in proxy:
+        for quintuple in tokens:
             type, value, start, end, line_text = quintuple
             if start != (lineno, column):
                 assert (lineno, column) <= start, ((lineno, column), start)
@@ -148,10 +68,8 @@ def parse_tokens(self, tokens: Iterable[TokenInfo], debug: bool = False) -> NL:
             if type == token.OP:
                 type = grammar.opmap[value]
             if debug:
-                assert type is not None
-                self.logger.debug(
-                    "%s %r (prefix=%r)", token.tok_name[type], value, prefix
-                )
+                self.logger.debug("%s %r (prefix=%r)",
+                                  token.tok_name[type], value, prefix)
             if type == token.INDENT:
                 indent_columns.append(len(value))
                 _prefix = prefix + value
@@ -160,7 +78,7 @@ def parse_tokens(self, tokens: Iterable[TokenInfo], debug: bool = False) -> NL:
             elif type == token.DEDENT:
                 _indent_col = indent_columns.pop()
                 prefix, _prefix = self._partially_consume_prefix(prefix, _indent_col)
-            if p.addtoken(cast(int, type), value, (prefix, start)):
+            if p.addtoken(type, value, (prefix, start)):
                 if debug:
                     self.logger.debug("Stop.")
                 break
@@ -168,62 +86,65 @@ def parse_tokens(self, tokens: Iterable[TokenInfo], debug: bool = False) -> NL:
             if type in {token.INDENT, token.DEDENT}:
                 prefix = _prefix
             lineno, column = end
-            # FSTRING_MIDDLE is the only token that can end with a newline, and
-            # `end` will point to the next line. For that case, don't increment lineno.
-            if value.endswith("\n") and type != token.FSTRING_MIDDLE:
+            if value.endswith("\n"):
                 lineno += 1
                 column = 0
         else:
             # We never broke out -- EOF is too soon (how can this happen???)
-            assert start is not None
-            raise parse.ParseError("incomplete input", type, value, (prefix, start))
-        assert p.rootnode is not None
+            raise parse.ParseError("incomplete input",
+                                   type, value, (prefix, start))
         return p.rootnode
 
-    def parse_file(
-        self, filename: Path, encoding: Optional[str] = None, debug: bool = False
-    ) -> NL:
-        """Parse a file and return the syntax tree."""
-        with open(filename, encoding=encoding) as stream:
-            text = stream.read()
-        return self.parse_string(text, debug)
-
-    def parse_string(self, text: str, debug: bool = False) -> NL:
-        """Parse a string and return the syntax tree."""
-        tokens = tokenize.tokenize(text, grammar=self.grammar)
+    def parse_stream_raw(self, stream, debug=False):
+        """Parse a stream and return the syntax tree."""
+        tokens = tokenize.generate_tokens(stream.readline)
         return self.parse_tokens(tokens, debug)
 
-    def _partially_consume_prefix(self, prefix: str, column: int) -> tuple[str, str]:
-        lines: list[str] = []
+    def parse_stream(self, stream, debug=False):
+        """Parse a stream and return the syntax tree."""
+        return self.parse_stream_raw(stream, debug)
+
+    def parse_file(self, filename, encoding=None, debug=False):
+        """Parse a file and return the syntax tree."""
+        with io.open(filename, "r", encoding=encoding) as stream:
+            return self.parse_stream(stream, debug)
+
+    def parse_string(self, text, debug=False):
+        """Parse a string and return the syntax tree."""
+        tokens = tokenize.generate_tokens(io.StringIO(text).readline)
+        return self.parse_tokens(tokens, debug)
+
+    def _partially_consume_prefix(self, prefix, column):
+        lines = []
         current_line = ""
         current_column = 0
         wait_for_nl = False
         for char in prefix:
             current_line += char
             if wait_for_nl:
-                if char == "\n":
+                if char == '\n':
                     if current_line.strip() and current_column < column:
-                        res = "".join(lines)
-                        return res, prefix[len(res) :]
+                        res = ''.join(lines)
+                        return res, prefix[len(res):]
 
                     lines.append(current_line)
                     current_line = ""
                     current_column = 0
                     wait_for_nl = False
-            elif char in " \t":
+            elif char == ' ':
                 current_column += 1
-            elif char == "\n":
+            elif char == '\t':
+                current_column += 4
+            elif char == '\n':
                 # unexpected empty line
                 current_column = 0
-            elif char == "\f":
-                current_column = 0
             else:
                 # indent is finished
                 wait_for_nl = True
-        return "".join(lines), current_line
+        return ''.join(lines), current_line
 
 
-def _generate_pickle_name(gt: Path, cache_dir: Optional[Path] = None) -> str:
+def _generate_pickle_name(gt, cache_dir=None):
     head, tail = os.path.splitext(gt)
     if tail == ".txt":
         tail = ""
@@ -234,32 +155,28 @@ def _generate_pickle_name(gt, cache_dir=None):
     return name
 
 
-def load_grammar(
-    gt: str = "Grammar.txt",
-    gp: Optional[str] = None,
-    save: bool = True,
-    force: bool = False,
-    logger: Optional[Logger] = None,
-) -> Grammar:
+def load_grammar(gt="Grammar.txt", gp=None,
+                 save=True, force=False, logger=None):
     """Load the grammar (maybe from a pickle)."""
     if logger is None:
-        logger = logging.getLogger(__name__)
+        logger = logging.getLogger()
     gp = _generate_pickle_name(gt) if gp is None else gp
     if force or not _newer(gp, gt):
-        g: grammar.Grammar = pgen.generate_grammar(gt)
+        logger.info("Generating grammar tables from %s", gt)
+        g = pgen.generate_grammar(gt)
        if save:
+            logger.info("Writing grammar tables to %s", gp)
             try:
                 g.dump(gp)
-            except OSError:
-                # Ignore error, caching is not vital.
-                pass
+            except OSError as e:
+                logger.info("Writing failed: %s", e)
     else:
         g = grammar.Grammar()
         g.load(gp)
     return g
 
 
-def _newer(a: str, b: str) -> bool:
+def _newer(a, b):
     """Inquire whether file a was written since file b."""
     if not os.path.exists(a):
         return False
@@ -268,9 +185,7 @@ def _newer(a, b):
     return os.path.getmtime(a) >= os.path.getmtime(b)
 
 
-def load_packaged_grammar(
-    package: str, grammar_source: str, cache_dir: Optional[Path] = None
-) -> grammar.Grammar:
+def load_packaged_grammar(package, grammar_source, cache_dir=None):
     """Normally, loads a pickled grammar by doing
         pkgutil.get_data(package, pickled_grammar)
     where *pickled_grammar* is computed from *grammar_source* by adding the
@@ -286,24 +201,23 @@ def load_packaged_grammar(package, grammar_source, cache_dir=None):
         return load_grammar(grammar_source, gp=gp)
     pickled_name = _generate_pickle_name(os.path.basename(grammar_source), cache_dir)
     data = pkgutil.get_data(package, pickled_name)
-    assert data is not None
     g = grammar.Grammar()
     g.loads(data)
     return g
 
 
-def main(*args: str) -> bool:
+def main(*args):
     """Main program, when run as a script: produce grammar pickle files.
 
     Calls load_grammar for each argument, a path to a grammar text file.
     """
     if not args:
-        args = tuple(sys.argv[1:])
-    logging.basicConfig(level=logging.INFO, stream=sys.stdout, format="%(message)s")
+        args = sys.argv[1:]
+    logging.basicConfig(level=logging.INFO, stream=sys.stdout,
+                        format='%(message)s')
     for gt in args:
         load_grammar(gt, save=True, force=True)
     return True
 
 
 if __name__ == "__main__":
     sys.exit(int(not main()))
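
End to end, the driver is used by loading a grammar and handing source text to one of the parse_* entry points. A minimal usage sketch against the 18.6b3 API above, assuming a Grammar.txt file is available in the working directory:

    from blib2to3.pgen2 import driver

    # Builds the parsing tables (or loads a cached pickle), then parses a string.
    g = driver.load_grammar("Grammar.txt")
    drv = driver.Driver(g)
    tree = drv.parse_string("x = 1\n")
    print(tree)  # root of the concrete syntax tree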
24 blib2to3/pgen2/driver.pyi Normal file
@@ -0,0 +1,24 @@
# Stubs for lib2to3.pgen2.driver (Python 3.6)

import os
import sys
from typing import Any, Callable, IO, Iterable, List, Optional, Text, Tuple, Union

from logging import Logger
from blib2to3.pytree import _Convert, _NL
from blib2to3.pgen2 import _Path
from blib2to3.pgen2.grammar import Grammar


class Driver:
    grammar: Grammar
    logger: Logger
    convert: _Convert
    def __init__(self, grammar: Grammar, convert: Optional[_Convert] = ..., logger: Optional[Logger] = ...) -> None: ...
    def parse_tokens(self, tokens: Iterable[Any], debug: bool = ...) -> _NL: ...
    def parse_stream_raw(self, stream: IO[Text], debug: bool = ...) -> _NL: ...
    def parse_stream(self, stream: IO[Text], debug: bool = ...) -> _NL: ...
    def parse_file(self, filename: _Path, encoding: Optional[Text] = ..., debug: bool = ...) -> _NL: ...
    def parse_string(self, text: Text, debug: bool = ...) -> _NL: ...

def load_grammar(gt: Text = ..., gp: Optional[Text] = ..., save: bool = ..., force: bool = ..., logger: Optional[Logger] = ...) -> Grammar: ...
blib2to3/pgen2/grammar.py
@@ -13,22 +13,13 @@
 """
 
 # Python imports
 import os
 import pickle
-import tempfile
-from typing import Any, Optional, TypeVar, Union
 
 # Local imports
 from . import token
 
-_P = TypeVar("_P", bound="Grammar")
-Label = tuple[int, Optional[str]]
-DFA = list[list[tuple[int, int]]]
-DFAS = tuple[DFA, dict[int, int]]
-Path = Union[str, "os.PathLike[str]"]
-
 
-class Grammar:
+class Grammar(object):
     """Pgen parsing tables conversion class.
 
     Once initialized, this class supplies the grammar tables for the
@@ -82,78 +73,48 @@ class Grammar:
 
     """
 
-    def __init__(self) -> None:
-        self.symbol2number: dict[str, int] = {}
-        self.number2symbol: dict[int, str] = {}
-        self.states: list[DFA] = []
-        self.dfas: dict[int, DFAS] = {}
-        self.labels: list[Label] = [(0, "EMPTY")]
-        self.keywords: dict[str, int] = {}
-        self.soft_keywords: dict[str, int] = {}
-        self.tokens: dict[int, int] = {}
-        self.symbol2label: dict[str, int] = {}
-        self.version: tuple[int, int] = (0, 0)
+    def __init__(self):
+        self.symbol2number = {}
+        self.number2symbol = {}
+        self.states = []
+        self.dfas = {}
+        self.labels = [(0, "EMPTY")]
+        self.keywords = {}
+        self.tokens = {}
+        self.symbol2label = {}
         self.start = 256
-        # Python 3.7+ parses async as a keyword, not an identifier
-        self.async_keywords = False
 
-    def dump(self, filename: Path) -> None:
+    def dump(self, filename):
         """Dump the grammar tables to a pickle file."""
-        # mypyc generates objects that don't have a __dict__, but they
-        # do have __getstate__ methods that will return an equivalent
-        # dictionary
-        if hasattr(self, "__dict__"):
-            d = self.__dict__
-        else:
-            d = self.__getstate__()  # type: ignore
-
-        with tempfile.NamedTemporaryFile(
-            dir=os.path.dirname(filename), delete=False
-        ) as f:
-            pickle.dump(d, f, pickle.HIGHEST_PROTOCOL)
-        os.replace(f.name, filename)
-
-    def _update(self, attrs: dict[str, Any]) -> None:
-        for k, v in attrs.items():
-            setattr(self, k, v)
+        with open(filename, "wb") as f:
+            pickle.dump(self.__dict__, f, pickle.HIGHEST_PROTOCOL)
 
-    def load(self, filename: Path) -> None:
+    def load(self, filename):
         """Load the grammar tables from a pickle file."""
         with open(filename, "rb") as f:
             d = pickle.load(f)
-        self._update(d)
+        self.__dict__.update(d)
 
-    def loads(self, pkl: bytes) -> None:
+    def loads(self, pkl):
         """Load the grammar tables from a pickle bytes object."""
-        self._update(pickle.loads(pkl))
+        self.__dict__.update(pickle.loads(pkl))
 
-    def copy(self: _P) -> _P:
+    def copy(self):
         """
         Copy the grammar.
         """
         new = self.__class__()
-        for dict_attr in (
-            "symbol2number",
-            "number2symbol",
-            "dfas",
-            "keywords",
-            "soft_keywords",
-            "tokens",
-            "symbol2label",
-        ):
+        for dict_attr in ("symbol2number", "number2symbol", "dfas", "keywords",
+                          "tokens", "symbol2label"):
             setattr(new, dict_attr, getattr(self, dict_attr).copy())
         new.labels = self.labels[:]
         new.states = self.states[:]
         new.start = self.start
-        new.version = self.version
-        new.async_keywords = self.async_keywords
         return new
 
-    def report(self) -> None:
+    def report(self):
         """Dump the grammar tables to standard output, for debugging."""
         from pprint import pprint
 
         print("s2n")
         pprint(self.symbol2number)
         print("n2s")
@@ -217,8 +178,6 @@ def report(self) -> None:
 // DOUBLESLASH
 //= DOUBLESLASHEQUAL
 -> RARROW
-:= COLONEQUAL
-! BANG
 """
 
 opmap = {}
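
dump() and load() are symmetric: both versions pickle the instance __dict__, and the newer dump() additionally writes through a named temporary file plus os.replace() so an interrupted write cannot leave a truncated pickle behind. A round-trip sketch against the 18.6b3 API above (file name is arbitrary):

    from blib2to3.pgen2 import grammar

    g = grammar.Grammar()
    g.symbol2number["file_input"] = 257  # toy entry for illustration
    g.dump("Grammar.pickle")

    g2 = grammar.Grammar()
    g2.load("Grammar.pickle")
    assert g2.symbol2number == g.symbol2number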
29 blib2to3/pgen2/grammar.pyi Normal file
@@ -0,0 +1,29 @@
# Stubs for lib2to3.pgen2.grammar (Python 3.6)

from blib2to3.pgen2 import _Path

from typing import Any, Dict, List, Optional, Text, Tuple, TypeVar

_P = TypeVar('_P')
_Label = Tuple[int, Optional[Text]]
_DFA = List[List[Tuple[int, int]]]
_DFAS = Tuple[_DFA, Dict[int, int]]

class Grammar:
    symbol2number: Dict[Text, int]
    number2symbol: Dict[int, Text]
    states: List[_DFA]
    dfas: Dict[int, _DFAS]
    labels: List[_Label]
    keywords: Dict[Text, int]
    tokens: Dict[int, int]
    symbol2label: Dict[Text, int]
    start: int
    def __init__(self) -> None: ...
    def dump(self, filename: _Path) -> None: ...
    def load(self, filename: _Path) -> None: ...
    def copy(self: _P) -> _P: ...
    def report(self) -> None: ...

opmap_raw: Text
opmap: Dict[Text, Text]
blib2to3/pgen2/literals.py
@@ -5,21 +5,18 @@
 
 import re
 
-simple_escapes: dict[str, str] = {
-    "a": "\a",
-    "b": "\b",
-    "f": "\f",
-    "n": "\n",
-    "r": "\r",
-    "t": "\t",
-    "v": "\v",
-    "'": "'",
-    '"': '"',
-    "\\": "\\",
-}
+simple_escapes = {"a": "\a",
+                  "b": "\b",
+                  "f": "\f",
+                  "n": "\n",
+                  "r": "\r",
+                  "t": "\t",
+                  "v": "\v",
+                  "'": "'",
+                  '"': '"',
+                  "\\": "\\"}
 
 
-def escape(m: re.Match[str]) -> str:
+def escape(m):
     all, tail = m.group(0, 1)
     assert all.startswith("\\")
     esc = simple_escapes.get(tail)
@@ -28,31 +25,29 @@ def escape(m):
     if tail.startswith("x"):
         hexes = tail[1:]
         if len(hexes) < 2:
-            raise ValueError(f"invalid hex string escape ('\\{tail}')")
+            raise ValueError("invalid hex string escape ('\\%s')" % tail)
         try:
             i = int(hexes, 16)
         except ValueError:
-            raise ValueError(f"invalid hex string escape ('\\{tail}')") from None
+            raise ValueError("invalid hex string escape ('\\%s')" % tail) from None
     else:
         try:
             i = int(tail, 8)
         except ValueError:
-            raise ValueError(f"invalid octal string escape ('\\{tail}')") from None
+            raise ValueError("invalid octal string escape ('\\%s')" % tail) from None
     return chr(i)
 
 
-def evalString(s: str) -> str:
+def evalString(s):
     assert s.startswith("'") or s.startswith('"'), repr(s[:1])
     q = s[0]
-    if s[:3] == q * 3:
-        q = q * 3
-    assert s.endswith(q), repr(s[-len(q) :])
-    assert len(s) >= 2 * len(q)
-    s = s[len(q) : -len(q)]
+    if s[:3] == q*3:
+        q = q*3
+    assert s.endswith(q), repr(s[-len(q):])
+    assert len(s) >= 2*len(q)
+    s = s[len(q):-len(q)]
     return re.sub(r"\\(\'|\"|\\|[abfnrtv]|x.{0,2}|[0-7]{1,3})", escape, s)
 
 
-def test() -> None:
+def test():
     for i in range(256):
         c = chr(i)
         s = repr(c)
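
evalString() strips the quotes, then escape() resolves each backslash sequence through the simple_escapes table or the hex/octal branches. A few checks that follow directly from the code above:

    from blib2to3.pgen2.literals import evalString

    assert evalString("'a\\nb'") == "a\nb"  # table lookup for \n
    assert evalString('"\\x41"') == "A"     # hex escape branch
    assert evalString("'\\101'") == "A"     # octal escape branch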
9 blib2to3/pgen2/literals.pyi Normal file
@@ -0,0 +1,9 @@
# Stubs for lib2to3.pgen2.literals (Python 3.6)

from typing import Dict, Match, Text

simple_escapes: Dict[Text, Text]

def escape(m: Match) -> Text: ...
def evalString(s: Text) -> Text: ...
def test() -> None: ...
201 blib2to3/pgen2/parse.py Normal file
@@ -0,0 +1,201 @@
# Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
# Licensed to PSF under a Contributor Agreement.

"""Parser engine for the grammar tables generated by pgen.

The grammar table must be loaded first.

See Parser/parser.c in the Python distribution for additional info on
how this parsing engine works.

"""

# Local imports
from . import token

class ParseError(Exception):
    """Exception to signal the parser is stuck."""

    def __init__(self, msg, type, value, context):
        Exception.__init__(self, "%s: type=%r, value=%r, context=%r" %
                           (msg, type, value, context))
        self.msg = msg
        self.type = type
        self.value = value
        self.context = context

class Parser(object):
    """Parser engine.

    The proper usage sequence is:

    p = Parser(grammar, [converter]) # create instance
    p.setup([start]) # prepare for parsing
    <for each input token>:
        if p.addtoken(...): # parse a token; may raise ParseError
            break
    root = p.rootnode # root of abstract syntax tree

    A Parser instance may be reused by calling setup() repeatedly.

    A Parser instance contains state pertaining to the current token
    sequence, and should not be used concurrently by different threads
    to parse separate token sequences.

    See driver.py for how to get input tokens by tokenizing a file or
    string.

    Parsing is complete when addtoken() returns True; the root of the
    abstract syntax tree can then be retrieved from the rootnode
    instance variable.  When a syntax error occurs, addtoken() raises
    the ParseError exception.  There is no error recovery; the parser
    cannot be used after a syntax error was reported (but it can be
    reinitialized by calling setup()).

    """

    def __init__(self, grammar, convert=None):
        """Constructor.

        The grammar argument is a grammar.Grammar instance; see the
        grammar module for more information.

        The parser is not ready yet for parsing; you must call the
        setup() method to get it started.

        The optional convert argument is a function mapping concrete
        syntax tree nodes to abstract syntax tree nodes.  If not
        given, no conversion is done and the syntax tree produced is
        the concrete syntax tree.  If given, it must be a function of
        two arguments, the first being the grammar (a grammar.Grammar
        instance), and the second being the concrete syntax tree node
        to be converted.  The syntax tree is converted from the bottom
        up.

        A concrete syntax tree node is a (type, value, context, nodes)
        tuple, where type is the node type (a token or symbol number),
        value is None for symbols and a string for tokens, context is
        None or an opaque value used for error reporting (typically a
        (lineno, offset) pair), and nodes is a list of children for
        symbols, and None for tokens.

        An abstract syntax tree node may be anything; this is entirely
        up to the converter function.

        """
        self.grammar = grammar
        self.convert = convert or (lambda grammar, node: node)

    def setup(self, start=None):
        """Prepare for parsing.

        This *must* be called before starting to parse.

        The optional argument is an alternative start symbol; it
        defaults to the grammar's start symbol.

        You can use a Parser instance to parse any number of programs;
        each time you call setup() the parser is reset to an initial
        state determined by the (implicit or explicit) start symbol.

        """
        if start is None:
            start = self.grammar.start
        # Each stack entry is a tuple: (dfa, state, node).
        # A node is a tuple: (type, value, context, children),
        # where children is a list of nodes or None, and context may be None.
        newnode = (start, None, None, [])
        stackentry = (self.grammar.dfas[start], 0, newnode)
        self.stack = [stackentry]
        self.rootnode = None
        self.used_names = set() # Aliased to self.rootnode.used_names in pop()

    def addtoken(self, type, value, context):
        """Add a token; return True iff this is the end of the program."""
        # Map from token to label
        ilabel = self.classify(type, value, context)
        # Loop until the token is shifted; may raise exceptions
        while True:
            dfa, state, node = self.stack[-1]
            states, first = dfa
            arcs = states[state]
            # Look for a state with this label
            for i, newstate in arcs:
                t, v = self.grammar.labels[i]
                if ilabel == i:
                    # Look it up in the list of labels
                    assert t < 256
                    # Shift a token; we're done with it
                    self.shift(type, value, newstate, context)
                    # Pop while we are in an accept-only state
                    state = newstate
                    while states[state] == [(0, state)]:
                        self.pop()
                        if not self.stack:
                            # Done parsing!
                            return True
                        dfa, state, node = self.stack[-1]
                        states, first = dfa
                    # Done with this token
                    return False
                elif t >= 256:
                    # See if it's a symbol and if we're in its first set
                    itsdfa = self.grammar.dfas[t]
                    itsstates, itsfirst = itsdfa
                    if ilabel in itsfirst:
                        # Push a symbol
                        self.push(t, self.grammar.dfas[t], newstate, context)
                        break # To continue the outer while loop
            else:
                if (0, state) in arcs:
                    # An accepting state, pop it and try something else
                    self.pop()
                    if not self.stack:
                        # Done parsing, but another token is input
                        raise ParseError("too much input",
                                         type, value, context)
                else:
                    # No success finding a transition
                    raise ParseError("bad input", type, value, context)

    def classify(self, type, value, context):
        """Turn a token into a label.  (Internal)"""
        if type == token.NAME:
            # Keep a listing of all used names
            self.used_names.add(value)
            # Check for reserved words
            ilabel = self.grammar.keywords.get(value)
            if ilabel is not None:
                return ilabel
        ilabel = self.grammar.tokens.get(type)
        if ilabel is None:
            raise ParseError("bad token", type, value, context)
        return ilabel

    def shift(self, type, value, newstate, context):
        """Shift a token.  (Internal)"""
        dfa, state, node = self.stack[-1]
        newnode = (type, value, context, None)
        newnode = self.convert(self.grammar, newnode)
        if newnode is not None:
            node[-1].append(newnode)
        self.stack[-1] = (dfa, newstate, node)

    def push(self, type, newdfa, newstate, context):
        """Push a nonterminal.  (Internal)"""
        dfa, state, node = self.stack[-1]
        newnode = (type, None, context, [])
        self.stack[-1] = (dfa, newstate, node)
        self.stack.append((newdfa, 0, newnode))

    def pop(self):
        """Pop a nonterminal.  (Internal)"""
        popdfa, popstate, popnode = self.stack.pop()
        newnode = self.convert(self.grammar, popnode)
        if newnode is not None:
            if self.stack:
                dfa, state, node = self.stack[-1]
                node[-1].append(newnode)
            else:
                self.rootnode = newnode
                self.rootnode.used_names = self.used_names
29 blib2to3/pgen2/parse.pyi Normal file
@@ -0,0 +1,29 @@
# Stubs for lib2to3.pgen2.parse (Python 3.6)

from typing import Any, Dict, List, Optional, Sequence, Set, Text, Tuple

from blib2to3.pgen2.grammar import Grammar, _DFAS
from blib2to3.pytree import _NL, _Convert, _RawNode

_Context = Sequence[Any]

class ParseError(Exception):
    msg: Text
    type: int
    value: Optional[Text]
    context: _Context
    def __init__(self, msg: Text, type: int, value: Optional[Text], context: _Context) -> None: ...

class Parser:
    grammar: Grammar
    convert: _Convert
    stack: List[Tuple[_DFAS, int, _RawNode]]
    rootnode: Optional[_NL]
    used_names: Set[Text]
    def __init__(self, grammar: Grammar, convert: Optional[_Convert] = ...) -> None: ...
    def setup(self, start: Optional[int] = ...) -> None: ...
    def addtoken(self, type: int, value: Optional[Text], context: _Context) -> bool: ...
    def classify(self, type: int, value: Optional[Text], context: _Context) -> int: ...
    def shift(self, type: int, value: Optional[Text], newstate: int, context: _Context) -> None: ...
    def push(self, type: int, newdfa: _DFAS, newstate: int, context: _Context) -> None: ...
    def pop(self) -> None: ...
blib2to3/pgen2/pgen.py
@@ -1,41 +1,30 @@
 # Copyright 2004-2005 Elemental Security, Inc. All Rights Reserved.
 # Licensed to PSF under a Contributor Agreement.
 
-import os
-from collections.abc import Iterator, Sequence
-from typing import IO, Any, NoReturn, Optional, Union
-
-from blib2to3.pgen2 import grammar, token, tokenize
-from blib2to3.pgen2.tokenize import TokenInfo
-
-Path = Union[str, "os.PathLike[str]"]
-
+# Pgen imports
+from . import grammar, token, tokenize
 
 class PgenGrammar(grammar.Grammar):
     pass
 
-
-class ParserGenerator:
-    filename: Path
-    stream: IO[str]
-    generator: Iterator[TokenInfo]
-    first: dict[str, Optional[dict[str, int]]]
-
-    def __init__(self, filename: Path, stream: Optional[IO[str]] = None) -> None:
+class ParserGenerator(object):
+
+    def __init__(self, filename, stream=None):
         close_stream = None
         if stream is None:
-            stream = open(filename, encoding="utf-8")
+            stream = open(filename)
             close_stream = stream.close
         self.filename = filename
-        self.generator = tokenize.tokenize(stream.read())
-        self.gettoken()  # Initialize lookahead
+        self.stream = stream
+        self.generator = tokenize.generate_tokens(stream.readline)
+        self.gettoken() # Initialize lookahead
         self.dfas, self.startsymbol = self.parse()
         if close_stream is not None:
             close_stream()
-        self.first = {}  # map from symbol name to set of tokens
+        self.first = {} # map from symbol name to set of tokens
         self.addfirstsets()
 
-    def make_grammar(self) -> PgenGrammar:
+    def make_grammar(self):
         c = PgenGrammar()
         names = list(self.dfas.keys())
         names.sort()
@@ -60,9 +49,8 @@ def make_grammar(self):
         c.start = c.symbol2number[self.startsymbol]
         return c
 
-    def make_first(self, c: PgenGrammar, name: str) -> dict[int, int]:
+    def make_first(self, c, name):
         rawfirst = self.first[name]
-        assert rawfirst is not None
         first = {}
         for label in sorted(rawfirst):
             ilabel = self.make_label(c, label)
@@ -70,7 +58,7 @@ def make_first(self, c, name):
                 first[ilabel] = 1
         return first
 
-    def make_label(self, c: PgenGrammar, label: str) -> int:
+    def make_label(self, c, label):
         # XXX Maybe this should be a method on a subclass of converter?
         ilabel = len(c.labels)
         if label[0].isalpha():
@@ -99,21 +87,16 @@ def make_label(self, c, label):
             assert label[0] in ('"', "'"), label
             value = eval(label)
             if value[0].isalpha():
-                if label[0] == '"':
-                    keywords = c.soft_keywords
-                else:
-                    keywords = c.keywords
-
                 # A keyword
-                if value in keywords:
-                    return keywords[value]
+                if value in c.keywords:
+                    return c.keywords[value]
                 else:
                     c.labels.append((token.NAME, value))
-                    keywords[value] = ilabel
+                    c.keywords[value] = ilabel
                     return ilabel
             else:
                 # An operator (any non-numeric token)
-                itoken = grammar.opmap[value]  # Fails if unknown token
+                itoken = grammar.opmap[value] # Fails if unknown token
                 if itoken in c.tokens:
                     return c.tokens[itoken]
                 else:
@@ -121,49 +104,47 @@ def make_label(self, c, label):
                     c.tokens[itoken] = ilabel
                     return ilabel
 
-    def addfirstsets(self) -> None:
+    def addfirstsets(self):
         names = list(self.dfas.keys())
         names.sort()
         for name in names:
             if name not in self.first:
                 self.calcfirst(name)
-            # print name, self.first[name].keys()
+            #print name, self.first[name].keys()
 
-    def calcfirst(self, name: str) -> None:
+    def calcfirst(self, name):
         dfa = self.dfas[name]
-        self.first[name] = None  # dummy to detect left recursion
+        self.first[name] = None # dummy to detect left recursion
         state = dfa[0]
-        totalset: dict[str, int] = {}
+        totalset = {}
         overlapcheck = {}
-        for label in state.arcs:
+        for label, next in state.arcs.items():
             if label in self.dfas:
                 if label in self.first:
                     fset = self.first[label]
                     if fset is None:
-                        raise ValueError(f"recursion for rule {name!r}")
+                        raise ValueError("recursion for rule %r" % name)
                 else:
                     self.calcfirst(label)
                     fset = self.first[label]
-                    assert fset is not None
                 totalset.update(fset)
                 overlapcheck[label] = fset
             else:
                 totalset[label] = 1
                 overlapcheck[label] = {label: 1}
-        inverse: dict[str, str] = {}
+        inverse = {}
         for label, itsfirst in overlapcheck.items():
             for symbol in itsfirst:
                 if symbol in inverse:
-                    raise ValueError(
-                        f"rule {name} is ambiguous; {symbol} is in the first sets of"
-                        f" {label} as well as {inverse[symbol]}"
-                    )
+                    raise ValueError("rule %s is ambiguous; %s is in the"
+                                     " first sets of %s as well as %s" %
                                      (name, symbol, label, inverse[symbol]))
                 inverse[symbol] = label
         self.first[name] = totalset
 
-    def parse(self) -> tuple[dict[str, list["DFAState"]], str]:
+    def parse(self):
         dfas = {}
-        startsymbol: Optional[str] = None
+        startsymbol = None
         # MSTART: (NEWLINE | RULE)* ENDMARKER
         while self.type != token.ENDMARKER:
             while self.type == token.NEWLINE:
@@ -173,33 +154,30 @@ def parse(self):
             self.expect(token.OP, ":")
             a, z = self.parse_rhs()
             self.expect(token.NEWLINE)
-            # self.dump_nfa(name, a, z)
+            #self.dump_nfa(name, a, z)
             dfa = self.make_dfa(a, z)
-            # self.dump_dfa(name, dfa)
-            # oldlen = len(dfa)
+            #self.dump_dfa(name, dfa)
+            oldlen = len(dfa)
             self.simplify_dfa(dfa)
-            # newlen = len(dfa)
+            newlen = len(dfa)
             dfas[name] = dfa
-            # print name, oldlen, newlen
+            #print name, oldlen, newlen
             if startsymbol is None:
                 startsymbol = name
-        assert startsymbol is not None
         return dfas, startsymbol
 
-    def make_dfa(self, start: "NFAState", finish: "NFAState") -> list["DFAState"]:
+    def make_dfa(self, start, finish):
         # To turn an NFA into a DFA, we define the states of the DFA
         # to correspond to *sets* of states of the NFA. Then do some
         # state reduction. Let's represent sets as dicts with 1 for
         # values.
         assert isinstance(start, NFAState)
         assert isinstance(finish, NFAState)
 
-        def closure(state: NFAState) -> dict[NFAState, int]:
-            base: dict[NFAState, int] = {}
+        def closure(state):
+            base = {}
             addclosure(state, base)
             return base
 
-        def addclosure(state: NFAState, base: dict[NFAState, int]) -> None:
+        def addclosure(state, base):
             assert isinstance(state, NFAState)
             if state in base:
                 return
@@ -207,10 +185,9 @@ def addclosure(state, base):
             for label, next in state.arcs:
                 if label is None:
                     addclosure(next, base)
 
         states = [DFAState(closure(start), finish)]
-        for state in states:  # NB states grows while we're iterating
-            arcs: dict[str, dict[NFAState, int]] = {}
+        for state in states: # NB states grows while we're iterating
+            arcs = {}
             for nfastate in state.nfaset:
                 for label, next in nfastate.arcs:
                     if label is not None:
@@ -223,9 +200,9 @@ def addclosure(state, base):
                     st = DFAState(nfaset, finish)
                     states.append(st)
                 state.addarc(st, label)
-        return states  # List of DFAState instances; first one is start
+        return states # List of DFAState instances; first one is start
 
-    def dump_nfa(self, name: str, start: "NFAState", finish: "NFAState") -> None:
+    def dump_nfa(self, name, start, finish):
         print("Dump of NFA for", name)
         todo = [start]
         for i, state in enumerate(todo):
@@ -237,18 +214,18 @@ def dump_nfa(self, name, start, finish):
                     j = len(todo)
                     todo.append(next)
                 if label is None:
-                    print(f"    -> {j}")
+                    print("    -> %d" % j)
                 else:
-                    print(f"    {label} -> {j}")
+                    print("    %s -> %d" % (label, j))
 
-    def dump_dfa(self, name: str, dfa: Sequence["DFAState"]) -> None:
+    def dump_dfa(self, name, dfa):
         print("Dump of DFA for", name)
         for i, state in enumerate(dfa):
             print("  State", i, state.isfinal and "(final)" or "")
             for label, next in sorted(state.arcs.items()):
-                print(f"    {label} -> {dfa.index(next)}")
+                print("    %s -> %d" % (label, dfa.index(next)))
 
-    def simplify_dfa(self, dfa: list["DFAState"]) -> None:
+    def simplify_dfa(self, dfa):
         # This is not theoretically optimal, but works well enough.
         # Algorithm: repeatedly look for two states that have the same
         # set of arcs (same labels pointing to the same nodes) and
@@ -259,17 +236,17 @@ def simplify_dfa(self, dfa):
         while changes:
             changes = False
             for i, state_i in enumerate(dfa):
-                for j in range(i + 1, len(dfa)):
+                for j in range(i+1, len(dfa)):
                     state_j = dfa[j]
                     if state_i == state_j:
-                        # print "  unify", i, j
+                        #print "  unify", i, j
                         del dfa[j]
                         for state in dfa:
                             state.unifystate(state_j, state_i)
                         changes = True
                         break
 
-    def parse_rhs(self) -> tuple["NFAState", "NFAState"]:
+    def parse_rhs(self):
         # RHS: ALT ('|' ALT)*
         a, z = self.parse_alt()
         if self.value != "|":
@@ -286,16 +263,17 @@ def parse_rhs(self):
             z.addarc(zz)
             return aa, zz
 
-    def parse_alt(self) -> tuple["NFAState", "NFAState"]:
+    def parse_alt(self):
         # ALT: ITEM+
         a, b = self.parse_item()
-        while self.value in ("(", "[") or self.type in (token.NAME, token.STRING):
+        while (self.value in ("(", "[") or
+               self.type in (token.NAME, token.STRING)):
             c, d = self.parse_item()
             b.addarc(c)
             b = d
         return a, b
 
-    def parse_item(self) -> tuple["NFAState", "NFAState"]:
+    def parse_item(self):
         # ITEM: '[' RHS ']' | ATOM ['+' | '*']
         if self.value == "[":
             self.gettoken()
@@ -315,7 +293,7 @@ def parse_item(self):
         else:
             return a, a
 
-    def parse_atom(self) -> tuple["NFAState", "NFAState"]:
+    def parse_atom(self):
         # ATOM: '(' RHS ')' | NAME | STRING
         if self.value == "(":
             self.gettoken()
@@ -329,67 +307,65 @@ def parse_atom(self):
             self.gettoken()
             return a, z
         else:
-            self.raise_error(
-                f"expected (...) or NAME or STRING, got {self.type}/{self.value}"
-            )
+            self.raise_error("expected (...) or NAME or STRING, got %s/%s",
+                             self.type, self.value)
 
-    def expect(self, type: int, value: Optional[Any] = None) -> str:
+    def expect(self, type, value=None):
         if self.type != type or (value is not None and self.value != value):
-            self.raise_error(f"expected {type}/{value}, got {self.type}/{self.value}")
+            self.raise_error("expected %s/%s, got %s/%s",
+                             type, value, self.type, self.value)
         value = self.value
         self.gettoken()
         return value
 
-    def gettoken(self) -> None:
+    def gettoken(self):
         tup = next(self.generator)
         while tup[0] in (tokenize.COMMENT, tokenize.NL):
             tup = next(self.generator)
         self.type, self.value, self.begin, self.end, self.line = tup
-        # print token.tok_name[self.type], repr(self.value)
+        #print token.tok_name[self.type], repr(self.value)
 
-    def raise_error(self, msg: str) -> NoReturn:
-        raise SyntaxError(
-            msg, (str(self.filename), self.end[0], self.end[1], self.line)
-        )
+    def raise_error(self, msg, *args):
+        if args:
+            try:
+                msg = msg % args
+            except:
+                msg = " ".join([msg] + list(map(str, args)))
+        raise SyntaxError(msg, (self.filename, self.end[0],
+                                self.end[1], self.line))
 
-
-class NFAState:
-    arcs: list[tuple[Optional[str], "NFAState"]]
-
-    def __init__(self) -> None:
-        self.arcs = []  # list of (label, NFAState) pairs
+class NFAState(object):
+
+    def __init__(self):
+        self.arcs = [] # list of (label, NFAState) pairs
 
-    def addarc(self, next: "NFAState", label: Optional[str] = None) -> None:
+    def addarc(self, next, label=None):
         assert label is None or isinstance(label, str)
         assert isinstance(next, NFAState)
         self.arcs.append((label, next))
 
-
-class DFAState:
-    nfaset: dict[NFAState, Any]
-    isfinal: bool
-    arcs: dict[str, "DFAState"]
-
-    def __init__(self, nfaset: dict[NFAState, Any], final: NFAState) -> None:
+class DFAState(object):
+
+    def __init__(self, nfaset, final):
         assert isinstance(nfaset, dict)
         assert isinstance(next(iter(nfaset)), NFAState)
         assert isinstance(final, NFAState)
         self.nfaset = nfaset
         self.isfinal = final in nfaset
-        self.arcs = {}  # map from label to DFAState
+        self.arcs = {} # map from label to DFAState
 
-    def addarc(self, next: "DFAState", label: str) -> None:
+    def addarc(self, next, label):
         assert isinstance(label, str)
         assert label not in self.arcs
         assert isinstance(next, DFAState)
         self.arcs[label] = next
 
-    def unifystate(self, old: "DFAState", new: "DFAState") -> None:
+    def unifystate(self, old, new):
         for label, next in self.arcs.items():
             if next is old:
                 self.arcs[label] = new
 
-    def __eq__(self, other: Any) -> bool:
+    def __eq__(self, other):
         # Equality test -- ignore the nfaset instance variable
         assert isinstance(other, DFAState)
         if self.isfinal != other.isfinal:
@@ -403,9 +379,8 @@ def __eq__(self, other):
                 return False
         return True
 
-    __hash__: Any = None  # For Py3 compatibility.
+    __hash__ = None # For Py3 compatibility.
 
-
-def generate_grammar(filename: Path = "Grammar.txt") -> PgenGrammar:
+def generate_grammar(filename="Grammar.txt"):
     p = ParserGenerator(filename)
     return p.make_grammar()
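
ParserGenerator runs the whole pipeline: tokenize the grammar text, parse each rule into an NFA (parse_rhs and friends), convert it to a DFA (make_dfa), simplify it, and fill in the label tables via make_label. generate_grammar() is the one-call wrapper; a usage sketch, assuming Grammar.txt is present:

    from blib2to3.pgen2 import pgen

    g = pgen.generate_grammar("Grammar.txt")
    print(len(g.dfas), "nonterminals,", len(g.labels), "labels")
    g.report()  # dump the tables to stdout for debugging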
49 blib2to3/pgen2/pgen.pyi Normal file
@@ -0,0 +1,49 @@
# Stubs for lib2to3.pgen2.pgen (Python 3.6)

from typing import Any, Dict, IO, Iterable, Iterator, List, Optional, Text, Tuple
from mypy_extensions import NoReturn

from blib2to3.pgen2 import _Path, grammar
from blib2to3.pgen2.tokenize import _TokenInfo

class PgenGrammar(grammar.Grammar): ...

class ParserGenerator:
    filename: _Path
    stream: IO[Text]
    generator: Iterator[_TokenInfo]
    first: Dict[Text, Dict[Text, int]]
    def __init__(self, filename: _Path, stream: Optional[IO[Text]] = ...) -> None: ...
    def make_grammar(self) -> PgenGrammar: ...
    def make_first(self, c: PgenGrammar, name: Text) -> Dict[int, int]: ...
    def make_label(self, c: PgenGrammar, label: Text) -> int: ...
    def addfirstsets(self) -> None: ...
    def calcfirst(self, name: Text) -> None: ...
    def parse(self) -> Tuple[Dict[Text, List[DFAState]], Text]: ...
    def make_dfa(self, start: NFAState, finish: NFAState) -> List[DFAState]: ...
    def dump_nfa(self, name: Text, start: NFAState, finish: NFAState) -> List[DFAState]: ...
    def dump_dfa(self, name: Text, dfa: Iterable[DFAState]) -> None: ...
    def simplify_dfa(self, dfa: List[DFAState]) -> None: ...
    def parse_rhs(self) -> Tuple[NFAState, NFAState]: ...
    def parse_alt(self) -> Tuple[NFAState, NFAState]: ...
    def parse_item(self) -> Tuple[NFAState, NFAState]: ...
    def parse_atom(self) -> Tuple[NFAState, NFAState]: ...
    def expect(self, type: int, value: Optional[Any] = ...) -> Text: ...
    def gettoken(self) -> None: ...
    def raise_error(self, msg: str, *args: Any) -> NoReturn: ...

class NFAState:
    arcs: List[Tuple[Optional[Text], NFAState]]
    def __init__(self) -> None: ...
    def addarc(self, next: NFAState, label: Optional[Text] = ...) -> None: ...

class DFAState:
    nfaset: Dict[NFAState, Any]
    isfinal: bool
    arcs: Dict[Text, DFAState]
    def __init__(self, nfaset: Dict[NFAState, Any], final: NFAState) -> None: ...
    def addarc(self, next: DFAState, label: Text) -> None: ...
    def unifystate(self, old: DFAState, new: DFAState) -> None: ...
    def __eq__(self, other: Any) -> bool: ...

def generate_grammar(filename: _Path = ...) -> PgenGrammar: ...
83 blib2to3/pgen2/token.py Normal file
@@ -0,0 +1,83 @@
"""Token constants (from "token.h")."""

# Taken from Python (r53757) and modified to include some tokens
#   originally monkeypatched in by pgen2.tokenize

#--start constants--
ENDMARKER = 0
NAME = 1
NUMBER = 2
STRING = 3
NEWLINE = 4
INDENT = 5
DEDENT = 6
LPAR = 7
RPAR = 8
LSQB = 9
RSQB = 10
COLON = 11
COMMA = 12
SEMI = 13
PLUS = 14
MINUS = 15
STAR = 16
SLASH = 17
VBAR = 18
AMPER = 19
LESS = 20
GREATER = 21
EQUAL = 22
DOT = 23
PERCENT = 24
BACKQUOTE = 25
LBRACE = 26
RBRACE = 27
EQEQUAL = 28
NOTEQUAL = 29
LESSEQUAL = 30
GREATEREQUAL = 31
TILDE = 32
CIRCUMFLEX = 33
LEFTSHIFT = 34
RIGHTSHIFT = 35
DOUBLESTAR = 36
PLUSEQUAL = 37
MINEQUAL = 38
STAREQUAL = 39
SLASHEQUAL = 40
PERCENTEQUAL = 41
AMPEREQUAL = 42
VBAREQUAL = 43
CIRCUMFLEXEQUAL = 44
LEFTSHIFTEQUAL = 45
RIGHTSHIFTEQUAL = 46
DOUBLESTAREQUAL = 47
DOUBLESLASH = 48
DOUBLESLASHEQUAL = 49
AT = 50
ATEQUAL = 51
OP = 52
COMMENT = 53
NL = 54
RARROW = 55
AWAIT = 56
ASYNC = 57
ERRORTOKEN = 58
N_TOKENS = 59
NT_OFFSET = 256
#--end constants--

tok_name = {}
for _name, _value in list(globals().items()):
    if type(_value) is type(0):
        tok_name[_value] = _name


def ISTERMINAL(x):
    return x < NT_OFFSET

def ISNONTERMINAL(x):
    return x >= NT_OFFSET

def ISEOF(x):
    return x == ENDMARKER
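
tok_name is built by scanning the module globals for int constants, so it is the inverse of the table above, and the three predicates are simple comparisons against NT_OFFSET (symbol numbers start at 256). For example:

    from blib2to3.pgen2 import token

    assert token.tok_name[token.NAME] == "NAME"
    assert token.ISTERMINAL(token.NAME)          # 1 < 256
    assert token.ISNONTERMINAL(token.NT_OFFSET)  # 256 >= 256
    assert token.ISEOF(token.ENDMARKER)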
73 blib2to3/pgen2/token.pyi Normal file
@@ -0,0 +1,73 @@
# Stubs for lib2to3.pgen2.token (Python 3.6)

import sys
from typing import Dict, Text

ENDMARKER: int
NAME: int
NUMBER: int
STRING: int
NEWLINE: int
INDENT: int
DEDENT: int
LPAR: int
RPAR: int
LSQB: int
RSQB: int
COLON: int
COMMA: int
SEMI: int
PLUS: int
MINUS: int
STAR: int
SLASH: int
VBAR: int
AMPER: int
LESS: int
GREATER: int
EQUAL: int
DOT: int
PERCENT: int
BACKQUOTE: int
LBRACE: int
RBRACE: int
EQEQUAL: int
NOTEQUAL: int
LESSEQUAL: int
GREATEREQUAL: int
TILDE: int
CIRCUMFLEX: int
LEFTSHIFT: int
RIGHTSHIFT: int
DOUBLESTAR: int
PLUSEQUAL: int
MINEQUAL: int
STAREQUAL: int
SLASHEQUAL: int
PERCENTEQUAL: int
AMPEREQUAL: int
VBAREQUAL: int
CIRCUMFLEXEQUAL: int
LEFTSHIFTEQUAL: int
RIGHTSHIFTEQUAL: int
DOUBLESTAREQUAL: int
DOUBLESLASH: int
DOUBLESLASHEQUAL: int
OP: int
COMMENT: int
NL: int
if sys.version_info >= (3,):
    RARROW: int
if sys.version_info >= (3, 5):
    AT: int
    ATEQUAL: int
    AWAIT: int
    ASYNC: int
ERRORTOKEN: int
N_TOKENS: int
NT_OFFSET: int
tok_name: Dict[int, Text]

def ISTERMINAL(x: int) -> bool: ...
def ISNONTERMINAL(x: int) -> bool: ...
def ISEOF(x: int) -> bool: ...
567
blib2to3/pgen2/tokenize.py
Normal file
567
blib2to3/pgen2/tokenize.py
Normal file
@ -0,0 +1,567 @@
|
||||
# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation.
|
||||
# All rights reserved.
|
||||
|
||||
"""Tokenization help for Python programs.
|
||||
|
||||
generate_tokens(readline) is a generator that breaks a stream of
|
||||
text into Python tokens. It accepts a readline-like method which is called
|
||||
repeatedly to get the next line of input (or "" for EOF). It generates
|
||||
5-tuples with these members:
|
||||
|
||||
the token type (see token.py)
|
||||
the token (a string)
|
||||
the starting (row, column) indices of the token (a 2-tuple of ints)
|
||||
the ending (row, column) indices of the token (a 2-tuple of ints)
|
||||
the original line (string)
|
||||
|
||||
It is designed to match the working of the Python tokenizer exactly, except
|
||||
that it produces COMMENT tokens for comments and gives type OP for all
|
||||
operators
|
||||
|
||||
Older entry points
|
||||
tokenize_loop(readline, tokeneater)
|
||||
tokenize(readline, tokeneater=printtoken)
|
||||
are the same, except instead of generating tokens, tokeneater is a callback
|
||||
function to which the 5 fields described above are passed as 5 arguments,
|
||||
each time a new token is found."""
|
||||
|
||||
__author__ = 'Ka-Ping Yee <ping@lfw.org>'
|
||||
__credits__ = \
|
||||
'GvR, ESR, Tim Peters, Thomas Wouters, Fred Drake, Skip Montanaro'
|
||||
|
||||
import re
|
||||
from codecs import BOM_UTF8, lookup
|
||||
from blib2to3.pgen2.token import *
|
||||
|
||||
from . import token
|
||||
__all__ = [x for x in dir(token) if x[0] != '_'] + ["tokenize",
|
||||
"generate_tokens", "untokenize"]
|
||||
del token
|
||||
|
||||
try:
|
||||
bytes
|
||||
except NameError:
|
||||
# Support bytes type in Python <= 2.5, so 2to3 turns itself into
|
||||
# valid Python 3 code.
|
||||
bytes = str
|
||||
|
||||
def group(*choices): return '(' + '|'.join(choices) + ')'
|
||||
def any(*choices): return group(*choices) + '*'
|
||||
def maybe(*choices): return group(*choices) + '?'
|
||||
def _combinations(*l):
|
||||
return set(
|
||||
x + y for x in l for y in l + ("",) if x.casefold() != y.casefold()
|
||||
)
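To make the small regex combinators above concrete, a few evaluated examples (shown as comments; they are not part of the module):

group('a', 'b')          # '(a|b)'
any('a', 'b')            # '(a|b)*'
maybe('a', 'b')          # '(a|b)?'
_combinations('r', 'b')  # {'r', 'b', 'rb', 'br'} -- pairs that differ case-insensitively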
|
||||
|
||||
Whitespace = r'[ \f\t]*'
|
||||
Comment = r'#[^\r\n]*'
|
||||
Ignore = Whitespace + any(r'\\\r?\n' + Whitespace) + maybe(Comment)
|
||||
Name = r'\w+' # this is invalid but it's fine because Name comes after Number in all groups
|
||||
|
||||
Binnumber = r'0[bB]_?[01]+(?:_[01]+)*'
|
||||
Hexnumber = r'0[xX]_?[\da-fA-F]+(?:_[\da-fA-F]+)*[lL]?'
|
||||
Octnumber = r'0[oO]?_?[0-7]+(?:_[0-7]+)*[lL]?'
|
||||
Decnumber = group(r'[1-9]\d*(?:_\d+)*[lL]?', '0[lL]?')
|
||||
Intnumber = group(Binnumber, Hexnumber, Octnumber, Decnumber)
|
||||
Exponent = r'[eE][-+]?\d+(?:_\d+)*'
|
||||
Pointfloat = group(r'\d+(?:_\d+)*\.(?:\d+(?:_\d+)*)?', r'\.\d+(?:_\d+)*') + maybe(Exponent)
|
||||
Expfloat = r'\d+(?:_\d+)*' + Exponent
|
||||
Floatnumber = group(Pointfloat, Expfloat)
|
||||
Imagnumber = group(r'\d+(?:_\d+)*[jJ]', Floatnumber + r'[jJ]')
|
||||
Number = group(Imagnumber, Floatnumber, Intnumber)
|
||||
|
||||
# Tail end of ' string.
|
||||
Single = r"[^'\\]*(?:\\.[^'\\]*)*'"
|
||||
# Tail end of " string.
|
||||
Double = r'[^"\\]*(?:\\.[^"\\]*)*"'
|
||||
# Tail end of ''' string.
|
||||
Single3 = r"[^'\\]*(?:(?:\\.|'(?!''))[^'\\]*)*'''"
|
||||
# Tail end of """ string.
|
||||
Double3 = r'[^"\\]*(?:(?:\\.|"(?!""))[^"\\]*)*"""'
|
||||
_litprefix = r"(?:[uUrRbBfF]|[rR][fFbB]|[fFbBuU][rR])?"
|
||||
Triple = group(_litprefix + "'''", _litprefix + '"""')
|
||||
# Single-line ' or " string.
|
||||
String = group(_litprefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*'",
|
||||
_litprefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*"')
|
||||
|
||||
# Because of leftmost-then-longest match semantics, be sure to put the
|
||||
# longest operators first (e.g., if = came before ==, == would get
|
||||
# recognized as two instances of =).
|
||||
Operator = group(r"\*\*=?", r">>=?", r"<<=?", r"<>", r"!=",
|
||||
r"//=?", r"->",
|
||||
r"[+\-*/%&@|^=<>]=?",
|
||||
r"~")
|
||||
|
||||
Bracket = '[][(){}]'
|
||||
Special = group(r'\r?\n', r'[:;.,`@]')
|
||||
Funny = group(Operator, Bracket, Special)
|
||||
|
||||
PlainToken = group(Number, Funny, String, Name)
|
||||
Token = Ignore + PlainToken
|
||||
|
||||
# First (or only) line of ' or " string.
|
||||
ContStr = group(_litprefix + r"'[^\n'\\]*(?:\\.[^\n'\\]*)*" +
|
||||
group("'", r'\\\r?\n'),
|
||||
_litprefix + r'"[^\n"\\]*(?:\\.[^\n"\\]*)*' +
|
||||
group('"', r'\\\r?\n'))
|
||||
PseudoExtras = group(r'\\\r?\n', Comment, Triple)
|
||||
PseudoToken = Whitespace + group(PseudoExtras, Number, Funny, ContStr, Name)
|
||||
|
||||
tokenprog = re.compile(Token, re.UNICODE)
|
||||
pseudoprog = re.compile(PseudoToken, re.UNICODE)
|
||||
single3prog = re.compile(Single3)
|
||||
double3prog = re.compile(Double3)
|
||||
|
||||
_strprefixes = (
|
||||
_combinations('r', 'R', 'f', 'F') |
|
||||
_combinations('r', 'R', 'b', 'B') |
|
||||
{'u', 'U', 'ur', 'uR', 'Ur', 'UR'}
|
||||
)
|
||||
|
||||
endprogs = {"'": re.compile(Single), '"': re.compile(Double),
|
||||
"'''": single3prog, '"""': double3prog,
|
||||
**{f"{prefix}'''": single3prog for prefix in _strprefixes},
|
||||
**{f'{prefix}"""': double3prog for prefix in _strprefixes},
|
||||
**{prefix: None for prefix in _strprefixes}}
|
||||
|
||||
triple_quoted = (
|
||||
{"'''", '"""'} |
|
||||
{f"{prefix}'''" for prefix in _strprefixes} |
|
||||
{f'{prefix}"""' for prefix in _strprefixes}
|
||||
)
|
||||
single_quoted = (
|
||||
{"'", '"'} |
|
||||
{f"{prefix}'" for prefix in _strprefixes} |
|
||||
{f'{prefix}"' for prefix in _strprefixes}
|
||||
)
|
||||
|
||||
tabsize = 8
|
||||
|
||||
class TokenError(Exception): pass
|
||||
|
||||
class StopTokenizing(Exception): pass
|
||||
|
||||
def printtoken(type, token, xxx_todo_changeme, xxx_todo_changeme1, line): # for testing
|
||||
(srow, scol) = xxx_todo_changeme
|
||||
(erow, ecol) = xxx_todo_changeme1
|
||||
print("%d,%d-%d,%d:\t%s\t%s" % \
|
||||
(srow, scol, erow, ecol, tok_name[type], repr(token)))
|
||||
|
||||
def tokenize(readline, tokeneater=printtoken):
|
||||
"""
|
||||
The tokenize() function accepts two parameters: one representing the
|
||||
input stream, and one providing an output mechanism for tokenize().
|
||||
|
||||
The first parameter, readline, must be a callable object which provides
|
||||
the same interface as the readline() method of built-in file objects.
|
||||
Each call to the function should return one line of input as a string.
|
||||
|
||||
The second parameter, tokeneater, must also be a callable object. It is
|
||||
called once for each token, with five arguments, corresponding to the
|
||||
tuples generated by generate_tokens().
|
||||
"""
|
||||
try:
|
||||
tokenize_loop(readline, tokeneater)
|
||||
except StopTokenizing:
|
||||
pass
|
||||
|
||||
# backwards compatible interface
|
||||
def tokenize_loop(readline, tokeneater):
|
||||
for token_info in generate_tokens(readline):
|
||||
tokeneater(*token_info)
|
||||
|
||||
class Untokenizer:
|
||||
|
||||
def __init__(self):
|
||||
self.tokens = []
|
||||
self.prev_row = 1
|
||||
self.prev_col = 0
|
||||
|
||||
def add_whitespace(self, start):
|
||||
row, col = start
|
||||
assert row <= self.prev_row
|
||||
col_offset = col - self.prev_col
|
||||
if col_offset:
|
||||
self.tokens.append(" " * col_offset)
|
||||
|
||||
def untokenize(self, iterable):
|
||||
for t in iterable:
|
||||
if len(t) == 2:
|
||||
self.compat(t, iterable)
|
||||
break
|
||||
tok_type, token, start, end, line = t
|
||||
self.add_whitespace(start)
|
||||
self.tokens.append(token)
|
||||
self.prev_row, self.prev_col = end
|
||||
if tok_type in (NEWLINE, NL):
|
||||
self.prev_row += 1
|
||||
self.prev_col = 0
|
||||
return "".join(self.tokens)
|
||||
|
||||
def compat(self, token, iterable):
|
||||
startline = False
|
||||
indents = []
|
||||
toks_append = self.tokens.append
|
||||
toknum, tokval = token
|
||||
if toknum in (NAME, NUMBER):
|
||||
tokval += ' '
|
||||
if toknum in (NEWLINE, NL):
|
||||
startline = True
|
||||
for tok in iterable:
|
||||
toknum, tokval = tok[:2]
|
||||
|
||||
if toknum in (NAME, NUMBER, ASYNC, AWAIT):
|
||||
tokval += ' '
|
||||
|
||||
if toknum == INDENT:
|
||||
indents.append(tokval)
|
||||
continue
|
||||
elif toknum == DEDENT:
|
||||
indents.pop()
|
||||
continue
|
||||
elif toknum in (NEWLINE, NL):
|
||||
startline = True
|
||||
elif startline and indents:
|
||||
toks_append(indents[-1])
|
||||
startline = False
|
||||
toks_append(tokval)
|
||||
|
||||
cookie_re = re.compile(r'^[ \t\f]*#.*?coding[:=][ \t]*([-\w.]+)', re.ASCII)
|
||||
blank_re = re.compile(br'^[ \t\f]*(?:[#\r\n]|$)', re.ASCII)
|
||||
|
||||
def _get_normal_name(orig_enc):
|
||||
"""Imitates get_normal_name in tokenizer.c."""
|
||||
# Only care about the first 12 characters.
|
||||
enc = orig_enc[:12].lower().replace("_", "-")
|
||||
if enc == "utf-8" or enc.startswith("utf-8-"):
|
||||
return "utf-8"
|
||||
if enc in ("latin-1", "iso-8859-1", "iso-latin-1") or \
|
||||
enc.startswith(("latin-1-", "iso-8859-1-", "iso-latin-1-")):
|
||||
return "iso-8859-1"
|
||||
return orig_enc
|
||||
|
||||
def detect_encoding(readline):
|
||||
"""
|
||||
The detect_encoding() function is used to detect the encoding that should
|
||||
be used to decode a Python source file. It requires one argument, readline,
|
||||
in the same way as the tokenize() generator.
|
||||
|
||||
It will call readline a maximum of twice, and return the encoding used
|
||||
(as a string) and a list of any lines (left as bytes) it has read
|
||||
in.
|
||||
|
||||
It detects the encoding from the presence of a utf-8 bom or an encoding
|
||||
cookie as specified in pep-0263. If both a bom and a cookie are present, but
|
||||
disagree, a SyntaxError will be raised. If the encoding cookie is an invalid
|
||||
charset, raise a SyntaxError. Note that if a utf-8 bom is found,
|
||||
'utf-8-sig' is returned.
|
||||
|
||||
If no encoding is specified, then the default of 'utf-8' will be returned.
|
||||
"""
|
||||
bom_found = False
|
||||
encoding = None
|
||||
default = 'utf-8'
|
||||
def read_or_stop():
|
||||
try:
|
||||
return readline()
|
||||
except StopIteration:
|
||||
return bytes()
|
||||
|
||||
def find_cookie(line):
|
||||
try:
|
||||
line_string = line.decode('ascii')
|
||||
except UnicodeDecodeError:
|
||||
return None
|
||||
match = cookie_re.match(line_string)
|
||||
if not match:
|
||||
return None
|
||||
encoding = _get_normal_name(match.group(1))
|
||||
try:
|
||||
codec = lookup(encoding)
|
||||
except LookupError:
|
||||
# This behaviour mimics the Python interpreter
|
||||
raise SyntaxError("unknown encoding: " + encoding)
|
||||
|
||||
if bom_found:
|
||||
if codec.name != 'utf-8':
|
||||
# This behaviour mimics the Python interpreter
|
||||
raise SyntaxError('encoding problem: utf-8')
|
||||
encoding += '-sig'
|
||||
return encoding
|
||||
|
||||
first = read_or_stop()
|
||||
if first.startswith(BOM_UTF8):
|
||||
bom_found = True
|
||||
first = first[3:]
|
||||
default = 'utf-8-sig'
|
||||
if not first:
|
||||
return default, []
|
||||
|
||||
encoding = find_cookie(first)
|
||||
if encoding:
|
||||
return encoding, [first]
|
||||
if not blank_re.match(first):
|
||||
return default, [first]
|
||||
|
||||
second = read_or_stop()
|
||||
if not second:
|
||||
return default, [first]
|
||||
|
||||
encoding = find_cookie(second)
|
||||
if encoding:
|
||||
return encoding, [first, second]
|
||||
|
||||
return default, [first, second]
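A hedged usage sketch (the source bytes are made up): feeding detect_encoding() a readline over an in-memory file:

import io
src = b"# -*- coding: latin-1 -*-\nx = 1\n"
encoding, consumed = detect_encoding(io.BytesIO(src).readline)
# encoding == 'iso-8859-1' (normalized by _get_normal_name above);
# consumed holds the byte lines already read in.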
|
||||
|
||||
def untokenize(iterable):
|
||||
"""Transform tokens back into Python source code.
|
||||
|
||||
Each element returned by the iterable must be a token sequence
|
||||
with at least two elements, a token number and token value. If
|
||||
only two tokens are passed, the resulting output is poor.
|
||||
|
||||
Round-trip invariant for full input:
|
||||
Untokenized source will match input source exactly
|
||||
|
||||
Round-trip invariant for limited input:
|
||||
# Output text will tokenize back to the input
|
||||
t1 = [tok[:2] for tok in generate_tokens(f.readline)]
|
||||
newcode = untokenize(t1)
|
||||
readline = iter(newcode.splitlines(1)).next
|
||||
t2 = [tok[:2] for tok in generate_tokens(readline)]
|
||||
assert t1 == t2
|
||||
"""
|
||||
ut = Untokenizer()
|
||||
return ut.untokenize(iterable)
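The docstring example above is written for Python 2 (`.next`); a Python 3 sketch of the same limited round-trip, assuming `code` holds some source text:

import io
t1 = [tok[:2] for tok in generate_tokens(io.StringIO(code).readline)]
newcode = untokenize(t1)
readline = iter(newcode.splitlines(True)).__next__
t2 = [tok[:2] for tok in generate_tokens(readline)]
assert t1 == t2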
|
||||
|
||||
def generate_tokens(readline):
|
||||
"""
|
||||
The generate_tokens() generator requires one argument, readline, which
|
||||
must be a callable object which provides the same interface as the
|
||||
readline() method of built-in file objects. Each call to the function
|
||||
should return one line of input as a string. Alternately, readline
|
||||
can be a callable function terminating with StopIteration:
|
||||
readline = open(myfile).next # Example of alternate readline
|
||||
|
||||
The generator produces 5-tuples with these members: the token type; the
|
||||
token string; a 2-tuple (srow, scol) of ints specifying the row and
|
||||
column where the token begins in the source; a 2-tuple (erow, ecol) of
|
||||
ints specifying the row and column where the token ends in the source;
|
||||
and the line on which the token was found. The line passed is the
|
||||
logical line; continuation lines are included.
|
||||
"""
|
||||
lnum = parenlev = continued = 0
|
||||
numchars = '0123456789'
|
||||
contstr, needcont = '', 0
|
||||
contline = None
|
||||
indents = [0]
|
||||
|
||||
# 'stashed' and 'async_*' are used for async/await parsing
|
||||
stashed = None
|
||||
async_def = False
|
||||
async_def_indent = 0
|
||||
async_def_nl = False
|
||||
|
||||
while 1: # loop over lines in stream
|
||||
try:
|
||||
line = readline()
|
||||
except StopIteration:
|
||||
line = ''
|
||||
lnum = lnum + 1
|
||||
pos, max = 0, len(line)
|
||||
|
||||
if contstr: # continued string
|
||||
if not line:
|
||||
raise TokenError("EOF in multi-line string", strstart)
|
||||
endmatch = endprog.match(line)
|
||||
if endmatch:
|
||||
pos = end = endmatch.end(0)
|
||||
yield (STRING, contstr + line[:end],
|
||||
strstart, (lnum, end), contline + line)
|
||||
contstr, needcont = '', 0
|
||||
contline = None
|
||||
elif needcont and line[-2:] != '\\\n' and line[-3:] != '\\\r\n':
|
||||
yield (ERRORTOKEN, contstr + line,
|
||||
strstart, (lnum, len(line)), contline)
|
||||
contstr = ''
|
||||
contline = None
|
||||
continue
|
||||
else:
|
||||
contstr = contstr + line
|
||||
contline = contline + line
|
||||
continue
|
||||
|
||||
elif parenlev == 0 and not continued: # new statement
|
||||
if not line: break
|
||||
column = 0
|
||||
while pos < max: # measure leading whitespace
|
||||
if line[pos] == ' ': column = column + 1
|
||||
elif line[pos] == '\t': column = (column//tabsize + 1)*tabsize
|
||||
elif line[pos] == '\f': column = 0
|
||||
else: break
|
||||
pos = pos + 1
|
||||
if pos == max: break
|
||||
|
||||
if stashed:
|
||||
yield stashed
|
||||
stashed = None
|
||||
|
||||
if line[pos] in '\r\n': # skip blank lines
|
||||
yield (NL, line[pos:], (lnum, pos), (lnum, len(line)), line)
|
||||
continue
|
||||
|
||||
if line[pos] == '#': # skip comments
|
||||
comment_token = line[pos:].rstrip('\r\n')
|
||||
nl_pos = pos + len(comment_token)
|
||||
yield (COMMENT, comment_token,
|
||||
(lnum, pos), (lnum, pos + len(comment_token)), line)
|
||||
yield (NL, line[nl_pos:],
|
||||
(lnum, nl_pos), (lnum, len(line)), line)
|
||||
continue
|
||||
|
||||
if column > indents[-1]: # count indents
|
||||
indents.append(column)
|
||||
yield (INDENT, line[:pos], (lnum, 0), (lnum, pos), line)
|
||||
|
||||
while column < indents[-1]: # count dedents
|
||||
if column not in indents:
|
||||
raise IndentationError(
|
||||
"unindent does not match any outer indentation level",
|
||||
("<tokenize>", lnum, pos, line))
|
||||
indents = indents[:-1]
|
||||
|
||||
if async_def and async_def_indent >= indents[-1]:
|
||||
async_def = False
|
||||
async_def_nl = False
|
||||
async_def_indent = 0
|
||||
|
||||
yield (DEDENT, '', (lnum, pos), (lnum, pos), line)
|
||||
|
||||
if async_def and async_def_nl and async_def_indent >= indents[-1]:
|
||||
async_def = False
|
||||
async_def_nl = False
|
||||
async_def_indent = 0
|
||||
|
||||
else: # continued statement
|
||||
if not line:
|
||||
raise TokenError("EOF in multi-line statement", (lnum, 0))
|
||||
continued = 0
|
||||
|
||||
while pos < max:
|
||||
pseudomatch = pseudoprog.match(line, pos)
|
||||
if pseudomatch: # scan for tokens
|
||||
start, end = pseudomatch.span(1)
|
||||
spos, epos, pos = (lnum, start), (lnum, end), end
|
||||
token, initial = line[start:end], line[start]
|
||||
|
||||
if initial in numchars or \
|
||||
(initial == '.' and token != '.'): # ordinary number
|
||||
yield (NUMBER, token, spos, epos, line)
|
||||
elif initial in '\r\n':
|
||||
newline = NEWLINE
|
||||
if parenlev > 0:
|
||||
newline = NL
|
||||
elif async_def:
|
||||
async_def_nl = True
|
||||
if stashed:
|
||||
yield stashed
|
||||
stashed = None
|
||||
yield (newline, token, spos, epos, line)
|
||||
|
||||
elif initial == '#':
|
||||
assert not token.endswith("\n")
|
||||
if stashed:
|
||||
yield stashed
|
||||
stashed = None
|
||||
yield (COMMENT, token, spos, epos, line)
|
||||
elif token in triple_quoted:
|
||||
endprog = endprogs[token]
|
||||
endmatch = endprog.match(line, pos)
|
||||
if endmatch: # all on one line
|
||||
pos = endmatch.end(0)
|
||||
token = line[start:pos]
|
||||
if stashed:
|
||||
yield stashed
|
||||
stashed = None
|
||||
yield (STRING, token, spos, (lnum, pos), line)
|
||||
else:
|
||||
strstart = (lnum, start) # multiple lines
|
||||
contstr = line[start:]
|
||||
contline = line
|
||||
break
|
||||
elif initial in single_quoted or \
|
||||
token[:2] in single_quoted or \
|
||||
token[:3] in single_quoted:
|
||||
if token[-1] == '\n': # continued string
|
||||
strstart = (lnum, start)
|
||||
endprog = (endprogs[initial] or endprogs[token[1]] or
|
||||
endprogs[token[2]])
|
||||
contstr, needcont = line[start:], 1
|
||||
contline = line
|
||||
break
|
||||
else: # ordinary string
|
||||
if stashed:
|
||||
yield stashed
|
||||
stashed = None
|
||||
yield (STRING, token, spos, epos, line)
|
||||
elif initial.isidentifier(): # ordinary name
|
||||
if token in ('async', 'await'):
|
||||
if async_def:
|
||||
yield (ASYNC if token == 'async' else AWAIT,
|
||||
token, spos, epos, line)
|
||||
continue
|
||||
|
||||
tok = (NAME, token, spos, epos, line)
|
||||
if token == 'async' and not stashed:
|
||||
stashed = tok
|
||||
continue
|
||||
|
||||
if token == 'def':
|
||||
if (stashed
|
||||
and stashed[0] == NAME
|
||||
and stashed[1] == 'async'):
|
||||
|
||||
async_def = True
|
||||
async_def_indent = indents[-1]
|
||||
|
||||
yield (ASYNC, stashed[1],
|
||||
stashed[2], stashed[3],
|
||||
stashed[4])
|
||||
stashed = None
|
||||
|
||||
if stashed:
|
||||
yield stashed
|
||||
stashed = None
|
||||
|
||||
yield tok
|
||||
elif initial == '\\': # continued stmt
|
||||
# This yield is new; needed for better idempotency:
|
||||
if stashed:
|
||||
yield stashed
|
||||
stashed = None
|
||||
yield (NL, token, spos, (lnum, pos), line)
|
||||
continued = 1
|
||||
else:
|
||||
if initial in '([{': parenlev = parenlev + 1
|
||||
elif initial in ')]}': parenlev = parenlev - 1
|
||||
if stashed:
|
||||
yield stashed
|
||||
stashed = None
|
||||
yield (OP, token, spos, epos, line)
|
||||
else:
|
||||
yield (ERRORTOKEN, line[pos],
|
||||
(lnum, pos), (lnum, pos+1), line)
|
||||
pos = pos + 1
|
||||
|
||||
if stashed:
|
||||
yield stashed
|
||||
stashed = None
|
||||
|
||||
for indent in indents[1:]: # pop remaining indent levels
|
||||
yield (DEDENT, '', (lnum, 0), (lnum, 0), '')
|
||||
yield (ENDMARKER, '', (lnum, 0), (lnum, 0), '')
|
||||
|
||||
if __name__ == '__main__': # testing
|
||||
import sys
|
||||
if len(sys.argv) > 1: tokenize(open(sys.argv[1]).readline)
|
||||
else: tokenize(sys.stdin.readline)
|
30
blib2to3/pgen2/tokenize.pyi
Normal file
@ -0,0 +1,30 @@
|
||||
# Stubs for lib2to3.pgen2.tokenize (Python 3.6)
|
||||
# NOTE: Only elements from __all__ are present.
|
||||
|
||||
from typing import Callable, Iterable, Iterator, List, Text, Tuple
|
||||
from blib2to3.pgen2.token import * # noqa
|
||||
|
||||
|
||||
_Coord = Tuple[int, int]
|
||||
_TokenEater = Callable[[int, Text, _Coord, _Coord, Text], None]
|
||||
_TokenInfo = Tuple[int, Text, _Coord, _Coord, Text]
|
||||
|
||||
|
||||
class TokenError(Exception): ...
|
||||
class StopTokenizing(Exception): ...
|
||||
|
||||
def tokenize(readline: Callable[[], Text], tokeneater: _TokenEater = ...) -> None: ...
|
||||
|
||||
class Untokenizer:
|
||||
tokens: List[Text]
|
||||
prev_row: int
|
||||
prev_col: int
|
||||
def __init__(self) -> None: ...
|
||||
def add_whitespace(self, start: _Coord) -> None: ...
|
||||
def untokenize(self, iterable: Iterable[_TokenInfo]) -> Text: ...
|
||||
def compat(self, token: Tuple[int, Text], iterable: Iterable[_TokenInfo]) -> None: ...
|
||||
|
||||
def untokenize(iterable: Iterable[_TokenInfo]) -> Text: ...
|
||||
def generate_tokens(
|
||||
readline: Callable[[], Text]
|
||||
) -> Iterator[_TokenInfo]: ...
|
57
blib2to3/pygram.py
Normal file
@ -0,0 +1,57 @@
|
||||
# Copyright 2006 Google, Inc. All Rights Reserved.
|
||||
# Licensed to PSF under a Contributor Agreement.
|
||||
|
||||
"""Export the Python grammar and symbols."""
|
||||
|
||||
# Python imports
|
||||
import os
|
||||
|
||||
# Local imports
|
||||
from .pgen2 import token
|
||||
from .pgen2 import driver
|
||||
from . import pytree
|
||||
|
||||
# The grammar file
|
||||
_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__), "Grammar.txt")
|
||||
_PATTERN_GRAMMAR_FILE = os.path.join(os.path.dirname(__file__),
|
||||
"PatternGrammar.txt")
|
||||
|
||||
|
||||
class Symbols(object):
|
||||
|
||||
def __init__(self, grammar):
|
||||
"""Initializer.
|
||||
|
||||
Creates an attribute for each grammar symbol (nonterminal),
|
||||
whose value is the symbol's type (an int >= 256).
|
||||
"""
|
||||
for name, symbol in grammar.symbol2number.items():
|
||||
setattr(self, name, symbol)
|
||||
|
||||
|
||||
def initialize(cache_dir=None):
|
||||
global python_grammar
|
||||
global python_grammar_no_print_statement
|
||||
global python_grammar_no_print_statement_no_exec_statement
|
||||
global python_symbols
|
||||
global pattern_grammar
|
||||
global pattern_symbols
|
||||
|
||||
# Python 2
|
||||
python_grammar = driver.load_packaged_grammar("blib2to3", _GRAMMAR_FILE,
|
||||
cache_dir)
|
||||
|
||||
python_symbols = Symbols(python_grammar)
|
||||
|
||||
# Python 2 + from __future__ import print_function
|
||||
python_grammar_no_print_statement = python_grammar.copy()
|
||||
del python_grammar_no_print_statement.keywords["print"]
|
||||
|
||||
# Python 3
|
||||
python_grammar_no_print_statement_no_exec_statement = python_grammar.copy()
|
||||
del python_grammar_no_print_statement_no_exec_statement.keywords["print"]
|
||||
del python_grammar_no_print_statement_no_exec_statement.keywords["exec"]
|
||||
|
||||
pattern_grammar = driver.load_packaged_grammar("blib2to3", _PATTERN_GRAMMAR_FILE,
|
||||
cache_dir)
|
||||
pattern_symbols = Symbols(pattern_grammar)
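A short sketch of the intended call pattern (import path as packaged in this repo; the attribute is illustrative):

from blib2to3 import pygram

pygram.initialize(cache_dir=None)       # loads and caches both grammars
pygram.python_symbols.funcdef           # an int >= 256 naming the 'funcdef' symbol
"print" in pygram.python_grammar_no_print_statement.keywords   # False after the del above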
|
124
blib2to3/pygram.pyi
Normal file
@ -0,0 +1,124 @@
|
||||
# Stubs for lib2to3.pygram (Python 3.6)
|
||||
|
||||
import os
|
||||
from typing import Any, Union
|
||||
from blib2to3.pgen2.grammar import Grammar
|
||||
|
||||
class Symbols:
|
||||
def __init__(self, grammar: Grammar) -> None: ...
|
||||
|
||||
class python_symbols(Symbols):
|
||||
and_expr: int
|
||||
and_test: int
|
||||
annassign: int
|
||||
arglist: int
|
||||
argument: int
|
||||
arith_expr: int
|
||||
assert_stmt: int
|
||||
async_funcdef: int
|
||||
async_stmt: int
|
||||
atom: int
|
||||
augassign: int
|
||||
break_stmt: int
|
||||
classdef: int
|
||||
comp_for: int
|
||||
comp_if: int
|
||||
comp_iter: int
|
||||
comp_op: int
|
||||
comparison: int
|
||||
compound_stmt: int
|
||||
continue_stmt: int
|
||||
decorated: int
|
||||
decorator: int
|
||||
decorators: int
|
||||
del_stmt: int
|
||||
dictsetmaker: int
|
||||
dotted_as_name: int
|
||||
dotted_as_names: int
|
||||
dotted_name: int
|
||||
encoding_decl: int
|
||||
eval_input: int
|
||||
except_clause: int
|
||||
exec_stmt: int
|
||||
expr: int
|
||||
expr_stmt: int
|
||||
exprlist: int
|
||||
factor: int
|
||||
file_input: int
|
||||
flow_stmt: int
|
||||
for_stmt: int
|
||||
funcdef: int
|
||||
global_stmt: int
|
||||
if_stmt: int
|
||||
import_as_name: int
|
||||
import_as_names: int
|
||||
import_from: int
|
||||
import_name: int
|
||||
import_stmt: int
|
||||
lambdef: int
|
||||
listmaker: int
|
||||
not_test: int
|
||||
old_comp_for: int
|
||||
old_comp_if: int
|
||||
old_comp_iter: int
|
||||
old_lambdef: int
|
||||
old_test: int
|
||||
or_test: int
|
||||
parameters: int
|
||||
pass_stmt: int
|
||||
power: int
|
||||
print_stmt: int
|
||||
raise_stmt: int
|
||||
return_stmt: int
|
||||
shift_expr: int
|
||||
simple_stmt: int
|
||||
single_input: int
|
||||
sliceop: int
|
||||
small_stmt: int
|
||||
star_expr: int
|
||||
stmt: int
|
||||
subscript: int
|
||||
subscriptlist: int
|
||||
suite: int
|
||||
term: int
|
||||
test: int
|
||||
testlist: int
|
||||
testlist1: int
|
||||
testlist_gexp: int
|
||||
testlist_safe: int
|
||||
testlist_star_expr: int
|
||||
tfpdef: int
|
||||
tfplist: int
|
||||
tname: int
|
||||
trailer: int
|
||||
try_stmt: int
|
||||
typedargslist: int
|
||||
varargslist: int
|
||||
vfpdef: int
|
||||
vfplist: int
|
||||
vname: int
|
||||
while_stmt: int
|
||||
with_item: int
|
||||
with_stmt: int
|
||||
with_var: int
|
||||
xor_expr: int
|
||||
yield_arg: int
|
||||
yield_expr: int
|
||||
yield_stmt: int
|
||||
|
||||
class pattern_symbols(Symbols):
|
||||
Alternative: int
|
||||
Alternatives: int
|
||||
Details: int
|
||||
Matcher: int
|
||||
NegatedUnit: int
|
||||
Repeater: int
|
||||
Unit: int
|
||||
|
||||
python_grammar: Grammar
|
||||
python_grammar_no_print_statement: Grammar
|
||||
python_grammar_no_print_statement_no_exec_statement: Grammar
|
||||
python_grammar_no_exec_statement: Grammar
|
||||
pattern_grammar: Grammar
|
||||
|
||||
def initialize(cache_dir: Union[str, os.PathLike, None]) -> None: ...
|
@ -10,48 +10,26 @@
|
||||
There's also a pattern matching implementation here.
|
||||
"""
|
||||
|
||||
# mypy: allow-untyped-defs, allow-incomplete-defs
|
||||
|
||||
from collections.abc import Iterable, Iterator
|
||||
from typing import Any, Optional, TypeVar, Union
|
||||
|
||||
from blib2to3.pgen2.grammar import Grammar
|
||||
|
||||
__author__ = "Guido van Rossum <guido@python.org>"
|
||||
|
||||
import sys
|
||||
from io import StringIO
|
||||
|
||||
HUGE: int = 0x7FFFFFFF # maximum repeat count, default max
|
||||
HUGE = 0x7FFFFFFF # maximum repeat count, default max
|
||||
|
||||
_type_reprs: dict[int, Union[str, int]] = {}
|
||||
|
||||
|
||||
def type_repr(type_num: int) -> Union[str, int]:
|
||||
_type_reprs = {}
|
||||
def type_repr(type_num):
|
||||
global _type_reprs
|
||||
if not _type_reprs:
|
||||
from . import pygram
|
||||
|
||||
if not hasattr(pygram, "python_symbols"):
|
||||
pygram.initialize(cache_dir=None)
|
||||
|
||||
from .pygram import python_symbols
|
||||
# printing tokens is possible but not as useful
|
||||
# from .pgen2 import token // token.__dict__.items():
|
||||
for name in dir(pygram.python_symbols):
|
||||
val = getattr(pygram.python_symbols, name)
|
||||
if type(val) == int:
|
||||
_type_reprs[val] = name
|
||||
for name, val in python_symbols.__dict__.items():
|
||||
if type(val) == int: _type_reprs[val] = name
|
||||
return _type_reprs.setdefault(type_num, type_num)
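For example (hedged; assumes pygram has been initialized, as the lazy import above ensures):

type_repr(python_symbols.funcdef)   # 'funcdef' -- symbol numbers map back to names
type_repr(1)                        # 1 -- token numbers fall through setdefault unchanged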
|
||||
|
||||
class Base(object):
|
||||
|
||||
_P = TypeVar("_P", bound="Base")
|
||||
|
||||
NL = Union["Node", "Leaf"]
|
||||
Context = tuple[str, tuple[int, int]]
|
||||
RawNode = tuple[int, Optional[str], Optional[Context], Optional[list[NL]]]
|
||||
|
||||
|
||||
class Base:
|
||||
"""
|
||||
Abstract base class for Node and Leaf.
|
||||
|
||||
@ -62,18 +40,18 @@ class Base:
|
||||
"""
|
||||
|
||||
# Default values for instance variables
|
||||
type: int # int: token number (< 256) or symbol number (>= 256)
|
||||
parent: Optional["Node"] = None # Parent node pointer, or None
|
||||
children: list[NL] # List of subnodes
|
||||
was_changed: bool = False
|
||||
was_checked: bool = False
|
||||
type = None # int: token number (< 256) or symbol number (>= 256)
|
||||
parent = None # Parent node pointer, or None
|
||||
children = () # Tuple of subnodes
|
||||
was_changed = False
|
||||
was_checked = False
|
||||
|
||||
def __new__(cls, *args, **kwds):
|
||||
"""Constructor that prevents Base from being instantiated."""
|
||||
assert cls is not Base, "Cannot instantiate Base"
|
||||
return object.__new__(cls)
|
||||
|
||||
def __eq__(self, other: Any) -> bool:
|
||||
def __eq__(self, other):
|
||||
"""
|
||||
Compare two nodes for equality.
|
||||
|
||||
@ -83,11 +61,9 @@ def __eq__(self, other: Any) -> bool:
|
||||
return NotImplemented
|
||||
return self._eq(other)
|
||||
|
||||
@property
|
||||
def prefix(self) -> str:
|
||||
raise NotImplementedError
|
||||
__hash__ = None # For Py3 compatibility.
|
||||
|
||||
def _eq(self: _P, other: _P) -> bool:
|
||||
def _eq(self, other):
|
||||
"""
|
||||
Compare two nodes for equality.
|
||||
|
||||
@ -98,10 +74,7 @@ def _eq(self: _P, other: _P) -> bool:
|
||||
"""
|
||||
raise NotImplementedError
|
||||
|
||||
def __deepcopy__(self: _P, memo: Any) -> _P:
|
||||
return self.clone()
|
||||
|
||||
def clone(self: _P) -> _P:
|
||||
def clone(self):
|
||||
"""
|
||||
Return a cloned (deep) copy of self.
|
||||
|
||||
@ -109,7 +82,7 @@ def clone(self: _P) -> _P:
|
||||
"""
|
||||
raise NotImplementedError
|
||||
|
||||
def post_order(self) -> Iterator[NL]:
|
||||
def post_order(self):
|
||||
"""
|
||||
Return a post-order iterator for the tree.
|
||||
|
||||
@ -117,7 +90,7 @@ def post_order(self) -> Iterator[NL]:
|
||||
"""
|
||||
raise NotImplementedError
|
||||
|
||||
def pre_order(self) -> Iterator[NL]:
|
||||
def pre_order(self):
|
||||
"""
|
||||
Return a pre-order iterator for the tree.
|
||||
|
||||
@ -125,7 +98,7 @@ def pre_order(self) -> Iterator[NL]:
|
||||
"""
|
||||
raise NotImplementedError
|
||||
|
||||
def replace(self, new: Union[NL, list[NL]]) -> None:
|
||||
def replace(self, new):
|
||||
"""Replace this node with a new one in the parent."""
|
||||
assert self.parent is not None, str(self)
|
||||
assert new is not None
|
||||
@ -149,23 +122,23 @@ def replace(self, new: Union[NL, list[NL]]) -> None:
|
||||
x.parent = self.parent
|
||||
self.parent = None
|
||||
|
||||
def get_lineno(self) -> Optional[int]:
|
||||
def get_lineno(self):
|
||||
"""Return the line number which generated the invocant node."""
|
||||
node = self
|
||||
while not isinstance(node, Leaf):
|
||||
if not node.children:
|
||||
return None
|
||||
return
|
||||
node = node.children[0]
|
||||
return node.lineno
|
||||
|
||||
def changed(self) -> None:
|
||||
def changed(self):
|
||||
if self.was_changed:
|
||||
return
|
||||
if self.parent:
|
||||
self.parent.changed()
|
||||
self.was_changed = True
|
||||
|
||||
def remove(self) -> Optional[int]:
|
||||
def remove(self):
|
||||
"""
|
||||
Remove the node from the tree. Returns the position of the node in its
|
||||
parent's children before it was removed.
|
||||
@ -178,10 +151,9 @@ def remove(self) -> Optional[int]:
|
||||
self.parent.invalidate_sibling_maps()
|
||||
self.parent = None
|
||||
return i
|
||||
return None
|
||||
|
||||
@property
|
||||
def next_sibling(self) -> Optional[NL]:
|
||||
def next_sibling(self):
|
||||
"""
|
||||
The node immediately following the invocant in their parent's children
|
||||
list. If the invocant does not have a next sibling, it is None
|
||||
@ -191,11 +163,10 @@ def next_sibling(self) -> Optional[NL]:
|
||||
|
||||
if self.parent.next_sibling_map is None:
|
||||
self.parent.update_sibling_maps()
|
||||
assert self.parent.next_sibling_map is not None
|
||||
return self.parent.next_sibling_map[id(self)]
|
||||
|
||||
@property
|
||||
def prev_sibling(self) -> Optional[NL]:
|
||||
def prev_sibling(self):
|
||||
"""
|
||||
The node immediately preceding the invocant in their parent's children
|
||||
list. If the invocant does not have a previous sibling, it is None.
|
||||
@ -205,19 +176,18 @@ def prev_sibling(self) -> Optional[NL]:
|
||||
|
||||
if self.parent.prev_sibling_map is None:
|
||||
self.parent.update_sibling_maps()
|
||||
assert self.parent.prev_sibling_map is not None
|
||||
return self.parent.prev_sibling_map[id(self)]
|
||||
|
||||
def leaves(self) -> Iterator["Leaf"]:
|
||||
def leaves(self):
|
||||
for child in self.children:
|
||||
yield from child.leaves()
|
||||
|
||||
def depth(self) -> int:
|
||||
def depth(self):
|
||||
if self.parent is None:
|
||||
return 0
|
||||
return 1 + self.parent.depth()
|
||||
|
||||
def get_suffix(self) -> str:
|
||||
def get_suffix(self):
|
||||
"""
|
||||
Return the string immediately following the invocant node. This is
|
||||
effectively equivalent to node.next_sibling.prefix
|
||||
@ -225,24 +195,20 @@ def get_suffix(self) -> str:
|
||||
next_sib = self.next_sibling
|
||||
if next_sib is None:
|
||||
return ""
|
||||
prefix = next_sib.prefix
|
||||
return prefix
|
||||
return next_sib.prefix
|
||||
|
||||
if sys.version_info < (3, 0):
|
||||
def __str__(self):
|
||||
return str(self).encode("ascii")
|
||||
|
||||
class Node(Base):
|
||||
|
||||
"""Concrete implementation for interior nodes."""
|
||||
|
||||
fixers_applied: Optional[list[Any]]
|
||||
used_names: Optional[set[str]]
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
type: int,
|
||||
children: list[NL],
|
||||
context: Optional[Any] = None,
|
||||
prefix: Optional[str] = None,
|
||||
fixers_applied: Optional[list[Any]] = None,
|
||||
) -> None:
|
||||
def __init__(self, type, children,
|
||||
context=None,
|
||||
prefix=None,
|
||||
fixers_applied=None):
|
||||
"""
|
||||
Initializer.
|
||||
|
||||
@ -265,12 +231,13 @@ def __init__(
|
||||
else:
|
||||
self.fixers_applied = None
|
||||
|
||||
def __repr__(self) -> str:
|
||||
def __repr__(self):
|
||||
"""Return a canonical string representation."""
|
||||
assert self.type is not None
|
||||
return f"{self.__class__.__name__}({type_repr(self.type)}, {self.children!r})"
|
||||
return "%s(%s, %r)" % (self.__class__.__name__,
|
||||
type_repr(self.type),
|
||||
self.children)
|
||||
|
||||
def __str__(self) -> str:
|
||||
def __unicode__(self):
|
||||
"""
|
||||
Return a pretty string representation.
|
||||
|
||||
@ -278,33 +245,32 @@ def __str__(self) -> str:
|
||||
"""
|
||||
return "".join(map(str, self.children))
|
||||
|
||||
def _eq(self, other: Base) -> bool:
|
||||
if sys.version_info > (3, 0):
|
||||
__str__ = __unicode__
|
||||
|
||||
def _eq(self, other):
|
||||
"""Compare two nodes for equality."""
|
||||
return (self.type, self.children) == (other.type, other.children)
|
||||
|
||||
def clone(self) -> "Node":
|
||||
assert self.type is not None
|
||||
def clone(self):
|
||||
"""Return a cloned (deep) copy of self."""
|
||||
return Node(
|
||||
self.type,
|
||||
[ch.clone() for ch in self.children],
|
||||
fixers_applied=self.fixers_applied,
|
||||
)
|
||||
return Node(self.type, [ch.clone() for ch in self.children],
|
||||
fixers_applied=self.fixers_applied)
|
||||
|
||||
def post_order(self) -> Iterator[NL]:
|
||||
def post_order(self):
|
||||
"""Return a post-order iterator for the tree."""
|
||||
for child in self.children:
|
||||
yield from child.post_order()
|
||||
yield self
|
||||
|
||||
def pre_order(self) -> Iterator[NL]:
|
||||
def pre_order(self):
|
||||
"""Return a pre-order iterator for the tree."""
|
||||
yield self
|
||||
for child in self.children:
|
||||
yield from child.pre_order()
|
||||
|
||||
@property
|
||||
def prefix(self) -> str:
|
||||
def prefix(self):
|
||||
"""
|
||||
The whitespace and comments preceding this node in the input.
|
||||
"""
|
||||
@ -313,11 +279,11 @@ def prefix(self) -> str:
|
||||
return self.children[0].prefix
|
||||
|
||||
@prefix.setter
|
||||
def prefix(self, prefix: str) -> None:
|
||||
def prefix(self, prefix):
|
||||
if self.children:
|
||||
self.children[0].prefix = prefix
|
||||
|
||||
def set_child(self, i: int, child: NL) -> None:
|
||||
def set_child(self, i, child):
|
||||
"""
|
||||
Equivalent to 'node.children[i] = child'. This method also sets the
|
||||
child's parent attribute appropriately.
|
||||
@ -328,7 +294,7 @@ def set_child(self, i: int, child: NL) -> None:
|
||||
self.changed()
|
||||
self.invalidate_sibling_maps()
|
||||
|
||||
def insert_child(self, i: int, child: NL) -> None:
|
||||
def insert_child(self, i, child):
|
||||
"""
|
||||
Equivalent to 'node.children.insert(i, child)'. This method also sets
|
||||
the child's parent attribute appropriately.
|
||||
@ -338,7 +304,7 @@ def insert_child(self, i: int, child: NL) -> None:
|
||||
self.changed()
|
||||
self.invalidate_sibling_maps()
|
||||
|
||||
def append_child(self, child: NL) -> None:
|
||||
def append_child(self, child):
|
||||
"""
|
||||
Equivalent to 'node.children.append(child)'. This method also sets the
|
||||
child's parent attribute appropriately.
|
||||
@ -348,58 +314,39 @@ def append_child(self, child: NL) -> None:
|
||||
self.changed()
|
||||
self.invalidate_sibling_maps()
|
||||
|
||||
def invalidate_sibling_maps(self) -> None:
|
||||
self.prev_sibling_map: Optional[dict[int, Optional[NL]]] = None
|
||||
self.next_sibling_map: Optional[dict[int, Optional[NL]]] = None
|
||||
def invalidate_sibling_maps(self):
|
||||
self.prev_sibling_map = None
|
||||
self.next_sibling_map = None
|
||||
|
||||
def update_sibling_maps(self) -> None:
|
||||
_prev: dict[int, Optional[NL]] = {}
|
||||
_next: dict[int, Optional[NL]] = {}
|
||||
self.prev_sibling_map = _prev
|
||||
self.next_sibling_map = _next
|
||||
previous: Optional[NL] = None
|
||||
def update_sibling_maps(self):
|
||||
self.prev_sibling_map = _prev = {}
|
||||
self.next_sibling_map = _next = {}
|
||||
previous = None
|
||||
for current in self.children:
|
||||
_prev[id(current)] = previous
|
||||
_next[id(previous)] = current
|
||||
previous = current
|
||||
_next[id(current)] = None
|
||||
|
||||
|
||||
class Leaf(Base):
|
||||
|
||||
"""Concrete implementation for leaf nodes."""
|
||||
|
||||
# Default values for instance variables
|
||||
value: str
|
||||
fixers_applied: list[Any]
|
||||
bracket_depth: int
|
||||
# Changed later in brackets.py
|
||||
opening_bracket: Optional["Leaf"] = None
|
||||
used_names: Optional[set[str]]
|
||||
_prefix = "" # Whitespace and comments preceding this token in the input
|
||||
lineno: int = 0 # Line where this token starts in the input
|
||||
column: int = 0 # Column where this token starts in the input
|
||||
# If not None, this Leaf is created by converting a block of fmt off/skip
|
||||
# code, and `fmt_pass_converted_first_leaf` points to the first Leaf in the
|
||||
# converted code.
|
||||
fmt_pass_converted_first_leaf: Optional["Leaf"] = None
|
||||
lineno = 0 # Line where this token starts in the input
|
||||
column = 0 # Column where this token starts in the input
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
type: int,
|
||||
value: str,
|
||||
context: Optional[Context] = None,
|
||||
prefix: Optional[str] = None,
|
||||
fixers_applied: list[Any] = [],
|
||||
opening_bracket: Optional["Leaf"] = None,
|
||||
fmt_pass_converted_first_leaf: Optional["Leaf"] = None,
|
||||
) -> None:
|
||||
def __init__(self, type, value,
|
||||
context=None,
|
||||
prefix=None,
|
||||
fixers_applied=[]):
|
||||
"""
|
||||
Initializer.
|
||||
|
||||
Takes a type constant (a token number < 256), a string value, and an
|
||||
optional context keyword argument.
|
||||
"""
|
||||
|
||||
assert 0 <= type < 256, type
|
||||
if context is not None:
|
||||
self._prefix, (self.lineno, self.column) = context
|
||||
@ -407,68 +354,60 @@ def __init__(
|
||||
self.value = value
|
||||
if prefix is not None:
|
||||
self._prefix = prefix
|
||||
self.fixers_applied: Optional[list[Any]] = fixers_applied[:]
|
||||
self.children = []
|
||||
self.opening_bracket = opening_bracket
|
||||
self.fmt_pass_converted_first_leaf = fmt_pass_converted_first_leaf
|
||||
self.fixers_applied = fixers_applied[:]
|
||||
|
||||
def __repr__(self) -> str:
|
||||
def __repr__(self):
|
||||
"""Return a canonical string representation."""
|
||||
from .pgen2.token import tok_name
|
||||
return "%s(%s, %r)" % (self.__class__.__name__,
|
||||
tok_name.get(self.type, self.type),
|
||||
self.value)
|
||||
|
||||
assert self.type is not None
|
||||
return (
|
||||
f"{self.__class__.__name__}({tok_name.get(self.type, self.type)},"
|
||||
f" {self.value!r})"
|
||||
)
|
||||
|
||||
def __str__(self) -> str:
|
||||
def __unicode__(self):
|
||||
"""
|
||||
Return a pretty string representation.
|
||||
|
||||
This reproduces the input source exactly.
|
||||
"""
|
||||
return self._prefix + str(self.value)
|
||||
return self.prefix + str(self.value)
|
||||
|
||||
def _eq(self, other: "Leaf") -> bool:
|
||||
if sys.version_info > (3, 0):
|
||||
__str__ = __unicode__
|
||||
|
||||
def _eq(self, other):
|
||||
"""Compare two nodes for equality."""
|
||||
return (self.type, self.value) == (other.type, other.value)
|
||||
|
||||
def clone(self) -> "Leaf":
|
||||
assert self.type is not None
|
||||
def clone(self):
|
||||
"""Return a cloned (deep) copy of self."""
|
||||
return Leaf(
|
||||
self.type,
|
||||
self.value,
|
||||
(self.prefix, (self.lineno, self.column)),
|
||||
fixers_applied=self.fixers_applied,
|
||||
)
|
||||
return Leaf(self.type, self.value,
|
||||
(self.prefix, (self.lineno, self.column)),
|
||||
fixers_applied=self.fixers_applied)
|
||||
|
||||
def leaves(self) -> Iterator["Leaf"]:
|
||||
def leaves(self):
|
||||
yield self
|
||||
|
||||
def post_order(self) -> Iterator["Leaf"]:
|
||||
def post_order(self):
|
||||
"""Return a post-order iterator for the tree."""
|
||||
yield self
|
||||
|
||||
def pre_order(self) -> Iterator["Leaf"]:
|
||||
def pre_order(self):
|
||||
"""Return a pre-order iterator for the tree."""
|
||||
yield self
|
||||
|
||||
@property
|
||||
def prefix(self) -> str:
|
||||
def prefix(self):
|
||||
"""
|
||||
The whitespace and comments preceding this token in the input.
|
||||
"""
|
||||
return self._prefix
|
||||
|
||||
@prefix.setter
|
||||
def prefix(self, prefix: str) -> None:
|
||||
def prefix(self, prefix):
|
||||
self.changed()
|
||||
self._prefix = prefix
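As a small usage sketch of the Node and Leaf classes above (token numbers come from the pgen2.token module; 257 is an illustrative symbol number):

from blib2to3.pgen2 import token

leaf = Leaf(token.NAME, "x", prefix=" ")
str(leaf)                  # ' x' -- prefix plus value reproduces the source
node = Node(257, [leaf])   # 257: hypothetical symbol number (must be >= 256)
node.prefix                # ' ' -- delegated to the first child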
|
||||
|
||||
|
||||
def convert(gr: Grammar, raw_node: RawNode) -> NL:
|
||||
def convert(gr, raw_node):
|
||||
"""
|
||||
Convert raw node information to a Node or Leaf instance.
|
||||
|
||||
@ -480,18 +419,15 @@ def convert(gr: Grammar, raw_node: RawNode) -> NL:
|
||||
if children or type in gr.number2symbol:
|
||||
# If there's exactly one child, return that child instead of
|
||||
# creating a new node.
|
||||
assert children is not None
|
||||
if len(children) == 1:
|
||||
return children[0]
|
||||
return Node(type, children, context=context)
|
||||
else:
|
||||
return Leaf(type, value or "", context=context)
|
||||
return Leaf(type, value, context=context)
|
||||
|
||||
|
||||
_Results = dict[str, NL]
|
||||
class BasePattern(object):
|
||||
|
||||
|
||||
class BasePattern:
|
||||
"""
|
||||
A pattern is a tree matching pattern.
|
||||
|
||||
@ -507,27 +443,22 @@ class BasePattern:
|
||||
"""
|
||||
|
||||
# Defaults for instance variables
|
||||
type: Optional[int]
|
||||
type = None # Node type (token if < 256, symbol if >= 256)
|
||||
content: Any = None # Optional content matching pattern
|
||||
name: Optional[str] = None # Optional name used to store match in results dict
|
||||
type = None # Node type (token if < 256, symbol if >= 256)
|
||||
content = None # Optional content matching pattern
|
||||
name = None # Optional name used to store match in results dict
|
||||
|
||||
def __new__(cls, *args, **kwds):
|
||||
"""Constructor that prevents BasePattern from being instantiated."""
|
||||
assert cls is not BasePattern, "Cannot instantiate BasePattern"
|
||||
return object.__new__(cls)
|
||||
|
||||
def __repr__(self) -> str:
|
||||
assert self.type is not None
|
||||
def __repr__(self):
|
||||
args = [type_repr(self.type), self.content, self.name]
|
||||
while args and args[-1] is None:
|
||||
del args[-1]
|
||||
return f"{self.__class__.__name__}({', '.join(map(repr, args))})"
|
||||
return "%s(%s)" % (self.__class__.__name__, ", ".join(map(repr, args)))
|
||||
|
||||
def _submatch(self, node, results=None) -> bool:
|
||||
raise NotImplementedError
|
||||
|
||||
def optimize(self) -> "BasePattern":
|
||||
def optimize(self):
|
||||
"""
|
||||
A subclass can define this as a hook for optimizations.
|
||||
|
||||
@ -535,7 +466,7 @@ def optimize(self) -> "BasePattern":
|
||||
"""
|
||||
return self
|
||||
|
||||
def match(self, node: NL, results: Optional[_Results] = None) -> bool:
|
||||
def match(self, node, results=None):
|
||||
"""
|
||||
Does this pattern exactly match a node?
|
||||
|
||||
@ -549,19 +480,18 @@ def match(self, node: NL, results: Optional[_Results] = None) -> bool:
|
||||
if self.type is not None and node.type != self.type:
|
||||
return False
|
||||
if self.content is not None:
|
||||
r: Optional[_Results] = None
|
||||
r = None
|
||||
if results is not None:
|
||||
r = {}
|
||||
if not self._submatch(node, r):
|
||||
return False
|
||||
if r:
|
||||
assert results is not None
|
||||
results.update(r)
|
||||
if results is not None and self.name:
|
||||
results[self.name] = node
|
||||
return True
|
||||
|
||||
def match_seq(self, nodes: list[NL], results: Optional[_Results] = None) -> bool:
|
||||
def match_seq(self, nodes, results=None):
|
||||
"""
|
||||
Does this pattern exactly match a sequence of nodes?
|
||||
|
||||
@ -571,24 +501,20 @@ def match_seq(self, nodes: list[NL], results: Optional[_Results] = None) -> bool
|
||||
return False
|
||||
return self.match(nodes[0], results)
|
||||
|
||||
def generate_matches(self, nodes: list[NL]) -> Iterator[tuple[int, _Results]]:
|
||||
def generate_matches(self, nodes):
|
||||
"""
|
||||
Generator yielding all matches for this pattern.
|
||||
|
||||
Default implementation for non-wildcard patterns.
|
||||
"""
|
||||
r: _Results = {}
|
||||
r = {}
|
||||
if nodes and self.match(nodes[0], r):
|
||||
yield 1, r
|
||||
|
||||
|
||||
class LeafPattern(BasePattern):
|
||||
def __init__(
|
||||
self,
|
||||
type: Optional[int] = None,
|
||||
content: Optional[str] = None,
|
||||
name: Optional[str] = None,
|
||||
) -> None:
|
||||
|
||||
def __init__(self, type=None, content=None, name=None):
|
||||
"""
|
||||
Initializer. Takes optional type, content, and name.
|
||||
|
||||
@ -608,7 +534,7 @@ def __init__(
|
||||
self.content = content
|
||||
self.name = name
|
||||
|
||||
def match(self, node: NL, results=None) -> bool:
|
||||
def match(self, node, results=None):
|
||||
"""Override match() to insist on a leaf node."""
|
||||
if not isinstance(node, Leaf):
|
||||
return False
|
||||
@ -631,14 +557,10 @@ def _submatch(self, node, results=None):
|
||||
|
||||
|
||||
class NodePattern(BasePattern):
|
||||
wildcards: bool = False
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
type: Optional[int] = None,
|
||||
content: Optional[Iterable[str]] = None,
|
||||
name: Optional[str] = None,
|
||||
) -> None:
|
||||
wildcards = False
|
||||
|
||||
def __init__(self, type=None, content=None, name=None):
|
||||
"""
|
||||
Initializer. Takes optional type, content, and name.
|
||||
|
||||
@ -658,19 +580,16 @@ def __init__(
|
||||
assert type >= 256, type
|
||||
if content is not None:
|
||||
assert not isinstance(content, str), repr(content)
|
||||
newcontent = list(content)
|
||||
for i, item in enumerate(newcontent):
|
||||
content = list(content)
|
||||
for i, item in enumerate(content):
|
||||
assert isinstance(item, BasePattern), (i, item)
|
||||
# I don't even think this code is used anywhere, but it does cause
|
||||
# unreachable errors from mypy. This function's signature does look
|
||||
# odd though *shrug*.
|
||||
if isinstance(item, WildcardPattern): # type: ignore[unreachable]
|
||||
self.wildcards = True # type: ignore[unreachable]
|
||||
if isinstance(item, WildcardPattern):
|
||||
self.wildcards = True
|
||||
self.type = type
|
||||
self.content = newcontent # TODO: this is unbound when content is None
|
||||
self.content = content
|
||||
self.name = name
|
||||
|
||||
def _submatch(self, node, results=None) -> bool:
|
||||
def _submatch(self, node, results=None):
|
||||
"""
|
||||
Match the pattern's content to the node's children.
|
||||
|
||||
@ -699,6 +618,7 @@ def _submatch(self, node, results=None) -> bool:
|
||||
|
||||
|
||||
class WildcardPattern(BasePattern):
|
||||
|
||||
"""
|
||||
A wildcard pattern can match zero or more nodes.
|
||||
|
||||
@ -711,16 +631,7 @@ class WildcardPattern(BasePattern):
|
||||
except it always uses non-greedy matching.
|
||||
"""
|
||||
|
||||
min: int
|
||||
max: int
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
content: Optional[str] = None,
|
||||
min: int = 0,
|
||||
max: int = HUGE,
|
||||
name: Optional[str] = None,
|
||||
) -> None:
|
||||
def __init__(self, content=None, min=0, max=HUGE, name=None):
|
||||
"""
|
||||
Initializer.
|
||||
|
||||
@ -745,52 +656,40 @@ def __init__(
|
||||
"""
|
||||
assert 0 <= min <= max <= HUGE, (min, max)
|
||||
if content is not None:
|
||||
f = lambda s: tuple(s)
|
||||
wrapped_content = tuple(map(f, content)) # Protect against alterations
|
||||
content = tuple(map(tuple, content)) # Protect against alterations
|
||||
# Check sanity of alternatives
|
||||
assert len(wrapped_content), repr(
|
||||
wrapped_content
|
||||
) # Can't have zero alternatives
|
||||
for alt in wrapped_content:
|
||||
assert len(alt), repr(alt) # Can have empty alternatives
|
||||
self.content = wrapped_content
|
||||
assert len(content), repr(content) # Can't have zero alternatives
|
||||
for alt in content:
|
||||
assert len(alt), repr(alt) # Can have empty alternatives
|
||||
self.content = content
|
||||
self.min = min
|
||||
self.max = max
|
||||
self.name = name
|
||||
|
||||
def optimize(self) -> Any:
|
||||
def optimize(self):
|
||||
"""Optimize certain stacked wildcard patterns."""
|
||||
subpattern = None
|
||||
if (
|
||||
self.content is not None
|
||||
and len(self.content) == 1
|
||||
and len(self.content[0]) == 1
|
||||
):
|
||||
if (self.content is not None and
|
||||
len(self.content) == 1 and len(self.content[0]) == 1):
|
||||
subpattern = self.content[0][0]
|
||||
if self.min == 1 and self.max == 1:
|
||||
if self.content is None:
|
||||
return NodePattern(name=self.name)
|
||||
if subpattern is not None and self.name == subpattern.name:
|
||||
if subpattern is not None and self.name == subpattern.name:
|
||||
return subpattern.optimize()
|
||||
if (
|
||||
self.min <= 1
|
||||
and isinstance(subpattern, WildcardPattern)
|
||||
and subpattern.min <= 1
|
||||
and self.name == subpattern.name
|
||||
):
|
||||
return WildcardPattern(
|
||||
subpattern.content,
|
||||
self.min * subpattern.min,
|
||||
self.max * subpattern.max,
|
||||
subpattern.name,
|
||||
)
|
||||
if (self.min <= 1 and isinstance(subpattern, WildcardPattern) and
|
||||
subpattern.min <= 1 and self.name == subpattern.name):
|
||||
return WildcardPattern(subpattern.content,
|
||||
self.min*subpattern.min,
|
||||
self.max*subpattern.max,
|
||||
subpattern.name)
|
||||
return self
|
||||
|
||||
def match(self, node, results=None) -> bool:
|
||||
def match(self, node, results=None):
|
||||
"""Does this pattern exactly match a node?"""
|
||||
return self.match_seq([node], results)
|
||||
|
||||
def match_seq(self, nodes, results=None) -> bool:
|
||||
def match_seq(self, nodes, results=None):
|
||||
"""Does this pattern exactly match a sequence of nodes?"""
|
||||
for c, r in self.generate_matches(nodes):
|
||||
if c == len(nodes):
|
||||
@ -801,7 +700,7 @@ def match_seq(self, nodes, results=None) -> bool:
|
||||
return True
|
||||
return False
|
||||
|
||||
def generate_matches(self, nodes) -> Iterator[tuple[int, _Results]]:
|
||||
def generate_matches(self, nodes):
|
||||
"""
|
||||
Generator yielding matches for a sequence of nodes.
|
||||
|
||||
@ -846,7 +745,7 @@ def generate_matches(self, nodes) -> Iterator[tuple[int, _Results]]:
|
||||
if hasattr(sys, "getrefcount"):
|
||||
sys.stderr = save_stderr
|
||||
|
||||
def _iterative_matches(self, nodes) -> Iterator[tuple[int, _Results]]:
|
||||
def _iterative_matches(self, nodes):
|
||||
"""Helper to iteratively yield the matches."""
|
||||
nodelen = len(nodes)
|
||||
if 0 >= self.min:
|
||||
@ -875,10 +774,10 @@ def _iterative_matches(self, nodes) -> Iterator[tuple[int, _Results]]:
|
||||
new_results.append((c0 + c1, r))
|
||||
results = new_results
|
||||
|
||||
def _bare_name_matches(self, nodes) -> tuple[int, _Results]:
|
||||
def _bare_name_matches(self, nodes):
|
||||
"""Special optimized matcher for bare_name."""
|
||||
count = 0
|
||||
r = {} # type: _Results
|
||||
r = {}
|
||||
done = False
|
||||
max = len(nodes)
|
||||
while not done and count < max:
|
||||
@ -888,11 +787,10 @@ def _bare_name_matches(self, nodes) -> tuple[int, _Results]:
|
||||
count += 1
|
||||
done = False
|
||||
break
|
||||
assert self.name is not None
|
||||
r[self.name] = nodes[:count]
|
||||
return count, r
|
||||
|
||||
def _recursive_matches(self, nodes, count) -> Iterator[tuple[int, _Results]]:
|
||||
def _recursive_matches(self, nodes, count):
|
||||
"""Helper to recursively yield the matches."""
|
||||
assert self.content is not None
|
||||
if count >= self.min:
|
||||
@ -900,7 +798,7 @@ def _recursive_matches(self, nodes, count) -> Iterator[tuple[int, _Results]]:
|
||||
if count < self.max:
|
||||
for alt in self.content:
|
||||
for c0, r0 in generate_matches(alt, nodes):
|
||||
for c1, r1 in self._recursive_matches(nodes[c0:], count + 1):
|
||||
for c1, r1 in self._recursive_matches(nodes[c0:], count+1):
|
||||
r = {}
|
||||
r.update(r0)
|
||||
r.update(r1)
|
||||
@ -908,7 +806,8 @@ def _recursive_matches(self, nodes, count) -> Iterator[tuple[int, _Results]]:
|
||||
|
||||
|
||||
class NegatedPattern(BasePattern):
|
||||
def __init__(self, content: Optional[BasePattern] = None) -> None:
|
||||
|
||||
def __init__(self, content=None):
|
||||
"""
|
||||
Initializer.
|
||||
|
||||
@ -921,15 +820,15 @@ def __init__(self, content: Optional[BasePattern] = None) -> None:
|
||||
assert isinstance(content, BasePattern), repr(content)
|
||||
self.content = content
|
||||
|
||||
def match(self, node, results=None) -> bool:
|
||||
def match(self, node):
|
||||
# We never match a node in its entirety
|
||||
return False
|
||||
|
||||
def match_seq(self, nodes, results=None) -> bool:
|
||||
def match_seq(self, nodes):
|
||||
# We only match an empty sequence of nodes in its entirety
|
||||
return len(nodes) == 0
|
||||
|
||||
def generate_matches(self, nodes: list[NL]) -> Iterator[tuple[int, _Results]]:
|
||||
def generate_matches(self, nodes):
|
||||
if self.content is None:
|
||||
# Return a match if there is an empty sequence
|
||||
if len(nodes) == 0:
|
||||
@ -941,9 +840,7 @@ def generate_matches(self, nodes: list[NL]) -> Iterator[tuple[int, _Results]]:
|
||||
yield 0, {}
|
||||
|
||||
|
||||
def generate_matches(
|
||||
patterns: list[BasePattern], nodes: list[NL]
|
||||
) -> Iterator[tuple[int, _Results]]:
|
||||
def generate_matches(patterns, nodes):
|
||||
"""
|
||||
Generator yielding matches for a sequence of patterns and nodes.
|
||||
|
||||
@ -955,7 +852,7 @@ def generate_matches(
|
||||
(count, results) tuples where:
|
||||
count: the entire sequence of patterns matches nodes[:count];
|
||||
results: dict containing named submatches.
|
||||
"""
|
||||
"""
|
||||
if not patterns:
|
||||
yield 0, {}
|
||||
else:
|
89
blib2to3/pytree.pyi
Normal file
@ -0,0 +1,89 @@
# Stubs for lib2to3.pytree (Python 3.6)

import sys
from typing import Any, Callable, Dict, Iterator, List, Optional, Text, Tuple, TypeVar, Union

from blib2to3.pgen2.grammar import Grammar

_P = TypeVar('_P')
_NL = Union[Node, Leaf]
_Context = Tuple[Text, int, int]
_Results = Dict[Text, _NL]
_RawNode = Tuple[int, Text, _Context, Optional[List[_NL]]]
_Convert = Callable[[Grammar, _RawNode], Any]

HUGE: int

def type_repr(type_num: int) -> Text: ...

class Base:
    type: int
    parent: Optional[Node]
    prefix: Text
    children: List[_NL]
    was_changed: bool
    was_checked: bool
    def __eq__(self, other: Any) -> bool: ...
    def _eq(self: _P, other: _P) -> bool: ...
    def clone(self: _P) -> _P: ...
    def post_order(self) -> Iterator[_NL]: ...
    def pre_order(self) -> Iterator[_NL]: ...
    def replace(self, new: Union[_NL, List[_NL]]) -> None: ...
    def get_lineno(self) -> int: ...
    def changed(self) -> None: ...
    def remove(self) -> Optional[int]: ...
    @property
    def next_sibling(self) -> Optional[_NL]: ...
    @property
    def prev_sibling(self) -> Optional[_NL]: ...
    def leaves(self) -> Iterator[Leaf]: ...
    def depth(self) -> int: ...
    def get_suffix(self) -> Text: ...
    if sys.version_info < (3,):
        def get_prefix(self) -> Text: ...
        def set_prefix(self, prefix: Text) -> None: ...

class Node(Base):
    fixers_applied: List[Any]
    def __init__(self, type: int, children: List[_NL], context: Optional[Any] = ..., prefix: Optional[Text] = ..., fixers_applied: Optional[List[Any]] = ...) -> None: ...
    def set_child(self, i: int, child: _NL) -> None: ...
    def insert_child(self, i: int, child: _NL) -> None: ...
    def append_child(self, child: _NL) -> None: ...

class Leaf(Base):
    lineno: int
    column: int
    value: Text
    fixers_applied: List[Any]
    def __init__(self, type: int, value: Text, context: Optional[_Context] = ..., prefix: Optional[Text] = ..., fixers_applied: List[Any] = ...) -> None: ...
    # bolted on attributes by Black
    bracket_depth: int
    opening_bracket: Leaf

def convert(gr: Grammar, raw_node: _RawNode) -> _NL: ...

class BasePattern:
    type: int
    content: Optional[Text]
    name: Optional[Text]
    def optimize(self) -> BasePattern: ...  # sic, subclasses are free to optimize themselves into different patterns
    def match(self, node: _NL, results: Optional[_Results] = ...) -> bool: ...
    def match_seq(self, nodes: List[_NL], results: Optional[_Results] = ...) -> bool: ...
    def generate_matches(self, nodes: List[_NL]) -> Iterator[Tuple[int, _Results]]: ...

class LeafPattern(BasePattern):
    def __init__(self, type: Optional[int] = ..., content: Optional[Text] = ..., name: Optional[Text] = ...) -> None: ...

class NodePattern(BasePattern):
    wildcards: bool
    def __init__(self, type: Optional[int] = ..., content: Optional[Text] = ..., name: Optional[Text] = ...) -> None: ...

class WildcardPattern(BasePattern):
    min: int
    max: int
    def __init__(self, content: Optional[Text] = ..., min: int = ..., max: int = ..., name: Optional[Text] = ...) -> None: ...

class NegatedPattern(BasePattern):
    def __init__(self, content: Optional[Text] = ...) -> None: ...

def generate_matches(patterns: List[BasePattern], nodes: List[_NL]) -> Iterator[Tuple[int, _Results]]: ...
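As a quick orientation to the stubbed interface above, a small hedged sketch; the type numbers are invented:

```python
# Sketch only: exercising the Node/Leaf interface declared in the stub.
from blib2to3.pytree import Leaf, Node

NAME, ATOM = 1, 257  # invented type numbers for illustration
leaf = Leaf(NAME, "x", prefix=" ")
node = Node(ATOM, [leaf])

assert leaf.parent is node           # a Node adopts its children
assert list(node.leaves()) == [leaf]
assert str(node) == " x"             # the prefix (whitespace) is preserved
```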
@ -17,4 +17,4 @@ help:
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
2
docs/_static/license.svg
vendored
@ -1 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="78" height="20"><linearGradient id="b" x2="0" y2="100%"><stop offset="0" stop-color="#bbb" stop-opacity=".1"/><stop offset="1" stop-opacity=".1"/></linearGradient><clipPath id="a"><rect width="78" height="20" rx="3" fill="#fff"/></clipPath><g clip-path="url(#a)"><path fill="#555" d="M0 0h47v20H0z"/><path fill="#7900CA" d="M47 0h31v20H47z"/><path fill="url(#b)" d="M0 0h78v20H0z"/></g><g fill="#fff" text-anchor="middle" font-family="DejaVu Sans,Verdana,Geneva,sans-serif" font-size="110"><text x="245" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="370">license</text><text x="245" y="140" transform="scale(.1)" textLength="370">license</text><text x="615" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="210">MIT</text><text x="615" y="140" transform="scale(.1)" textLength="210">MIT</text></g> </svg>
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" width="78" height="20"><linearGradient id="b" x2="0" y2="100%"><stop offset="0" stop-color="#bbb" stop-opacity=".1"/><stop offset="1" stop-opacity=".1"/></linearGradient><clipPath id="a"><rect width="78" height="20" rx="3" fill="#fff"/></clipPath><g clip-path="url(#a)"><path fill="#555" d="M0 0h47v20H0z"/><path fill="#7900CA" d="M47 0h31v20H47z"/><path fill="url(#b)" d="M0 0h78v20H0z"/></g><g fill="#fff" text-anchor="middle" font-family="DejaVu Sans,Verdana,Geneva,sans-serif" font-size="110"><text x="245" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="370">license</text><text x="245" y="140" transform="scale(.1)" textLength="370">license</text><text x="615" y="150" fill="#010101" fill-opacity=".3" transform="scale(.1)" textLength="210">MIT</text><text x="615" y="140" transform="scale(.1)" textLength="210">MIT</text></g> </svg>
Before Width: | Height: | Size: 950 B  After Width: | Height: | Size: 949 B
BIN
docs/_static/logo2-readme.png
vendored
Binary file not shown.
Before Width: | Height: | Size: 97 KiB  After Width: | Height: | Size: 79 KiB
@ -1,3 +0,0 @@
```{include} ../AUTHORS.md

```
1
docs/authors.md
Symbolic link
@ -0,0 +1 @@
_build/generated/authors.md
@ -1,3 +0,0 @@
```{include} ../CHANGES.md

```
1
docs/change_log.md
Symbolic link
@ -0,0 +1 @@
_build/generated/change_log.md
@ -1,3 +0,0 @@
[flake8]
max-line-length = 88
extend-ignore = E203,E701
@ -1,3 +0,0 @@
[flake8]
max-line-length = 88
extend-ignore = E203,E701
@ -1,3 +0,0 @@
[flake8]
max-line-length = 88
extend-ignore = E203,E701
@ -1,2 +0,0 @@
[*.py]
profile = black
@ -1,2 +0,0 @@
[settings]
profile = black
@ -1,2 +0,0 @@
[tool.isort]
profile = 'black'
@ -1,2 +0,0 @@
[isort]
profile = black
@ -1,3 +0,0 @@
[pycodestyle]
max-line-length = 88
ignore = E203,E701
@ -1,3 +0,0 @@
[pycodestyle]
max-line-length = 88
ignore = E203,E701
@ -1,3 +0,0 @@
[pycodestyle]
max-line-length = 88
ignore = E203,E701
@ -1,2 +0,0 @@
[format]
max-line-length = 88
@ -1,2 +0,0 @@
[tool.pylint.format]
max-line-length = "88"
@ -1,2 +0,0 @@
[pylint]
max-line-length = 88
215
docs/conf.py
@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
#
# Configuration file for the Sphinx documentation builder.
#
@ -11,92 +12,111 @@
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#

import os
import re
import string
from importlib.metadata import version
import ast
from pathlib import Path
import re
import shutil
import string

from recommonmark.parser import CommonMarkParser

from sphinx.application import Sphinx

CURRENT_DIR = Path(__file__).parent


def make_pypi_svg(version: str) -> None:
    template: Path = CURRENT_DIR / "_static" / "pypi_template.svg"
    target: Path = CURRENT_DIR / "_static" / "pypi.svg"
    with open(str(template), encoding="utf8") as f:
        svg: str = string.Template(f.read()).substitute(version=version)
def get_version():
    black_py = CURRENT_DIR / ".." / "black.py"
    _version_re = re.compile(r"__version__\s+=\s+(?P<version>.*)")
    with open(str(black_py), "r", encoding="utf8") as f:
        version = _version_re.search(f.read()).group("version")
    return str(ast.literal_eval(version))
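As an aside, a hedged illustration (not part of conf.py) of what `get_version`'s regex plus `ast.literal_eval` extracts:

```python
# Illustration only: how get_version pulls __version__ out of black.py.
import ast
import re

src = '__version__ = "18.6b3"\n'
match = re.search(r"__version__\s+=\s+(?P<version>.*)", src)
assert match is not None
assert str(ast.literal_eval(match.group("version"))) == "18.6b3"
```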


def make_pypi_svg(version):
    template = CURRENT_DIR / "_static" / "pypi_template.svg"
    target = CURRENT_DIR / "_static" / "pypi.svg"
    with open(str(template), "r", encoding="utf8") as f:
        svg = string.Template(f.read()).substitute(version=version)
    with open(str(target), "w", encoding="utf8") as f:
        f.write(svg)
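Both versions of `make_pypi_svg` lean on `string.Template`; a minimal illustration of that substitution, with an invented SVG snippet:

```python
# Illustration only: the $version placeholder substitution used above.
import string

template = string.Template('<text x="615" y="140">$version</text>')
assert template.substitute(version="18.6b3") == '<text x="615" y="140">18.6b3</text>'
```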


def replace_pr_numbers_with_links(content: str) -> str:
    """Replaces all PR numbers with the corresponding GitHub link."""
    return re.sub(r"#(\d+)", r"[#\1](https://github.com/psf/black/pull/\1)", content)
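A hedged illustration (not part of conf.py) of the PR-number rewriting that `replace_pr_numbers_with_links` performs:

```python
# Illustration only: the same regex substitution, applied by hand.
import re

content = "Fix docs build (#3139)"
linked = re.sub(r"#(\d+)", r"[#\1](https://github.com/psf/black/pull/\1)", content)
assert linked == "Fix docs build ([#3139](https://github.com/psf/black/pull/3139))"
```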
def make_filename(line):
    non_letters = re.compile(r"[^a-z]+")
    filename = line[3:].rstrip().lower()
    filename = non_letters.sub("_", filename)
    if filename.startswith("_"):
        filename = filename[1:]
    if filename.endswith("_"):
        filename = filename[:-1]
    return filename + ".md"
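And a worked example of `make_filename`, assuming the function above is in scope; the heading is invented:

```python
# Illustration only: a "## ..." README heading becomes a section filename.
assert make_filename("## Installation and usage\n") == "installation_and_usage.md"
```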


def handle_include_read(
    app: Sphinx,
    relative_path: Path,
    parent_docname: str,
    content: list[str],
) -> None:
    """Handler for the include-read sphinx event."""
    if parent_docname == "change_log":
        content[0] = replace_pr_numbers_with_links(content[0])
def generate_sections_from_readme():
    target_dir = CURRENT_DIR / "_build" / "generated"
    readme = CURRENT_DIR / ".." / "README.md"
    shutil.rmtree(str(target_dir), ignore_errors=True)
    target_dir.mkdir(parents=True)

    output = None
    target_dir = target_dir.relative_to(CURRENT_DIR)
    with open(str(readme), "r", encoding="utf8") as f:
        for line in f:
            if line.startswith("## "):
                if output is not None:
                    output.close()
                filename = make_filename(line)
                output_path = CURRENT_DIR / filename
                if output_path.is_symlink() or output_path.is_file():
                    output_path.unlink()
                output_path.symlink_to(target_dir / filename)
                output = open(str(output_path), "w", encoding="utf8")
                output.write(
                    "[//]: # (NOTE: THIS FILE IS AUTOGENERATED FROM README.md)\n\n"
                )

def setup(app: Sphinx) -> None:
    """Sets up a minimal sphinx extension."""
    app.connect("include-read", handle_include_read)
            if output is None:
                continue

            if line.startswith("##"):
                line = line[1:]

            output.write(line)

# Necessary so Click doesn't hit an encode error when called by
# sphinxcontrib-programoutput on Windows.
os.putenv("pythonioencoding", "utf-8")

# -- Project information -----------------------------------------------------

project = "Black"
copyright = "2018-Present, Łukasz Langa and contributors to Black"
copyright = "2018, Łukasz Langa and contributors to Black"
author = "Łukasz Langa and contributors to Black"

# Autopopulate version
# The version, including alpha/beta/rc tags, but not commit hash and datestamps
release = version("black").split("+")[0]
# The full version, including alpha/beta/rc tags.
release = get_version()
# The short X.Y version.
version = release
for sp in "abcfr":
    version = version.split(sp)[0]

make_pypi_svg(release)
generate_sections_from_readme()


# -- General configuration ---------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
needs_sphinx = "4.4"
#
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    "sphinx.ext.autodoc",
    "sphinx.ext.intersphinx",
    "sphinx.ext.napoleon",
    "myst_parser",
    "sphinxcontrib.programoutput",
    "sphinx_copybutton",
]

# If you need extensions of a certain version or higher, list them here.
needs_extensions = {"myst_parser": "0.13.7"}
extensions = ["sphinx.ext.autodoc", "sphinx.ext.intersphinx", "sphinx.ext.napoleon"]

# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]

source_parsers = {".md": CommonMarkParser}

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
source_suffix = [".rst", ".md"]
@ -109,40 +129,47 @@ def setup(app: Sphinx) -> None:
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = "en"
language = None

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path .

exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"

# We need headers to be linkable to so ask MyST-Parser to autogenerate anchor IDs for
# headers up to and including level 3.
myst_heading_anchors = 3

# Prettier support formatting some MyST syntax but not all, so let's disable the
# unsupported yet still enabled by default ones.
myst_disable_syntax = [
    "colon_fence",
    "myst_block_break",
    "myst_line_comment",
    "math_block",
]

# Optional MyST Syntaxes
myst_enable_extensions = []

# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "furo"
html_logo = "_static/logo2-readme.png"
html_theme = "alabaster"

html_sidebars = {
    "**": [
        "about.html",
        "navigation.html",
        "relations.html",
        "sourcelink.html",
        "searchbox.html",
    ]
}

html_theme_options = {
    "show_related": False,
    "description": "“Any color you like.”",
    "github_button": True,
    "github_user": "ambv",
    "github_repo": "black",
    "github_type": "star",
    "show_powered_by": True,
    "fixed_sidebar": True,
    "logo": "logo2.png",
    "travis_button": True,
}


# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
@ -168,16 +195,33 @@ def setup(app: Sphinx) -> None:

# -- Options for LaTeX output ------------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',
    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',
    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',
    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [(
    master_doc,
    "black.tex",
    "Documentation for Black",
    "Łukasz Langa and contributors to Black",
    "manual",
)]
latex_documents = [
    (
        master_doc,
        "black.tex",
        "Documentation for Black",
        "Łukasz Langa and contributors to Black",
        "manual",
    )
]


# -- Options for manual page output ------------------------------------------
@ -192,15 +236,17 @@ def setup(app: Sphinx) -> None:
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [(
    master_doc,
    "Black",
    "Documentation for Black",
    author,
    "Black",
    "The uncompromising Python code formatter",
    "Miscellaneous",
)]
texinfo_documents = [
    (
        master_doc,
        "Black",
        "Documentation for Black",
        author,
        "Black",
        "The uncompromising Python code formatter",
        "Miscellaneous",
    )
]


# -- Options for Epub output -------------------------------------------------
@ -228,14 +274,7 @@ def setup(app: Sphinx) -> None:

autodoc_member_order = "bysource"

# -- sphinx-copybutton configuration ----------------------------------------
copybutton_prompt_text = (
    r">>> |\.\.\. |> |\$ |\# | In \[\d*\]: | {2,5}\.\.\.: | {5,8}: "
)
copybutton_prompt_is_regexp = True
copybutton_remove_prompts = True

# -- Options for intersphinx extension ---------------------------------------

# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {"<name>": ("https://docs.python.org/3/", None)}
intersphinx_mapping = {"https://docs.python.org/3/": None}
1
docs/contributing.md
Symbolic link
@ -0,0 +1 @@
../CONTRIBUTING.md
@ -1,58 +0,0 @@
# Gauging changes

A lot of the time, your change will affect formatting and/or performance. Quantifying
these changes is hard, so we have tooling to help make it easier.

It's recommended you evaluate the quantifiable changes your _Black_ formatting
modification causes before submitting a PR. Think about whether the change seems
disruptive enough to cause frustration to projects that are already "black formatted".

## diff-shades

diff-shades is a tool that runs _Black_ across a list of open-source projects, recording
the results. The main highlight feature of diff-shades is being able to compare two
revisions of _Black_. This is incredibly useful as it allows us to see what exact
changes will occur, say, after merging a certain PR.

For more information, please see the [diff-shades documentation][diff-shades].

### CI integration

diff-shades is also the tool behind the "diff-shades results comparing ..." /
"diff-shades reports zero changes ..." comments on PRs. The project has a GitHub Actions
workflow that analyzes and compares two revisions of _Black_ according to these rules:

|                       | Baseline revision       | Target revision              |
| --------------------- | ----------------------- | ---------------------------- |
| On PRs                | latest commit on `main` | PR commit with `main` merged |
| On pushes (main only) | latest PyPI version     | the pushed commit            |

For pushes to main, there's only one analysis job named `preview-changes` where the
preview style is used for all projects.

PRs get one more analysis job: `assert-no-changes`. It's similar to
`preview-changes` but runs with the stable code style. It will fail if changes were
made. This makes sure code won't be reformatted again and again within the same year, in
accordance with Black's stability policy.

Additionally for PRs, a PR comment will be posted embedding a summary of the preview
changes and links to further information. If there's a pre-existing diff-shades comment,
it'll be updated instead the next time the workflow is triggered on the same PR.

```{note}
The `preview-changes` job will only fail intentionally if a file failed to format while
being analyzed. Otherwise, a failure indicates a bug in the workflow.
```

The workflow uploads several artifacts upon completion:

- The raw analyses (.json)
- HTML diffs (.html)
- `.pr-comment.json` (if triggered by a PR)

The last one is downloaded by the `diff-shades-comment` workflow and shouldn't be
downloaded locally. The HTML diffs come in handy for push-based runs where there's no PR
to post a comment on. And the analyses exist just in case you want to do further
analysis using the collected data locally.

[diff-shades]: https://github.com/ichard26/diff-shades#readme
@ -1,45 +0,0 @@
# Contributing

```{toctree}
---
hidden:
---

the_basics
gauging_changes
issue_triage
release_process
```

Welcome! Happy to see you willing to make the project better. Have you read the entire
[user documentation](https://black.readthedocs.io/en/latest/) yet?

```{rubric} Bird's eye view

```

In terms of inspiration, _Black_ is about as configurable as _gofmt_ (which is to say,
not very). This is deliberate. _Black_ aims to provide a consistent style and take away
opportunities for arguing about style.

Bug reports and fixes are always welcome! Please follow the
[issue templates on GitHub](https://github.com/psf/black/issues/new/choose) for best
results.

Before you suggest a new feature or configuration knob, ask yourself why you want it. If
it enables better integration with some workflow, fixes an inconsistency, speeds things
up, and so on - go for it! On the other hand, if your answer is "because I don't like a
particular formatting" then you're not ready to embrace _Black_ yet. Such changes are
unlikely to get accepted. You can still try but prepare to be disappointed.

```{rubric} Contents

```

This section covers the following topics:

- {doc}`the_basics`
- {doc}`gauging_changes`
- {doc}`issue_triage`
- {doc}`release_process`

For an overview on contributing to _Black_, please check out {doc}`the_basics`.
@ -1,169 +0,0 @@
# Issue triage

Currently, _Black_ uses the issue tracker for bugs, feature requests, proposed style
modifications, and general user support. Each of these issues has to be triaged so it
can eventually be resolved somehow. This document outlines the triaging process and
also the current guidelines and recommendations.

```{tip}
If you're looking for a way to contribute without submitting patches, this might be
the area for you. Since _Black_ is a popular project, its issue tracker is quite busy
and always needs more attention than is available. While triage isn't the most
glamorous or technically challenging form of contribution, it's still important.
For example, we would love to know whether that old bug report is still reproducible!

You can easily get started by reading over this document and then responding to issues.

If you contribute enough and have stayed for a long enough time, you may even be
given Triage permissions!
```

## The basics

_Black_ gets a whole bunch of different issues; they range from bug reports to user
support issues. To triage is to identify, organize, and kickstart the issue's journey
through its lifecycle to resolution.

More specifically, to triage an issue means to:

- identify what type and categories the issue falls under
- confirm bugs
- ask questions / request further information if necessary
- link related issues
- provide the first initial feedback / support

Note that triage is typically the first response to an issue, so don't fret if the issue
doesn't make much progress after initial triage. The main goal of triaging is to prepare
the issue for future more specific development or discussion, so _eventually_ it will be
resolved.

The lifecycle of a bug report or user support issue typically goes something like this:

1. _the issue is waiting for triage_
2. **identified** - has been marked with a type label and other relevant labels; more
   details or a functional reproduction may still be needed (and therefore should be
   marked with `S: needs repro` or `S: awaiting response`)
3. **confirmed** - the issue can be reproduced and necessary details have been provided
4. **discussion** - initial triage has been done and now the general details on how the
   issue should be best resolved are being hashed out
5. **awaiting fix** - no further discussion on the issue is necessary and a resolving PR
   is the next step
6. **closed** - the issue has been resolved, reasons include:
   - the issue couldn't be reproduced
   - the issue has been fixed
   - duplicate of another pre-existing issue or is invalid

For enhancement, documentation, and style issues, the lifecycle looks very similar but
the details are different:

1. _the issue is waiting for triage_
2. **identified** - has been marked with a type label and other relevant labels
3. **discussion** - the merits of the suggested changes are currently being discussed; a
   PR would be acceptable but would be at significant risk of being rejected
4. **accepted & awaiting PR** - it's been determined the suggested changes are OK and a
   PR would be welcomed (`S: accepted`)
5. **closed** - the issue has been resolved, reasons include:
   - the suggested changes were implemented
   - it was rejected (due to technical concerns, ethos conflicts, etc.)
   - duplicate of a pre-existing issue or is invalid

**Note**: documentation issues don't use the `S: accepted` label currently since they're
less likely to be rejected.

## Labelling

We use labels to organize, track progress, and help effectively divvy up work.

Our labels are divided up into several groups identified by their prefix:

- **T - Type**: the general flavor of issue / PR
- **C - Category**: areas of concern, ranging from bug types to project maintenance
- **F - Formatting Area**: like C but for formatting specifically
- **S - Status**: what stage of resolution is this issue currently in?
- **R - Resolution**: how / why was the issue / PR resolved?

We also have a few standalone labels:

- **`good first issue`**: issues that are beginner-friendly (and will show up in GitHub
  banners for first-time visitors to the repository)
- **`help wanted`**: complex issues that need a fair bit of work to progress (these will
  also show up in various GitHub pages)
- **`skip news`**: for PRs that are trivial and don't need a CHANGELOG entry (and skips
  the CHANGELOG entry check)

```{note}
We do use labels for PRs, in particular the `skip news` label, but we aren't that
rigorous about it. Just follow your judgement on what labels make sense for the
specific PR (if any even make sense).
```

## Projects

For more general and broad goals we use projects to track work. Some may be long-term
projects with no true end (e.g. the "Amazing documentation" project) while others may be
more focused and have a definite end (like the "Getting to beta" project).

```{note}
To modify GitHub Projects you need the [Write repository permission level or higher](https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-permission-levels-for-an-organization#repository-access-for-each-permission-level).
```

## Closing issues

Closing an issue signifies the issue has reached the end of its life, so closing issues
should be done with care. The following are the general recommendations for each type of
issue. Note that these are only guidelines and if your judgement says something else
it's totally cool to go with it instead.

For most issues, closing the issue manually or automatically after a resolving PR is
ideal. For bug reports specifically, if the bug has already been fixed, try to check in
with the issue opener that their specific case has been resolved before closing. Note
that we close issues as soon as they're fixed in the `main` branch. This doesn't
necessarily mean they've been released yet.

Design and enhancement issues should also be closed when it's clear the proposed change
won't be implemented, whether that has been determined after a lot of discussion or it
simply goes against _Black_'s ethos. If such an issue turns heated, closing and locking
is acceptable if it's severe enough (although checking in with the core team is probably
a good idea).

User support issues are best closed by the author or when it's clear the issue has been
resolved in some sort of manner.

Duplicates and invalid issues should always be closed since they serve no purpose and
add noise to an already busy issue tracker, although be careful to make sure it's truly
a duplicate and not just very similar before labelling and closing an issue as a
duplicate.

## Common reports

Some issues are frequently opened, like issues about _Black_ formatted code causing E203
messages. Even though these issues are probably heavily duplicated, they still require
triage, sucking up valuable time from other things (although they usually skip most of
their lifecycle since they're closed on triage).

Here are some of the most common issues, along with pre-made responses you can use:

### "The trailing comma isn't being removed by Black!"

```text
Black used to remove the trailing comma if the expression fits in a single line, but this was changed by #826 and #1288. Now a trailing comma tells Black to always explode the expression. This change was made mostly for the cases where you _know_ a collection or whatever will grow in the future. Having it always exploded as one element per line reduces diff noise when adding elements. Before the "magic trailing comma" feature, you couldn't anticipate a collection's growth reliably since collections that fitted in one line were ruthlessly collapsed regardless of your intentions. One of Black's goals is reducing diff noise, so this was a good pragmatic change.

So no, this is not a bug, but an intended feature. Anyway, [here's the documentation](https://github.com/psf/black/blob/master/docs/the_black_code_style.md#the-magic-trailing-comma) on the "magic trailing comma", including the ability to skip this functionality with the `--skip-magic-trailing-comma` option. Hopefully that helps solve the possible confusion.
```

### "Black formatted code is violating Flake8's E203!"

```text
Hi,

This is expected behaviour, please see the documentation regarding this case (emphasis
mine):

> PEP 8 recommends to treat : in slices as a binary operator with the lowest priority, and to leave an equal amount of space on either side, **except if a parameter is omitted (e.g. ham[1 + 1 :])**. It recommends no spaces around : operators for “simple expressions” (ham[lower:upper]), and **extra space for “complex expressions” (ham[lower : upper + offset])**. **Black treats anything more than variable names as “complex” (ham[lower : upper + 1]).** It also states that for extended slices, both : operators have to have the same amount of spacing, except if a parameter is omitted (ham[1 + 1 ::]). Black enforces these rules consistently.

> This behaviour may raise E203 whitespace before ':' warnings in style guide enforcement tools like Flake8. **Since E203 is not PEP 8 compliant, you should tell Flake8 to ignore these warnings**.

https://black.readthedocs.io/en/stable/the_black_code_style/current_style.html#slices

Have a good day!
```
@ -1,174 +0,0 @@
# Release process

_Black_ has had a lot of work put into standardizing and automating its release
process. This document sets out to explain how everything works and how to release
_Black_ using said automation.

## Release cadence

**We aim to release whatever is on `main` every 1-2 months.** This ensures merged
improvements and bugfixes are shipped to users reasonably quickly, while not massively
fracturing the user-base with too many versions. This also keeps the workload on
maintainers consistent and predictable.

If there's not much new on `main` to justify a release, it's acceptable to skip a
month's release. Ideally January releases should not be skipped because, as per our
[stability policy](labels/stability-policy), the first release in a new calendar year
may make changes to the _stable_ style. While the policy applies to the first release
(instead of only January releases), confining changes to the stable style to January
will keep things predictable (and nicer) for users.

Unless there is a serious regression or bug that requires immediate patching, **there
should not be more than one release per month**. While version numbers are cheap,
releases require a maintainer both to commit to doing the actual cutting of a release
and to be able to deal with the potential fallout post-release. Releasing more
frequently than monthly nets rapidly diminishing returns.

## Cutting a release

**You must have `write` permissions for the _Black_ repository to cut a release.**

The 10,000 foot view of the release process is that you prepare a release PR and then
publish a [GitHub Release]. This triggers [release automation](#release-workflows) that
builds all release artifacts and publishes them to the various platforms we publish to.

We now have a `scripts/release.py` script to help with cutting the release PRs.

- `python3 scripts/release.py --help` is your friend.
- `release.py` has only been tested on Python 3.12 (so get with the times :D)

To cut a release:

1. Determine the release's version number
   - **_Black_ follows the [CalVer] versioning standard using the `YY.M.N` format**
     - So unless there already has been a release during this month, `N` should be `0`
   - Example: the first release in January, 2022 → `22.1.0` (see the sketch after this
     list)
   - `release.py` will calculate this and log it to stderr for your copy-paste pleasure
1. File a PR editing `CHANGES.md` and the docs to version the latest changes
   - Run `python3 scripts/release.py [--debug]` to generate most changes
   - Subheadings in the template that have no bullet points need manual removal
     (_PR welcome to improve :D_)
1. If `release.py` fails, edit manually; otherwise, yay, skip this step!
   1. Replace the `## Unreleased` header with the version number
   1. Remove any empty sections for the current release
   1. (_optional_) Read through and copy-edit the changelog (e.g. by moving entries,
      fixing typos, or rephrasing entries)
   1. Double-check that no changelog entries since the last release were put in the
      wrong section (e.g., run `git diff <last release> CHANGES.md`)
1. Update references to the latest version in
   {doc}`/integrations/source_version_control` and
   {doc}`/usage_and_configuration/the_basics`
   - Example PR: [GH-3139]
1. Once the release PR is merged, wait until all CI passes
   - If CI does not pass, **stop** and investigate the failure(s) as generally we'd want
     to fix failing CI before cutting a release
1. [Draft a new GitHub Release][new-release]
   1. Click `Choose a tag` and type in the version number, then select the
      `Create new tag: YY.M.N on publish` option that appears
   1. Verify that the new tag targets the `main` branch
   1. You can leave the release title blank; GitHub will default to the tag name
   1. Copy and paste the _raw changelog Markdown_ for the current release into the
      description box
1. Publish the GitHub Release, triggering [release automation](#release-workflows) that
   will handle the rest
1. Once CI is done, add and commit (git push, no review) a new empty template for the
   next release to CHANGES.md _(the template can be copy-pasted from release.py should
   it fail)_
   1. `python3 scripts/release.py --add-changes-template|-a [--debug]`
   1. Should that fail, please return to copy + paste
1. At this point, you're basically done. It's good practice to go and [watch and verify
   that all the release workflows pass][black-actions], although you will receive a
   GitHub notification should something fail.
   - If something fails, don't panic. Please go read the respective workflow's logs and
     configuration file to reverse-engineer your way to a fix/solution.
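For concreteness, a hedged sketch of the CalVer rule from step 1; this is an illustration, not `release.py` itself:

```python
# Sketch only: the YY.M.N CalVer rule described in step 1 above.
from datetime import date

def next_version(today: date, releases_so_far_this_month: int) -> str:
    # The first release in a month gets N == 0; a later hotfix increments N.
    return f"{today.year % 100}.{today.month}.{releases_so_far_this_month}"

assert next_version(date(2022, 1, 15), 0) == "22.1.0"
```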

Congratulations! You've successfully cut a new release of _Black_. Go stand up and
take a break; you deserve it.

```{important}
Once the release artifacts reach PyPI, you may see new issues being filed indicating
regressions. While regressions are not great, they don't automatically mean a hotfix
release is warranted. Unless the regressions are serious and impact many users, a hotfix
release is probably unnecessary.

In the end, use your best judgement and ask other maintainers for their thoughts.
```

## Release workflows

All of _Black_'s release automation uses [GitHub Actions]. All workflows are therefore
configured using YAML files in the `.github/workflows` directory of the _Black_
repository.

They are triggered by the publication of a [GitHub Release].

Below are descriptions of our release workflows.

### Publish to PyPI

This is our main workflow. It builds an [sdist] and [wheels] to upload to PyPI, where
the vast majority of users will download Black from. It's divided into three job groups:

#### sdist + pure wheel

This single job builds the sdist and pure Python wheel (i.e., a wheel that only contains
Python code) using [build] and then uploads them to PyPI using [twine]. These artifacts
are general-purpose and can be used on basically any platform supported by Python.

#### mypyc wheels (…)

We use [mypyc] to compile _Black_ into a CPython C extension for significantly improved
performance. Wheels built with mypyc are platform and Python version specific.
[Supported platforms are documented in the FAQ](labels/mypyc-support).

These matrix jobs use [cibuildwheel] which handles the complicated task of building C
extensions for many environments for us. Since building these wheels is slow, there are
multiple mypyc wheels jobs (hence the term "matrix") that each build for a specific
platform (as noted in the job name in parentheses).

Like the previous job group, the built wheels are uploaded to PyPI using [twine].

#### Update stable branch

So this job doesn't _really_ belong here, but updating the `stable` branch after the
other PyPI jobs pass (they must pass for this job to start) makes the most sense. This
saves us from remembering to update the branch sometime after cutting the release.

- _Currently this workflow uses an API token associated with @ambv's PyPI account_

### Publish executables

This workflow builds native executables for multiple platforms using [PyInstaller]. This
allows people to download the executable for their platform and run _Black_ without a
[Python runtime](https://wiki.python.org/moin/PythonImplementations) installed.

The created binaries are stored on the associated GitHub Release for download over _IPv4
only_ (GitHub still does not have IPv6 access 😢).

### docker

This workflow uses the QEMU powered `buildx` feature of Docker to upload an `arm64` and
`amd64`/`x86_64` build of the official _Black_ Docker image™.

- _Currently this workflow uses an API Token associated with @cooperlees account_

```{note}
This also runs on each push to `main`.
```

[black-actions]: https://github.com/psf/black/actions
[build]: https://pypa-build.readthedocs.io/
[calver]: https://calver.org
[cibuildwheel]: https://cibuildwheel.readthedocs.io/
[gh-3139]: https://github.com/psf/black/pull/3139
[github actions]: https://github.com/features/actions
[github release]: https://github.com/psf/black/releases
[new-release]: https://github.com/psf/black/releases/new
[mypyc]: https://mypyc.readthedocs.io/
[mypyc-platform-support]: /faq.html#what-is-compiled-yes-no-all-about-in-the-version-output
[pyinstaller]: https://www.pyinstaller.org/
[sdist]: https://packaging.python.org/en/latest/glossary/#term-Source-Distribution-or-sdist
[twine]: https://twine.readthedocs.io/
[wheels]: https://packaging.python.org/en/latest/glossary/#term-Wheel
@ -1,158 +0,0 @@
# The basics

An overview on contributing to the _Black_ project.

## Technicalities

Development on the latest version of Python is preferred. You can use any operating
system.

First clone the _Black_ repository:

```console
$ git clone https://github.com/psf/black.git
$ cd black
```

Then install development dependencies inside a virtual environment of your choice, for
example:

```console
$ python3 -m venv .venv
$ source .venv/bin/activate # activation for linux and mac
$ .venv\Scripts\activate # activation for windows

(.venv)$ pip install -r test_requirements.txt
(.venv)$ pip install -e ".[d]"
(.venv)$ pre-commit install
```

Before submitting pull requests, run lints and tests with the following commands from
the root of the black repo:

```console
# Linting
(.venv)$ pre-commit run -a

# Unit tests
(.venv)$ tox -e py

# Optional Fuzz testing
(.venv)$ tox -e fuzz

# Format Black itself
(.venv)$ tox -e run_self
```

### Development

Further examples of invoking the tests:

```console
# Run all of the above mentioned, in parallel
(.venv)$ tox --parallel=auto

# Run tests on a specific python version
(.venv)$ tox -e py39

# Run an individual test
(.venv)$ pytest -k <test name>

# Pass arguments to pytest
(.venv)$ tox -e py -- --no-cov

# Print full tree diff, see documentation below
(.venv)$ tox -e py -- --print-full-tree

# Disable diff printing, see documentation below
(.venv)$ tox -e py -- --print-tree-diff=False
```

### Testing

All aspects of the _Black_ style should be tested. Normally, tests should be created as
files in the `tests/data/cases` directory. These files consist of up to three parts:

- A line that starts with `# flags: ` followed by a set of command-line options. For
  example, if the line is `# flags: --preview --skip-magic-trailing-comma`, the test
  case will be run with preview mode on and the magic trailing comma off. The options
  accepted are mostly a subset of those of _Black_ itself, except for the
  `--minimum-version=` flag, which should be used when testing a grammar feature that
  works only in newer versions of Python. This flag ensures that we don't try to
  validate the AST on older versions and tests that we autodetect the Python version
  correctly when the feature is used. For the exact flags accepted, see the function
  `get_flags_parser` in `tests/util.py`. If this line is omitted, the default options
  are used.
- A block of Python code used as input for the formatter.
- The line `# output`, followed by the output of _Black_ when run on the previous block.
  If this is omitted, the test asserts that _Black_ will leave the input code unchanged.
  (A complete example case file follows this list.)
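Putting those parts together, a hypothetical case file might look like this; the flag and code are purely illustrative:

```python
# flags: --skip-magic-trailing-comma
x = [1, 2, 3,]

# output

x = [1, 2, 3]
```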

_Black_ has two pytest command-line options affecting test files in `tests/data/` that
are split into an input part and an output part, separated by a line with `# output`.
These can be passed to `pytest` through `tox`, or directly into pytest if not using
`tox`.

#### `--print-full-tree`

Upon a failing test, print the full concrete syntax tree (CST) as it is after processing
the input ("actual"), and the tree that's yielded after parsing the output ("expected").
Note that a test can fail with different output even when the CST is the same. This used
to be the default, but now defaults to `False`.

#### `--print-tree-diff`

Upon a failing test, print the diff of the trees as described above. This is the
default. To turn it off, pass `--print-tree-diff=False`.

### News / Changelog Requirement

`Black` has CI that will check for an entry corresponding to your PR in `CHANGES.md`. If
you feel this PR does not require a changelog entry, please state that in a comment and
a maintainer can add a `skip news` label to make the CI pass. Otherwise, please ensure
you have a line in the following format added below the appropriate header:

```md
- `Black` is now more awesome (#X)
```

<!---
The Next PR Number link uses HTML because of a bug in MyST-Parser that double-escapes the ampersand, causing the query parameters to not be processed.
MyST-Parser issue: https://github.com/executablebooks/MyST-Parser/issues/760
MyST-Parser stalled fix PR: https://github.com/executablebooks/MyST-Parser/pull/929
-->

Note that X should be your PR number, not issue number! To work out X, please use
<a href="https://ichard26.github.io/next-pr-number/?owner=psf&name=black">Next PR
Number</a>. This is not perfect but saves a lot of release overhead as now the releaser
does not need to go back and work out what to add to the `CHANGES.md` for each release.

### Style Changes

If a change would affect the advertised code style, please modify the documentation (The
_Black_ code style) to reflect that change. Patches that fix unintended bugs in
formatting don't need to be mentioned separately though. If the change is implemented
with the `--preview` flag, please include the change in the future style document
instead and write the changelog entry under the dedicated "Preview style" heading.

### Docs Testing

If you make changes to docs, you can test that they still build locally, too.

```console
(.venv)$ pip install -r docs/requirements.txt
(.venv)$ pip install -e ".[d]"
(.venv)$ sphinx-build -a -b html -W docs/ docs/_build/
```

## Hygiene

If you're fixing a bug, add a test. Run it first to confirm it fails, then fix the bug,
and run the test again to confirm it's really fixed.

If adding a new feature, add a test. In fact, always add a test. If adding a large
feature, please first open an issue to discuss it beforehand.

## Finally

Thanks again for your interest in improving the project! You're taking action when most
people decide to sit and watch.
1
docs/contributing_to_black.md
Symbolic link
@ -0,0 +1 @@
_build/generated/contributing_to_black.md
Some files were not shown because too many files have changed in this diff