"dependency-groups" is the mechanism for storing package requirements in `pyproject.toml`, recommended for formatting tools (see https://packaging.python.org/en/latest/specifications/dependency-groups/ )
This change allows the black action to also look in those locations when determining the version of black to install.
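A rough sketch of the kind of lookup this enables (not the action's actual code; `find_black_requirement` and the scanning order are assumptions for illustration):

```python
import tomllib  # Python 3.11+; tomli provides the same API on older versions


def find_black_requirement(pyproject_path: str) -> str | None:
    """Return the first requirement string that pins black, if any."""
    with open(pyproject_path, "rb") as f:
        config = tomllib.load(f)

    candidates: list[str] = []
    project = config.get("project", {})
    candidates += project.get("dependencies", [])
    for extra_deps in project.get("optional-dependencies", {}).values():
        candidates += extra_deps
    # New location covered by this change: PEP 735 [dependency-groups].
    for group in config.get("dependency-groups", {}).values():
        # Entries may also be {"include-group": ...} tables; only plain strings are requirements.
        candidates += [dep for dep in group if isinstance(dep, str)]

    for dep in candidates:
        if dep.strip().lower().startswith("black"):
            return dep
    return None
```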
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/pre-commit/mirrors-mypy: v1.13.0 → v1.14.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.13.0...v1.14.1)
* Fix wrapper's return types to be String or Text IO
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Cooper Ry Lees <me@cooperlees.com>
Fixes #4446
See https://github.com/python/cpython/issues/123821
It's possible this is too strict? We could instead do this anytime the
AST safety check fails, but it feels weird to have that happen
non-deterministically
Extend the docstring of black's `find_project_root` to mention that it ignores
`pyproject.toml` files without a `[tool.black]` section.
This is relevant because that function is also used by other Python packages
that use black. I found that e.g. datamodel-code-generator [1] uses that
function and that there the ignoring of the pyproject.toml files led to
a degradation [2]. I think in that case it would be better to not use black's function
for finding the pyproject.toml, but in any case this behavior should be documented.
1: https://github.com/koxudaxi/datamodel-code-generator
2: https://github.com/koxudaxi/datamodel-code-generator/issues/2052
Co-authored-by: Michael Eliachevitch <Michael.Eliachevitch@blueyonder.com>
- Let's install black, then ask to install black with extras
- pip sees black is installed and just installs extra dependencies
Test:
- Build local container
- `docker build -t black_local .`
- Run blackd in container
- `docker run -p 45484:45484 --rm black_local blackd --bind-host 0.0.0.0`
```
cooper@home1:~/repos/black$ docker run -p 45484:45484 --rm black_local blackd --bind-host 0.0.0.0
blackd version 24.4.3.dev11+gad60e62 listening on 0.0.0.0 port 45484
INFO:aiohttp.access:10.255.255.1 [10/May/2024:14:40:36 +0000] "GET / HTTP/1.1" 405 204 "-" "curl/8.5.0"
cooper@home1:~/repos/black$ curl http://10.6.9.2:45484
405: Method Not Allowed
```
- Test version is compiled
```
cooper@home1:~/repos/black$ docker run --rm black_local black --version
black, 24.4.3.dev11+gad60e62 (compiled: yes)
Python (CPython) 3.12.3
```
Fixes #4163
* Prepare release 24.4.2
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Crashes are usually documented in the stable style portion of the
changelog. This patch doesn't affect the parser (e.g. blib2to3).
Noticed this the second after I merged :-)
* Add support for styling function definitions containing newlines before function stubs
* Relocated the implementation for removing newlines before function stubs and added tests for comments
---------
Co-authored-by: Shantanu <12621235+hauntsaninja@users.noreply.github.com>
On Windows the path `FoObAR` is the same as `foobar`, so `black` on a
Windows machine could output the path to `.gitignore` with an upper- or
lower-case drive letter.
Fixes #4268
Previously we would allow whitespace changes in all strings, now
only in docstrings.
Co-authored-by: Shantanu <12621235+hauntsaninja@users.noreply.github.com>
This relates to #4015, #4161 and the behaviour of os.getcwd()
Black is a big user of pathlib and as such loves doing `.resolve()`,
since for a long time it was the only good way of getting an absolute
path in pathlib. However, this has two problems:
The first minor problem is performance, e.g. in #3751 I (safely) got rid
of a bunch of `.resolve()` calls, which made Black 40% faster on cached runs.
The second more important problem is that always resolving symlinks
results in unintuitive exclusion behaviour. For instance, a gitignored
symlink should never alter formatting of your actual code. This kind of
thing was reported by users a few times.
In #3846, I improved the exclusion rule logic for symlinks in
`gen_python_files` and everything was good.
But `gen_python_files` isn't enough, there's also `get_sources`, which
handles user specified paths directly (instead of files Black
discovers). So in #4015, I made a very similar change to #3846 for
`get_sources`, and this is where some problems began.
The core issue was the line:
```
root_relative_path = path.absolute().relative_to(root).as_posix()
```
The first issue is that despite root being computed from user inputs, we
call `.resolve()` while computing it (likely unnecessarily), which means
that `path` may not actually be relative to `root`. So I started off
this PR trying to fix that, when I ran into the second issue, which is
that `os.getcwd()` (as called by `os.path.abspath` or `Path.absolute` or
`Path.cwd`) also often resolves symlinks!
```
>>> import os
>>> os.environ.get("PWD")
'/Users/shantanu/dev/black/symlink/bug'
>>> os.getcwd()
'/Users/shantanu/dev/black/actual/bug'
```
This also meant that the breakage often would not show up when given
relative paths as input.
This doesn't affect `gen_python_files` / #3846 because things are always
absolute and known to be relative to `root`.
Anyway, it looks like #4161 fixed the crash by just swallowing the error
and ignoring the file. Instead, we should just try to compute the actual
relative path. I think this PR should be quite safe, but we could also
consider reverting some of the previous changes; the associated issues
weren't too popular.
At the same time, I think there's still behaviour that can be improved
and I kind of want to make larger changes, but maybe I'll save that for
if we do something like #3952
Hopefully fixes #4205, fixes #4209; actual fix for #4077
This PR does not change any behaviour.
There have been 1-2 issues about symlinks recently. Both over- and under-resolving
can cause problems. This makes a case where we resolve more
explicit and prevents a resolved path from leaking out via the return value.
A follow up to #4024 but for `if` guards in `case` statements. I noticed this
when #4024 was made stable, and noticed I had some code that had extra parens
around the `if` guard.
Fixes #2863
This is pretty desirable in a monorepo situation where you have
configuration in the root since it will mean you don't have to
reconfigure every project.
The good news for backward compatibility is that `find_project_root`
continues to stop at any git or hg root, so in all cases where the repo root
coincides with a pyproject.toml missing tool.black, we'll continue to
have the same project root as before and end up using the default config
(i.e. we're unlikely to randomly start using the user config).
The other thing we need to be a little careful about is that changing
find_project_root logic affects what `exclude` is relative to. Since we
only change in cases where there is no config, this only applies where
users were using `exclude` via command line arg (and had pyproject.toml
missing tool.black in a dir that was not repo root).
Finally, for the few who could be affected, the fix is to put an empty
`[tool.black]` in pyproject.toml
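A rough sketch of the discovery rule described above (not Black's exact implementation; `guess_project_root` is a hypothetical name, and the real code parses the TOML rather than substring-matching):

```python
from pathlib import Path


def guess_project_root(path: Path) -> Path:
    for directory in (path, *path.parents):
        if (directory / ".git").exists() or (directory / ".hg").is_dir():
            return directory  # a VCS root always stops the search
        pyproject = directory / "pyproject.toml"
        # Only a pyproject.toml that actually configures Black counts now.
        if pyproject.is_file() and "[tool.black]" in pyproject.read_text(errors="ignore"):
            return directory
    return Path(path.anchor or "/")
```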
In #4096 I added a list of current preview/unstable features to the docs. I think
this is important for publicizing what's in our preview style. This PR adds an
automated test to ensure the list stays up to date in the future.
- Ensure the total cache file name length stays under 96
- Hash the path only if it's too long
- Proceed normally (with a warning) if the cache can't be read
Fixes #4172
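An illustrative sketch of the hashing rule in the bullets above (the 96 limit comes from the bullets; the name layout and helper are assumptions):

```python
import hashlib


def cache_file_name(mode_key: str, max_len: int = 96) -> str:
    name = f"cache.{mode_key}.pickle"
    if len(name) <= max_len:
        return name  # short enough: keep the readable name
    # Hash the key only when the plain name would be too long.
    digest = hashlib.sha256(mode_key.encode("utf-8")).hexdigest()[:32]
    return f"cache.{digest}.pickle"
```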
This has been getting a little messy. These changes neaten things up: we
don't have to keep guarding against `self.previous_line is not None`, we
make it clearer what logic has side effects, we reduce the amount of
code that tricky `before` could touch, etc
This is a no-op change.
That function was not a good way to tell if something is a function or a
class, since it basically only worked for async functions by accident
(the parent of a suite / simple_stmt of async function body is a
funcdef).
Fixes #4043, fixes #619
These include nested functions and methods.
I think the nested function case quite clearly improves readability. I
think the method case improves consistency, adherence to PEP 8 and
resolves a point of contention.
* Add new flag for tests, --no-preview-line-length-1, to be used for test cases known to not work in preview mode with line-length=1. Also split out the problematic cases into three separate files. Removed the now-redundant file which explicitly tested preview annotations with line-length=1
* mode.preview -> preview_mode, mark pep_572_remove_parens as failing with ll1
- Broke tagging images together
- Saved only a few mins
- x86_64 build is fast, time is all spent on cross compile of arm64
- Also remove evil copy pasta ... which is nice
Was worth an attempt.
* [docker ci] Split up amd64 (x86_64) and arm64 builds
- Let's run them separately to cut down total time
- Will also more clearly show if either arch has specific problems
- Kept amd64 (x86_64) using qemu actions so if GitHub ever offers arm64 boxes it could stay working too
Fixes #3971
* Add CHANGES entry
---------
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
- Also run on macOS + Windows
- Do not install [d] dependencies as blackd is not actually run / checked
- Move to default GitHub action version - which is 3.12 today
* Make black[d] install + test run with 3.12
- With aiohttp >= 3.9.0 we can now install all dependencies with 3.12
- Add actions to run 3.12
- Lint still needs to be 3.11
Test:
- `python3.12 -m venv /tmp/tb --upgrade-deps`
- `/tmp/tb/bin/pip install tox`
- `/tmp/tb/bin/pip install .[d]`
- `/tmp/tb/bin/tox -e py312`
```
py312: OK (37.61=setup[3.98]+cmd[3.83,0.36,19.54,6.46,3.00,0.44] seconds)
congratulations :) (37.63 seconds)
```
* Move to pypy-3.9
---------
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
The second `if` cannot be true at its execution point, because it is
already covered by the first `if`. The condition
`comma.parent.type == syms.subscriptlist` always holds if
`closing.parent.type == syms.trailer` holds, because `subscriptlist`
only appears inside `trailer` in the grammar:
```
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: (subscript|star_expr) (',' (subscript|star_expr))* [',']
```
Bracket depth is not an accurate indicator of standalone comment position inside more complex blocks because bracket depth can be virtual (in loops' and lambdas' parameter blocks) or come from optional parens. Here we try to stop accumulating lines upon standalone comments in complex blocks, and try to make standalone comment processing simpler. The fundamental idea is that if we have a standalone comment, it needs to go on its own line, so we always have to split.
This is not perfect, but at least a first step.
Python does not consider f-strings to be docstrings, so we probably
shouldn't be formatting them as such
Fixes #4018
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
* Add release tool
- Add tool for release managers to use to generate commits
- I'm trying to only use stdlib so we have no dependencies ...
- Default is to change date strings in hard coded documentation files + CHANGES.md
- I write directly to files cause we have SCM to fix any screw ups ...
- We hackily convert calver to ints to sort (all for better ideas here)
- If we hit a ValueError we just set to 0 for sorting - these are alpha + beta releases we can safely ignore these days
- Add new CI to only run release unittests in 3.12 only on all platforms
- Update release docs
- Checked with `mypy --strict` + ensure we are `black --preview` formatted :D
Tests:
- Run it to generate template PR
- `python3.12 release.py --debug --add-changes-template`
- Run it to clean up CHANGES.md + change version in specified doc files
```
crl-m1:black cooper$ python3.12 release.py -d
[2023-10-23 23:39:38,414] INFO: Current version detected to be 23.10.1 (release.py:221)
[2023-10-23 23:39:38,414] INFO: Next version will be 23.10.2 (release.py:222)
[2023-10-23 23:39:38,414] INFO: Cleaning up /Users/cooper/repos/black/CHANGES.md (release.py:127)
[2023-10-23 23:39:38,416] DEBUG: Finished Cleaning up /Users/cooper/repos/black/CHANGES.md (release.py:147)
[2023-10-23 23:39:38,416] INFO: Updating black version to 23.10.2 in /Users/cooper/repos/black/docs/integrations/source_version_control.md (release.py:173)
[2023-10-23 23:39:38,416] DEBUG: Finished updating black version to 23.10.2 in /Users/cooper/repos/black/docs/integrations/source_version_control.md (release.py:185)
[2023-10-23 23:39:38,416] INFO: Updating black version to 23.10.2 in /Users/cooper/repos/black/docs/usage_and_configuration/the_basics.md (release.py:173)
[2023-10-23 23:39:38,417] DEBUG: Finished updating black version to 23.10.2 in /Users/cooper/repos/black/docs/usage_and_configuration/the_basics.md (release.py:185)
```
- Add tests around some key logic
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix lints + add git to release CI
- Remove black + mypy as linting already runs it ...
- Ignore delete param to TemporaryDirectory as we can't set mypy to 3.12 :D
* Only run CI on linux/ubuntu for now
* Add lots of debug printing + directly run unittests (not via coverage)
* Overloading __str__ is bad on a TestCase
* Add more logging around git tag
* Print where git is in a step
* Rollback creating a fake black repo as we were not using it - I did plan to but I can't get it working on GitHub actions
* Do a deep checkout
* Add noqa for E701,E761 ... maybe we need this in our flake8 config now?
* Fix action to have correct workflow yaml to action on
- Also add fix to not double run when we push directly to psf/black
* Apply all of Jelle's suggestions
- Fix bug where lines ending with --> in CHANGES.md were missed for deletion ...
- Update ci to run out of scripts dir too
- Update test_tuple_calver
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
See https://pre-commit.com/#confining-hooks-to-run-at-certain-stages
> If you are authoring a tool, it is usually a good idea to provide an appropriate `stages` property. For example a reasonable setting for a linter or code formatter would be `stages: [pre-commit, pre-merge-commit, pre-push, manual]`.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
* Prepare release 23.10.1
* Update docs/usage_and_configuration/the_basics.md
Add missed version string
We need to automate or remove this from docs ... It's painful.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
---------
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
* Fix CI failing
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* docs: update CHANGES.md
* docs: fix changelog location to unreleased
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Until we get new aiohttp wheels we need to build with 3.11.
You can see an example of a failure here:
Workaround for #3919 - will leave it open until we can move to 3.12
* fix indentation of line breaks in long type hints by adding parentheses, and remove unnecessary parentheses
* add entry in CHANGES.md, make the style change only in preview mode
Remove mentions of runtime support of Python 3.7
Runtime support of Python 3.7 was removed in
b4dca26c7d but a few mentions of it being
supported have remained until now.
Build in separate jobs. This makes it clearer if e.g. a single Python
version is failing. It also potentially gets you more parallelism.
Build everything on push to master.
Only build Linux 3.8 and 3.11 wheels on PRs.
* Fix broken url in editors.md
Update a link pointing to the Arch Linux repos.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
The idea behind this change is that we stop looking into the previous body to determine if there should be a blank line before a function or class definition.
Input:
```python
import sys
if sys.version_info > (3, 7):
    class Nested1:
        assignment = 1
        def function_definition(self): ...
    def f1(self) -> str: ...
    class Nested2:
        def function_definition(self): ...
        assignment = 1
    def f2(self) -> str: ...
if sys.version_info > (3, 7):
    def nested1():
        assignment = 1
        def function_definition(self): ...
    def f1(self) -> str: ...
    def nested2():
        def function_definition(self): ...
        assignment = 1
    def f2(self) -> str: ...
```
Stable style
```python
import sys
if sys.version_info > (3, 7):
    class Nested1:
        assignment = 1
        def function_definition(self): ...
    def f1(self) -> str: ...
    class Nested2:
        def function_definition(self): ...
        assignment = 1
    def f2(self) -> str: ...
if sys.version_info > (3, 7):
    def nested1():
        assignment = 1
        def function_definition(self): ...
    def f1(self) -> str: ...
    def nested2():
        def function_definition(self): ...
        assignment = 1
    def f2(self) -> str: ...
```
In the stable formatting, we have a blank line sometimes, not depending on the previous statement on the same level, but on the last (potentially nested) statement in the previous body.
#2783/#3564 fixes this for classes in preview style:
```python
import sys
if sys.version_info > (3, 7):
    class Nested1:
        assignment = 1
        def function_definition(self): ...
    def f1(self) -> str: ...
    class Nested2:
        def function_definition(self): ...
        assignment = 1
    def f2(self) -> str: ...
if sys.version_info > (3, 7):
    def nested1():
        assignment = 1
        def function_definition(self): ...
    def f1(self) -> str: ...
    def nested2():
        def function_definition(self): ...
        assignment = 1
    def f2(self) -> str: ...
```
This PR additionally fixes this for function definitions:
```python
if sys.version_info > (3, 7):
    if sys.platform == "win32":
        assignment = 1
        def function_definition(self): ...
    def f1(self) -> str: ...
    if sys.platform != "win32":
        def function_definition(self): ...
        assignment = 1
    def f2(self) -> str: ...
if sys.version_info > (3, 8):
    if sys.platform == "win32":
        assignment = 1
        def function_definition(self): ...
    class F1: ...
    if sys.platform != "win32":
        def function_definition(self): ...
        assignment = 1
    class F2: ...
```
You can see the effect of this change on typeshed in https://github.com/konstin/typeshed/pull/1/files. As baseline, the preview mode changes without this PR are at https://github.com/konstin/typeshed/pull/2.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
This PR updates an assert statement that checks the bounds of a
string-slicing operation. The updated assertion provides more accurate
and informative error handling by specifically checking the relative
values of the indices and the string length.
The original assertion was essentially checking if Python's string
slicing was behaving as expected. However, it wasn't providing any
guarantees or useful information about the bounds i and j themselves.
The updated assertion checks that the indices used for slicing are
within the bounds of the string. It will throw an AssertionError if the
indices are out of bounds or if i > j, providing a more specific and
informative error.
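In other words, the check moves from trusting the slice to validating the bounds themselves; a minimal illustration of that shape (not the exact assertion from the patch):

```python
def checked_slice(s: str, i: int, j: int) -> str:
    # Validate the bounds directly instead of the sliced result.
    assert 0 <= i <= j <= len(s), f"invalid bounds: i={i}, j={j}, len={len(s)}"
    return s[i:j]
```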
Fix unintentionally swapped words in index.md
I think the intent was to say "large changes in formatting", because it doesn't make sense to say "large formatting in changes".
`is_type_comment` now specifically deals with general type comments for a leaf.
`is_type_ignore_comment` now handles type comments containing an ignore annotation for a leaf.
`is_type_ignore_comment_string` is used to determine if a string is a type ignore comment.
IPython is a very expensive import, like, at least 300ms. I'd also
venture that it's much more common than tokenize-rt, which is like 30ms.
I work in a repo where I use black, have IPython installed and there
happen to be a couple notebooks (that we don't want formatted). I know I
can force exclude ipynb, but this change doesn't really have a cost.
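The shape of the fix is presumably just moving the IPython import inside the code path that needs it; a sketch (the helper name is made up, and IPython's `TransformerManager` is used here only as an example of the kind of import being deferred):

```python
def strip_cell_magics(cell_source: str) -> str:
    # Pay the ~300ms IPython import cost only when a notebook cell is actually
    # being processed, not at `import black` time.
    from IPython.core.inputtransformer2 import TransformerManager

    return TransformerManager().transform_cell(cell_source)
```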
Currently the verbose logging for "Sources to be formatted" is a little
suspect in that it is a completely different code path from
`get_sources`.
This can result in bugs like https://github.com/psf/black/pull/3216#issuecomment-1213557359
and generally limits the value of these logs.
This does change the "when" of this log, but the colours help separate
it from the even more verbose logs.
Several new versions of mypyc have been released since the last upgrade, and they include some performance improvements which could make the compiled version of Black run faster.
https://mypy-lang.org/news.html
The latest version of hatch-mypyc allows it to be installed next to the 1.x series of mypy.
* Make phrasing for flake8 users more concise
max-line-length should be 80 with flake8-bugbear
Fixes #3716
* Re-add rationale and an explanation for
disabling E203
* Run pre-commit
Avoids a Python 3.12 deprecation warning.
Subtle difference: previously, timestamps in diff filenames had the
`+0000` separated from the timestamp by space. With this, the space is
there no more, and there is a colon, as in `+00:00`.
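Presumably the change swaps the deprecated naive `utcnow()` for a timezone-aware `now(timezone.utc)`, which is also where the formatting difference comes from:

```python
from datetime import datetime, timezone

# Old style (deprecated in Python 3.12); the offset was appended by hand:
#   f"{path}\t{datetime.utcnow()} +0000"
# New style: the aware datetime renders its own offset with a colon.
stamp = datetime.now(timezone.utc)
print(stamp)  # e.g. 2024-05-10 14:40:36.123456+00:00
```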
This patch changes the preview style so that string splitters respect
Unicode East Asian Width[^1] property. If you are not familiar to CJK
languages it is not clear immediately. Let me elaborate with some
examples.
Traditionally, East Asian characters (including punctuation) have
taken up twice the space of European letters and stops when they are
rendered in a monospace typeset. Compare the following characters:
```
abcdefg.
글、字。
```
The characters on the first line are half-width, and those on the second line
are full-width. (Also note that the last character with a small
circle, the East Asian period, is also full-width.) Therefore, if we
want to prevent those full-width characters from exceeding the maximum
columns per line, we need to count their *width* rather than the number
of characters. Again, take the following characters:
```
글、字。
```
These are just 4 characters, but their total width is 8.
Suppose we want to maintain up to 4 columns per line with the following
text:
```
abcdefg.
글、字。
```
How should it be then? We want it to look like:
```
abcd
efg.
글、
字。
```
However, Black currently turns it into this:
```
abcd
efg.
글、字。
```
It's because Black currently counts the number of characters in the line
instead of measuring their width. So, how could we measure the width?
How can we tell if a character is full- or half-width? What if half-width
characters and full-width ones are mixed in a line? That's why Unicode
defined an attribute named `East_Asian_Width`. Unicode grouped every
single character according to its width in fixed-width typesetting.
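For illustration, here is how that property can be used to measure display width (Black itself uses a generated table, as described further below, but the idea is the same):

```python
import unicodedata


def display_width(text: str) -> int:
    # "W" (Wide) and "F" (Fullwidth) East Asian characters take two columns.
    return sum(2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1 for ch in text)


assert display_width("abcdefg.") == 8
assert display_width("글、字。") == 8  # 4 characters, 8 columns
```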
This partially addresses #1197, but only for string splitters. The other
parts need to be fixed as well in future patches.
This was implemented by copying rich's own approach to handling wide
characters: generate a table using wcwidth, check it into source
control, and use it to drive helper functions in Black's logic. This
gets us the best of both worlds: accuracy and performance (and lets us
update as per our stability policy too!).
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Richard Si <sichard26@gmail.com>
Fixes #3506
We can't simply escape the quotes in a naked f-string when merging string groups, because backslashes are invalid in f-string expressions.
The quotes in f-string expressions should be toggled (this is safe since quotes can't be reused).
This fix also means implicitly concatenated f-strings with different quotes can now be merged or quote-normalized by changing the quotes used in expressions. e.g.:
```diff
raise sa_exc.UnboundExecutionError(
"Could not locate a bind configured on "
- f'{", ".join(context)} or this Session.'
+ f"{', '.join(context)} or this Session."
)
```
The option is `max-line-length` with dashes, not underscores. The config option name is given in the output of `pycodestyle -h`, which can also be checked on https://pep8.readthedocs.io/en/stable/intro.html#example-usage-and-output:
```
Configuration:
The project options are read from the [pycodestyle] section of the
tox.ini file or the setup.cfg file located in any parent folder of the
path(s) being processed. Allowed options are: exclude, filename,
select, ignore, max-line-length, max-doc-length, hang-closing, count,
format, quiet, show-pep8, show-source, statistics, verbose
```
When trying to format a project from the outside, the verbose output
says that there are symbolic links that point outside of the
project, but it displays the wrong project path, meaning that these
messages are false positives.
This bug is triggered when the command is executed from outside a
project, targeting a folder inside it, causing an inconsistency between the
path to the detected project root and the relative path to the target
contents.
The fix is to normalize the target path using the project root before
processing the sources, which removes the presence of the incorrect
messages.
---
The test attempts to emulate the behavior of the CLI as closely as
possible by patching some `pathlib.Path` methods and passing certain
reference paths to the context object and `black.get_sources`.
Before the associated fix was introduced, this test failed because
some of the captured files reported the presence of a symlink due to
an incorrectly formatted path. The test also asserts that only a single
file is reported as ignored, which is part of the expected behavior.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
Revert deleted documentation on setting up Black using IntelliJ
external tool or file watcher utilities. These are still worth keeping
because some people might not want to use a third-party plugin or
install Blackd's extra dependencies.
Co-authored-by: Richard Si <sichard26@gmail.com>
Co-authored-by: Jordan Ephron <JEphron@users.noreply.github.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
* Organize vim plugin section with headers to separate out Installation, Usage, and Troubleshooting for readability and easy linking
* Add missing plugin configuration options, with current defaults
* Add installation note for Arch Linux, now that the plugin is shipped with the python-black package (ref: https://bugs.archlinux.org/task/73024)
* Fix vim-plug specification to follow stable releases. Moving the same tag is an antipattern that doesn't re-resolve with vim-plug, see this discussion for more detail (https://github.com/junegunn/vim-plug/pull/720\#issuecomment-1105829356). Per vim-plug's maintainer's recommendation, use the 'tag' key instead with a shell wildcard. Wildcard should be '*.*.*' as that follows Black's versioning detailed here (https://black.readthedocs.io/en/latest/contributing/release_process.html\#cutting-a-release) and doesn't include current alpha releases.
* Do not move docker `latest_release` tag for Pre-Releases
- When we do a pre-release, let's not move the latest_release tag
- This tag should only move on official real releases
Fixes #3453
* Make it prettier - TIL we format our yaml
* Remove separate 3.11 CI now that deps support 3.11
- We can run everything now like all other stable versions of Python
- test in a 3.11 venv: `/tmp/tb/bin/tox -e py311,ci-py311`
```
py311: OK (28.99=setup[7.90]+cmd[5.29,0.66,6.94,6.08,1.89,0.24] seconds)
ci-py311: OK (30.33=setup[3.20]+cmd[3.66,0.31,17.43,4.60,0.90,0.23] seconds)
congratulations :) (59.35 seconds)
```
* Add to CHANGES.md
* Add fuzz run in 3.11
The bug is in `get_leaves_inside_matching_brackets`, on the third line below:
```python
assert xxxxxxxxx.xxxxxxxxx.xxxxxxxxx(
    xxxxxxxxx
).xxxxxxxxxxxxxxxxxx(), (
    "xxx {xxxxxxxxx} xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
)
```
Including the invisible paren, the third line is `).xxxxxxxxxxxxxxxxxx()), (`, which has a matched pair and then an unmatched closing paren afterwards. This PR ensures the returned leaves are actually matched.
Fixes #3414.
Currently, empty and whitespace-only files (with or without newlines) are
not modified. In some discussions (issues and pull requests) consensus
was to reformat whitespace-only files to empty or single-character
files, preserving line endings when possible. With that said, this
commit introduces the following behaviors:
* Empty files are left as is
* Whitespace-only files (no newline) reformat into empty files
* Whitespace-only files (1 or more newlines) reformat into a single
newline character
To implement these changes, we moved the initial check at
`format_file_contents` that raises `NothingChanged` if the source
(stripped of whitespace) is an empty string. In the case of *.ipynb
files, `format_ipynb_string` checks a similar condition and removes
whitespace. In the case of Python files, `format_str_once` includes a
check on the output that returns the correct newline character if
possible or an empty string otherwise.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
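Condensed, the decision above looks roughly like this (a sketch, not the actual code, which is split across `format_file_contents`, `format_ipynb_string`, and `format_str_once`):

```python
def reformat_whitespace_only(src: str) -> str:
    if not src or src.strip():
        return src  # empty files and files with real content are out of scope here
    # Whitespace-only: keep a single newline if the file had any, else become empty.
    # (The real code also tries to preserve the original newline style.)
    return "\n" if "\n" in src else ""
```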
* Apply .gitignore files considering their location
When a .gitignore file contains the special rule to ignore every
subfolder's contents (`*/*`) and the file is located in a subfolder
relative to where the command is executed (root), the rule is
incorrectly applied and ignores every file at the same level as the
.gitignore file.
The reason for this is that the `gitignore` variable accumulates the
rules found in each .gitignore while traversing files and directories
recursively. This makes sense and, in general, works as expected. The
problem is that the gitignore rules are applied using the relative
path from root to the target directory as a reference. This is the cause
of the bug.
The implemented solution keeps track of every .gitignore file found
while traversing the targets and the absolute location of each
.gitignore file. Then, when matching files against the .gitignore rules,
each set of rules is compared with the appropriate relative path to the
candidate target file (see the sketch after this list).
To make this possible, we changed the single `gitignore` object with a
dictionary of similar objects, where the corresponding key is the
absolute path to the folder that contains that .gitignore file. This
required changing the signature of the `get_sources` function. Also, we
introduce an `is_ignored` function that compares a file with every set
of rules. Finally, some tests required an update to pass the gitignore
object in the new format.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Test .gitignore with `*/*` is applied correctly
The test contains three cases: 1) when the .gitignore with the special
rule to ignore every subfolder and its contents (*/*) is in the root,
2) when the file is inside a subfolder relative to root (nested), and
3) when the target folder contains the .gitignore and root is a parent
folder of the target. In all of these cases, we compare the files that
are visible to Black with a known list of paths containing the
expected values.
Before the fix introduced in the previous commit, these tests failed
when the .gitignore file was nested (second case). Now, the test
passes in all cases.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Update CHANGES.md
Add entry about fixed bug and changes introduced: ignore files by
considering the location of each .gitignore file and the relative path
of each target
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Small refactor to improve code readability
These are small changes to improve code readability:
rename a variable to a more descriptive name (from `exclude_is_None`
to `using_default_exclude`), use a better syntax to include the type
annotation for `gitignore` variable (from typing comment to
Python-style typing annotation), and replace an if-else block with a
single dictionary definition (in this case, we need to compare keys
instead of values, meaning that the change works)
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Make nested function a top-level function
The function to match a given path with every discovered .gitignore
file does not need to be a nested function and can be a top-level
function. The arguments did not change, but the naming of local
variables was improved for readability.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
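A sketch of the matching scheme, using pathspec (which Black already depends on); the dictionary maps the directory containing each .gitignore to its parsed rules, and this `is_ignored` mirrors the helper mentioned above only in spirit, not in its exact signature:

```python
from pathlib import Path

from pathspec import PathSpec


def load_gitignore(directory: Path) -> PathSpec:
    lines = (directory / ".gitignore").read_text().splitlines()
    return PathSpec.from_lines("gitwildmatch", lines)


def is_ignored(path: Path, gitignores: dict[Path, PathSpec]) -> bool:
    for base_dir, spec in gitignores.items():
        try:
            # Match against the path relative to the folder holding the .gitignore.
            relative = path.absolute().relative_to(base_dir)
        except ValueError:
            continue  # this .gitignore does not apply to paths outside its folder
        if spec.match_file(str(relative)):
            return True
    return False
```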
When passing multiple src directories, the root gitignore was only
applied to the first processed source. The reason is that, in the
first source, exclude is `None`, but then the value gets overridden by
`re_compile_maybe_verbose(DEFAULT_EXCLUDES)`, so in the next iteration
where the source is a directory, the condition is not met and sets the
value of `gitignore` to `None`.
To fix this problem, we store a boolean indicating if `exclude` is
`None` and set the value of `exclude` to its default value if that's
the case. This makes sure that the flow enters the correct condition on
following iterations and also keeps the original value if the condition
is not met.
Also, the value of `gitignore` is initialized as `None` and overridden
if necessary. The value of `root_gitignore` is always calculated to
avoid using additional variables (at the small cost of additional
computations).
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
Provide a configuration parameter to the Vim plugin which will allow the
plugin to skip setting up a virtualenv. This is useful when there is a
system installation of black (e.g. from a Linux distribution) which the
user prefers to use.
Using a virtualenv remains the default.
- Fixes #3308
* Add option to skip the first line in source file
This commit adds a CLI option to skip the first line in the source
files, just like the CPython command line allows [1]. By enabling the
flag, using `-x` or `--skip-source-first-line`, the first line is
removed temporarily while the remaining contents are formatted. The
first line is added back before returning the formatted output.
[1]: https://docs.python.org/dev/using/cmdline.html#cmdoption-x
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Add tests for `--skip-source-first-line` option
When the flag is disabled (default), black formats the entire source
file, as in every line. On the other hand, if the flag is enabled by
using `-x` or `--skip-source-first-line`, the first line is retained
while the rest of the source is formatted, and then it is added back.
These tests use an otherwise empty Python file that contains invalid syntax in
its first line (`invalid_header.py`, at `miscellaneous/`). First,
Black is invoked without enabling the flag, which should result in an
exit code different from 0. When the flag is enabled, Black is
expected to return a successful exit code and the header is expected
to be retained (even if it's not valid Python syntax).
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Support skip source first line option for blackd
The recently added option can be added as an acceptable header for
blackd. The arguments are passed in such a way that using the new
header will activate the skip source first line behaviour as expected
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Add skip source first line option to blackd docs
The new option can be passed to blackd as a header. This commit
updates the blackd docs to include the new header.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Update CHANGES.md
Include the new Black option to skip the first line of source code in
the configuration section
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Update skip first line test including valid syntax
Including valid Python syntax helps us make sure that the file is still
actually valid after skipping the first line of the source file (which
contains invalid Python syntax)
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Skip first source line at `format_file_in_place`
Instead of skipping the first source line at `format_file_contents`,
do it before. This allows us to find the correct newline and encoding
on the actual source code (everything after the header); see the sketch after this list.
This change is also applied at Blackd: take the header before passing
the source to `format_file_contents` and put the header back once we
get the formatted result.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Test output newlines when skipping first line
When skipping the first line of source code, the reference newline must
be taken from the second line of the file instead of the first one, in
case the file mixes more than one kind of newline character
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Test that Blackd also skips first line correctly
Similarly to the Black tests, we first check that Blackd fails when
the first line is invalid Python syntax and then check that the result
is the expected one when the flag is activated
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Use the content encoding to decode the header
When decoding the header to put it back at the top of the contents of
the file, use the same encoding used in the content. This should be a
better "guess" than using the default value
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
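A sketch of the header-splitting idea from the `format_file_in_place` item above (assumed shapes, not the real signatures):

```python
def split_off_header(src: bytes) -> tuple[bytes, bytes]:
    """Return (first_line, rest); newline/encoding detection runs on `rest` only."""
    end = src.find(b"\n") + 1  # 0 if there is no newline: the whole file is the body
    return src[:end], src[end:]


header, rest = split_off_header(b"#!some-non-python-header\nx = 1\n")
# ... `rest` gets decoded and formatted, then: result = header + formatted_rest
```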
Jupyter creates a checkpoint file every single time you create an .ipynb
file, and then it updates the checkpoint file every single time you
manually save your progress for the initial .ipynb. These checkpoints
are stored in a directory named `.ipynb_checkpoints`.
Co-authored-by: Batuhan Taskaya <isidentical@gmail.com>
This makes the location more explicit which hopefully makes the PR
process smoother for other first time contributors.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
To run the formatter on Jupyter Notebooks, Black must be installed
with an extra dependency (`black[jupyter]`). This commit adds an
optional argument to install Black with this dependency when using the
official GitHub Action [1]. To enable the formatter on Jupyter
Notebooks, just add `jupyter: true` as an argument. Feature requested
at [2].
[1]: https://black.readthedocs.io/en/stable/integrations/github_actions.html
[2]: https://github.com/psf/black/issues/3280
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
This implements PEP 621, obviating the need for `setup.py`, `setup.cfg`,
and `MANIFEST.in`. The build backend Hatchling (of which I am a
maintainer in the PyPA) is now used as that is the default in the
official Python packaging tutorial. Hatchling is available on all the
major distribution channels such as Debian, Fedora, and many more.
## Python support
The earliest supported Python 3 version of Hatchling is 3.7, therefore
I've also set that as the minimum here. Python 3.6 is EOL and other
build backends like flit-core and setuptools also dropped support.
Python 3.6 accounted for 3-4% of downloads in the last month.
## Plugins
Configuration is now completely static with the help of 3 plugins:
### Readme
hynek's hatch-fancy-pypi-readme allows for the dynamic construction of
the readme which was previously coded up in `setup.py`. Now it's simply:
```toml
[tool.hatch.metadata.hooks.fancy-pypi-readme]
content-type = "text/markdown"
fragments = [
{ path = "README.md" },
{ path = "CHANGES.md" },
]
```
### Versioning
hatch-vcs is currently just a wrapper around setuptools-scm (which
despite the legacy naming is actually now decoupled from setuptools):
```toml
[tool.hatch.version]
source = "vcs"
[tool.hatch.build.hooks.vcs]
version-file = "src/_black_version.py"
template = '''
version = "{version}"
'''
```
### mypyc
hatch-mypyc offers many benefits over the existing approach:
- No need to manually select files for inclusion
- Avoids the need for the current CI workaround for https://github.com/mypyc/mypyc/issues/946
- Intermediate artifacts (like `build/`) from setuptools and mypyc
itself no longer clutter the project directory
- Runtime dependencies required at build time no longer need to be
manually redeclared as this is a built-in option of Hatchling
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Previously _Black_ produced invalid code because the `# fmt: on` was used on a different block level.
While _Black_ requires `# fmt: off` and `# fmt: on` to be used at the same block level, incorrect usage shouldn't cause crashes.
The formatting behavior this PR introduces is: when `# fmt: on` is used on a different level or there is no `# fmt: on`, formatting is turned off for the code below the initial `# fmt: off` block level. This also matches the current behavior when `# fmt: off` is used at the top level without a matching `# fmt: on`: it turns off formatting for everything below the `# fmt: off`.
- Fixes #2567
- Fixes #3184
- Fixes #2985
- Fixes #2882
- Fixes #2232
- Fixes #2140
- Fixes #1817
- Fixes #569
Bumps cibuildwheel from 2.8.1 to 2.10.0 which has 3.11 building enabled
by default. Unfortunately mypyc errors out on 3.11:
src/black/files.py:29:9: error: Name "tomllib" already defined (by an import) [no-redef]
... so we have to also hide the fallback import of tomli on older 3.11
alphas from mypy[c].
Make sure `gcc` is installed in the build env
The mypyc build requires `gcc` to be installed even if it's being built with `clang`, otherwise `clang` fails to find `libgcc`.
These two paragraphs were tucked away at the end of the section, after
the diversion on backslashes. I nearly missed the first paragraph and
opened a nonsense issue, and I think the second belongs higher up with
it too.
Fix a crash when formatting some dicts with parenthesis-wrapped long
string keys. When LL[0] is an atom string, we need to check the atom
node's siblings instead of LL[0] itself, e.g.:
dictsetmaker
  atom
    STRING '"This is a really long string that can\'t be expected to fit in one line and is used as a nested dict\'s key"'
  /atom
  COLON ':'
  atom
    LSQB ' ' '['
    listmaker
      STRING '"value"'
      COMMA ','
      STRING ' ' '"value"'
    /listmaker
    RSQB ']'
  /atom
  COMMA ','
/dictsetmaker
* Move 311 tests to install aiohttp without C extensions
- Configure tox to install aiohttp without extensions
- i.e. use `AIOHTTP_NO_EXTENSIONS=1` for pip install
- This allows us to reenable blackd tests that use aiohttp testing helpers etc.
- Had to ignore `cgi` module deprecation warning
- Filed issue for aiohttp to fix: https://github.com/aio-libs/aiohttp/issues/6905
Test:
- `/tmp/tb/bin/tox -e 311`
* Fix formatting + linting
* Add latest aiohttp for loop fix + Try to exempt deprecation warning but failed - will ask for help
* Remove unnecessary warning ignore
Co-authored-by: Cooper Ry Lees <me@wcooperlees.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
This is deprecated since aiohttp 4.0. If it doesn't exist just define a
no-op decorator that does nothing (after the other aiohttp imports
though!). By doing this, it's safe to ignore the DeprecationWarning
without needing to require the latest aiohttp once they remove
`@middleware`.
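The fallback is roughly this shape (the exact import location in blackd may differ):

```python
try:
    from aiohttp.web_middlewares import middleware
except ImportError:
    # aiohttp 4.0+ drops the decorator because it is no longer required;
    # a pass-through stand-in keeps the decorated handlers importable.
    def middleware(f):
        return f
```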
We've decided to a) convert stable back into a branch and b) to update
it immediately as part of the release process. We may as well automate
it. And about going back to a branch ...
Git tags are not the right tool, at all[^1]. They come with the
expectation that they will never change. Things will not work as
expected if they do change, doubly so if they change regularly. Once
you pull stable from the remote and it's copied in your local
repository, no matter how many times you run git pull you'll never see
it get updated automatically. Your only recourse is to delete the tag
via `git tag -d stable` before pulling.
This gets annoying really quickly since stable is supposed to be the
solution for folks "who want to move along as Black developers deem
the newest version reliable."[^2] See this comment for how this impacts
users using our Vim plugin[^3]. It also affects us developers[^4]. If
you have stable locally, once we cut a new release and update the stable
tag, a simple `git pull` / `git fetch` will not pull down the updated
stable tag. Unless you remember to delete stable before pulling, stable
will become stale and useless.
You can argue this is a good thing ("people should explicitly opt into
updating stable"), but IMO it does not match user expectations nor
developer expectations[^5]. Especially since not all our integrations
that use stable are bound by this security measure, for example our
GitHub Action (since it does a clean fetch of the repository every time
it's used). I believe consistency would be good here.
Finally, ever since we switched to a tag, we've been facing issues with
ReadTheDocs not picking up updates to stable unless we force a rebuild.
The initial rebuild on the stable update just pulls the commit the tag
previously pointed to. I'm not sure if switching back to a branch will
fix this, but I'd wager it will.
[^1]: https://git-scm.com/docs/git-tag#_on_re_tagging
[^2]: https://black.readthedocs.io/en/stable/contributing/release_process.html#moving-the-stable-tag
[^3]: https://github.com/psf/black/issues/2503#issuecomment-1196357379
[^4]: In fairness, most folks working on Black probably don't use the
`stable` ref anyway, especially us maintainers who'd know what is
the latest version by heart, but it'd still be nice to make it
usable for local dev though.
[^5]: Also what benefit does a `stable` ref have over explicit version
tags like `22.6.0`? If you're going to opt into some odd pin
mechanism, might as well use explicit version tags for clarity
and consistency.
- Formalise release cadence guidelines
- Overhaul release steps to be easier to follow and more thorough
- Reorder changelog template to something more sensible
- Update release automation docs to reflect recent improvements (notably
the addition of in-repo mypyc wheel builds)
Co-authored-by: Felix Hildén <felix.hilden@gmail.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Solves https://github.com/psf/black/issues/2598 where Black wouldn't
use .gitignore at folder/.gitignore if you ran `black folder` for
example.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Adds parentheses around implicit string concatenations when they're inside
a list, set, or tuple, except when it's the only element and there's no trailing
comma.
Looking at the order of the transformers here, we need to "wrap in
parens" before string_split runs. So my solution is to introduce a
"collaboration" between StringSplitter and StringParenWrapper where the
splitter "skips" the split until the wrapper adds the parens (and then
the line after the paren is split by StringSplitter) in another pass.
I have also considered an alternative approach, where I tried to add a
different "string paren wrapper" class, and it runs before string_split.
Then I found out it requires a different do_transform implementation
than StringParenWrapper.do_transform, since the latter assumes it runs
after the delimiter_split transform. So I stopped researching that
route.
Originally function calls were also included in this change, but given
missing commas should usually result in a runtime error and the scary
amount of changes this caused on downstream code, they were removed in
later revisions.
os.cpu_count() can return None (sounds like a super arcane edge case
though) so the type annotation for the `workers` parameter of
`black.main` is wrong. This *could* technically cause a runtime
TypeError since it'd trip one of mypyc's runtime type checks so we
might as well fix it.
Reading the documentation (and cross-checking with the source code),
you are actually allowed to pass None as `max_workers` to
`concurrent.futures.ProcessPoolExecutor`. If it is None, the pool
initializer will simply call os.cpu_count() [^1] (defaulting to 1 if it
returns None [^2]). It'll even round down the worker count to a level
that's safe for Windows.
... so theoretically we don't even need to call os.cpu_count()
ourselves, but our Windows limit is 60 (unlike the stdlib's 61) and I'd
prefer not to accidentally reintroduce a crash on machines with many,
many CPU cores.
[^1]: https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor
[^2]: a372a7d653/Lib/concurrent/futures/process.py (L600)
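So the defaulting ends up looking something like this (a sketch of the logic described above, not the exact code in Black):

```python
import os
import sys
from typing import Optional


def effective_workers(workers: Optional[int]) -> int:
    workers = workers or os.cpu_count() or 1
    if sys.platform == "win32":
        # Stay under the Windows wait-handle limit (Black uses 60, the stdlib 61).
        workers = min(workers, 60)
    return workers
```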
Loading .gitignore and compiling the exclude regex can take more than
15ms. We shouldn't and don't need to pay this cost if we're simply
formatting files given on the command line directly.
I would've loved to lazily import pathspec, but the patch won't be clean
until the file collection and discovery logic is refactored first.
Co-authored-by: Fabio Zadrozny <fabiofz@gmail.com>
`black.reformat_many` depends on a lot of slow-to-import modules. When
formatting simply a single file, the time paid to import those modules
is totally wasted. So I moved `black.reformat_many` and its helpers
to `black.concurrency` which is now *only* imported if there's more
than one file to reformat. This way, running Black over a single file
is snappier
Here are the numbers before and after this patch running `python -m
black --version`:
- interpreted: 411 ms +- 9 ms -> 342 ms +- 7 ms: 1.20x faster
- compiled: 365 ms +- 15 ms -> 304 ms +- 7 ms: 1.20x faster
Co-authored-by: Fabio Zadrozny <fabiofz@gmail.com>
There are a number of places this behaviour could be patched, for
instance, it's quite tempting to patch it in `get_sources`. However
I believe we generally have the invariant that project root contains all
files we want to format, in which case it seems prudent to keep that
invariant.
This also improves the accuracy of the "sources to be formatted" log
message with --stdin-filename.
Fixes GH-3207.
Updates action.yml to use the alternative $GITHUB_ACTION_PATH variable
instead of the original ${{ github.action_path }} which caused issues
with bash on the Windows runners. This removes the need for a Python
subprocess to call the main.py script.
Fixes #2734: a standalone comment causes strings to be merged into one that is far too long (and requires two passes to do so).
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
- Had to exempt blackd tests for now due to aiohttp
- Skip by using `sys.version_info` tuple
- aiohttp does not compile in 3.11 yet - refer to #3230
- Add a deadsnakes ubuntu workflow to run 3.11-dev to ensure we don't regress
- Have it also format ourselves
Test:
- `tox -e 311`
Co-authored-by: Cooper Ry Lees <me@wcooperlees.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
uR is not a legal string prefix, so this test breaks (AssertionError:
cannot use --safe with this file; failed to parse source file AST:
invalid syntax) if changed to one in which the file is actually modified. I've
changed the last test to have u alone, and added an R to the test above
instead.
* makes install available for all users in docker image
moves the installation path from /root/.local to a
virtualenv. this way we still get the lightweight
multistage build without excluding non-root users.
* adds changelog entry for docker-image fix
A changelog entry has been added under the Integration
subheader
* changes dockerfile to use the venv activate script
we are now using the inbuilt venv activate script, as well
as explicitly mentioning the binary location in the entrypoint
cmd.
Co-authored-by: Nicolò <nicolo.intrieri@spinforward.it>
Co-authored-by: Cooper Lees <me@cooperlees.com>
Building executables without any testing is quite sketchy, let's at
least verify they won't crash on startup and format Black's own
codebase.
Also replaced "binaries" with "executables" since it's clearer and
won't be confused with mypyc.
Finally, I added colorama so all Windows users can get colour.
These error logs are emitted often (they happen when Black's cache
directory is created after blib2to3 tries to write its cache) and cause
issues to be filed by users who think Black isn't working correctly.
These errors are expected for now and aren't a cause for concern so
let's remove them to stop worrying users (and new issues from being
opened). We can improve the blib2to3 caching mechanism to write its
cache at the end of a successful command line invocation later.
As mentioned in GH-3185, when using Black as a Vim plugin, especially
automatically on save, the plugin's messages can be confusing, as
nothing indicates that they come from Black.
... in the middle of an expression or code block by adding a missing return.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
When the Leaf node with `# fmt: skip` is a NEWLINE inside a `suite`
Node, the nodes to ignore should be from the siblings of the parent
`suite` Node.
There is also a special case for the ASYNC token, where it expands
to the grandparent Node where the ASYNC token is.
This fixes GH-2646, GH-3126, GH-2680, GH-2421, GH-2339, and GH-2138.
The former was a regression I introduced a long time ago. To avoid
changing the stable style too much, the regression is only fixed if
--preview is enabled
Annoyingly enough, as we currently always enforce a second format pass if
changes were made, there's no good way to prove the existence of the
docstring quote normalization instability issue. For posterity, here's
one failing example:
--- source
+++ first pass
@@ -1,7 +1,7 @@
 def some_function(self):
-    ''''<text here>
+    """ '<text here>
     <text here, since without another non-empty line black is stable>
-    '''
+    """
     pass
--- first pass
+++ second pass
@@ -1,7 +1,7 @@
 def some_function(self):
-    """ '<text here>
+    """'<text here>
     <text here, since without another non-empty line black is stable>
     """
     pass
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
* Move to explicitly creating a new loop
- Python >= 3.10 adds a warning that `get_event_loop` will not automatically create a loop
- Move to the explicit API (sketched below)
Test:
- `python3.11 -m venv --upgrade-deps /tmp/tb`
- `/tmp/tb/bin/pip install -e .`
- Install deps but no blackd, as aiohttp + yarl still can't build with 3.11
- https://github.com/aio-libs/aiohttp/issues/6600
- `export PYTHONWARNINGS=error`
```
cooper@l33t:~/repos/black$ /tmp/tb/bin/black .
All done! ✨🍰✨
44 files left unchanged.
```
Fixes #3110
* Add to CHANGES.md
* Fix a cooper typo yet again
* Set default asyncio loop to our explicitly created one + unset on exit
* Update CHANGES.md
Fix my silly typo.
Co-authored-by: Thomas Grainger <tagrain@gmail.com>
Co-authored-by: Cooper Ry Lees <me@wcooperlees.com>
Co-authored-by: Thomas Grainger <tagrain@gmail.com>
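The loop handling described in the bullets above comes down to something like this sketch:

```python
import asyncio


def run(coro):
    loop = asyncio.new_event_loop()  # explicit creation instead of get_event_loop()
    asyncio.set_event_loop(loop)     # set as the default loop ...
    try:
        return loop.run_until_complete(coro)
    finally:
        asyncio.set_event_loop(None)  # ... and unset on exit
        loop.close()
```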
Doing so is invalid. Note this only fixes the preview style since the
logic putting closing docstring quotes on their own line if they violate
the line length limit is quite new.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Otherwise they'd be deleted which was a regression in 22.1.0 (oops! my
bad!). Also type comments are now tracked in the AST safety check on all
compatible platforms to error out if this happens again.
Overall the line rewriting code has been rewritten to do "the right
thing (tm)", I hope this fixes other potential bugs in the code (fwiw I
got to drop the bugfix in blib2to3.pytree.Leaf.clone since now bracket
metadata is properly copied over).
Fixes #2873
* Recommend using BlackConnect in IntelliJ IDEs
* IntelliJ IDEs integration docs: improve formatting
* Add changelog for recommending BlackConnect
* IntelliJ IDEs integration docs: improve formatting
* Apply suggestions from code review
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
* Fix indentation
* Apply italic to Black name
Consistently with other places in the document
* Move CHANGELOG entry to Unreleased section
* IntelliJ IDEs integration docs: bring back a point with formatting a file
* IntelliJ IDEs integration docs: fix extra whitespace and linebreak
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Covers GH-2926, GH-2990, GH-2991, and GH-3035.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
* Add run_self environment in tox
* Add run_self task as part of the lint CI flow
* Remove hard coded sources list
* Remove black from pre-commit
Co-authored-by: Cooper Lees <me@cooperlees.com>
We just got someone on Discord who was confused because the command as
written caused their shell to try to do command expansion.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
I realized we don't have a FAQ entry about this, let's change that so
compiled: yes/no doesn't surprise as many people :)
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
This is a tricky one as await is technically an expression and therefore
in certain situations requires brackets for operator precedence.
However, the vast majority of await usage is just await some_coroutine(...)
and similar in format to return statements. Therefore this PR removes
redundant parens around these await expressions.
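A hand-written illustration of the kind of change described (the names are placeholders, not Black test data):
```
# Before: brackets around a plain await expression are redundant
async def show_users():
    return await (database.fetch_users())

# After: the redundant parentheses are removed, mirroring return statements
async def show_users():
    return await database.fetch_users()
```
Cases where the brackets actually matter for operator precedence (e.g. `(await coro()) ** 2`) are left alone.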
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Allows us to better control placement of return annotations by:
a) removing redundant parens
b) moving very long type annotations onto their own line
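A hand-written illustration of both behaviours (placeholder names, and the exact wrapping for (b) depends on the style rules):
```
# a) Redundant parentheses around the return annotation are removed:

def before() -> (int):  # input
    ...

def after() -> int:  # output of the change described in (a)
    ...

# b) For annotations too long to fit on the def line, the change described in
#    (b) lets Black move the annotation onto its own line instead of exploding
#    the entire signature.
```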
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
The 8.0.x series renamed its "die on LANG=C" function and the 8.1.x
series straight up deleted it.
Unfortunately this makes it hard for this test to type check cleanly, so we'll
just lint with click 8.1+ (the pre-commit hook configuration was changed
mostly to just evict any now-unsupported mypy environments)
aiohttp.test_utils.unittest_run_loop was deprecated since aiohttp 3.8
and aiohttp 4 (which isn't a thing quite yet) removes it. To maintain
compatibility with the full range of versions we declare to support,
test_blackd.py will now define a no-op replacement if it can't be
imported.
Also, mypy is painfully slow to use without a cache, let's reenable it.
Now PRs will run two diff-shades jobs, "preview-changes" which formats
all projects with preview=True, and "assert-no-changes" which formats
all projects with preview=False. The latter also fails if any changes
were made.
Pushes to main will only run "preview-changes".
Also the workflow_dispatch feature was dropped since it was
complicating everything for little gain.
It was incorrectly placed in preview features and always formats the power operators; it was added in #2789 but no check for the formatting was added along with it.
If a vim/neovim user wishes to suppress loading the vim plugin by
setting g:load_black in their VIMRC (for me, Arch linux automatically
adds the plugin to Neovim's RTP, even though I'm not using it), the
current location of the test comes after a call to has('python3'). This
adds, in my tests, between 35 and 45 ms to Vim load time (which I know
isn't a lot but it's also unnecessary). Moving the call to
`exists('g:load_black')` to before the call to `has('python3')` removes
this unnecessary test and speeds up loading.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
- use `Black` directly: the commands an autocommand runs are Ex commands, so no
execute or colon is necessary.
- use an `augroup` (best practice) to prevent duplicate autocommands from
hindering performance.
I did this manually for the last few releases and I think it's going to be
helpful in the future too. Unfortunately this adds a little more work during
the release (sorry @cooperlees).
This change will also improve the merge conflict situation a bit, because
changes to different sections won't merge conflict.
For the last release, the sections were in a kind of random order. In the
template I put highlights and "Style" first because they're most important
to users, and alphabetized the rest.
It was causing stability issues because the first pass
could cause a "magic trailing comma" to appear, meaning
that the second pass might get a different result. It's
not critical.
Some things format differently (with extra parens)
It turns out "simple_stmt" isn't that simple: it can contain multiple
statements separated by semicolons. Invisible parenthesis logic for
arithmetic expressions only looked at the first child of simple_stmt.
This causes instability in the presence of semicolons, since the next
run through the statement following the semicolon will be the first
child of another simple_stmt.
I believe this, along with #2572, fixes the known stability issues.
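A hypothetical example of the shape of input described, where only the statement after the semicolon contains an arithmetic expression for the invisible-parentheses logic to consider:
```
first_long_operand_name = second_long_operand_name = third_long_operand_name = 1
flag = True; total = first_long_operand_name + second_long_operand_name + third_long_operand_name
```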
Fixes #2651. Fixes #2754. Fixes #2518. Fixes #2321.
This adds a test that lists a number of cases of unstable formatting
that we have seen in the issue tracker. Checking it in will ensure
that we don't regress on these cases.
At the moment, it's just a source of spurious CI failures and busywork
updating the configuration file.
Unlike diff-shades, it is run across many different platforms and
Python versions, but that doesn't seem essential. We already run unit
tests across platforms and versions.
I chose to leave the code around for now in case somebody is using it,
but CI will no longer run it.
Since the power operator almost always has the highest binding power in expressions, it's often more readable to hug it with its operands. The main exception to this is when its operands are non-trivial, in which case the power operator will not hug; the rule for this is the following:
> For power ops, an operand is considered "simple" if it's only a NAME, numeric CONSTANT, or attribute access (chained attribute access is allowed), with or without a preceding unary operator.
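An illustration of the rule with placeholder names (not an exhaustive spec):
```
# "Simple" operands (a NAME, a numeric constant, or attribute access,
# optionally with a preceding unary operator) hug the operator:
area = 3.14159 * radius**2
inverse = x**-1
scaled = obj.value**n

# Non-simple operands (calls, arithmetic, subscripts, ...) keep the spaces:
result = (base + 1) ** exponent
offset = x ** fn(y)
```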
Fixes GH-538.
Closes GH-2095.
diff-shades results: https://gist.github.com/ichard26/ca6c6ad4bd1de5152d95418c8645354b
Co-authored-by: Diego <dpalma@evernote.com>
Co-authored-by: Felix Hildén <felix.hilden@gmail.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Closes #2360: I'd like to make passing SRC or `--code` mandatory and the arguments mutually exclusive. This will change our (partially already broken) promises of CLI behavior, but I'll comment below.
- State we're now stable and that we'll uphold our formatting changes as per policy
- Link to The Black Style doc.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
This PR is intended to have no change to semantics.
This is in preparation for #2784 which will likely introduce more logic
that depends on `current_line.depth`.
Inlining the subtraction gets rid of offsetting and makes it much easier
to see what the result will be.
Fixes #2506
``XDG_CACHE_HOME`` does not work on Windows. To allow users to set a custom cache directory on all systems I added a new environment variable ``BLACK_CACHE_DIR`` to set the cache directory. The default remains the same, so users will only notice a change if that environment variable is set.
The specific use case I have for this is that I need to run black in different processes at the same time. There is a race condition with the cache pickle file that made this rather difficult. A custom cache directory removes the race condition.
I created a ``get_cache_dir`` function in order to test the logic. This is only used to set the ``CACHE_DIR`` constant.
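A minimal sketch of the described logic, assuming platformdirs supplies the platform default; this is not Black's exact implementation:
```
import os
from pathlib import Path

from platformdirs import user_cache_dir

def get_cache_dir() -> Path:
    # Honour a user-provided BLACK_CACHE_DIR, otherwise fall back to the
    # platform-specific default so existing behaviour is unchanged.
    env_dir = os.environ.get("BLACK_CACHE_DIR")
    if env_dir:
        return Path(env_dir)
    return Path(user_cache_dir("black"))

CACHE_DIR = get_cache_dir()
```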
- Add Furo dependency to docs/requirements.txt
- Drop a fair bit of theme configuration
- Fix the toctree declarations in index.rst
- Move stuff around as Furo isn't 100% compatible with Alabaster
Furo was chosen as it provides excellent mobile support, user
controllable light/dark theming, and is overall easier to read
Fixes #2742.
This PR adds the ability to configure additional python cell magics. This
will allow formatting cells in Jupyter Notebooks that are using custom (python)
magics.
Black now echoes the location that it determined as the root path
for the project if `--verbose` is enabled by the user; that root is
what it uses to choose the SRC paths, i.e. the absolute path of each
source is `{root}/{src}`.
Closes #1880
*blib2to3's support was left untouched because: 1) I don't want to touch
parsing machinery, and 2) it'll allow us to provide a more useful error
message if someone does try to format Python 2 code.
I believe it would be useful to split up the long list of changes a bit more.
Specific changes:
- Removed the entry for new flake8 plugins; this is purely internal and not of interest to users
- Put regex in the packaging section
- New section for Jupyter Notebook
- New section for Python 3.10, mostly match/case stuff
error: cannot format <string>: ('EOF in multi-line statement', (2, 0))
▲ before ▼ after
error: cannot format <string>: Cannot parse: 2:0: EOF in multi-line statement
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
We were no longer using it since GH-2644 and GH-2654. This should hopefully
make Black easier to use as there's one less compiled dependency.
The core team also doesn't have to deal with the surprisingly frequent fires
the regex packaging setup goes through.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
* Treat functions/classes in blocks as if they're nested
One curveball is that we still want two preceding newlines before blocks
that are probably logically disconnected. In other words:
if condition:

    def foo():
        return "hi"

    # <- aside: this is the goal of this commit
else:

    def foo():
        return "cya"


# <- the two newlines spacing here should stay
# since this probably isn't related
with open("db.json", encoding="utf-8") as f:
    data = f.read()
Unfortunately that means we have to special case specific clause types
instead of just being able to look for a colon leaf. The hack used here
is to check whether we're adding preceding newlines for a standalone or
dependent clause. "Standalone" being a clause that doesn't need another
clause to be valid (eg. if) and vice versa.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
This removes all but one usage of the `regex` dependency. Tricky bits included:
- A bug in test_black.py where we were incorrectly using a character range. Fix also submitted separately in #2643.
- `tokenize.py` was the original use case for regex (#1047). The important bit is that we rely on `\w` to match anything valid in an identifier, and `re` fails to match a few characters as part of identifiers. My solution is to instead match all characters *except* those we know to mean something else in Python: whitespace and ASCII punctuation. This will make Black able to parse some invalid Python programs, like those that contain non-ASCII punctuation in the place of an identifier, but that seems fine to me.
- One import of `regex` remains, in `trans.py`. We use a recursive regex to parse f-strings, and only `regex` supports that. I haven't thought of a better fix there (except maybe writing a manual parser), so I'm leaving that for now.
My goal is to remove the `regex` dependency to reduce the risk of breakage due to dependencies and make life easier for users on platforms without wheels.
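An illustration of the tokenizer idea from the second bullet: instead of relying on `\w`, match name-like runs as anything that is not whitespace or ASCII punctuation. The character class below is illustrative, not the exact one used in `tokenize.py`:
```
import re

# Exclude whitespace and ASCII punctuation (keeping "_", which is valid in
# identifiers); everything else, including non-ASCII letters, is allowed.
NAME_RUN = re.compile(r"[^\s!-/:-@[-^`{-~]+")

print(NAME_RUN.findall("größe = café_size + 1"))  # ['größe', 'café_size', '1']
```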
The recent 2021.4 release of pyinstaller-hooks-contrib now contains a
built-in hook for platformdirs. Manually specifying the hidden import
arg should no longer be needed.
Fixes https://github.com/psf/black/issues/2627 , a non-Python cell magic such as `%%writeline` can legitimately contain "incorrect" indentation, however this causes `tokenize-rt` to return an error. To avoid this, `validate_cell` should early detect cell magics (just like it detects `TransformerManager` transformations).
Test added too, in the shape of a "badly indented" `%%writefile` within `test_non_python_magics`.
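A sketch of the early-exit idea; `NothingChanged`, `PYTHON_CELL_MAGICS`, and the magic names listed are stand-ins for whatever the real implementation uses:
```
class NothingChanged(Exception):
    """Signals that a cell should be left untouched."""

PYTHON_CELL_MAGICS = {"time", "timeit", "capture", "prun"}

def validate_cell(src: str) -> None:
    # Early exit for non-Python cell magics such as %%writefile, whose bodies
    # may legitimately contain "bad" indentation that tokenize-rt rejects.
    if src.startswith("%%") and src.split("\n")[0].split()[0][2:] not in PYTHON_CELL_MAGICS:
        raise NothingChanged
    # ...the remaining checks (TransformerManager round-trip, etc.) go here
```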
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: Marco Edward Gorelli <marcogorelli@protonmail.com>
In Python 3.10 the exception generated by creating a process pool on
a Python build that doesn't support this is now `NotImplementedError`
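A sketch of the kind of fallback involved; the exact exceptions and fallback Black uses may differ:
```
from concurrent.futures import Executor, ProcessPoolExecutor, ThreadPoolExecutor

def make_executor() -> Executor:
    try:
        return ProcessPoolExecutor()
    except (ImportError, NotImplementedError, OSError):
        # Python 3.10 raises NotImplementedError on builds without working
        # process support (older versions raised other errors), so fall back
        # to a thread pool instead of crashing.
        return ThreadPoolExecutor(max_workers=1)
```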
Commit history before merge:
* Fix process pool fallback on Python 3.10
* Update CHANGES.md
* Update CHANGES.md
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
The implementation of the new backtracking logic depends heavily on deepcopying the current state of the parser before seeing one of the new keywords, which by default is a very expensive operation. On my system, formatting these 3 files takes 1.3 seconds.
```
$ touch tests/data/pattern_matching_*; time python -m black -tpy310 tests/data/pattern_matching_* 19ms
All done! ✨🍰✨
3 files left unchanged.
python -m black -tpy310 tests/data/pattern_matching_* 2,09s user 0,04s system 157% cpu 1,357 total
```
which can be optimized 3X if we integrate the existing copying logic (`clone`) into the deepcopy system:
```
$ touch tests/data/pattern_matching_*; time python -m black -tpy310 tests/data/pattern_matching_* 1ms
All done! ✨🍰✨
3 files left unchanged.
python -m black -tpy310 tests/data/pattern_matching_* 0,66s user 0,02s system 147% cpu 0,464 total
```
This still might have some potential, but that would be way trickier than this initial patch.
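A sketch of the idea on a simplified node class (not blib2to3's actual code): let `copy.deepcopy` dispatch to the existing hand-written `clone` logic.
```
import copy

class Node:
    def __init__(self, type_: int, children: list) -> None:
        self.type = type_
        self.children = children

    def clone(self) -> "Node":
        # Hand-written copying that already knows the object's layout; much
        # cheaper than generic deepcopy introspection.
        return Node(self.type, [child.clone() for child in self.children])

    def __deepcopy__(self, memo: dict) -> "Node":
        # Reuse the fast clone() path whenever deepcopy is asked for a copy.
        return self.clone()

tree = Node(1, [Node(2, []), Node(3, [])])
backup = copy.deepcopy(tree)  # dispatches to clone() instead of generic deepcopy
```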
* Improve Python 2 only syntax detection
First of all this fixes a mistake I made in the Python 2 deprecation PR,
using token.* to check for print/exec statements. Turns out that
for nodes with a type value higher than 256, their numeric type isn't
guaranteed to be constant. Using syms.* instead fixes this.
Also add support for the following cases:
print "hello, world!"
exec "print('hello, world!')"
def set_position((x, y), value):
    pass
try:
    pass
except Exception, err:
    pass
raise RuntimeError, "I feel like crashing today :p"
`wow_these_really_did_exist`
10L
* Add octal support, more test cases, and fixup long ints
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
`DEPRECATION: Python 2 support will be removed in the first stable releaseexpected in January 2022` -> `DEPRECATION: Python 2 support will be removed in the first stable release expected in January 2022`
* Update CHANGES.md for 21.10b0 release
* Update version in docs/usage_and_configuration/the_basics.md
* Also update docs/integrations/source_version_control.md ...
- Install build-essential to avoid build issues like #2568 when dependencies don't have prebuilt wheels available
- Use multi-stage build instead of trying to purge packages and cache from the image
Copying `/root/.local/` installs only black's built Python dependencies (< 20 MB).
So the image is barely larger than the python:3-slim base image.
* Prepare for Python 2 deprecation
- Use BlackRunner and .stdout in command line test
So the next commit won't break this test. This is in its own commit so
we can just revert the deprecation commit when dropping Python 2
support completely.
* Deprecate Python 2 formatting support
Existing test was actually running a full black-primer
run which could be slow. This goes from 8 seconds to
0.4 seconds on my machine.
Needed to move to top level scope to leverage the caplog
feature of pytest in order to test that the command line
was parsing the bogus arguments and dumping to stderr.
* fix: allow tests to be run from the tests/ directory
* fix: try fixing windows build with MarcoGorelli's suggestion
* Windows hotfix + better respect test's spirit
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
If the individual failures are verbose, it's useful to have
the summary at the end. Otherwise, it can be really difficult
to figure out which projects have an issue.
It currently prints both ASTs - this also
adds the line diff, making it much easier to visualize
the changes as well. Not too verbose since it's only a diff.
Fixes #2394. Eventually fixes #517.
This is essentially @pradyunsg's suggestion from #2394. I suggest that at the
same time we start the formal stability policy, we take a few other disruptive steps
and drop Python 2 and the "b" marker.
Co-authored-by: Pradyun Gedam <pradyunsg@gmail.com>
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
The main goals of this commit include:
* improving consistency on how strict the test suite is -- Jelle has
seen cases where a test did not fail due to an incomplete test setup
even though it should've
* simplifying tests for both ease of creation and reading via
parametrization and helpers
* reorganizing the test suite by grouping more tests
* dropping test suite dependencies that aren't strictly necessary
The test suite could definitely do with more refactoring, but this is a
good first pass. Anyway it would've gotten too big to review effectively
if I did continue on this PR.
Commit history before squash merge:
* Drop parameterized dep and refactor format tests
Since the test suite is already using pytest-only features we can drop
the parameterized test dependency in favour of pytest's own offering.
I also added a utility function called assert_format that makes it
even easier to verify Black formats some code correctly. We already
have great tooling if the case is very simple in test_format.py but
any sort of complication makes it hard to use. Also if you're writing
a non-standard test case, you have to be careful to include all of
the steps so issues don't go undetected. assert_format aims to
1) improve consistency, 2) avoid wasted CPU cycles, and 3) avoid
logical errors that hide issues.
Finally, quite a few tests were either moved and/or simplified with
the new setup.
* Move file collection tests
* Add assert_collected_sources helper function
Testing source collection involves a lot of repetitive boilerplate,
something that black.files.get_sources's signature does not help with.
So to cut down on boilerplate like `report=black.Report()` I added
a convenience function to tests/test_black.py which wraps
black.get_sources. Its signature is designed to be much more lax to
make it much easier to use. Somehow this leads to cutting 100 lines!
Also IMO the test cases are much easier to read since it's more
declarative than really procedural now.
* Run isort on some test files
* Move cache tests
* Use pytest-style asserts & add parametrization
* Drop now unnecessary test dependencies
*pytest-cases might be interesting for further refactoring but I
haven't been able to wrap my head around it for the time being. We
can always revisit anyway.
Commit history before merge:
* Bump required aiohttp version to 3.7.4
This release includes an important security fix
(https://github.com/aio-libs/aiohttp/security/advisories/GHSA-v6wp-4m6f-gcjg) and many
other improvements.
* add changelog entry
* Let's not forget about Pipfile
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
* re-implement simple CORS middleware for blackd
* remove aiohttp-cors from setup.py
* Remove aiohttp-cors from Pipfile.lock
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Project packaging is using TOML due to pyproject.toml but fails to
mention it, causing installation failures with newer setuptools-scm 6.3.0.
Commit history before merge:
* Fix missing toml extra
Fixed breakage uncovered by setuptools-scm 6.3.0 where installation
would fail for projects that neglected to mention the toml extra.
* Bump setuptools[-scm] to avoid toml extra
https://github.com/psf/black/pull/2475#issuecomment-912730714
> If you constraint greater than 6.3.0 and setuptools greater than 45
> you can skip the extra,
* Actually for safety reasons, just use the extra
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Add new platformdirs dependencies as hidden imports when creating
PyInstaller-based binaries.
platformdirs imports the module for each platform dynamically, which
PyInstaller is unable to correctly detect for packing. By adding the
modules as hidden imports, we are telling PyInstaller to include the
modules in the packaged binary.
This issue seems to have been introduced when switching to platformdirs
in #2375. Fixes #2464
Commit history before merge:
* Add hidden import to PyInstaller build
Add new platformdirs dependency as a hidden import when creating
PyInstaller based binaries.
* Only include the platformdirs for the relevant os
Draft releases don't trigger the workflows (that's good!) but since they only
Commit history before merge:
* fix: run pypi upload from published draft releases
* Fix broken task list markup in PR template
* change docker workflow to build on release publish
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
re. import, the ipynb code was assuming that typing-extensions would
always be available, but that's not the case! There's an environment
marker on the requirement meaning it won't get installed on 3.10 or
higher. The test suite didn't catch this issue since aiohttp pulls in
typing-extensions unconditionally.
Hopefully my first release doesn't end up in flames 🔥
Commit history before merge:
* Prepare CHANGES.md for release 21.8b0
* I need to add a check for this too.
The setuptools-scm dependency in setup.cfg did not have a version
specified, leading to the issues described in #2449 after a faulty release
of setuptools-scm was published. To avoid this issue in the future, the
last version before that faulty update is now pinned.
Commit history before merge:
* Pin setuptools-scm dependency version (#2449)
* Update CHANGES.md
* Let's pin in pyproject.toml too
Mostly since it's non-build-backend specific configuration and more
widely standardized file. Not sure what benefits pinning in setup.cfg
gives us on top of pyproject.toml but I'd rather not find out during
the release that is supposed to happen today 😉
Co-authored-by: FiNs <24248249+FabianNiehaus@users.noreply.github.com>
This also introduces a script so we can reference the latest version in
the example pre-commit configuration in the docs without forgetting to
update it when doing a release!
Commit history before merge:
* document jupyter hook
* note minimum version
* add check for pre-commit version
* use git tag
* curl api during ci
* parse version from changes file
* fixup script
* rename variables
* Tweak the docs & magical script
* fix couple of typos
* pin additional dependencies in hook
* Add types-PyYAML to lockfile
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Implementation stolen from PR davidhalter/parso#162. Thanks parso!
I could add support for these newer syntactical constructs in the
target version detection logic, but until I get diff-shades up
and running I don't feel very comfortable adding the code.
This fixes a bug where a trailing comma would be added to a
parenthesized return annotation changing its type to a tuple.
Here's one case where this bug shows up:
```
def spam() -> (
    this_is_a_long_type_annotation_which_should_NOT_get_a_trailing_comma
):
    pass
```
The root problem was that the type annotation was treated as if it was
a parameter & import list (is_body=True to linegen::bracket_split_build_line)
where a trailing comma is usually fine. Now there's another check in the
aforementioned function to make sure the body it's operating on isn't
a return annotation before truly adding a trailing comma.
* Add CPython repository into primer runs
- CPython is probably the best repo for black to test on, as the stdlib's unittests should use all syntax
- Limit to running in recent versions of the python runtime - e.g. today >= 3.9
- This allows us to parse more syntax
- Exclude all failing files for now
- Definitely have bugs to explore there - Refer to #2407 for more details there
- Some test files on purpose have syntax errors, so we will never be able to parse them
- Add new black command arguments logging in debug mode; very handy for seeing how CLI arguments are formatted
CPython now succeeds ignoring 16 files:
```
Oh no! 💥💔💥
1859 files would be reformatted, 148 files would be left unchanged.
```
Testing
- Ran locally with and without string processing - Very little runtime difference BUT 3 more failed files
```
time /tmp/tb/bin/black --experimental-string-processing --check . 2>&1 | tee /tmp/black_cpython_esp
...
Oh no! 💥💔💥
1859 files would be reformatted, 148 files would be left unchanged, 16 files would fail to reformat.
real 4m8.563s
user 16m21.735s
sys 0m6.000s
```
- Add unittest for new convenience config file flattening that allows long arguments to be broken up into an array/list of strings
Addresses #2407
---
Commit history before merge:
* Add new `timeout_seconds` support into primer.json
- If present, will set forked process limit to that value in seconds
- Otherwise, stay with default 10 minutes (600 seconds)
* Add new "base_path" concept to black-primer
- Rather than start at the repo root start at a configured path within the repository
- e.g. for cpython only run black on `Lib`
* Disable by default - It's too much for GitHub Actions. But let's leave config for others to use
* Minor tweak to _flatten_cli_args
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
The fix for #1688 in #1761 breaks help("modules") introspection and also leads
to unhappy results when inadvertently importing blackd from Python. Basically
the sys.exit(-1) causes the whole Python REPL to exit -- not great, suffice to say.
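A sketch of the resulting shape (the message text is illustrative, not the exact wording blackd uses):
```
try:
    import aiohttp
except ImportError as ie:
    raise ImportError(
        f"aiohttp dependency is not installed: {ie}. "
        "Please re-install black with the '[d]' extra install."
    ) from None  # suppress the chain for a cleaner, more helpful message
```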
Commit history before merge:
* Change sys.exit to Raise.
* Add #2440 to changelog.
* Fix lint error from prettier
* Remove exception chain for more helpful user message.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
While this development environment / requirements situation is a mess,
let's at least make it consistent. We're effectively supporting two
modes of development in this project, 1) tox based dev commands
(e.g. `tox -e fuzz`) that are dead simple to use, and 2) manual dev
commands (e.g. `pytest -n auto`) that give more control and are usually
faster.
Right now the Pipfile.lock based development environment is incomplete,
missing the test requirements specified in ./test_requirements.txt.
This is annoying since manual test commands (e.g. `pytest -k fmtonoff`)
fail. Let's fix this by making Pipfile.lock basically an
"everything you need" requirements file (fuzzing not included since
running it locally is not something common).
Oh and let's bump some documentation deps (and bring some requirements
across .pre-commit-config.yaml, Pipfile, and docs/requirement.txt in
alignment again). Don't worry, I tested these changes so they should
be fine (hopefully!).
we don't accidentally add backslashes to them when normalizing quotes
because that's invalid syntax!
The problem this commit fixes is that matches would eat too much,
blocking important matches from occurring. For example, here's one f-string
body:
{a}{b}{c}
I know there's no risk of introducing backslashes here, but the regex
already goes sideways with this. Throwing this example at regex101
I get:
{a}{b}{c} # The As and Bs are the two matches, and the upper
---- ---- # case letters are the groups with those matches.
aAaa bbBb
... we've missed the middle expression (so if any backslashes in a
more complex example were introduced there we wouldn't bail out
even though we should -- hence the bug). As it stands the regex
needs some sort of extra character (or the start/end of the body)
around the expressions but that isn't always the case as shown
above.
The fix implemented here is to turn the "eat a surrounding non-curly
bracket character" groups, i.e. `(?:[^{]|^)` and `(?:[^}]|$)`, into
negative lookaheads and lookbehinds. This still guarantees the
already specified rules but without problematically eating extra
characters ^^
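A self-contained illustration of the difference, using generic patterns rather than the ones in Black's `trans.py`:
```
import re

body = "{a}{b}{c}"

# Consuming a surrounding character: adjacent expressions can't all match.
consuming = re.compile(r"(?:[^{]|^)(\{[^}]*\})(?:[^}]|$)")
print(consuming.findall(body))   # ['{a}', '{c}'] -- the middle one is missed

# Negative lookbehind/lookahead keep the same guarantee without eating input.
lookaround = re.compile(r"(?<!\{)(\{[^}]*\})(?!\})")
print(lookaround.findall(body))  # ['{a}', '{b}', '{c}']
```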
Fixes #2359.
This commit now makes Black exit with a user-friendly error message if a
.gitignore file couldn't be parsed -- a massive improvement over an opaque
traceback!
* Remove `language_version` for pre-commit
At my company, we set the Python version in `default_language_version`
in each repo's `.pre-commit-config.yaml`,
so that all hooks are running with the same Python version.
However, this currently doesn't work for black,
as the `language_version` specified here
in the upstream `.pre-commit-hooks.yaml` takes precedence.
Currently, this requires us to manually set `language_version`
specifically for black,
duplicating the value from `default_language_version`.
The failure mode otherwise is subtle -
black works most of the time,
but try to add a walrus operator and it suddenly breaks!
Given that black's `setup.py` already has `python_requires>=3.6.2`,
specifying that `python3` must be used here isn't needed
as folks inadvertently using Python 2 will get hook-install-time failures anyways.
Remove the `language_version` from these upstream hook configs
so that users of black are able to use `default_language_version`
and have it apply to all their hooks, black included.
Example `.pre-commit-config.yaml` before:
```
default_language_version:
  python: python3.8
repos:
  - repo: https://github.com/psf/black
    rev: 21.7b0
    hooks:
      - id: black
        language_version: python3.8
```
After:
```
default_language_version:
  python: python3.8
repos:
  - repo: https://github.com/psf/black
    rev: 21.7b0
    hooks:
      - id: black
```
* Add changelog entry
To summarise, based on what was discussed in that issue:
- due to not being able to parse automagics (e.g. pip install black)
  without a running IPython kernel, cells with syntax which is parseable
  by neither ast.parse nor IPython will be skipped
- cells with multiline magics will be skipped
- trailing semicolons will be preserved, as they are often put there
  intentionally in Jupyter Notebooks to suppress unnecessary output
Commit history before merge (excluding merge commits):
* wip
* fixup tests
* skip tests if no IPython
* install test requirements in ipynb tests
* if --ipynb format all as ipynb
* wip
* add some whole-notebook tests
* docstrings
* skip multiline magics
* add test for nested cell magic
* remove ipynb_test.yml, put ipynb tests in tox.ini
* add changelog entry
* typo
* make token same length as magic it replaces
* only include .ipynb by default if jupyter dependencies are found
* remove logic from const
* fixup
* fixup
* re.compile
* noop
* clear up
* new_src -> dst
* early exit for non-python notebooks
* add non-python test notebook
* add repo with many notebooks to black-primer
* install extra dependencies for black-primer
* fix planetary computer examples url
* dont run on ipynb files by default
* add scikit-lego (Expected to change) to black-primer
* add ipynb-specific diff
* fixup
* run on all (including ipynb) by default
* remove --include .ipynb from scikit-lego black-primer
* use tokenize so as to mirror the exact logic in IPython.core.displayhooks quiet
* fixup
* 🎨
* clarify docstring
* add test for when comment is after trailing semicolon
* enumerate(reversed) instead of [::-1]
* clarify docstrings
* wip
* use jupyter and no_jupyter marks
* use THIS_DIR
* windows fixup
* perform safe check cell-by-cell for ipynb
* only perform safe check in ipynb if not fast
* remove redundant Optional
* 🎨
* use typeguard
* dont process cell containing transformed magic
* require typing extensions before 3.10 so as to have TypeGuard
* use dataclasses
* mention black[jupyter] in docs as well as in README
* add faq
* add message to assertion error
* add test for indented quieted cell
* use tokenize_rt else we cant roundtrip
* make frozenset for tokens to ignore when looking for trailing semicolon
* remove planetary code examples as recent commits result in changes
* use dataclasses which inherit from ast.NodeVisitor
* bump typing-extensions so that TypeGuard is available
* bump typing-extensions in Pipfile
* add test with notebook with empty metadata
* pipenv lock
* deprivative validate_cell
* Update README.md
* Update docs/getting_started.md
* dont cache notebooks if jupyter dependencies arent found
* dont write to cache if jupyter deps are not installed
* add notebook which cant be parsed
* use clirunner
* remove other subprocess calls
* add docstring
* make verbose and quiet keyword only
* 🎨
* run second many test on directory, not on file
* test for warning message when running on directory
* early return from non-python cell magics
* move NothingChanged to report to avoid circular import
* remove circular import
* reinstate --ipynb flag
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
The templates weren't applying the default labels ever since I renamed
the labels.
There have been enough issues about documentation opened recently that it's
probably worth a template for it.
* Update CHANGES.md for 21.7b0 release
* move some changes to the right section
* another one
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
They seem to be used as test cases for a specific region of formatting
that was slow. Now performance testing is probably something end users
won't be needing to do, so this is an easy way of reducing the sdist
size significantly.
toml unfortunately has a maintainership problem right now. It's
evident from the fact that toml only supports TOML v0.5.0. TOML v1.0.0 has
been recently released and right now Black crashes hard on its usage.
tomli is a brand new parse-only TOML library. It supports TOML
v1.0.0. Although TBH we're switching to this one mostly because
pip is doing the same.
*The upper bound was included at the library maintainer's request.
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
Co-authored-by: Taneli Hukkinen <3275109+hukkin@users.noreply.github.com>
Commit history before merge:
* Accept empty stdin (close #2337)
* Update tests/test_black.py
* Add changelog
* Assert Black reformats an empty string to an empty string (#2337) (#2346)
* fix
Click types have been moved to click repo itself. See pallets/click#1856
I've had some issues with typeshed types being outdated in another project
so might be good to avoid that here.
Commit history before merge:
* Get `click` types from main repo
* Fix mypy errors
* Require click v8 for type annotations
* Update Pipfile
This commit fixes parsing of the skip-string-normalization option in vim
plugin. Originally, the plugin read the string-normalization option,
which does not exist in help (--help) and is not respected by black
on the command line.
Commit history before merge:
* fix string normalization option in vim plugin
* fix string normalization option in vim plugin
* Finish and fix patch (thanks Matt Wozniski!)
FYI: this is entirely Matt's (AKA godlygeek) work, as are the comments below
This fixes two entirely different problems related to how pyproject.toml
files are handled by the vim plugin.
=== Problem #1 ===
The plugin fails to properly read boolean values from pyproject.toml.
For instance, if you create this pyproject.toml:
```
[tool.black]
quiet = true
```
the Black CLI is happy with it and runs without any messages, but the
:Black command provided by this plugin fails with:
```
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "<string>", line 102, in Black
  File "<string>", line 150, in get_configs
  File "<string>", line 150, in <dictcomp>
  File "/usr/lib/python3.6/distutils/util.py", line 311, in strtobool
    val = val.lower()
AttributeError: 'bool' object has no attribute 'lower'
```
That's because the value returned by toml.load() is already a
bool, but the vim plugin incorrectly tries to convert it from a str to a bool.
The value returned by toml_config.get() was always being passed to
flag.cast(), which is a function that either converts a string to an
int or a string to a bool, depending on the flag. vim.eval()
returns integers and strings all as str, which is why we need the cast,
but that's the wrong thing to do for values that came from toml.load().
We should be applying the cast only to the return from vim.eval()
(since we know it always gives us a string), rather than casting the
value that toml.load() found - which is already the right type.
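A sketch of that fix in the shape of the plugin's dictcomp; `FLAGS`, `vim`, and `toml_config` are the surrounding plugin objects, not defined here:
```
def get_configs():
    # toml.load() already returns correctly typed values (bool, int, str), so
    # only values coming from vim.eval(), which are always strings, get cast.
    return {
        flag.var_name: (
            toml_config[flag.name]
            if flag.name in toml_config
            else flag.cast(vim.eval(flag.vim_rc_name))
        )
        for flag in FLAGS
    }
```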
=== Problem #2 ===
The vim plugin fails to take the value for skip_string_normalization
from pyproject.toml. That's because it looks for a string_normalization
key instead of a skip_string_normalization key, thanks to this line
saying the name of the flag is string_normalization:
black/autoload/black.vim (line 25 in 05b54b8)
```
Flag(name="string_normalization", cast=strtobool),
```
and this dictcomp looking up each flag's name in the config dict:
black/autoload/black.vim (lines 148 to 151 in 05b54b8)
```
return {
    flag.var_name: flag.cast(toml_config.get(flag.name, vim.eval(flag.vim_rc_name)))
    for flag in FLAGS
}
```
For the second issue, I think I'd do a slightly different patch. I'd
keep the change to invert this flag's meaning and change its name that
this PR proposes, but I'd also change the handling of the
g:black_skip_string_normalization and g:black_string_normalization
variables to make it clear that g:black_skip_string_normalization is
the expected name, and g:black_string_normalization is only checked
when the expected name is unset, for backwards compatibility.
My proposed behavior is to check if g:black_skip_string_normalization
is defined and to define it if not, using the inverse of
g:black_string_normalization if that is set, and otherwise to the
default of 0. The Python code in autoload/black.vim runs later, and
will use the value of g:black_skip_string_normalization (and ignore
g:black_string_normalization; it will only be used to set
g:black_skip_string_normalization if it wasn't already set).
---
Co-authored-by: Matt Wozniski <mwozniski@bloomberg.net>
* Fix plugin/black.vim (need to up my vim game)
Co-authored-by: Matt Wozniski <godlygeek@gmail.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Co-authored-by: Matt Wozniski <mwozniski@bloomberg.net>
Co-authored-by: Matt Wozniski <godlygeek@gmail.com>
Commit history before merge:
* Find pyproject from vim relative to current file
* Merge remote-tracking branch 'upstream/main' into find-pyproject-vim
* Finish and fix this patch (thanks Matt Wozniski!)
Both the existing code and the proposed code are broken.
The vim.eval() call (whether it's vim.eval("@%") or
vim.eval("fnamemodify(getcwd(), ':t')")) returns a string, and it passes
that string to find_pyproject_toml, which expects a sequence of strings,
not a single string, and - since a string is a sequence of single
character strings - it gets turned into a list of ridiculous paths. I
tested with a file called foo.py, and added a print(path_srcs) into
find_project_root, which printed out:
[
    PosixPath('/home/matt/f'),
    PosixPath('/home/matt/o'),
    PosixPath('/home/matt/o'),
    PosixPath('/home/matt'),
    PosixPath('/home/matt/p'),
    PosixPath('/home/matt/y')
]
This does work for an unnamed buffer, too - we wind up calling
black.find_pyproject_toml(("",)), and that winds up prepending the
working directory to any relative paths, so "" just gets turned into
the current working directory.
Note that find_pyproject_toml needs to be passed a 1-tuple, not a
list, because it requires something hashable (thanks to
functools.lru_cache being used)
Co-authored-by: Matt Wozniski <mwozniski@bloomberg.net>
* I forgot the CHANGELOG entry ... again
* I'm really bad at dealing with merge conflicts sometimes
* Be more correct describing search behaviour
Co-authored-by: Austin Glaser <austin.glaser@spacex.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Co-authored-by: Matt Wozniski <mwozniski@bloomberg.net>
* Add STDIN test to primer
- Check that our STDIN black support stays working
- Add asyncio.subprocess STDIN pipe via communicate
- We just check we format python code from primer's `lib.py`
Fixes #2310
`black.strings.get_string_prefix` used to lowercase the extracted
prefix before returning it. This is wrong because 1) it ignores the
fact we should leave R prefixes alone because of MagicPython, and 2)
there is dedicated prefix casing handling code that fixes issue 1.
`.lower` is too naive.
This was originally fixed in 20.8b0, but was reintroduced since 21.4b0.
I also added proper prefix normalization for docstrings by using the
`black.strings.normalize_string_prefix` helper.
Some more test strings were added to make sure strings with capitalized
prefixes aren't treated differently (actually happened with my original
patch, Jelle had to point it out to me).
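A hand-written illustration of the behaviour after the fix (not taken from the test data):
```
# input                            # output
x = B"bytes"                       # x = b"bytes"
y = F"one plus one is {1 + 1}"     # y = f"one plus one is {1 + 1}"
z = R"\d+"                         # z = R"\d+"   (R left alone for MagicPython)
```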
This commit adds a short section discussing the non-processing of docstrings
besides spacing improvements, mentions comment moving and links to the
AST equivalence discussion. I also added a simple spacing test for good
measure.
Commit history before merge:
* Mention comment non-processing in documentation, add spacing test
* Mention special cases for comment spacing
* Add all special cases, improve wording
Not sure the fix is right. Here is what I found: the issue is connected
with the line
first.prefix = prefix[comment.consumed :]
in `comments.py`. `first.prefix` is a prefix of the line, that ends
with `# fmt: skip`, but `comment.consumed` is the length of the
`" # fmt: skip"` string. If prefix length is greater than 14,
`first.prefix` will grow every time we apply formatting.
Fixes #2254
See if we pass all our repos with experimental string processing enabled.
Django probably needed:
- Ignores >= 3.8 only
We could support PEP440 version specifiers, but that would introduce the packaging module as a dependency that I'd like to avoid ... Or I could implement a poor person's version, or vendor it
Commit history before merge:
* [primer] Enable everything
* Add exclude extend to django CLI args for primer
* Change default timeout from 5 to 10 mins for a primer project
* Skip string normalization for Django
* Limit Django to >= 3.8 due to := operator
The random asyncio bug is just too frequent and annoying to be
worth the speed improvements. Our test suite is already quite fast.
Random test failures hurt for 3 reasons, 1) they are discouraging for
new contributors who won't understand it's out of their control, 2)
it's annoying and time consuming to rerun the workflow, and 3) it
makes single job failures feel less important (even they should be
treated as important!).
Closes #1246: This PR adds a new option (and automatically a toml entry, hooray for existing configuration management 🎉) to require a specific version of Black to be running.
For example: `black --required-version 20.8b -c "format = 'this'"`
Execution fails straight away if it doesn't match `__version__`.
Commit history before merge:
* Add black_version to github action
* Merge upstream/main into this branch
* Add version support for the Black action pt.2
Since we're moving to a composite based action, quite a few changes
were made. 1) Support was added for all OSes (Windows was painful).
2) Isolation from the rest of the workflow had to be done manually
with a virtual environment.
Other noteworthy changes:
- Rewrote basically all of the logic and put it in a Python script
for easy testing (not doing it here tho cause I'm lazy and I can't
think of a reasonable way of testing it).
- Renamed `black_version` to `version` to better fit the existing
input naming scheme.
- Added support for log groups, this makes our action's output a
bit more fancy (I may or may have not added some debug output too).
* Add more to and sorta rewrite the Action's docs
Reflect compatibility and gotchas.
* Add CHANGELOG entry
* Merge main into this branch
* Remove debug; address typos; clean up action.yml
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
PR #2286 did not fix the edge-cases (e.g. when the string is just long
enough to cause a line to be 89 characters long). This PR corrects that
mistake.
Behavior other than output shouldn't depend on the verbose/quiet option. As far as I can tell this currently has no visible effect, since code after this function is called handles an empty list gracefully.
This commit makes use of HTML comments inside GitHub issue templates
to make sure that even if they aren't removed by the issue author they won't be shown
in the rendered output.
The goal is to simply make the issues less noisy by removing template messages.
There's some weird interaction between Click and
sphinxcontrib-programoutput on Windows that leads to an encoding error
during the printing of black-primer's help text.
Also symlinks aren't well supported on Windows so let's just use
includes which actually work because we now use MyST :D
* Add optional uvloop import
- If we find `uvloop` in the env for black, blackd or black-primer, let's try and use it
- Add a uvloop extra install
Fixes #2257
Test:
- Add ci job to install black[uvloop] and run a primer run with uvloop
- Only with latest python (3.9)
- Will be handy to compare runtimes as a very unofficial benchmark
* Remove tox install
* Add to CHANGES/news
Resolves #2168 by disabling the insertion of a " " when the docstring is entirely empty.
Note that this PR is focussed only on the case of empty docstrings. In particular this does not make any changes to the behaviour that a " " is inserted if a non-empty docstring begins with the quoting character. That is, black still prefers:
""" "something" """
to:
""""something" """
and that:
""""Something""""
is not a legal docstring.
This commit creates a Frequently Asked Questions document for our users
to read. Hopefully they actually read it too. Items included are:
Black's non-API, AST safety, style stability, file discovery, Flake8
disagreements and Python 2 support. Hopefully I've got the answers
down in general.
Commit history before merge:
* Create FAQ
* Address feedback
* Move to single markdown file
* Minor wording improvements
* Add changelog entry
* Solved Problem with non-alphabetical .gitignore files
When the .gitignore file in the user's project directory contained non-alphabetical
characters (Japanese, Korean, Chinese, etc.), nothing worked and this
weird message was printed in the console ('cp949' is the encoding for Korean characters
in this case). It even blocked VSCode's formatting from working. This commit
solves the problem.
Traceback (most recent call last):
  File "c:\users\username\anaconda3\envs\project-name\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\username\anaconda3\envs\project-name\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\username\anaconda3\envs\project-name\Scripts\black.exe\__main__.py", line 7, in <module>
  File "c:\users\username\anaconda3\envs\project-name\lib\site-packages\black\__init__.py", line 1056, in patched_main
    main()
  File "c:\users\username\anaconda3\envs\project-name\lib\site-packages\click\core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "c:\users\username\anaconda3\envs\project-name\lib\site-packages\click\core.py", line 782, in main
    rv = self.invoke(ctx)
  File "c:\users\username\anaconda3\envs\project-name\lib\site-packages\click\core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "c:\users\username\anaconda3\envs\project-name\lib\site-packages\click\core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "c:\users\username\anaconda3\envs\project-name\lib\site-packages\click\decorators.py", line 21, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "c:\users\username\anaconda3\envs\project-name\lib\site-packages\black\__init__.py", line 394, in main
    stdin_filename=stdin_filename,
  File "c:\users\username\anaconda3\envs\project-name\lib\site-packages\black\__init__.py", line 445, in get_sources
    gitignore = get_gitignore(root)
  File "c:\users\username\anaconda3\envs\project-name\lib\site-packages\black\files.py", line 122, in get_gitignore
    lines = gf.readlines()
UnicodeDecodeError: 'cp949' codec can't decode byte 0xb0 in position 13: illegal multibyte sequence
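The approach that stuck (per the history below) is simply reading the file as UTF-8. A minimal sketch, assuming pathspec is used to build the ignore spec:
```
from pathlib import Path

from pathspec import PathSpec

def get_gitignore(root: Path) -> PathSpec:
    """Return a PathSpec matching gitignore content, if present."""
    gitignore = root / ".gitignore"
    lines = []
    if gitignore.is_file():
        # Read explicitly as UTF-8 so a non-ASCII .gitignore doesn't blow up
        # under locale encodings such as cp949.
        with gitignore.open(encoding="utf-8") as gf:
            lines = gf.readlines()
    return PathSpec.from_lines("gitwildmatch", lines)
```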
* Made .gitignore File Reader Detect Its Encoding
* Revert "Made .gitignore File Reader Detect Its Encoding"
This reverts commit 6c3a7ea42b5b1e441cc0026c8205d1cee68c1bba.
* Revert "Solved Problem with non-alphabetical .gitignore files"
This reverts commit b0100b5d91c2f5db544a60f34aafab120f0aa458.
* Made .gitignore Reader Open the File with Auto Encoding Detecting
https://docs.python.org/3.8/library/tokenize.html#tokenize.open
* Revert "Made .gitignore Reader Open the File with Auto Encoding Detecting"
This reverts commit 50dd80422938649ccc8c7f43aac752f9f6481779.
* Made .gitignore Reader Use UTF-8
* Updated CHANGES.md for #2229
* Updated CHANGES.md for #2229
* Update CHANGES.md
* Update CHANGES.md
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
The isort configuration currently in the Black code style document is
duplicated in Using Black with other tools document. I think it would
be better to consolidate information and simply link to the tool guide,
mentioning the easy profile in the original document.
I changed the link from isort PyPI page to Black's docs on isort
because for users it could be better to see the Black docs on why that
configuration is necessary and what isort is from Black's perspective.
Why? The default in Prettier 2.0 was
[changed](https://prettier.io/docs/en/options.html#end-of-line) from
`auto` to `LF`. This makes development on Windows awkward, because
every file is marked with changes both by Prettier and then by Git
regardless of repository line ending settings, making committing harder
than it should be.
---
Aside from that: I noticed that running pre-commit manually seems to add
line endings to symlink files, but they disappear when actually committing.
Don't know if that's a known.. quirk..(?) or not.
---
Commit history before merge:
* Make Prettier preserve line ending type
* Move options to .prettierrc
Commit history before merge:
Black now respects .gitignore files at all levels, not only the root/.gitignore file
(applying .gitignore rules like git does).
* Fix: typo
* Fix: respect .gitignore files in all levels.
* Add: CHANGELOG note.
* Fix: TypeError: unsupported operand type(s) for +: 'NoneType' and 'PathSpec'
* Update docs.
* Fix: no parent .gitignore
* Add a comment since the if expression is a bit hard to understand
* Update tests - cover the no-parent-.gitignore case.
* Use main's Pipfile.lock instead
The original changes in Pipfile.lock are whitespace only. The changes
turned the JSON file's indentation from 4 to 2. Effectively this
happened: `json.dumps(json.loads(old_pipfile_lock), indent=2) + "\n"`.
Just using main's Pipfile.lock instead of undoing the changes because
1) I don't know how to do that easily and quickly, and 2) there's a
merge conflict.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
* Merge remote-tracking branch 'upstream/main' into i1730 …
conflicts for days ay?
It appears sqlalchemy has recently reformatted their project with
Black 21.5b1.
Most of our dependencies have a lower bound and creating a test
environment with the oldest acceptable dependencies runs the full
Black test suite just fine. The only exception to this is aiohttp-cors.
It's unbounded, and everything from the oldest version 0.1.0 until 0.4.0 breaks the
test suite in such an old environment.
Failure with 0.1.0:
```
tests/test_blackd.py:10: in <module>
    import blackd
testenv/lib/python3.8/site-packages/blackd/__init__.py:12: in <module>
    import aiohttp_cors
testenv/lib/python3.8/site-packages/aiohttp_cors/__init__.py:29: in <module>
    from .urldispatcher_router_adapter import UrlDistatcherRouterAdapter
testenv/lib/python3.8/site-packages/aiohttp_cors/urldispatcher_router_adapter.py:27: in <module>
    class UrlDistatcherRouterAdapter(RouterAdapter):
testenv/lib/python3.8/site-packages/aiohttp_cors/urldispatcher_router_adapter.py:32: in UrlDistatcherRouterAdapter
    def route_methods(self, route: web.Route):
E AttributeError: module 'aiohttp.web' has no attribute 'Route'
```
For 0.2.0:
```
tests/test_blackd.py:10: in <module>
    import blackd
testenv/lib/python3.8/site-packages/blackd/__init__.py:12: in <module>
    import aiohttp_cors
testenv/lib/python3.8/site-packages/aiohttp_cors/__init__.py:27: in <module>
    from .cors_config import CorsConfig
testenv/lib/python3.8/site-packages/aiohttp_cors/cors_config.py:24: in <module>
    from .urldispatcher_router_adapter import UrlDistatcherRouterAdapter
testenv/lib/python3.8/site-packages/aiohttp_cors/urldispatcher_router_adapter.py:27: in <module>
    class UrlDistatcherRouterAdapter(AbstractRouterAdapter):
testenv/lib/python3.8/site-packages/aiohttp_cors/urldispatcher_router_adapter.py:32: in UrlDistatcherRouterAdapter
    def route_methods(self, route: web.Route):
E AttributeError: module 'aiohttp.web' has no attribute 'Route'
```
For 0.3.0:
```
ERROR: Cannot install aiohttp-cors==0.3.0 and aiohttp==3.6.0 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested aiohttp==3.6.0
aiohttp-cors 0.3.0 depends on aiohttp<=0.20.2 and >=0.18.0
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies
```
- Test and Primer don't run for documentation-only changes since it's
unnecessary, eating up cycles and slowing down CI since these
workflows eat up the 20 max workers limit quite easily!
- Documentation Build runs all of the time now since quite a bit of the
content depends on Black's code so even a simple 1-file change in
src/black/__init__.py may break the docs build. It's not like this is
a costly workflow anyway.
Fuzz is still running on all changes because with fuzzing, the more the
better in general. 6 or 7 jobs on a documentation only commit is much
better than 27/28 jobs anyway :p
I also found an error in our bug report issue template :)
We've depended on Click 7.x ever since we broke CI systems across the
world (oops lol) and flake8-mypy was purged a fair bit back: #1867
Also remove the primer tests import in tests/test_black.py because it's
annoying when just trying to actually target tests/test_black.py tests.
`pytest -k test_black.py` doesn't do what you expect due to that import.
* Add stable tag process to release process documentation
- Add reasoning + step commands
* Bah - I ran the linter but forgot to commit
* Update docs/contributing/release_process.md
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
# See also: https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/configuring-issue-templates-for-your-repository#configuring-the-template-chooser
# This is the default and blank issues are useful so let's keep 'em.
blank_issues_enabled: true
contact_links:
  - name: Chat on Python Discord
    url: https://discord.gg/RtVdv86PrH
    about: |
      User support, questions, and other lightweight requests can be
      handled via the #black-formatter text channel we have on Python
(echo "Please add '(#${{ github.event.pull_request.number }})' change line to CHANGES.md" && \
(echo "Please add '(#${{ github.event.pull_request.number }})' change line to CHANGES.md (or if appropriate, ask a maintainer to add the 'skip news' label)" && \
Currently, _Black_ uses the issue tracker for bugs, feature requests, proposed style
modifications, and general user support. Each of these issues has to be triaged so it
can eventually be resolved somehow. This document outlines the triaging process and
also the current guidelines and recommendations.
```{tip}
If you're looking for a way to contribute without submitting patches, this might be
the area for you. Since _Black_ is a popular project, its issue tracker is quite busy
and always needs more attention than is available. While triage isn't the most
glamorous or technically challenging form of contribution, it's still important.
For example, we would love to know whether that old bug report is still reproducible!
You can easily get started by reading over this document and then responding to issues.
If you contribute enough and have stayed for a long enough time, you may even be
given Triage permissions!
```
## The basics
_Black_ gets a whole bunch of different issues, ranging from bug reports to user
support issues. To triage is to identify, organize, and kickstart the issue's journey
through its lifecycle to resolution.
More specifically, to triage an issue means to:
- identify what type and categories the issue falls under
- confirm bugs
- ask questions / for further information if necessary
- link related issues
- provide the first initial feedback / support
Note that triage is typically the first response to an issue, so don't fret if the issue
doesn't make much progress after initial triage. The main goal of triaging is to prepare
the issue for future more specific development or discussion, so _eventually_ it will be
resolved.
The lifecycle of a bug report or user support issue typically goes something like this:
1. _the issue is waiting for triage_
2. **identified** - has been marked with a type label and other relevant labels, more
details or a functional reproduction may still be needed (and therefore should be
marked with `S: needs repro` or `S: awaiting response`)
3. **confirmed** - the issue can be reproduced and necessary details have been provided
4. **discussion** - initial triage has been done and now the general details on how the
issue should be best resolved are being hashed out
5. **awaiting fix** - no further discussion on the issue is necessary and a resolving PR
is the next step
6. **closed** - the issue has been resolved, reasons include:
- the issue couldn't be reproduced
- the issue has been fixed
- duplicate of another pre-existing issue or is invalid
For enhancement, documentation, and style issues, the lifecycle looks very similar but
the details are different:
1. _the issue is waiting for triage_
2. **identified** - has been marked with a type label and other relevant labels
3. **discussion** - the merits of the suggested changes are currently being discussed; a
PR would be acceptable but would be at significant risk of being rejected
4. **accepted & awaiting PR** - it's been determined the suggested changes are OK and a
PR would be welcomed (`S: accepted`)
5. **closed** - the issue has been resolved, reasons include:
- the suggested changes were implemented
- it was rejected (due to technical concerns, ethos conflicts, etc.)
- duplicate of a pre-existing issue or is invalid
**Note**: documentation issues don't use the `S: accepted` label currently since they're
less likely to be rejected.
## Labelling
We use labels to organize, track progress, and help effectively divvy up work.
Our labels are divided up into several groups identified by their prefix:
- **T - Type**: the general flavor of issue / PR
- **C - Category**: areas of concern, ranging from bug types to project maintenance
- **F - Formatting Area**: like C but for formatting specifically
- **S - Status**: what stage of resolution is this issue currently in?
- **R - Resolution**: how / why was the issue / PR resolved?
We also have a few standalone labels:
- **`good first issue`**: issues that are beginner-friendly (and will show up in GitHub
banners for first-time visitors to the repository)
- **`help wanted`**: complex issues that need a fair bit of work to make progress (these
also show up in various GitHub pages)
- **`skip news`**: for PRs that are trivial and don't need a CHANGELOG entry (and skips
the CHANGELOG entry check)
```{note}
We do use labels for PRs, in particular the `skip news` label, but we aren't that
rigorous about it. Just follow your judgement on what labels make sense for the
specific PR (if any even make sense).
```
## Projects
For more general and broad goals, we use projects to track work. Some may be long-term
projects with no true end (e.g. the "Amazing documentation" project) while others may be
more focused and have a definite end (like the "Getting to beta" project).
```{note}
To modify GitHub Projects you need the [Write repository permission level or higher](https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-permission-levels-for-an-organization#repository-access-for-each-permission-level).
```
## Closing issues
Closing an issue signifies the issue has reached the end of its life, so issues should be
closed with care. The following are the general recommendations for each type of issue.
Note that these are only guidelines, and if your judgement says otherwise, it's totally
cool to go with that instead.
For most issues, closing the issue manually or automatically after a resolving PR is
ideal. For bug reports specifically, if the bug has already been fixed, try to check in
with the issue opener that their specific case has been resolved before closing. Note
that we close issues as soon as they're fixed in the `main` branch. This doesn't
necessarily mean they've been released yet.
Design and enhancement issues should also be closed when it's clear the proposed change
won't be implemented, whether that was determined after a lot of discussion or because it
simply goes against _Black_'s ethos. If such an issue turns heated, closing and locking
is acceptable if it's severe enough (although checking in with the core team is probably
a good idea).
User support issues are best closed by the author, or when it's clear the issue has been
resolved one way or another.
Duplicates and invalid issues should always be closed since they serve no purpose and
add noise to an already busy issue tracker. That said, be careful to make sure an issue
truly is a duplicate and not just very similar before labelling and closing it as one.
## Common reports
Some issues are frequently opened, like issues about _Black_ formatted code causing E203
messages. Even though these issues are probably heavily duplicated, they still require
triage, sucking up valuable time that could be spent on other things (although they usually
skip most of their lifecycle since they're closed on triage).
Here are some of the most common issues along with pre-made responses you can use:
### "The trailing comma isn't being removed by Black!"
```text
Black used to remove the trailing comma if the expression fit on a single line, but this was changed by #826 and #1288. Now a trailing comma tells Black to always explode the expression. This change was made mostly for the cases where you _know_ a collection or whatever will grow in the future. Having it always exploded as one element per line reduces diff noise when adding elements. Before the "magic trailing comma" feature, you couldn't anticipate a collection's growth reliably since collections that fit on one line were ruthlessly collapsed regardless of your intentions. One of Black's goals is reducing diff noise, so this was a good pragmatic change.
So no, this is not a bug, but an intended feature. Anyway, [here's the documentation](https://github.com/psf/black/blob/master/docs/the_black_code_style.md#the-magic-trailing-comma) on the "magic trailing comma", including the ability to skip this functionality with the `--skip-magic-trailing-comma` option. Hopefully that helps solve the possible confusion.
```
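For quick reference when replying (an illustrative sketch only, assuming Black's default
settings, i.e. without `--skip-magic-trailing-comma`), this is the behaviour the response
describes:

```python
# No trailing comma and the collection fits on one line: Black collapses it.
collapsed = ["triage", "review"]

# A "magic" trailing comma tells Black to keep one element per line,
# even though the collection would otherwise fit on one line.
exploded = [
    "triage",
    "review",
]
```

Both snippets are already stable under Black, so they can be pasted as-is when
demonstrating the two outcomes.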
### "Black formatted code is violating Flake8's E203!"
```text
Hi,
This is expected behaviour; please see the documentation regarding this case (emphasis
mine):
> PEP 8 recommends to treat : in slices as a binary operator with the lowest priority, and to leave an equal amount of space on either side, **except if a parameter is omitted (e.g. ham[1 + 1 :])**. It recommends no spaces around : operators for “simple expressions” (ham[lower:upper]), and **extra space for “complex expressions” (ham[lower : upper + offset])**. **Black treats anything more than variable names as “complex” (ham[lower : upper + 1]).** It also states that for extended slices, both : operators have to have the same amount of spacing, except if a parameter is omitted (ham[1 + 1 ::]). Black enforces these rules consistently.
> This behaviour may raise E203 whitespace before ':' warnings in style guide enforcement tools like Flake8. **Since E203 is not PEP 8 compliant, you should tell Flake8 to ignore these warnings**.
```
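For reference when replying (not part of the canned response above; the names below are
just illustrative), this is the kind of line that triggers the warning:

```python
items = list(range(10))
offset, limit = 2, 5

# Black's output for a "complex" slice expression: spaces around the colon.
# Flake8's default configuration reports the space before ":" as E203, so
# projects running both tools typically add E203 to Flake8's ignore list
# (e.g. `extend-ignore = E203` in the Flake8 configuration).
chunk = items[offset + 1 : offset + limit]
```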
*Black* can be integrated into many environments, providing a better and smoother experience. Documentation for integrating *Black* with a tool can be found for the
following areas:
- :doc:`Editor / IDE <./editors>`
- :doc:`GitHub Actions <./github_actions>`
- :doc:`Source version control <./source_version_control>`
Editors and tools not listed will require external contributions.
Patches welcome! ✨ 🍰 ✨
Any tool can pipe code through *Black* using its stdio mode (just
`use \`-\` as the file name <https://www.tldp.org/LDP/abs/html/special-chars.html#DASHREF2>`_).
The formatted code will be returned on stdout (unless ``--check`` was passed). *Black*
will still emit messages on stderr but that shouldn't affect your use case.
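As a rough sketch of how a tool might drive this mode (assuming ``black`` is installed and
available on ``PATH``; the input string is just an example):

.. code-block:: python

    import subprocess

    # Pipe unformatted code to Black on stdin ("-" as the file name) and read
    # the formatted result back from stdout.
    result = subprocess.run(
        ["black", "-"],
        input="x  =  1\n",
        capture_output=True,
        text=True,
        check=True,
    )
    print(result.stdout, end="")  # prints "x = 1"

Status and error messages go to stderr, as noted above, so stdout contains only the
formatted code.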
This can be used for example with PyCharm's or IntelliJ's