"dependency-groups" is the mechanism for storing package requirements in `pyproject.toml`, recommended for formatting tools (see https://packaging.python.org/en/latest/specifications/dependency-groups/ )
This change allows the Black action to also look in those locations when determining the version of black to install.
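For reference, a dependency group that pins Black might look like this (the group name and version are made up for illustration; the format follows the dependency-groups specification):
```toml
[dependency-groups]
# "lint" is an arbitrary example group name; the action scans groups like this
# for a "black" requirement to decide which version to install.
lint = [
    "black==24.8.0",
]
```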
* [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/pre-commit/mirrors-mypy: v1.13.0 → v1.14.1](https://github.com/pre-commit/mirrors-mypy/compare/v1.13.0...v1.14.1)
* Fix wrapper's return types to be String or Text IO
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Cooper Ry Lees <me@cooperlees.com>
Fixes #4446
See https://github.com/python/cpython/issues/123821
It's possible this is too strict. We could instead do this anytime the
AST safety check fails, but it feels weird to have that happen
non-deterministically
Extend the docstring of black's `find_project_root` to mention that it ignores
`pyproject.toml` files without a `[tool.black]` section.
This is relevant because that function is also used by other python packages
that use black. I found that e.g. datamodel-code-generator [1] uses that
function and that there the ignoring of the pyproject.toml files led to
a degradation [2]. I think in that case it would be better to not use black's function
for finding the pyproject.toml, but in any case this behavior should be documented.
1: https://github.com/koxudaxi/datamodel-code-generator
2: https://github.com/koxudaxi/datamodel-code-generator/issues/2052
Co-authored-by: Michael Eliachevitch <Michael.Eliachevitch@blueyonder.com>
- Let's install black, then ask to install black with extras
- pip sees black is installed and just installs extra dependencies
Test:
- Build local container
- `docker build -t black_local .`
- Run blackd in container
- `docker run -p 45484:45484 --rm black_local blackd --bind-host 0.0.0.0`
```
cooper@home1:~/repos/black$ docker run -p 45484:45484 --rm black_local blackd --bind-host 0.0.0.0
blackd version 24.4.3.dev11+gad60e62 listening on 0.0.0.0 port 45484
INFO:aiohttp.access:10.255.255.1 [10/May/2024:14:40:36 +0000] "GET / HTTP/1.1" 405 204 "-" "curl/8.5.0"
cooper@home1:~/repos/black$ curl http://10.6.9.2:45484
405: Method Not Allowed
```
- Test version is compiled
```
cooper@home1:~/repos/black$ docker run --rm black_local black --version
black, 24.4.3.dev11+gad60e62 (compiled: yes)
Python (CPython) 3.12.3
```
Fixes #4163
* Prepare release 24.4.2
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Crashes are usually documented in the stable style portion of the
changelog. This patch doesn't affect the parser (e.g. blib2to3).
Noticed the second after I merged :-)
* Add support to style function definitions containing newlines before function stubs
* Relocated implementation for removal of newlines before function stubs with added tests for comments
---------
Co-authored-by: Shantanu <12621235+hauntsaninja@users.noreply.github.com>
On Windows the path `FoObAR` is the same as `foobar`, so `black` run
on a Windows machine could output the path to `.gitignore`
with an upper- or lower-case drive letter.
Fixes #4268
Previously we would allow whitespace changes in all strings, now
only in docstrings.
Co-authored-by: Shantanu <12621235+hauntsaninja@users.noreply.github.com>
This relates to #4015, #4161 and the behaviour of os.getcwd()
Black is a big user of pathlib and as such loves doing `.resolve()`,
since for a long time it was the only good way of getting an absolute
path in pathlib. However, this has two problems:
The first minor problem is performance, e.g. in #3751 I (safely) got rid
of a bunch of `.resolve()` which made Black 40% faster on cached runs.
The second more important problem is that always resolving symlinks
results in unintuitive exclusion behaviour. For instance, a gitignored
symlink should never alter formatting of your actual code. This kind of
thing was reported by users a few times.
In #3846, I improved the exclusion rule logic for symlinks in
`gen_python_files` and everything was good.
But `gen_python_files` isn't enough, there's also `get_sources`, which
handles user specified paths directly (instead of files Black
discovers). So in #4015, I made a very similar change to #3846 for
`get_sources`, and this is where some problems began.
The core issue was the line:
```
root_relative_path = path.absolute().relative_to(root).as_posix()
```
The first issue is that despite root being computed from user inputs, we
call `.resolve()` while computing it (likely unnecessarily). Which means
that `path` may not actually be relative to `root`. So I started off
this PR trying to fix that, when I ran into the second issue. Which is
that `os.getcwd()` (as called by `os.path.abspath` or `Path.absolute` or
`Path.cwd`) also often resolves symlinks!
```
>>> import os
>>> os.environ.get("PWD")
'/Users/shantanu/dev/black/symlink/bug'
>>> os.getcwd()
'/Users/shantanu/dev/black/actual/bug'
```
This also meant that the breakage often would not show up when given
relative paths as input.
This doesn't affect `gen_python_files` / #3846 because things are always
absolute and known to be relative to `root`.
Anyway, it looks like #4161 fixed the crash by just swallowing the error
and ignoring the file. Instead, we should just try to compute the actual
relative path. I think this PR should be quite safe, but we could also
consider reverting some of the previous changes; the associated issues
weren't too popular.
At the same time, I think there's still behaviour that can be improved
and I kind of want to make larger changes, but maybe I'll save that for
if we do something like #3952
Hopefully fixes #4205, fixes #4209, actual fix for #4077
This PR does not change any behaviour.
There have been 1-2 issues about symlinks recently. Both over- and
under-resolving can cause problems. This makes a case where we resolve more
explicit and prevents a resolved path from leaking out via the return value.
A follow up to #4024 but for `if` guards in `case` statements. I noticed this
when #4024 was made stable, and noticed I had some code that had extra parens
around the `if` guard.
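A rough before/after sketch of the intended change (hand-written, not verbatim Black output):
```python
point = (1, 2)

# Before: redundant parentheses around the `if` guard in a case clause.
match point:
    case (x, y) if (x > 0 and y > 0):
        print("first quadrant")

# After (preview): the redundant parentheses around the guard are removed.
match point:
    case (x, y) if x > 0 and y > 0:
        print("first quadrant")
```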
Fixes #2863
This is pretty desirable in a monorepo situation where you have
configuration in the root since it will mean you don't have to
reconfigure every project.
The good news for backward compatibility is that `find_project_root`
continues to stop at any git or hg root, so in all cases where the repo root
coincides with a pyproject.toml missing tool.black, we'll continue to
have the same project root as before and end up using the default config
(i.e. we're unlikely to randomly start using the user config).
The other thing we need to be a little careful about is that changing
find_project_root logic affects what `exclude` is relative to. Since we
only change in cases where there is no config, this only applies where
users were using `exclude` via command line arg (and had pyproject.toml
missing tool.black in a dir that was not repo root).
Finally, for the few who could be affected, the fix is to put an empty
`[tool.black]` in pyproject.toml
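Concretely, something like this minimal pyproject.toml is enough:
```toml
# An empty tool.black section marks this pyproject.toml as Black configuration,
# so this directory keeps being treated as the project root as before.
[tool.black]
```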
In #4096 I added a list of current preview/unstable features to the docs. I think
this is important for publicizing what's in our preview style. This PR adds an
automated test to ensure the list stays up to date in the future.
- Ensure the total cache file name length stays under 96 characters
- Hash the path only if it's too long
- Proceed normally (with a warning) if the cache can't be read
Fixes #4172
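A minimal sketch of the hashing idea described above (the constant and function names here are illustrative, not Black's actual implementation):
```python
import hashlib

MAX_CACHE_FILE_NAME_LENGTH = 96  # illustrative limit from the bullet points above


def cache_file_name(key: str) -> str:
    # Keep short names readable; hash only when the name would get too long.
    if len(key) <= MAX_CACHE_FILE_NAME_LENGTH:
        return key
    return hashlib.sha256(key.encode("utf-8")).hexdigest()
```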
This has been getting a little messy. These changes neaten things up, we
don't have to keep guarding against `self.previous_line is not None`, we
make it clearer what logic has side effects, we reduce the amount of
code that tricky `before` could touch, etc
This is a no-op change.
That function was not a good way to tell if something is a function or a
class, since it basically only worked for async functions by accident
(the parent of a suite / simple_stmt of async function body is a
funcdef).
Fixes #4043, fixes #619
These include nested functions and methods.
I think the nested function case quite clearly improves readability. I
think the method case improves consistency, adherence to PEP 8 and
resolves a point of contention.
* Add new flag for tests, --no-preview-line-length-1, to be used for test cases known to not work in preview mode with line-length=1. Also split out the three problematic cases to separate files. Removed the now-redundant file which explicitly tested preview annotations with line-length=1
* mode.preview -> preview_mode, mark pep_572_remove_parens as failing with ll1
- Broke tagging images together
- Saved only a few mins
- x86_64 build is fast, time is all spent on cross compile of arm64
- Also remove evil copy pasta ... which is nice
Was worth an attempt.
* [docker ci] Split up amd64 (x86_64) and arm64 builds
- Let's run them separately to cut down total time
- Will also more clearly show if either arch has specific problems
- Kept amd64 (x86_64) using qemu actions so if GitHub ever offers arm64 boxes it could stay working too
Fixes #3971
* Add CHANGES entry
---------
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
- Also run on macOS + Windows
- Do not install [d] dependencies as blackd is not actually run / checked
- Move to default GitHub action version - which is 3.12 today
* Make black[d] install + test run with 3.12
- With aiohttp >= 3.9.0 we can now install all dependencies with 3.12
- Add actions to run 3.12
- Lint still needs to be 3.11
Test:
- `python3.12 -m venv /tmp/tb --upgrade-deps`
- `/tmp/tb/bin/pip install tox`
- `/tmp/tb/bin/pip install .[d]`
- `/tmp/tb/bin/tox -e py312`
```
py312: OK (37.61=setup[3.98]+cmd[3.83,0.36,19.54,6.46,3.00,0.44] seconds)
congratulations :) (37.63 seconds)
```
* Move to pypy-3.9
---------
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
The second `if` cannot be true at its execution point, because it is
already covered by the first `if`. The condition
`comma.parent.type == syms.subscriptlist` always holds if
`closing.parent.type == syms.trailer` holds, because `subscriptlist`
only appears inside `trailer` in the grammar:
```
trailer: '(' [arglist] ')' | '[' subscriptlist ']' | '.' NAME
subscriptlist: (subscript|star_expr) (',' (subscript|star_expr))* [',']
```
Bracket depth is not an accurate indicator of standalone comment position inside more complex blocks because bracket depth can be virtual (in loops' and lambdas' parameter blocks) or from optional parens. Here we try to stop accumulating lines upon standalone comments in complex blocks, and try to make standalone comment processing simpler. The fundamental idea is that if we have a standalone comment, it needs to go on its own line, so we always have to split.
This is not perfect, but at least a first step.
Python does not consider f-strings to be docstrings, so we probably
shouldn't be formatting them as such
Fixes #4018
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
* Add release tool
- Add tool for release managers to use to generate commits
- I'm trying to only use stdlib so we have no dependencies ...
- Default is to change date strings in hard coded documentation files + CHANGES.md
- I write directly to files cause we have SCM to fix any screw ups ...
- We hackily convert calver to ints to sort (I'm all for better ideas here)
- If we hit a ValueError we just set to 0 for sorting - these are alpha + beta releases we can safely ignore these days
- Add new CI to only run release unittests in 3.12 only on all platforms
- Update release docs
- Checked with `mypy --strict` + ensure we are `black --preview` formatted :D
Tests:
- Run it to generate template PR
- `python3.12 release.py --debug --add-changes-template`
- Run it to clean up CHANGES.md + change version in specified doc files
```
crl-m1:black cooper$ python3.12 release.py -d
[2023-10-23 23:39:38,414] INFO: Current version detected to be 23.10.1 (release.py:221)
[2023-10-23 23:39:38,414] INFO: Next version will be 23.10.2 (release.py:222)
[2023-10-23 23:39:38,414] INFO: Cleaning up /Users/cooper/repos/black/CHANGES.md (release.py:127)
[2023-10-23 23:39:38,416] DEBUG: Finished Cleaning up /Users/cooper/repos/black/CHANGES.md (release.py:147)
[2023-10-23 23:39:38,416] INFO: Updating black version to 23.10.2 in /Users/cooper/repos/black/docs/integrations/source_version_control.md (release.py:173)
[2023-10-23 23:39:38,416] DEBUG: Finished updating black version to 23.10.2 in /Users/cooper/repos/black/docs/integrations/source_version_control.md (release.py:185)
[2023-10-23 23:39:38,416] INFO: Updating black version to 23.10.2 in /Users/cooper/repos/black/docs/usage_and_configuration/the_basics.md (release.py:173)
[2023-10-23 23:39:38,417] DEBUG: Finished updating black version to 23.10.2 in /Users/cooper/repos/black/docs/usage_and_configuration/the_basics.md (release.py:185)
```
- Add tests around some key logic
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix lints + add git to release CI
- Remove black + mypy as linting already runs it ...
- Ignore delete param to TemporaryDirectory as we can't set mypy to 3.12 :D
* Only run CI on linux/ubuntu for now
* Add lots of debug printing + directly run unittests (not via coverage)
* Overloading __str__ is bad on a TestCase
* Add more logging around git tag
* Print where git is in a step
* Rollback creating a fake black repo as we were not using it - I did plan to but I can't get it working on GitHub actions
* Do a deep checkout
* Add noqa for E701,E761 ... maybe we need this in our flake8 config now?
* Fix action to have correct workflow yaml to action on
- Also add fix to not double run when we push directly to psf/black
* Apply all of Jelle's suggestions
- Fix bug where lines ending with --> in CHANGES.md were missed for deletion ...
- Update ci to run out of scripts dir too
- Update test_tuple_calver
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
See https://pre-commit.com/#confining-hooks-to-run-at-certain-stages
> If you are authoring a tool, it is usually a good idea to provide an appropriate `stages` property. For example a reasonable setting for a linter or code formatter would be `stages: [pre-commit, pre-merge-commit, pre-push, manual]`.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
* Prepare release 23.10.1
* Update docs/usage_and_configuration/the_basics.md
Add missed version string
We need to automate or remove this from docs ... It's painful.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
---------
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
* Fix CI failing
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* docs: update CHANGES.md
* docs: fix changelog location to unreleased
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Until we get new aiohttp wheels we need to build with 3.11.
You can see an example of a fail here:
Workaround for #3919 - Will leave it open until we can move to 3.12
* fix indentation of line breaks in long type hints by adding parentheses, and remove unnecessary parentheses
* add entry in CHANGES.md, make the style change only in preview mode
Remove mentions of runtime support of Python 3.7
Runtime support of Python 3.7 was removed in
b4dca26c7d but a few mentions of it being
supported have remained until now.
Build in separate jobs. This makes it clearer if e.g. a single Python
version is failing. It also potentially gets you more parallelism.
Build everything on push to master.
Only build Linux 3.8 and 3.11 wheels on PRs.
* Fix broken url in editors.md
Update a link pointing to the Arch Linux repos.
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
The idea behind this change is that we stop looking into the previous body to determine if there should be a blank line before a function or class definition.
Input:
```python
import sys
if sys.version_info > (3, 7):
    class Nested1:
        assignment = 1
        def function_definition(self): ...
    def f1(self) -> str: ...
    class Nested2:
        def function_definition(self): ...
        assignment = 1
    def f2(self) -> str: ...
if sys.version_info > (3, 7):
    def nested1():
        assignment = 1
        def function_definition(self): ...
    def f1(self) -> str: ...
    def nested2():
        def function_definition(self): ...
        assignment = 1
    def f2(self) -> str: ...
```
Stable style
```python
import sys
if sys.version_info > (3, 7):
    class Nested1:
        assignment = 1
        def function_definition(self): ...
    def f1(self) -> str: ...
    class Nested2:
        def function_definition(self): ...
        assignment = 1
    def f2(self) -> str: ...
if sys.version_info > (3, 7):
    def nested1():
        assignment = 1
        def function_definition(self): ...
    def f1(self) -> str: ...
    def nested2():
        def function_definition(self): ...
        assignment = 1
    def f2(self) -> str: ...
```
In the stable formatting, we have a blank line sometimes, not depending on the previous statement on the same level, but on the last (potentially nested) statement in the previous body.
#2783/#3564 fixes this for classes in preview style:
```python
import sys
if sys.version_info > (3, 7):
    class Nested1:
        assignment = 1
        def function_definition(self): ...
    def f1(self) -> str: ...
    class Nested2:
        def function_definition(self): ...
        assignment = 1
    def f2(self) -> str: ...
if sys.version_info > (3, 7):
    def nested1():
        assignment = 1
        def function_definition(self): ...
    def f1(self) -> str: ...
    def nested2():
        def function_definition(self): ...
        assignment = 1
    def f2(self) -> str: ...
```
This PR additionally fixes this for function definitions:
```python
if sys.version_info > (3, 7):
    if sys.platform == "win32":
        assignment = 1
        def function_definition(self): ...
    def f1(self) -> str: ...
    if sys.platform != "win32":
        def function_definition(self): ...
        assignment = 1
    def f2(self) -> str: ...
if sys.version_info > (3, 8):
    if sys.platform == "win32":
        assignment = 1
        def function_definition(self): ...
    class F1: ...
    if sys.platform != "win32":
        def function_definition(self): ...
        assignment = 1
    class F2: ...
```
You can see the effect of this change on typeshed in https://github.com/konstin/typeshed/pull/1/files. As baseline, the preview mode changes without this PR are at https://github.com/konstin/typeshed/pull/2.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
This PR updates an assert statement that checks the bounds of a
string-slicing operation. The updated assertion provides more accurate
and informative error handling by specifically checking the relative
values of the indices and the string length.
The original assertion was essentially checking if Python's string
slicing was behaving as expected. However, it wasn't providing any
guarantees or useful information about the bounds i and j themselves.
The updated assertion checks that the indices used for slicing are
within the bounds of the string. It will throw an AssertionError if the
indices are out of bounds or if i > j, providing a more specific and
informative error.
Fix unintentionally swapped words in index.md
I think the intent was to say "large changes in formatting", because it doesn't make sense to say "large formatting in changes".
`is_type_comment` now specifically deals with general type comments for a leaf.
`is_type_ignore_comment` now handles type comments containing an ignore annotation for a leaf.
`is_type_ignore_comment_string` is used to determine if a string is an ignore type comment.
IPython is a very expensive import, like, at least 300ms. I'd also
venture that it's much more common than tokenize-rt, which is like 30ms.
I work in a repo where I use black, have IPython installed and there
happen to be a couple notebooks (that we don't want formatted). I know I
can force exclude ipynb, but this change doesn't really have a cost.
Currently the verbose logging for "Sources to be formatted" is a little
suspect in that it is a completely different code path from
`get_sources`.
This can result in bugs like https://github.com/psf/black/pull/3216#issuecomment-1213557359
and generally limits the value of these logs.
This does change the "when" of this log, but the colours help separate
it from the even more verbose logs.
Several new versions of mypyc have been released since the last upgrade, and they include some performance improvements which could make the compiled version of Black run faster.
https://mypy-lang.org/news.html
The latest version of hatch-mypyc allows being installed next to the 1.x series of mypy.
* Make phrasing for flake8 users more concise
max-line-length should be 80 with flake8-bugbear
Fixes #3716
* Re-add rationale and an explanation for
disabling E203
* Run pre-commit
Avoids a Python 3.12 deprecation warning.
Subtle difference: previously, timestamps in diff filenames had the
`+0000` separated from the timestamp by space. With this, the space is
there no more, and there is a colon, as in `+00:00`.
This patch changes the preview style so that string splitters respect the
Unicode East Asian Width[^1] property. If you are not familiar with CJK
languages, this may not be immediately clear. Let me elaborate with some
examples.
Traditionally, East Asian characters (including punctuation) have
taken up twice the space of European letters and stops when they are
rendered in a monospace typeface. Compare the following characters:
```
abcdefg.
글、字。
```
The characters on the first line are half-width, and those on the second
line are full-width. (Also note that the last character with a small
circle, the East Asian period, is also full-width.) Therefore, if we
want to prevent those full-width characters from exceeding the maximum
columns per line, we need to count their *width* rather than the number
of characters. Again, take the following characters:
```
글、字。
```
These are just 4 characters, but their total width is 8.
Suppose we want to maintain up to 4 columns per line with the following
text:
```
abcdefg.
글、字。
```
How should it be then? We want it to look like:
```
abcd
efg.
글、
字。
```
However, Black currently turns it into like this:
```
abcd
efg.
글、字。
```
It's because Black currently counts the number of characters in the line
instead of measuring their width. So, how could we measure the width?
How can we tell if a character is full- or half-width? What if half-width
characters and full-width ones are mixed in a line? That's why Unicode
defined an attribute named `East_Asian_Width`. Unicode grouped every
single character according to its width in fixed-width typesetting.
This partially addresses #1197, but only for string splitters. The other
parts need to be fixed as well in future patches.
This was implemented by copying rich's own approach to handling wide
characters: generate a table using wcwidth, check it into source
control, and use it to drive helper functions in Black's logic. This
gets us the best of both worlds: accuracy and performance (and lets us
update as per our stability policy too!).
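Black's real implementation uses the generated table described above, but the core idea can be sketched with the standard library alone (a simplification, not the actual code):
```python
import unicodedata


def display_width(text: str) -> int:
    # Count East Asian Wide ("W") and Fullwidth ("F") characters as two
    # columns and everything else as one; the real table is more nuanced.
    return sum(
        2 if unicodedata.east_asian_width(char) in ("W", "F") else 1
        for char in text
    )


assert display_width("abcdefg.") == 8
assert display_width("글、字。") == 8
```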
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Richard Si <sichard26@gmail.com>
Fixes #3506
We can't simply escape the quotes in a naked f-string when merging string groups, because backslashes are invalid.
The quotes in f-string expressions should be toggled (this is safe since quotes can't be reused).
This fix also means implicitly concatenated f-strings with different quotes can now be merged or quote-normalized by changing the quotes used in expressions. e.g.:
```diff
  raise sa_exc.UnboundExecutionError(
      "Could not locate a bind configured on "
-     f'{", ".join(context)} or this Session.'
+     f"{', '.join(context)} or this Session."
  )
```
The option is `max-line-length` with dashes, not underscores. The config option name is given in the output of `pycodestyle -h`, which can also be checked on https://pep8.readthedocs.io/en/stable/intro.html#example-usage-and-output:
```
Configuration:
The project options are read from the [pycodestyle] section of the
tox.ini file or the setup.cfg file located in any parent folder of the
path(s) being processed. Allowed options are: exclude, filename,
select, ignore, max-line-length, max-doc-length, hang-closing, count,
format, quiet, show-pep8, show-source, statistics, verbose
```
When trying to format a project from the outside, the verbose output
says that there are symbolic links that point outside of the
project, but displays the wrong project path, meaning that these
messages are false positives.
This bug is triggered when the command is executed from outside a
project on a folder inside it, causing an inconsistency between the
path to the detected project root and the relative path to the target
contents.
The fix is to normalize the target path using the project root before
processing the sources, which removes the presence of the incorrect
messages.
---
The test attempts to emulate the behavior of the CLI as closely as
possible by patching some `pathlib.Path` methods and passing certain
reference paths to the context object and `black.get_sources`.
Before the associated fix was introduced, this test failed because
some of the captured files reported the presence of a symlink due to
an incorrectly formatted path. The test also asserts that only a single
file is reported as ignored, which is part of the expected behavior.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
Revert deleted documentation on setting up Black using IntelliJ
external tool or file watcher utilities. These are still worth keeping
because some people might not want to use a third-party plugin or
install Blackd's extra dependencies.
Co-authored-by: Richard Si <sichard26@gmail.com>
Co-authored-by: Jordan Ephron <JEphron@users.noreply.github.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
* Organize vim plugin section with headers to separate out Installation, Usage, and Troubleshooting for readability and easy linking
* Add missing plugin configuration options, with current defaults
* Add installation note for Arch Linux, now that the plugin is shipped with the python-black package (ref: https://bugs.archlinux.org/task/73024)
* Fix vim-plug specification to follow stable releases. Moving the same tag is an antipattern that doesn't re-resolve with vim-plug, see this discussion for more detail (https://github.com/junegunn/vim-plug/pull/720\#issuecomment-1105829356). Per vim-plug's maintainer's recommendation, use the 'tag' key instead with a shell wildcard. Wildcard should be '*.*.*' as that follows Black's versioning detailed here (https://black.readthedocs.io/en/latest/contributing/release_process.html\#cutting-a-release) and doesn't include current alpha releases.
* Do not move docker `latest_release` tag for Pre-Releases
- When we do a pre-release let's not move the latest_release tag
- This tag should only move on official real releases
Fixes #3453
* Make it prettier - TIL we format our yaml
* Remove separate 3.11 CI now deps support 3.11
- We can run everything now like all other stable versions of Python
- test in a 3.11 venv: `/tmp/tb/bin/tox -e py311,ci-py311`
```
py311: OK (28.99=setup[7.90]+cmd[5.29,0.66,6.94,6.08,1.89,0.24] seconds)
ci-py311: OK (30.33=setup[3.20]+cmd[3.66,0.31,17.43,4.60,0.90,0.23] seconds)
congratulations :) (59.35 seconds)
```
* Add to CHANGES.md
* Add fuzz run in 3.11
The bug is in the `get_leaves_inside_matching_brackets` on the third line below:
```python
assert xxxxxxxxx.xxxxxxxxx.xxxxxxxxx(
    xxxxxxxxx
).xxxxxxxxxxxxxxxxxx(), (
    "xxx {xxxxxxxxx} xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
)
```
Including the invisible paren, the third line is `).xxxxxxxxxxxxxxxxxx()), (`, which has a matched pair and then an unmatched closing paren afterwards. This PR ensures the returned leaves are actually matched.
Fixes #3414.
Currently, empty and whitespace-only files (with or without newlines) are
not modified. In some discussions (issues and pull requests) consensus
was to reformat whitespace-only files to empty or single-character
files, preserving line endings when possible. With that said, this
commit introduces the following behaviors:
* Empty files are left as is
* Whitespace-only files (no newline) reformat into empty files
* Whitespace-only files (1 or more newlines) reformat into a single
newline character
To implement these changes, we moved the initial check at
`format_file_contents` that raises `NothingChanged` if the source
(with no whitespace) is an empty string. In the case of *.ipynb
files, `format_ipynb_string` checks a similar condition and removes
whitespace. In the case of Python files, `format_str_once` includes a
check on the output that returns the correct newline character if
possible or an empty string otherwise.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Apply .gitignore files considering their location
When a .gitignore file contains the special rule to ignore every
subfolder content (`*/*`) and the file is located in a subfolder
relative to where the command is executed (root), the rule is
incorrectly applied and ignores every file at the same level of the
.gitignore file.
The reason for this is that the `gitignore` variable accumulates the
rules found in each .gitignore while traversing files and directories
recursively. This makes sense and, in general, works as expected. The
problem is that the gitignore rules are applied using the relative
path from root to the target directory as a reference. This is the cause
of the bug.
The implemented solution keeps track of every .gitignore file found
while traversing the targets and the absolute location of each
.gitignore file. Then, when matching files to the .gitignore rules,
compare each set of rules with the appropriate relative path to the
candidate target file.
To make this possible, we changed the single `gitignore` object with a
dictionary of similar objects, where the corresponding key is the
absolute path to the folder that contains that .gitignore file. This
required changing the signature of the `get_sources` function. Also, we
introduce an `is_ignored` function that compares a file with every set
of rules. Finally, some tests required an update to pass the gitignore
object in the new format.
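A simplified sketch of that matching logic (not the exact code; `pathspec` is the library Black already uses for .gitignore matching):
```python
from pathlib import Path
from typing import Dict

from pathspec import PathSpec


def is_ignored(path: Path, gitignore_dict: Dict[Path, PathSpec]) -> bool:
    # Each .gitignore only applies to files under its own directory, and the
    # match must use the path relative to that directory.
    for gitignore_dir, spec in gitignore_dict.items():
        try:
            relative = path.absolute().relative_to(gitignore_dir)
        except ValueError:
            continue
        if spec.match_file(str(relative)):
            return True
    return False
```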
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Test .gitignore with `*/*` is applied correctly
The test contains three cases: 1) when the .gitignore with the special
rule to ignore every subfolder and its contents (*/*) is in the root,
2) when the file is inside a subfolder relative to root (nested), and
3) when the target folder contains the .gitignore and root is a parent
folder of the target. In all of these cases, we compare the files that
are visible by Black with a known list of paths containing the
expected values.
Before the fix introduced in the previous commit, these tests failed
when the .gitignore file was nested (second case). Now, the test is
passed for all cases.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Update CHANGES.md
Add entry about fixed bug and changes introduced: ignore files by
considering the location of each .gitignore file and the relative path
of each target
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Small refactor to improve code readability
These changes are small improvements to improve code readability:
rename a variable to a more descriptive name (from `exclude_is_None`
to `using_default_exclude`), use a better syntax to include the type
annotation for `gitignore` variable (from typing comment to
Python-style typing annotation), and replace an if-else block with a
single dictionary definition (in this case, we need to compare keys
instead of values, meaning that the change works)
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Make nested function a top-level function
The function to match a given path with every discovered .gitignore
file does not need to be a nested function and can be a top-level
function. The arguments did not change, but the naming of local
variables was improved for readability.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
When passing multiple src directories, the root gitignore was only
applied to the first processed source. The reason is that, in the
first source, exclude is `None`, but then the value gets overridden by
`re_compile_maybe_verbose(DEFAULT_EXCLUDES)`, so in the next iteration
where the source is a directory, the condition is not met and sets the
value of `gitignore` to `None`.
To fix this problem, we store a boolean indicating if `exclude` is
`None` and set the value of `exclude` to its default value if that's
the case. This makes sure that the flow enters the correct condition on
following iterations and also keeps the original value if the condition
is not met.
Also, the value of `gitignore` is initialized as `None` and overridden
if necessary. The value of `root_gitignore` is always calculated to
avoid using additional variables (at the small cost of additional
computations).
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
Provide a configuration parameter to the Vim plugin which will allow the
plugin to skip setting up a virtualenv. This is useful when there is a
system installation of black (e.g. from a Linux distribution) which the
user prefers to use.
Using a virtualenv remains the default.
- Fixes #3308
* Add option to skip the first line in source file
This commit adds a CLI option to skip the first line in the source
files, just like the CPython command line allows [1]. By enabling the
flag, using `-x` or `--skip-source-first-line`, the first line is
removed temporarily while the remaining contents are formatted. The
first line is added back before returning the formatted output.
[1]: https://docs.python.org/dev/using/cmdline.html#cmdoption-x
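Conceptually the flag does something like this (a simplified sketch using Black's public `format_str` API, not the actual implementation):
```python
import black


def format_skipping_first_line(src: str) -> str:
    # Hold back the first line (which may not even be valid Python), format
    # the rest, then re-attach the untouched first line.
    first_newline = src.find("\n") + 1
    header, body = src[:first_newline], src[first_newline:]
    return header + black.format_str(body, mode=black.Mode())
```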
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Add tests for `--skip-source-first-line` option
When the flag is disabled (default), black formats the entire source
file, as in every line. On the other hand, if the flag is enabled, by
using `-x` or `--skip-source-first-line`, the first line is retained
while the rest of the source is formatted and then is added back.
These tests use an empty Python file that contains invalid syntax in
its first line (`invalid_header.py`, at `miscellaneous/`). First,
Black is invoked without enabling the flag which should result in an
exit code different than 0. When the flag is enabled, Black is
expected to return a successful exit code and the header is expected
to be retained (even if it's not valid Python syntax)
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Support skip source first line option for blackd
The recently added option can be added as an acceptable header for
blackd. The arguments are passed in such a way that using the new
header will activate the skip source first line behaviour as expected
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Add skip source first line option to blackd docs
The new option can be passed to blackd as a header. This commit
updates the blackd docs to include the new header.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Update CHANGES.md
Include the new Black option to skip the first line of source code in
the configuration section
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Update skip first line test including valid syntax
Including valid Python syntax helps us make sure that the file is still
actually valid after skipping the first line of the source file (which
contains invalid Python syntax)
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Skip first source line at `format_file_in_place`
Instead of skipping the first source line at `format_file_contents`,
do it before. This allows us to find the correct newline and encoding
on the actual source code (everything that's after the header).
This change is also applied at Blackd: take the header before passing
the source to `format_file_contents` and put the header back once we
get the formatted result.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Test output newlines when skipping first line
When skipping the first line of source code, the reference newline must
be taken from the second line of the file instead of the first one, in
case the file mixes more than one kind of newline character
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Test that Blackd also skips first line correctly
Similarly to the Black tests, we first check that Blackd fails when
the first line is invalid Python syntax and then check that the result
is the expected one when the flag is activated
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Use the content encoding to decode the header
When decoding the header to put it back at the top of the contents of
the file, use the same encoding used in the content. This should be a
better "guess" that using the default value
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
Jupyter creates a checkpoint file every single time you create an .ipynb
file, and then it updates the checkpoint file every single time you
manually save your progress for the initial .ipynb. These checkpoints
are stored in a directory named `.ipynb_checkpoints`.
Co-authored-by: Batuhan Taskaya <isidentical@gmail.com>
This makes the location more explicit which hopefully makes the PR
process smoother for other first time contributors.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
To run the formatter on Jupyter Notebooks, Black must be installed
with an extra dependency (`black[jupyter]`). This commit adds an
optional argument to install Black with this dependency when using the
official GitHub Action [1]. To enable the formatter on Jupyter
Notebooks, just add `jupyter: true` as an argument. Feature requested
at [2].
[1]: https://black.readthedocs.io/en/stable/integrations/github_actions.html
[2]: https://github.com/psf/black/issues/3280
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
This implements PEP 621, obviating the need for `setup.py`, `setup.cfg`,
and `MANIFEST.in`. The build backend Hatchling (of which I am a
maintainer in the PyPA) is now used as that is the default in the
official Python packaging tutorial. Hatchling is available on all the
major distribution channels such as Debian, Fedora, and many more.
## Python support
The earliest supported Python 3 version of Hatchling is 3.7, therefore
I've also set that as the minimum here. Python 3.6 is EOL and other
build backends like flit-core and setuptools also dropped support.
Python 3.6 accounted for 3-4% of downloads in the last month.
## Plugins
Configuration is now completely static with the help of 3 plugins:
### Readme
hynek's hatch-fancy-pypi-readme allows for the dynamic construction of
the readme which was previously coded up in `setup.py`. Now it's simply:
```toml
[tool.hatch.metadata.hooks.fancy-pypi-readme]
content-type = "text/markdown"
fragments = [
{ path = "README.md" },
{ path = "CHANGES.md" },
]
```
### Versioning
hatch-vcs is currently just a wrapper around setuptools-scm (which
despite the legacy naming is actually now decoupled from setuptools):
```toml
[tool.hatch.version]
source = "vcs"
[tool.hatch.build.hooks.vcs]
version-file = "src/_black_version.py"
template = '''
version = "{version}"
'''
```
### mypyc
hatch-mypyc offers many benefits over the existing approach:
- No need to manually select files for inclusion
- Avoids the need for the current CI workaround for https://github.com/mypyc/mypyc/issues/946
- Intermediate artifacts (like `build/`) from setuptools and mypyc
itself no longer clutter the project directory
- Runtime dependencies required at build time no longer need to be
manually redeclared as this is a built-in option of Hatchling
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Previously _Black_ produced invalid code because the `# fmt: on` is used on a different block level.
While _Black_ requires `# fmt: off` and `# fmt: on` to be used at the same block level, incorrect usage shouldn't cause crashes.
The formatting behavior this PR introduces is, the code below the initial `# fmt: off` block level will be turned off for formatting, when `# fmt: on` is used on a different level or there is no `# fmt: on`. This also matches the current behavior when `# fmt: off` is used at the top-level without a matching `# fmt: on`, it turns off formatting for everything below `# fmt: off`.
- Fixes #2567
- Fixes #3184
- Fixes #2985
- Fixes #2882
- Fixes #2232
- Fixes #2140
- Fixes #1817
- Fixes #569
Bumps cibuildwheel from 2.8.1 to 2.10.0 which has 3.11 building enabled
by default. Unfortunately mypyc errors out on 3.11:
src/black/files.py:29:9: error: Name "tomllib" already defined (by an import) [no-redef]
... so we have to also hide the fallback import of tomli on older 3.11
alphas from mypy[c].
Make sure `gcc` is installed in the build env
The mypyc build requires `gcc` to be installed even if it's being built with `clang`, otherwise `clang` fails to find `libgcc`.
These two paragraphs were tucked away at the end of the section, after
the diversion on backslashes. I nearly missed the first paragraph and
opened a nonsense issue, and I think the second belongs higher up with
it too.
Fix a crash when formatting some dicts with parenthesis-wrapped long
string keys. When LL[0] is an atom string, we need to check the atom
node's siblings instead of LL[0] itself, e.g.:
```
dictsetmaker
  atom
    STRING '"This is a really long string that can\'t be expected to fit in one line and is used as a nested dict\'s key"'
  /atom
  COLON ':'
  atom
    LSQB ' ' '['
    listmaker
      STRING '"value"'
      COMMA ','
      STRING ' ' '"value"'
    /listmaker
    RSQB ']'
  /atom
  COMMA ','
/dictsetmaker
```
* Move 311 tests to install aiohttp without C extensions
- Configure tox to install aiohttp without extensions
- i.e. use `AIOHTTP_NO_EXTENSIONS=1` for pip install
- This allows us to reenable blackd tests that use aiohttp testing helpers etc.
- Had to ignore `cgi` module deprecation warning
- Filed issue for aiohttp to fix: https://github.com/aio-libs/aiohttp/issues/6905
Test:
- `/tmp/tb/bin/tox -e 311`
* Fix formatting + linting
* Add latest aiohttp for loop fix + Try to exempt deprecation warning but failed - will ask for help
* Remove unnecessary warning ignore
Co-authored-by: Cooper Ry Lees <me@wcooperlees.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
This is deprecated since aiohttp 4.0. If it doesn't exist just define a
no-op decorator that does nothing (after the other aiohttp imports
though!). By doing this, it's safe to ignore the DeprecationWarning
without needing to require the latest aiohttp once they remove
`@middleware`.
We've decided to a) convert stable back into a branch and b) to update
it immediately as part of the release process. We may as well automate
it. And about going back to a branch ...
Git tags are not the right tool, at all[^1]. They come with the
expectation that they will never change. Things will not work as
expected if they do change, doubly so if they change regularly. Once
you pull stable from the remote and it's copied in your local
repository, no matter how many times you run git pull you'll never see
it get updated automatically. Your only recourse is to delete the tag
via `git tag -d stable` before pulling.
This gets annoying really quickly since stable is supposed to be the
solution for folks "who want to move along as Black developers deem
the newest version reliable."[^2] See this comment for how this impacts
users using our Vim plugin[^3]. It also affects us developers[^4]. If
you have stable locally, once we cut a new release and update the stable
tag, a simple `git pull` / `git fetch` will not pull down the updated
stable tag. Unless you remember to delete stable before pulling, stable
will become stale and useless.
You can argue this is a good thing ("people should explicitly opt into
updating stable"), but IMO it does not match user expectations nor
developer expectations[^5]. Especially since not all our integrations
that use stable are bound by this security measure, for example our
GitHub Action (since it does a clean fetch of the repository every time
it's used). I believe consistency would be good here.
Finally, ever since we switched to a tag, we've been facing issues with
ReadTheDocs not picking up updates to stable unless we force a rebuild.
The initial rebuild on the stable update just pulls the commit the tag
previously pointed to. I'm not sure if switching back to a branch will
fix this, but I'd wager it will.
[^1]: https://git-scm.com/docs/git-tag#_on_re_tagging
[^2]: https://black.readthedocs.io/en/stable/contributing/release_process.html#moving-the-stable-tag
[^3]: https://github.com/psf/black/issues/2503#issuecomment-1196357379
[^4]: In fairness, most folks working on Black probably don't use the
`stable` ref anyway, especially us maintainers who'd know what is
the latest version by heart, but it'd still be nice to make it
usable for local dev though.
[^5]: Also what benefit does a `stable` ref have over explicit version
tags like `22.6.0`? If you're going to opt into some odd pin
mechanism, might as well use explicit version tags for clarity
and consistency.
- Formalise release cadence guidelines
- Overhaul release steps to be easier to follow and more thorough
- Reorder changelog template to something more sensible
- Update release automation docs to reflect recent improvements (notably
the addition of in-repo mypyc wheel builds)
Co-authored-by: Felix Hildén <felix.hilden@gmail.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Solves https://github.com/psf/black/issues/2598 where Black wouldn't
use .gitignore at folder/.gitignore if you ran `black folder` for
example.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Adds parentheses around implicit string concatenations when they're inside
a list, set, or tuple, except when it's the only element and there's no trailing
comma.
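Roughly, the intent is something like this (a hand-written illustration; actual Black output may differ in details):
```python
# Before: an implicit concatenation hiding a missing comma in a collection
files = [
    "very/long/path/to/the/first/file.txt"
    "accidentally/concatenated/with/this/one.txt",
    "another/file.txt",
]

# After (preview): the concatenation is wrapped in parentheses, which makes
# the missing comma much easier to spot
files = [
    (
        "very/long/path/to/the/first/file.txt"
        "accidentally/concatenated/with/this/one.txt"
    ),
    "another/file.txt",
]
```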
Looking at the order of the transformers here, we need to "wrap in
parens" before string_split runs. So my solution is to introduce a
"collaboration" between StringSplitter and StringParenWrapper where the
splitter "skips" the split until the wrapper adds the parens (and then
the line after the paren is split by StringSplitter) in another pass.
I have also considered an alternative approach, where I tried to add a
different "string paren wrapper" class, and it runs before string_split.
Then I found out it requires a different do_transform implementation
than StringParenWrapper.do_transform, since the latter assumes it runs
after the delimiter_split transform. So I stopped researching that
route.
Originally function calls were also included in this change, but given
that missing commas should usually result in a runtime error and the scary
amount of changes this causes on downstream code, they were removed in
later revisions.
os.cpu_count() can return None (sounds like a super arcane edge case
though) so the type annotation for the `workers` parameter of
`black.main` is wrong. This *could* technically cause a runtime
TypeError since it'd trip one of mypyc's runtime type checks so we
might as well fix it.
Reading the documentation (and cross-checking with the source code),
you are actually allowed to pass None as `max_workers` to
`concurrent.futures.ProcessPoolExecutor`. If it is None, the pool
initializer will simply call os.cpu_count() [^1] (defaulting to 1 if it
returns None [^2]). It'll even round down the worker count to a level
that's safe for Windows.
... so theoretically we don't even need to call os.cpu_count()
ourselves, but our Windows limit is 60 (unlike the stdlib's 61) and I'd
prefer not accidentally reintroducing a crash on machines with many,
many CPU cores.
[^1]: https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor
[^2]: a372a7d653/Lib/concurrent/futures/process.py (L600)
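The resulting logic is roughly the following (a sketch; names are illustrative):
```python
import os
import sys
from concurrent.futures import ProcessPoolExecutor
from typing import Optional

DEFAULT_WORKERS: Optional[int] = os.cpu_count()  # may be None on exotic platforms


def make_executor(workers: Optional[int]) -> ProcessPoolExecutor:
    # Fall back to os.cpu_count() ourselves, defaulting to 1 if it returns
    # None, and keep the conservative Windows cap of 60 described above.
    worker_count = workers if workers is not None else DEFAULT_WORKERS or 1
    if sys.platform == "win32":
        worker_count = min(worker_count, 60)
    return ProcessPoolExecutor(max_workers=worker_count)
```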
Loading .gitignore and compiling the exclude regex can take more than
15ms. We shouldn't and don't need to pay this cost if we're simply
formatting files given on the command line directly.
I would've loved to lazily import pathspec, but the patch won't be clean
until the file collection and discovery logic is refactored first.
Co-authored-by: Fabio Zadrozny <fabiofz@gmail.com>
`black.reformat_many` depends on a lot of slow-to-import modules. When
formatting simply a single file, the time paid to import those modules
is totally wasted. So I moved `black.reformat_many` and its helpers
to `black.concurrency` which is now *only* imported if there's more
than one file to reformat. This way, running Black over a single file
is snappier
Here are the numbers before and after this patch running `python -m
black --version`:
- interpreted: 411 ms +- 9 ms -> 342 ms +- 7 ms: 1.20x faster
- compiled: 365 ms +- 15 ms -> 304 ms +- 7 ms: 1.20x faster
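The shape of the change, greatly simplified (illustrative names, not the real signatures):
```python
from pathlib import Path
from typing import List


def reformat(paths: List[Path]) -> None:
    # Only the multi-file path pays for the heavy, multiprocessing-related
    # imports; `black --version` and single-file runs never touch them.
    if len(paths) > 1:
        from concurrent.futures import ProcessPoolExecutor  # deferred import

        with ProcessPoolExecutor() as pool:
            list(pool.map(str, paths))  # placeholder for real per-file work
    else:
        for path in paths:
            str(path)  # placeholder for formatting a single file in-process
```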
Co-authored-by: Fabio Zadrozny <fabiofz@gmail.com>
There are a number of places this behaviour could be patched, for
instance, it's quite tempting to patch it in `get_sources`. However
I believe we generally have the invariant that project root contains all
files we want to format, in which case it seems prudent to keep that
invariant.
This also improves the accuracy of the "sources to be formatted" log
message with --stdin-filename.
Fixes GH-3207.
Updates action.yml to use the alternative $GITHUB_ACTION_PATH variable
instead of the original ${{ github.action_path }} which caused issues
with bash on the Windows runners. This removes the need for a Python
subprocess to call the main.py script.
Fixes #2734: a standalone comment causes strings to be merged into one that is far too long (and requires two passes to do so).
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
- Had to exempt blackd tests for now due to aiohttp
- Skip by using `sys.version_info` tuple
- aiohttp does not compile in 3.11 yet - refer to #3230
- Add a deadsnakes ubuntu workflow to run 3.11-dev to ensure we don't regress
- Have it also format ourselves
Test:
- `tox -e 311`
Co-authored-by: Cooper Ry Lees <me@wcooperlees.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
uR is not a legal string prefix, so this test breaks (AssertionError:
cannot use --safe with this file; failed to parse source file AST:
invalid syntax) if changed to a case in which the file is modified. I've
changed the last test to have u alone, and added an R to the test above
instead.
* makes install available for all users in docker image
moves the installation path from /root/.local to a
virtualenv. this way we still get the lightweight
multistage build without excluding non-root users.
* adds changelog entry for docker-image fix
A changelog entry has been added under the Integration
subheader
* changes dockerfile to use the venv activate script
we are now using the inbuilt venv activate script, as well
as explicitly mentioning the binary location in the entrypoint
cmd.
Co-authored-by: Nicolò <nicolo.intrieri@spinforward.it>
Co-authored-by: Cooper Lees <me@cooperlees.com>
Building executables without any testing is quite sketchy, let's at
least verify they won't crash on startup and format Black's own
codebase.
Also replaced "binaries" with "executables" since it's clearer and
won't be confused with mypyc.
Finally, I added colorama so all Windows users can get colour.
Error logs are emitted often (they happen when Black's cache
directory is created after blib2to3 tries to write its cache) and cause
issues to be filed by users who think Black isn't working correctly.
These errors are expected for now and aren't a cause for concern so
let's remove them to stop worrying users (and new issues from being
opened). We can improve the blib2to3 caching mechanism to write its
cache at the end of a successful command line invocation later.
As mentioned in GH-3185, when using Black as a Vim plugin, especially
automatically on save, the plugin's messages can be confusing, as
nothing indicates that they come from Black.
... in the middle of an expression or code block by adding a missing return.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
When the Leaf node with `# fmt: skip` is a NEWLINE inside a `suite`
Node, the nodes to ignore should be from the siblings of the parent
`suite` Node.
There is also a special case for the ASYNC token, where it expands
to the grandparent Node where the ASYNC token is.
This fixes GH-2646, GH-3126, GH-2680, GH-2421, GH-2339, and GH-2138.
The former was a regression I introduced a long time ago. To avoid
changing the stable style too much, the regression is only fixed if
--preview is enabled
Annoyingly enough, as we currently always enforce a second format pass if
changes were made, there's no good way to prove the existence of the
docstring quote normalization instability issue. For posterity, here's
one failing example:
```
--- source
+++ first pass
@@ -1,7 +1,7 @@
def some_function(self):
- ''''<text here>
+ """ '<text here>
<text here, since without another non-empty line black is stable>
- '''
+ """
pass
--- first pass
+++ second pass
@@ -1,7 +1,7 @@
def some_function(self):
- """ '<text here>
+ """'<text here>
<text here, since without another non-empty line black is stable>
"""
pass
```
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
* Move to explicitly creating a new loop
- >= 3.10 adds a warning that `get_event_loop` will not automatically create a loop
- Move to explicit API
Test:
- `python3.11 -m venv --upgrade-deps /tmp/tb`
- `/tmp/tb/bin/pip install -e .`
- Install deps and no blackd as aiohttp + yarl still can't build with 3.11
- https://github.com/aio-libs/aiohttp/issues/6600
- `export PYTHONWARNINGS=error`
```
cooper@l33t:~/repos/black$ /tmp/tb/bin/black .
All done! ✨🍰✨
44 files left unchanged.
```
Fixes #3110
* Add to CHANGES.md
* Fix a cooper typo yet again
* Set default asyncio loop to our explicitly created one + unset on exit
* Update CHANGES.md
Fix my silly typo.
Co-authored-by: Thomas Grainger <tagrain@gmail.com>
Co-authored-by: Cooper Ry Lees <me@wcooperlees.com>
Co-authored-by: Thomas Grainger <tagrain@gmail.com>
Doing so is invalid. Note this only fixes the preview style since the
logic putting closing docstring quotes on their own line if they violate
the line length limit is quite new.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Otherwise they'd be deleted which was a regression in 22.1.0 (oops! my
bad!). Also type comments are now tracked in the AST safety check on all
compatible platforms to error out if this happens again.
Overall the line rewriting code has been rewritten to do "the right
thing (tm)", I hope this fixes other potential bugs in the code (fwiw I
got to drop the bugfix in blib2to3.pytree.Leaf.clone since now bracket
metadata is properly copied over).
Fixes #2873
* Recommend using BlackConnect in IntelliJ IDEs
* IntelliJ IDEs integration docs: improve formatting
* Add changelog for recommending BlackConnect
* IntelliJ IDEs integration docs: improve formatting
* Apply suggestions from code review
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
* Fix indentation
* Apply italic to Black name
Consequently with other places in the document
* Move CHANGELOG entry to Unreleased section
* IntelliJ IDEs integration docs: bring back a point with formatting a file
* IntelliJ IDEs integration docs: fix extra whitespace and linebreak
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Covers GH-2926, GH-2990, GH-2991, and GH-3035.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
* Add run_self environment in tox
* Add run_self task as part of the lint CI flow
* Remove hard coded sources list
* Remove black from pre-commit
Co-authored-by: Cooper Lees <me@cooperlees.com>
We just got someone on Discord who was confused because the command as
written caused their shell to try to do command expansion.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
I realized we don't have a FAQ entry about this, let's change that so
compiled: yes/no doesn't surprise as many people :)
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
This is a tricky one as await is technically an expression and therefore
in certain situations requires brackets for operator precedence.
However, the vast majority of await usage is just await some_coroutine(...)
and similar in format to return statements. Therefore this PR removes
redundant parens around these await expressions.
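An illustrative before/after of that change (the exact output may differ in edge cases):
```
async def some_coroutine() -> int:
    return 1


# before
async def main() -> int:
    return await (some_coroutine())


# after: the redundant parentheses around the awaited call are gone
async def main() -> int:
    return await some_coroutine()
```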
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Allows us to better control placement of return annotations by:
a) removing redundant parens
b) moving very long type annotations onto their own line
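Roughly the kind of rewrites meant by (a) and (b), as a hedged illustration rather than black's exact output:
```
# (a) before: redundant parens around the return annotation
def head(xs: list[int]) -> (int):
    return xs[0]


# (a) after
def head(xs: list[int]) -> int:
    return xs[0]


# (b) a very long return annotation can be wrapped onto its own line
def merge(a: dict[str, int], b: dict[str, int]) -> (
    dict[str, int]
):
    return {**a, **b}
```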
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
The 8.0.x series renamed its "die on LANG=C" function and the 8.1.x
series straight up deleted it.
Unfortunately this makes it hard for this test to type check cleanly, so
we'll just lint with click 8.1+ (the pre-commit hook configuration was
changed mostly to evict any now-unsupported mypy environments).
aiohttp.test_utils.unittest_run_loop was deprecated since aiohttp 3.8
and aiohttp 4 (which isn't a thing quite yet) removes it. To maintain
compatibility with the full range of versions we declare to support,
test_blackd.py will now define a no-op replacement if it can't be
imported.
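A sketch of what such a no-op replacement can look like (assumed shape, not necessarily the exact code in test_blackd.py):
```
try:
    from aiohttp.test_utils import unittest_run_loop
except ImportError:
    # aiohttp 4 removed the decorator; it is no longer needed, so a pass-through
    # keeps the decorated tests working on both old and new versions.
    def unittest_run_loop(func, *args, **kwargs):
        return func
```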
Also, mypy is painfully slow to use without a cache, so let's re-enable it.
Now PRs will run two diff-shades jobs, "preview-changes" which formats
all projects with preview=True, and "assert-no-changes" which formats
all projects with preview=False. The latter also fails if any changes
were made.
Pushes to main will only run "preview-changes".
Also the workflow_dispatch feature was dropped since it was
complicating everything for little gain.
It was wrongly placed in preview features and always formats power operators; it was added in #2789, but no check gating the formatting was added along with it.
If a vim/neovim user wishes to suppress loading the vim plugin by
setting g:load_black in their vimrc (for me, Arch Linux automatically
adds the plugin to Neovim's RTP, even though I'm not using it), the
current location of the test comes after a call to has('python3'). This
adds, in my tests, between 35 and 45 ms to Vim load time (which I know
isn't a lot but it's also unnecessary). Moving the call to
`exists('g:load_black')` to before the call to `has('python3')` removes
this unnecessary test and speeds up loading.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
- use `Black` directly: the commands an autocommand runs are Ex commands, so no
execute or colon is necessary.
- use an `augroup` (best practice) to prevent duplicate autocommands from
hindering performance.
I did this manually for the last few releases and I think it's going to be
helpful in the future too. Unfortunately this adds a little more work during
the release (sorry @cooperlees).
This change will also improve the merge conflict situation a bit, because
changes to different sections won't merge conflict.
For the last release, the sections were in a kind of random order. In the
template I put highlights and "Style" first because they're most important
to users, and alphabetized the rest.
It was causing stability issues because the first pass
could cause a "magic trailing comma" to appear, meaning
that the second pass might get a different result. It's
not critical.
Some things format differently (with extra parens)
It turns out "simple_stmt" isn't that simple: it can contain multiple
statements separated by semicolons. Invisible parenthesis logic for
arithmetic expressions only looked at the first child of simple_stmt.
This causes instability in the presence of semicolons, since the next
run through the statement following the semicolon will be the first
child of another simple_stmt.
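As a hedged illustration of the problematic shape (not one of the original failing cases):
```
# One simple_stmt with two children separated by a semicolon; only the first used
# to get the invisible-parenthesis treatment, so the second could be formatted
# differently once it becomes the first child on the next pass.
x = 1; y = 1111111111 + 2222222222 * 3333333333 - 4444444444
```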
I believe this along with #2572 fix the known stability issues.
Fixes #2651. Fixes #2754. Fixes #2518. Fixes #2321.
This adds a test that lists a number of cases of unstable formatting
that we have seen in the issue tracker. Checking it in will ensure
that we don't regress on these cases.
At the moment, it's just a source of spurious CI failures and busywork
updating the configuration file.
Unlike diff-shades, it is run across many different platforms and
Python versions, but that doesn't seem essential. We already run unit
tests across platforms and versions.
I chose to leave the code around for now in case somebody is using it,
but CI will no longer run it.
Since the power operator almost always has the highest binding power in expressions, it's often more readable to hug it to its operands. The main exception is when its operands are non-trivial, in which case the power operator will not hug; the rule for this is the following:
> For power ops, an operand is considered "simple" if it's only a NAME, numeric CONSTANT, or attribute access (chained attribute access is allowed), with or without a preceding unary operator.
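Some illustrative cases under that rule (simplified; exact output may differ in corner cases):
```
import math

x = 5

# simple operands (a name, a numeric constant, an attribute chain, optionally with
# a unary operator): the power operator hugs them
a = x**2
b = x**-2
c = math.pi**2

# a function call is not a "simple" operand, so the spaces stay
d = abs(x) ** 2
```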
Fixes GH-538.
Closes GH-2095.
diff-shades results: https://gist.github.com/ichard26/ca6c6ad4bd1de5152d95418c8645354b
Co-authored-by: Diego <dpalma@evernote.com>
Co-authored-by: Felix Hildén <felix.hilden@gmail.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Closes #2360: I'd like to make passing SRC or `--code` mandatory and the arguments mutually exclusive. This will change our (partially already broken) promises of CLI behavior, but I'll comment below.
- State we're now stable and that we'll uphold our formatting changes as per policy
- Link to The Black Style doc.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
This PR is intended to have no change to semantics.
This is in preparation for #2784 which will likely introduce more logic
that depends on `current_line.depth`.
Inlining the subtraction gets rid of offsetting and makes it much easier
to see what the result will be.
Fixes #2506
``XDG_CACHE_HOME`` does not work on Windows. To allow users to set a custom cache directory on all systems, I added a new environment variable ``BLACK_CACHE_DIR`` to set the cache directory. The default remains the same, so users will only notice a change if that environment variable is set.
The specific use case I have for this is that I need to run black in different processes at the same time. There is a race condition with the cache pickle file that made this rather difficult. A custom cache directory will remove the race condition.
I created a ``get_cache_dir`` function in order to test the logic. This is only used to set the ``CACHE_DIR`` constant.
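A minimal sketch of that logic, assuming a platformdirs-style default (not the exact implementation):
```
import os
from pathlib import Path

from platformdirs import user_cache_dir


def get_cache_dir() -> Path:
    # BLACK_CACHE_DIR overrides the platform-specific default cache location.
    default = user_cache_dir("black")
    return Path(os.environ.get("BLACK_CACHE_DIR", default))


CACHE_DIR = get_cache_dir()
```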
- Add Furo dependency to docs/requirements.txt
- Drop a fair bit of theme configuration
- Fix the toctree declarations in index.rst
- Move stuff around as Furo isn't 100% compatible with Alabaster
Furo was chosen as it provides excellent mobile support, user
controllable light/dark theming, and is overall easier to read
Fixes #2742.
This PR adds the ability to configure additional python cell magics. This
will allow formatting cells in Jupyter Notebooks that are using custom (python)
magics.
Black now echoes, when `--verbose` is enabled by the user, the location it
determined as the root path for the project, according to which it chooses
the SRC paths, i.e. the absolute path of the project is `{root}/{src}`.
Closes #1880
*blib2to3's support was left untouched because: 1) I don't want to touch
parsing machinery, and 2) it'll allow us to provide a more useful error
message if someone does try to format Python 2 code.
I believe it would be useful to split up the long list of changes a bit more.
Specific changes:
- Removed the entry for new flake8 plugins; this is purely internal and not of interest to users
- Put regex in the packaging section
- New section for Jupyter Notebook
- New section for Python 3.10, mostly match/case stuff
error: cannot format <string>: ('EOF in multi-line statement', (2, 0))
▲ before ▼ after
error: cannot format <string>: Cannot parse: 2:0: EOF in multi-line statement
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
We were no longer using it since GH-2644 and GH-2654. This should hopefully
make Black easier to use as there's one less compiled dependency.
The core team also doesn't have to deal with the surprisingly frequent fires
the regex packaging setup goes through.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
* Treat functions/classes in blocks as if they're nested
One curveball is that we still want two preceding newlines before blocks
that are probably logically disconnected. In other words:
```
if condition:
    def foo():
        return "hi"
# <- aside: this is the goal of this commit
else:
    def foo():
        return "cya"


# <- the two newlines spacing here should stay
# since this probably isn't related
with open("db.json", encoding="utf-8") as f:
    data = f.read()
```
Unfortunately that means we have to special case specific clause types
instead of just being able to look for a colon leaf. The hack used here
is to check whether we're adding preceding newlines for a standalone or
dependent clause. "Standalone" being a clause that doesn't need another
clause to be valid (e.g. if) and vice versa.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
This removes all but one usage of the `regex` dependency. Tricky bits included:
- A bug in test_black.py where we were incorrectly using a character range. Fix also submitted separately in #2643.
- `tokenize.py` was the original use case for regex (#1047). The important bit is that we rely on `\w` to match anything valid in an identifier, and `re` fails to match a few characters as part of identifiers. My solution is to instead match all characters *except* those we know to mean something else in Python: whitespace and ASCII punctuation. This will make Black able to parse some invalid Python programs, like those that contain non-ASCII punctuation in the place of an identifier, but that seems fine to me.
- One import of `regex` remains, in `trans.py`. We use a recursive regex to parse f-strings, and only `regex` supports that. I haven't thought of a better fix there (except maybe writing a manual parser), so I'm leaving that for now.
My goal is to remove the `regex` dependency to reduce the risk of breakage due to dependencies and make life easier for users on platforms without wheels.
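For the `\w` point in the second bullet, the idea looks roughly like this sketch (illustrative, not black's actual tokenizer pattern):
```
import re
import string

# Match runs of characters that are neither whitespace nor ASCII punctuation
# ("_" is kept since it's valid in identifiers). This over-accepts some invalid
# programs, as noted above, but doesn't under-match real identifiers.
_PUNCT = string.punctuation.replace("_", "")
NAME_LIKE = re.compile(rf"[^\s{re.escape(_PUNCT)}]+")

assert NAME_LIKE.fullmatch("café_über")  # non-ASCII identifier characters still match
```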
The recent 2021.4 release of pyinstaller-hooks-contrib now contains a
built-in hook for platformdirs. Manually specifying the hidden import
arg should no longer be needed.
Fixes https://github.com/psf/black/issues/2627: a non-Python cell magic such as `%%writeline` can legitimately contain "incorrect" indentation; however, this causes `tokenize-rt` to return an error. To avoid this, `validate_cell` should detect cell magics early (just like it detects `TransformerManager` transformations).
Test added too, in the shape of a "badly indented" `%%writefile` within `test_non_python_magics`.
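A simplified sketch of the early-exit idea (the names, the magic list, and the exception are placeholders, not black's internals):
```
PYTHON_CELL_MAGICS = frozenset({"capture", "time", "timeit", "python", "python3"})


class NothingChanged(Exception):
    """Signal that the cell should be left untouched."""


def validate_cell(src: str) -> None:
    # Bail out early on non-Python cell magics such as %%writefile, whose bodies
    # may legitimately contain "incorrect" indentation.
    if src.startswith("%%") and src.split()[0][2:] not in PYTHON_CELL_MAGICS:
        raise NothingChanged
```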
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: Marco Edward Gorelli <marcogorelli@protonmail.com>
In Python 3.10 the exception generated by creating a process pool on
a Python build that doesn't support this is now `NotImplementedError`
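A hedged sketch of what the fallback can look like (not necessarily black's exact code):
```
from concurrent.futures import Executor, ProcessPoolExecutor, ThreadPoolExecutor


def make_executor() -> Executor:
    try:
        return ProcessPoolExecutor()
    except (ImportError, NotImplementedError, OSError):
        # On builds without working multiprocessing, Python 3.10 raises
        # NotImplementedError here; fall back to a thread pool.
        return ThreadPoolExecutor()
```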
Commit history before merge:
* Fix process pool fallback on Python 3.10
* Update CHANGES.md
* Update CHANGES.md
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
The implementation of the new backtracking logic depends heavily on deepcopying the current state of the parser before seeing one of the new keywords, which by default is a very expensive operation. On my system, formatting these 3 files takes 1.3 seconds.
```
$ touch tests/data/pattern_matching_*; time python -m black -tpy310 tests/data/pattern_matching_* 19ms
All done! ✨🍰✨
3 files left unchanged.
python -m black -tpy310 tests/data/pattern_matching_* 2,09s user 0,04s system 157% cpu 1,357 total
```
which can be optimized 3X if we integrate the existing copying logic (`clone`) into the deepcopy system;
```
$ touch tests/data/pattern_matching_*; time python -m black -tpy310 tests/data/pattern_matching_* 1ms
All done! ✨🍰✨
3 files left unchanged.
python -m black -tpy310 tests/data/pattern_matching_* 0,66s user 0,02s system 147% cpu 0,464 total
```
This still might have some potential, but that would be way trickier than this initial patch.
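Conceptually, the optimization routes `copy.deepcopy` through the existing cheap copy path, roughly like this hypothetical sketch (not blib2to3's actual classes):
```
import copy


class Node:
    def __init__(self, type_, children):
        self.type = type_
        self.children = children

    def clone(self):
        # existing hand-written fast copy
        return Node(
            self.type,
            [c.clone() if isinstance(c, Node) else c for c in self.children],
        )

    def __deepcopy__(self, memo):
        # let deepcopy (used when backtracking) reuse clone() instead of the
        # generic, much slower recursive machinery
        return self.clone()


tree = Node(1, [Node(2, []), "leaf"])
backup = copy.deepcopy(tree)  # now as cheap as tree.clone()
```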
* Improve Python 2 only syntax detection
First of all this fixes a mistake I made in the Python 2 deprecation PR,
using token.* to check for print/exec statements. It turns out that
for nodes with a type value higher than 256, their numeric type isn't
guaranteed to be constant. Using syms.* instead fixes this.
Also add support for the following cases:
print "hello, world!"
exec "print('hello, world!')"
def set_position((x, y), value):
pass
try:
pass
except Exception, err:
pass
raise RuntimeError, "I feel like crashing today :p"
`wow_these_really_did_exist`
10L
* Add octal support, more test cases, and fixup long ints
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
`DEPRECATION: Python 2 support will be removed in the first stable releaseexpected in January 2022` -> `DEPRECATION: Python 2 support will be removed in the first stable release expected in January 2022`
* Update CHANGES.md for 21.10b0 release
* Update version in docs/usage_and_configuration/the_basics.md
* Also update docs/integrations/source_version_control.md ...
- Install build-essential to avoid build issues like #2568 when dependencies don't have prebuilt wheels available
- Use multi-stage build instead of trying to purge packages and cache from the image
Copying `/root/.local/` installs only black's built Python dependencies (< 20 MB),
so the image is barely larger than the python:3-slim base image.
* Prepare for Python 2 deprecation
- Use BlackRunner and .stdout in command line test
So the next commit won't break this test. This is in its own commit so
we can just revert the deprecation commit when dropping Python 2
support completely.
* Deprecate Python 2 formatting support
Existing test was actually running a full black-primer
run which could be slow. This goes from 8 seconds to
0.4 seconds on my machine.
Needed to move to top level scope to leverage the caplog
feature of pytest in order to test that the command line
was parsing the bogus arguments and dumping to stderr.
* fix: allow tests to be run from the tests/ directory
* fix: try fixing windows build with MarcoGorelli's suggestion
* Windows hotfix + better respect test's spirit
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
If the individual failures are verbose, it's useful to have
the summary at the end. Otherwise, it can be really difficult
to figure out which projects have an issue.
It currently prints both ASTs - this also
adds the line diff, making it much easier to visualize
the changes as well. Not too verbose since it's only a diff.
Fixes #2394. Eventually fixes #517.
This is essentially @pradyunsg's suggestion from #2394. I suggest that at the
same time we start the formal stability policy, we take a few other disruptive steps
and drop Python 2 and the "b" marker.
Co-authored-by: Pradyun Gedam <pradyunsg@gmail.com>
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
The main goals of this commit include:
* improving consistency on how strict the test suite is -- Jelle has
seen cases where a test did not fail due to an incomplete test setup
even though it should have
* simplifying tests for both ease of creation and reading via
parametrization and helpers
* reorganizing the test suite by grouping more tests
* dropping test suite dependencies that aren't strictly necessary
The test suite could definitely do with more refactoring, but this is a
good first pass. Anyway it would've gotten too big to review effectively
if I did continue on this PR.
Commit history before squash merge:
* Drop parameterized dep and refactor format tests
Since the test suite is already using pytest-only features we can drop
the parameterized test dependency in favour of pytest's own offering.
I also added a utility function called assert_format that makes it
even easier to verify Black formats some code correctly (a rough sketch
appears after the commit history below). We already
have great tooling if the case is very simple in test_format.py but
any sort of complication makes it hard to use. Also if you're writing
a non-standard test case, you have to be careful to include all of
the steps so issues don't go undetected. assert_format aims to
1) improve consistency, 2) avoid wasted CPU cycles, and 3) avoid
logical errors that hide issues.
Finally, quite a few tests were either moved and/or simplified with
the new setup.
* Move file collection tests
* Add assert_collected_sources helper function
Testing source collection involves a lot of repetitive boilerplate,
something that black.files.get_sources's signature does not help with.
So to cut down on boilerplate like `report=black.Report()` I added
a convenience function to tests/test_black.py which wraps
black.get_sources. Its signature is designed to be much more lax to
make it much easier to use. Somehow this leads to cutting 100 lines!
Also IMO the test cases are much easier to read since it's more
declarative than really procedural now.
* Run isort on some test files
* Move cache tests
* Use pytest-style asserts & add parametrization
* Drop now unnecessary test dependencies
*pytest-cases might be interesting for further refactoring but I
haven't been able to wrap my head around it for the time being. We
can always revisit anyway.
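As referenced above, a rough sketch of what an assert_format-style helper can look like (the real helper's signature and checks may differ):
```
import black


def assert_format(source: str, expected: str, mode: black.Mode = black.Mode()) -> None:
    result = black.format_str(source, mode=mode)
    assert result == expected, f"unexpected formatting:\n{result}"
    # formatting the output again must be a no-op, i.e. the result is stable
    assert black.format_str(result, mode=mode) == result
```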
Commit history before merge:
* Bump required aiohttp version to 3.7.4
This release includes an important security fix
(https://github.com/aio-libs/aiohttp/security/advisories/GHSA-v6wp-4m6f-gcjg) and many
other improvements.
* add changelog entry
* Let's not forget about Pipfile
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
* re-implement simple CORS middleware for blackd
* remove aiohttp-cors from setup.py
* Remove aiohttp-cors from Pipfile.lock
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
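A minimal sketch of hand-rolled CORS middleware for aiohttp in the spirit of the change above (headers and policy are illustrative, not blackd's exact ones):
```
from aiohttp import web


@web.middleware
async def cors(request, handler):
    if request.method == "OPTIONS":
        response = web.Response()  # answer preflight requests directly
    else:
        response = await handler(request)
    response.headers["Access-Control-Allow-Origin"] = "*"
    response.headers["Access-Control-Allow-Headers"] = "*"
    response.headers["Access-Control-Allow-Methods"] = "OPTIONS, POST"
    return response


app = web.Application(middlewares=[cors])
```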
Project packaging uses TOML via pyproject.toml but fails to declare the toml extra,
causing installation failures with the newer setuptools-scm 6.3.0.
Commit history before merge:
* Fix missing toml extra
Fixed breakage uncovered by setuptools-scm 6.3.0 where installation
would fail for projects that failed to mention the toml extra.
* Bump setuptools[-scm] to avoid toml extra
https://github.com/psf/black/pull/2475#issuecomment-912730714
> If you constraint greater than 6.3.0 and setuptools greater than 45
> you can skip the extra,
* Actually for safety reasons, just use the extra
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Add new platformdirs dependencies as hidden imports when creating
PyInstaller-based binaries.
platformdirs imports the module for each platform dynamically, which
PyInstaller is unable to correctly detect for packing. By adding the
modules as hidden imports, we are telling PyInstaller to include the
modules in the packaged binary.
This issue seems to have been introduced when switching to platformdirs
in #2375. Fixes #2464
Commit history before merge:
* Add hidden import to PyInstaller build
Add new platformdirs dependency as a hidden import when creating
PyInstaller based binaries.
* Only include the platformdirs for the relevant os
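For illustration, hidden imports can be passed to PyInstaller roughly like this (the entry-point path and module names are assumptions; the real build may pass them as CLI flags instead):
```
import PyInstaller.__main__

PyInstaller.__main__.run(
    [
        "src/black/__main__.py",  # hypothetical entry point
        "--hidden-import=platformdirs.unix",
        "--hidden-import=platformdirs.windows",
        "--hidden-import=platformdirs.macos",
    ]
)
```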
Draft releases don't trigger the workflows (that's good!) but since they only
Commit history before merge:
* fix: run pypi upload from published draft releases
* Fix broken task list markup in PR template
* change docker workflow to build on release publish
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
(echo "Please add '(#${{ github.event.pull_request.number }})' change line to CHANGES.md" && \
(echo "Please add '(#${{ github.event.pull_request.number }})' change line to CHANGES.md (or if appropriate, ask a maintainer to add the 'skip news' label)" && \
*Black* can be integrated into many environments, providing a better and smoother experience. Documentation for integrating *Black* with a tool can be found for the
following areas:
- :doc:`Editor / IDE <./editors>`
- :doc:`GitHub Actions <./github_actions>`
- :doc:`Source version control <./source_version_control>`
Editors and tools not listed will require external contributions.
Patches welcome! ✨ 🍰 ✨
Any tool can pipe code through *Black* using its stdio mode (just
`use \`-\` as the file name <https://www.tldp.org/LDP/abs/html/special-chars.html#DASHREF2>`_).
The formatted code will be returned on stdout (unless ``--check`` was passed). *Black*
will still emit messages on stderr but that shouldn't affect your use case.
This can be used for example with PyCharm's or IntelliJ's