* Add option to skip the first line in source file
This commit adds a CLI option to skip the first line in the source
files, just like the CPython command line allows [1]. When the flag is
enabled, via `-x` or `--skip-source-first-line`, the first line is
removed temporarily while the remaining contents are formatted. The
first line is added back before returning the formatted output.
[1]: https://docs.python.org/dev/using/cmdline.html#cmdoption-x
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Add tests for `--skip-source-first-line` option
When the flag is disabled (the default), Black formats the entire source
file, every line included. On the other hand, if the flag is enabled via
`-x` or `--skip-source-first-line`, the first line is retained while the
rest of the source is formatted and then added back.
These tests use an otherwise empty Python file whose first line contains
invalid syntax (`invalid_header.py`, at `miscellaneous/`). First,
Black is invoked without the flag, which should result in a non-zero
exit code. When the flag is enabled, Black is expected to return a
successful exit code and the header is expected to be retained (even if
it's not valid Python syntax).
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Support skip source first line option for blackd
The recently added option is now accepted as a header by blackd. The
arguments are passed in such a way that using the new header activates
the skip-source-first-line behaviour as expected.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Add skip source first line option to blackd docs
The new option can be passed to blackd as a header. This commit
updates the blackd docs to include the new header.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Update CHANGES.md
Include the new Black option to skip the first line of source code in
the configuration section
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Update skip first line test including valid syntax
Including valid Python syntax helps us make sure that the file is still
actually valid after skipping the first line of the source file (which
contains invalid Python syntax).
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Skip first source line at `format_file_in_place`
Instead of skipping the first source line at `format_file_contents`,
do it earlier. This allows us to find the correct newline and encoding
in the actual source code (everything after the header).
This change is also applied to blackd: take the header off before passing
the source to `format_file_contents` and put the header back once we
get the formatted result.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Test output newlines when skipping first line
When skipping the first line of source code, the reference newline must
be taken from the second line of the file instead of the first one, in
case the file mixes more than one kind of newline character.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Test that Blackd also skips first line correctly
Similarly to the Black tests, we first check that blackd fails when
the first line is invalid Python syntax and then check that the result
is the expected one when the flag is activated.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Use the content encoding to decode the header
When decoding the header to put it back at the top of the file
contents, use the same encoding used for the content. This should be a
better "guess" than using the default value.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
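A minimal sketch of the overall flow described in these commits; the function name and the toy formatter are illustrative assumptions, not Black's actual code, and the real implementation detects the encoding and newline from the remaining contents:
```
def format_skipping_first_line(raw: bytes, format_func) -> bytes:
    # Split the header off before formatting, then decode it with the same
    # encoding used for the rest of the contents before putting it back.
    header, _, rest = raw.partition(b"\n")
    encoding = "utf-8"  # placeholder; the real code detects encoding/newline from `rest`
    formatted = format_func(rest.decode(encoding))
    return header + b"\n" + formatted.encode(encoding)


# Toy formatter that just collapses double spaces, showing the header surviving intact:
src = b"%&!? this header is not Python\nx  =  1\n"
print(format_skipping_first_line(src, lambda code: code.replace("  ", " ")))
```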
Jupyter creates a checkpoint file every single time you create an .ipynb
file, and then it updates the checkpoint file every single time you
manually save your progress for the initial .ipynb. These checkpoints
are stored in a directory named `.ipynb_checkpoints`.
Co-authored-by: Batuhan Taskaya <isidentical@gmail.com>
Previously _Black_ produced invalid code because the `# fmt: on` was used on a different block level.
While _Black_ requires `# fmt: off` and `# fmt: on` to be used at the same block level, incorrect usage shouldn't cause crashes.
The formatting behavior this PR introduces is: when `# fmt: on` is used at a different level, or there is no `# fmt: on` at all, formatting is turned off for the code below the initial `# fmt: off` block level. This also matches the current behavior when `# fmt: off` is used at the top level without a matching `# fmt: on`: it turns off formatting for everything below the `# fmt: off`.
- Fixes #2567
- Fixes #3184
- Fixes #2985
- Fixes #2882
- Fixes #2232
- Fixes #2140
- Fixes #1817
- Fixes #569
Bumps cibuildwheel from 2.8.1 to 2.10.0 which has 3.11 building enabled
by default. Unfortunately mypyc errors out on 3.11:
src/black/files.py:29:9: error: Name "tomllib" already defined (by an import) [no-redef]
... so we have to also hide the fallback import of tomli on older 3.11
alphas from mypy[c].
Fix a crash when formatting some dicts with parenthesis-wrapped long
string keys. When LL[0] is an atom string, we need to check the atom
node's siblings instead of LL[0] itself, e.g.:
dictsetmaker
  atom
    STRING '"This is a really long string that can\'t be expected to fit in one line and is used as a nested dict\'s key"'
  /atom
  COLON ':'
  atom
    LSQB ' ' '['
    listmaker
      STRING '"value"'
      COMMA ','
      STRING ' ' '"value"'
    /listmaker
    RSQB ']'
  /atom
  COMMA ','
/dictsetmaker
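For reference, here is a reconstruction (not necessarily the exact case from the report) of the kind of input that produces a tree like the one above, with a parenthesis-wrapped long string used as a dict key:
```
d = {
    (
        "This is a really long string that can't be expected to fit in one line and is used as a nested dict's key"
    ): ["value", "value"],
}
```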
This is deprecated since aiohttp 4.0. If it doesn't exist, just define a
no-op decorator that does nothing (after the other aiohttp imports,
though!). By doing this, it's safe to ignore the DeprecationWarning
without needing to require the latest aiohttp once they remove
`@middleware`.
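A sketch of the fallback described above (the import path is an assumption, not necessarily blackd's exact code):
```
try:
    from aiohttp.web_middlewares import middleware
except ImportError:
    # aiohttp 4.0+ drops the decorator; a no-op stand-in keeps the handlers working.
    def middleware(f):
        return f
```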
Solves https://github.com/psf/black/issues/2598 where Black wouldn't
use .gitignore at folder/.gitignore if you ran `black folder` for
example.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Adds parentheses around implicit string concatenations when they're inside
a list, set, or tuple, except when the concatenation is the only element
and there's no trailing comma.
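Illustrative cases for that rule (assumed examples, not verified Black output):
```
items = [
    # multi-element list: the implicit concatenation gets wrapped in parentheses
    ("implicit " "concatenation"),
    "second element",
]

# only element and no trailing comma: left alone
only = ["implicit " "concatenation"]
```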
Looking at the order of the transformers here, we need to "wrap in
parens" before string_split runs. So my solution is to introduce a
"collaboration" between StringSplitter and StringParenWrapper where the
splitter "skips" the split until the wrapper adds the parens (and then
the line after the paren is split by StringSplitter) in another pass.
I also considered an alternative approach, where I tried to add a
different "string paren wrapper" class that runs before string_split.
Then I found out it requires a different do_transform implementation
than StringParenWrapper.do_transform, since the latter assumes it runs
after the delimiter_split transform. So I stopped researching that
route.
Originally, function calls were also included in this change, but given
that missing commas there usually result in a runtime error, and given
the scary amount of changes this caused in downstream code, they were
removed in later revisions.
os.cpu_count() can return None (sounds like a super arcane edge case
though), so the type annotation for the `workers` parameter of
`black.main` is wrong. This *could* technically cause a runtime
TypeError since it'd trip one of mypyc's runtime type checks, so we
might as well fix it.
Reading the documentation (and cross-checking with the source code),
you are actually allowed to pass None as `max_workers` to
`concurrent.futures.ProcessPoolExecutor`. If it is None, the pool
initializer will simply call os.cpu_count() [^1] (defaulting to 1 if it
returns None [^2]). It'll even round down the worker count to a level
that's safe for Windows.
... so theoretically we don't even need to call os.cpu_count()
ourselves, but our Windows limit is 60 (unlike the stdlib's 61) and I'd
prefer not to accidentally reintroduce a crash on machines with many,
many CPU cores.
[^1]: https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor
[^2]: a372a7d653/Lib/concurrent/futures/process.py (L600)
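A minimal sketch of the logic described above (not Black's exact code):
```
import os
import sys
from concurrent.futures import ProcessPoolExecutor

# os.cpu_count() may return None, hence the "or 1" guard; the worker count is
# also capped at 60 on Windows, as mentioned above.
workers = os.cpu_count() or 1
if sys.platform == "win32":
    workers = min(workers, 60)

executor = ProcessPoolExecutor(max_workers=workers)
executor.shutdown()
```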
Loading .gitignore and compiling the exclude regex can take more than
15ms. We shouldn't and don't need to pay this cost if we're simply
formatting files given on the command line directly.
I would've loved to lazily import pathspec, but the patch won't be clean
until the file collection and discovery logic is refactored first.
Co-authored-by: Fabio Zadrozny <fabiofz@gmail.com>
`black.reformat_many` depends on a lot of slow-to-import modules. When
formatting simply a single file, the time paid to import those modules
is totally wasted. So I moved `black.reformat_many` and its helpers
to `black.concurrency` which is now *only* imported if there's more
than one file to reformat. This way, running Black over a single file
is snappier.
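A generic sketch of the lazy-import pattern described here, with stand-in names (the real change only imports `black.concurrency` when more than one file is passed):
```
from __future__ import annotations


def _reformat(path: str) -> str:
    return path.upper()  # stand-in for formatting a single file


def reformat(paths: list[str]) -> list[str]:
    if len(paths) > 1:
        # Only multi-file runs pay the import cost of the concurrency machinery.
        from concurrent.futures import ProcessPoolExecutor  # stand-in for black.concurrency

        with ProcessPoolExecutor() as pool:
            return list(pool.map(_reformat, paths))
    return [_reformat(paths[0])]


if __name__ == "__main__":
    print(reformat(["a.py"]))
```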
Here are the numbers before and after this patch running `python -m
black --version`:
- interpreted: 411 ms +- 9 ms -> 342 ms +- 7 ms: 1.20x faster
- compiled: 365 ms +- 15 ms -> 304 ms +- 7 ms: 1.20x faster
Co-authored-by: Fabio Zadrozny <fabiofz@gmail.com>
There are a number of places this behaviour could be patched; for
instance, it's quite tempting to patch it in `get_sources`. However,
I believe we generally have the invariant that the project root contains
all files we want to format, so it seems prudent to keep that
invariant.
This also improves the accuracy of the "sources to be formatted" log
message with --stdin-filename.
Fixes GH-3207.
Fixes #2734: a standalone comment caused strings to be merged into one far too long string (and required two passes to do so).
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
These error logs are emitted often (they happen when Black's cache
directory is created after blib2to3 tries to write its cache) and cause
issues to be filed by users who think Black isn't working correctly.
The errors are expected for now and aren't a cause for concern, so
let's remove them to stop worrying users (and to stop new issues from
being opened). We can improve the blib2to3 caching mechanism later so it
writes its cache at the end of a successful command line invocation.
... in the middle of an expression or code block by adding a missing return.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
When the Leaf node with `# fmt: skip` is a NEWLINE inside a `suite`
Node, the nodes to ignore should be from the siblings of the parent
`suite` Node.
There is also a special case for the ASYNC token, where the search
expands to the grandparent Node containing the ASYNC token.
This fixes GH-2646, GH-3126, GH-2680, GH-2421, GH-2339, and GH-2138.
The former was a regression I introduced a long time ago. To avoid
changing the stable style too much, the regression is only fixed if
--preview is enabled
Annoyingly enough, as we currently always enforce a second format pass if
changes were made, there's no good way to prove the existence of the
docstring quote normalization instability issue. For posterity, here's
one failing example:
--- source
+++ first pass
@@ -1,7 +1,7 @@
 def some_function(self):
-    ''''<text here>
+    """ '<text here>
     <text here, since without another non-empty line black is stable>
-    '''
+    """
     pass
--- first pass
+++ second pass
@@ -1,7 +1,7 @@
 def some_function(self):
-    """ '<text here>
+    """'<text here>
     <text here, since without another non-empty line black is stable>
     """
     pass
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
* Move to explicitly creating a new loop
- Python >= 3.10 adds a warning that `get_event_loop` will not automatically create a loop
- Move to the explicit API
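A minimal sketch of the explicit-loop pattern this moves to (not Black's exact code):
```
import asyncio


async def work() -> str:
    await asyncio.sleep(0)
    return "done"


# Create the loop explicitly instead of relying on get_event_loop(), set it as
# the default, and unset/close it on exit.
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
    print(loop.run_until_complete(work()))
finally:
    asyncio.set_event_loop(None)
    loop.close()
```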
Test:
- `python3.11 -m venv --upgrade-deps /tmp/tb`
- `/tmp/tb/bin/pip install -e .`
- Install deps but not blackd, as aiohttp + yarl still can't build with 3.11
- https://github.com/aio-libs/aiohttp/issues/6600
- `export PYTHONWARNINGS=error`
```
cooper@l33t:~/repos/black$ /tmp/tb/bin/black .
All done! ✨🍰✨
44 files left unchanged.
```
Fixes #3110
* Add to CHANGES.md
* Fix a cooper typo yet again
* Set default asyncio loop to our explicitly created one + unset on exit
* Update CHANGES.md
Fix my silly typo.
Co-authored-by: Thomas Grainger <tagrain@gmail.com>
Co-authored-by: Cooper Ry Lees <me@wcooperlees.com>
Co-authored-by: Thomas Grainger <tagrain@gmail.com>
Doing so is invalid. Note this only fixes the preview style, since the
logic that puts closing docstring quotes on their own line when they
violate the line length limit is quite new.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Otherwise they'd be deleted, which was a regression in 22.1.0 (oops! my
bad!). Also, type comments are now tracked in the AST safety check on all
compatible platforms, to error out if this happens again.
Overall the line rewriting code has been rewritten to do "the right
thing (tm)". I hope this fixes other potential bugs in the code (fwiw I
got to drop the bugfix in blib2to3.pytree.Leaf.clone since now bracket
metadata is properly copied over).
Fixes #2873
This is a tricky one, as await is technically an expression and therefore
in certain situations requires brackets for operator precedence.
However, the vast majority of await usage is just `await some_coroutine(...)`
and similar in form to return statements. Therefore this PR removes
redundant parens around these await expressions.
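An illustrative before/after for this change (assumed example):
```
import asyncio


async def some_coroutine():
    return 42


# before: redundant parentheses around a plain await
async def before():
    return (await some_coroutine())


# after: the parentheses are removed
async def after():
    return await some_coroutine()


print(asyncio.run(after()))  # 42
```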
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Allows us to better control placement of return annotations by:
a) removing redundant parens
b) moving very long type annotations onto their own line
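An illustrative sketch of both behaviours (assumed examples, not verified output; the long type name is a placeholder):
```
from __future__ import annotations


# a) redundant parens around the return annotation are removed
def short_before() -> (int):
    ...


def short_after() -> int:
    ...


# b) a very long return annotation is moved onto its own line
def long_annotation(argument) -> (
    some_module.SomeReallyLongReturnTypeThatDoesNotFitOnASingleLine
):
    ...
```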
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Now PRs will run two diff-shades jobs, "preview-changes" which formats
all projects with preview=True, and "assert-no-changes" which formats
all projects with preview=False. The latter also fails if any changes
were made.
Pushes to main will only run "preview-changes"
Also the workflow_dispatch feature was dropped since it was
complicating everything for little gain.
It is incorrectly placed in preview features yet always formats the power operators: it was added in #2789, but no check gating the formatting was added along with it.
It was causing stability issues because the first pass
could cause a "magic trailing comma" to appear, meaning
that the second pass might get a different result. It's
not critical.
Some things format differently (with extra parens)
It turns out "simple_stmt" isn't that simple: it can contain multiple
statements separated by semicolons. Invisible parenthesis logic for
arithmetic expressions only looked at the first child of simple_stmt.
This causes instability in the presence of semicolons, since the next
run through the statement following the semicolon will be the first
child of another simple_stmt.
I believe this, along with #2572, fixes the known stability issues.
Since power operators almost always have the highest binding power in expressions, it's often more readable to hug them with their operands. The main exception to this is when the operands are non-trivial, in which case the power operator will not hug; the rule for this is the following:
> For power ops, an operand is considered "simple" if it's only a NAME, numeric CONSTANT, or attribute access (chained attribute access is allowed), with or without a preceding unary operator.
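For illustration (assumed examples based on the rule above):
```
radius = 3
x = 2
seq = [1, 2, 3]

# simple operands hug the operator:
area = radius**2
inverse = x**-2  # a preceding unary operator still counts as simple

# non-trivial operands (calls, arithmetic) keep the surrounding spaces:
value = (x + 1) ** len(seq)
```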
Fixes GH-538.
Closes GH-2095.
diff-shades results: https://gist.github.com/ichard26/ca6c6ad4bd1de5152d95418c8645354b
Co-authored-by: Diego <dpalma@evernote.com>
Co-authored-by: Felix Hildén <felix.hilden@gmail.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Closes #2360: I'd like to make passing SRC or `--code` mandatory and the arguments mutually exclusive. This will change our (partially already broken) promises of CLI behavior, but I'll comment below.
This PR is intended to have no change to semantics.
This is in preparation for #2784 which will likely introduce more logic
that depends on `current_line.depth`.
Inlining the subtraction gets rid of offsetting and makes it much easier
to see what the result will be.
Fixes#2506
``XDG_CACHE_HOME`` does not work on Windows. To allow users to set a custom cache directory on all systems, I added a new environment variable ``BLACK_CACHE_DIR`` to set the cache directory. The default remains the same, so users will only notice a change if that environment variable is set.
The specific use case I have for this is that I need to run Black in different processes at the same time. There is a race condition on the cache pickle file that made this rather difficult. A custom cache directory removes the race condition.
I created a ``get_cache_dir`` function in order to test the logic. It is only used to set the ``CACHE_DIR`` constant.
Fixes #2742.
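A sketch of the described helper (assumed shape; the real default cache location is unchanged and not reproduced here):
```
import os
from pathlib import Path


def get_cache_dir(default: Path) -> Path:
    # BLACK_CACHE_DIR, when set, overrides the default cache location.
    env_dir = os.environ.get("BLACK_CACHE_DIR")
    return Path(env_dir) if env_dir else default


CACHE_DIR = get_cache_dir(Path.home() / ".cache" / "black")  # placeholder default
```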
This PR adds the ability to configure additional python cell magics. This
will allow formatting cells in Jupyter Notebooks that are using custom (python)
magics.
Black will now echo the location it determined as the root path for the
project when `--verbose` is enabled, which is the root against which it
resolves the SRC paths, i.e. the absolute path of a source
is `{root}/{src}`.
Closes #1880
*blib2to3's support was left untouched because: 1) I don't want to touch
parsing machinery, and 2) it'll allow us to provide a more useful error
message if someone does try to format Python 2 code.
error: cannot format <string>: ('EOF in multi-line statement', (2, 0))
▲ before ▼ after
error: cannot format <string>: Cannot parse: 2:0: EOF in multi-line statement
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
We have not been using it since GH-2644 and GH-2654. This should hopefully
make Black easier to use, as there's one less compiled dependency.
The core team also doesn't have to deal with the surprisingly frequent fires
the regex packaging setup goes through.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
* Treat functions/classes in blocks as if they're nested
One curveball is that we still want two preceding newlines before blocks
that are probably logically disconnected. In other words:
if condition:

    def foo():
        return "hi"

    # <- aside: this is the goal of this commit

else:

    def foo():
        return "cya"


# <- the two newlines spacing here should stay
# since this probably isn't related
with open("db.json", encoding="utf-8") as f:
    data = f.read()
Unfortunately that means we have to special-case specific clause types
instead of just being able to check for a colon leaf. The hack used here
is to check whether we're adding preceding newlines for a standalone or
a dependent clause, "standalone" being a clause that doesn't need another
clause to be valid (e.g. `if`) and vice versa.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
This removes all but one usage of the `regex` dependency. Tricky bits included:
- A bug in test_black.py where we were incorrectly using a character range. Fix also submitted separately in #2643.
- `tokenize.py` was the original use case for regex (#1047). The important bit is that we rely on `\w` to match anything valid in an identifier, and `re` fails to match a few characters as part of identifiers. My solution is to instead match all characters *except* those we know to mean something else in Python: whitespace and ASCII punctuation. This will make Black able to parse some invalid Python programs, like those that contain non-ASCII punctuation in the place of an identifier, but that seems fine to me.
- One import of `regex` remains, in `trans.py`. We use a recursive regex to parse f-strings, and only `regex` supports that. I haven't thought of a better fix there (except maybe writing a manual parser), so I'm leaving that for now.
My goal is to remove the `regex` dependency to reduce the risk of breakage due to dependencies and make life easier for users on platforms without wheels.
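A simplified sketch of the tokenizer idea described in the `tokenize.py` bullet above (assumed, not the actual pattern; underscore is kept since it's valid in identifiers):
```
import re
import string

# Match runs of anything that is not whitespace or ASCII punctuation
# (underscore excepted), instead of relying on \w.
NON_NAME_CHARS = string.whitespace + string.punctuation.replace("_", "")
NAME_RUN = re.compile(rf"[^{re.escape(NON_NAME_CHARS)}]+")

print(NAME_RUN.findall("café = ١٢٣ + x_1"))  # ['café', '١٢٣', 'x_1']
```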
Fixes https://github.com/psf/black/issues/2627: a non-Python cell magic such as `%%writefile` can legitimately contain "incorrect" indentation; however, this causes `tokenize-rt` to return an error. To avoid this, `validate_cell` should detect cell magics early (just like it detects `TransformerManager` transformations).
A test has been added too, in the shape of a "badly indented" `%%writefile` within `test_non_python_magics`.
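A sketch of the early detection described above (assumed names; `NothingChanged` stands in for Black's internal signal, and the real check is limited to non-Python magics):
```
class NothingChanged(Exception):
    """Stand-in for Black's internal 'nothing to do' signal."""


def validate_cell(src: str) -> None:
    # Bail out early for cell magics so tokenize-rt never sees the
    # "badly indented" body of e.g. a %%writefile cell (simplified).
    if src.lstrip().startswith("%%"):
        raise NothingChanged
```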
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: Marco Edward Gorelli <marcogorelli@protonmail.com>
In Python 3.10, the exception generated by creating a process pool on
a Python build that doesn't support this is now `NotImplementedError`.
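A sketch of how the fallback can be handled (the fallback target and the previously caught exceptions are assumptions, not necessarily Black's exact code):
```
from concurrent.futures import Executor, ProcessPoolExecutor, ThreadPoolExecutor

executor: Executor
try:
    executor = ProcessPoolExecutor(max_workers=2)
except (ImportError, NotImplementedError, OSError):
    # Python 3.10+ raises NotImplementedError on builds without process support.
    executor = ThreadPoolExecutor(max_workers=2)
executor.shutdown()
```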
Commit history before merge:
* Fix process pool fallback on Python 3.10
* Update CHANGES.md
* Update CHANGES.md
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
The implementation of the new backtracking logic depends heavily on deepcopying the current state of the parser before seeing one of the new keywords, which by default is a very expensive operation. On my system, formatting these 3 files takes 1.3 seconds.
```
$ touch tests/data/pattern_matching_*; time python -m black -tpy310 tests/data/pattern_matching_* 19ms
All done! ✨🍰✨
3 files left unchanged.
python -m black -tpy310 tests/data/pattern_matching_* 2,09s user 0,04s system 157% cpu 1,357 total
```
which can be optimized 3X if we integrate the existing copying logic (`clone`) into the deepcopy system:
```
$ touch tests/data/pattern_matching_*; time python -m black -tpy310 tests/data/pattern_matching_* 1ms
All done! ✨🍰✨
3 files left unchanged.
python -m black -tpy310 tests/data/pattern_matching_* 0,66s user 0,02s system 147% cpu 0,464 total
```
This still might have some potential, but that would be way trickier than this initial patch.
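A minimal sketch of the optimization idea with an assumed structure (not blib2to3's actual classes): route `copy.deepcopy` through the existing, cheaper `clone()` by defining `__deepcopy__`:
```
import copy


class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

    def clone(self) -> "Node":
        return Node(self.value, [child.clone() for child in self.children])

    def __deepcopy__(self, memo) -> "Node":
        # deepcopy of the parser state now reuses the hand-written clone logic.
        return self.clone()


tree = Node("root", [Node("leaf")])
backup = copy.deepcopy(tree)  # uses clone() instead of the generic deepcopy machinery
```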
* Improve Python 2 only syntax detection
First of all, this fixes a mistake I made in the Python 2 deprecation PR:
using token.* to check for print/exec statements. It turns out that
for nodes with a type value higher than 256, their numeric type isn't
guaranteed to be constant. Using syms.* instead fixes this.
Also add support for the following cases:
print "hello, world!"
exec "print('hello, world!')"
def set_position((x, y), value):
pass
try:
pass
except Exception, err:
pass
raise RuntimeError, "I feel like crashing today :p"
`wow_these_really_did_exist`
10L
* Add octal support, more test cases, and fixup long ints
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
`DEPRECATION: Python 2 support will be removed in the first stable releaseexpected in January 2022` -> `DEPRECATION: Python 2 support will be removed in the first stable release expected in January 2022`
* Prepare for Python 2 deprecation
- Use BlackRunner and .stdout in command line test
So the next commit won't break this test. This is in its own commit so
we can just revert the deprecation commit when dropping Python 2
support completely.
* Deprecate Python 2 formatting support
If the individual failures are verbose, it's useful to have
the summary at the end. Otherwise, it can be really difficult
to figure out which projects have an issue.
* re-implement simple CORS middleware for blackd
* remove aiohttp-cors from setup.py
* Remove aiohttp-cors from Pipfile.lock
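A minimal sketch of what a re-implemented CORS middleware can look like with aiohttp (assumed, not necessarily blackd's exact code):
```
from aiohttp import web


@web.middleware
async def cors(request: web.Request, handler) -> web.StreamResponse:
    response = await handler(request)
    # Permissive CORS headers, standing in for what aiohttp-cors used to add.
    response.headers["Access-Control-Allow-Origin"] = "*"
    response.headers["Access-Control-Allow-Headers"] = "*"
    response.headers["Access-Control-Expose-Headers"] = "*"
    return response
```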
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Re. the import: the ipynb code was assuming that typing-extensions would
always be available, but that's not the case! There's an environment
marker on the requirement, meaning it won't get installed on 3.10 or
higher. The test suite didn't catch this issue since aiohttp pulls in
typing-extensions unconditionally.
This also introduces a script so we can reference the latest version in
the example pre-commit configuration in the docs without forgetting to
update it when doing a release!
Commit history before merge:
* document jupyter hook
* note minimum version
* add check for pre-commit version
* use git tag
* curl api during ci
* parse version from changes file
* fixup script
* rename variables
* Tweak the docs & magical script
* fix couple of typos
* pin additional dependencies in hook
* Add types-PyYAML to lockfile
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Implementation stolen from PR davidhalter/parso#162. Thanks parso!
I could add support for these newer syntactical constructs in the
target version detection logic, but until I get diff-shades up
and running I don't feel very comfortable adding the code.
This fixes a bug where a trailing comma would be added to a
parenthesized return annotation changing its type to a tuple.
Here's one case where this bug shows up:
```
def spam() -> (
this_is_a_long_type_annotation_which_should_NOT_get_a_trailing_comma
):
pass
```
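For contrast, a reconstruction of what the bug produced for that input (like the snippet above, the long name is a placeholder); the added trailing comma turns the annotation into a one-element tuple:
```
def spam() -> (
    this_is_a_long_type_annotation_which_should_NOT_get_a_trailing_comma,
):
    pass
```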
The root problem was that the type annotation was treated as if it were
a parameter & import list (is_body=True to linegen::bracket_split_build_line),
where a trailing comma is usually fine. Now there's another check in the
aforementioned function to make sure the body it's operating on isn't
a return annotation before actually adding a trailing comma.