`is_type_comment` now deals specifically with general type comments on a leaf.
`is_type_ignore_comment` now handles type comments containing an ignore annotation on a leaf.
`is_type_ignore_comment_string` is used to determine whether a string is a type-ignore comment.
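For illustration, here is a hedged, string-level sketch of the distinction these helpers draw (the function names below are made up; the real helpers operate on leaves and their exact logic may differ):
```python
def looks_like_type_comment(text: str) -> bool:
    # Any `# type:` comment, e.g. `# type: List[int]` or `# type: ignore`.
    return text.startswith("# type:")


def looks_like_type_ignore_comment(text: str) -> bool:
    # Only the ignore flavour, e.g. `# type: ignore` or `# type: ignore[arg-type]`.
    return text.startswith("# type: ignore")
```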
Avoids a Python 3.12 deprecation warning.
Subtle difference: previously, timestamps in diff filenames had the
`+0000` offset separated from the timestamp by a space. With this, the
space is gone and the offset contains a colon, as in `+00:00`.
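A hedged illustration of the two timestamp shapes (not the actual diff-filename code):
```python
from datetime import datetime, timezone

now = datetime.now(timezone.utc)
print(now.strftime("%Y-%m-%d %H:%M:%S %z"))        # old shape: ... 12:00:00 +0000
print(now.isoformat(sep=" ", timespec="seconds"))  # new shape: ... 12:00:00+00:00
```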
This patch changes the preview style so that string splitters respect
the Unicode East Asian Width[^1] property. If you are not familiar with CJK
languages, the point may not be immediately clear, so let me elaborate with
some examples.
Traditionally, East Asian characters (including punctuation) take up
twice as much space as European letters and stops when they are
rendered in a monospace typeface. Compare the following characters:
```
abcdefg.
글、字。
```
The characters on the first line are half-width, and those on the second line
are full-width. (Also note that the last character, the one with a small
circle, the East Asian period, is full-width.) Therefore, if we
want to prevent those full-width characters from exceeding the maximum
columns per line, we need to count their *width* rather than the number
of characters. Take the following characters again:
```
글、字。
```
These are just 4 characters, but their total width is 8.
Suppose we want to maintain up to 4 columns per line with the following
text:
```
abcdefg.
글、字。
```
How should it be then? We want it to look like:
```
abcd
efg.
글、
字。
```
However, Black currently turns it into this:
```
abcd
efg.
글、字。
```
That's because Black currently counts the number of characters in the line
instead of measuring their width. So, how do we measure the width?
How can we tell whether a character is full- or half-width? What if half-width
and full-width characters are mixed in a line? That's why Unicode
defined an attribute named `East_Asian_Width`, which classifies every
single character according to its width in a fixed-width typeface.
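The core idea can be sketched with `unicodedata` (Black itself uses a generated wcwidth table, as described below, so treat this only as an approximation):
```python
import unicodedata


def char_width(char: str) -> int:
    # Wide ("W") and Fullwidth ("F") characters occupy two columns; this sketch
    # ignores zero-width and combining characters, which a real implementation
    # must also handle.
    return 2 if unicodedata.east_asian_width(char) in ("W", "F") else 1


def str_width(text: str) -> int:
    return sum(char_width(c) for c in text)


print(str_width("abcdefg."))  # 8
print(str_width("글、字。"))  # 8 -- four characters, eight columns
```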
This partially addresses #1197, but only for string splitters. The other
parts need to be fixed in future patches as well.
This was implemented by copying rich's approach to handling wide
characters: generate a table using wcwidth, check it into source
control, and use it to drive helper functions in Black's logic. This
gets us the best of both worlds: accuracy and performance (and lets us
update as per our stability policy too!).
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Fixes #3506
We can't simply escape the quotes in a naked f-string when merging string groups, because backslashes are invalid.
The quotes in f-string expressions should be toggled (this is safe since quotes can't be reused).
This fix also means implicitly concatenated f-strings with different quotes can now be merged or quote-normalized by changing the quotes used in expressions. e.g.:
```diff
raise sa_exc.UnboundExecutionError(
"Could not locate a bind configured on "
- f'{", ".join(context)} or this Session.'
+ f"{', '.join(context)} or this Session."
)
```
When trying to format a project from the outside, the verbose output
says that there are symbolic links that point outside of the
project, but it displays the wrong project path, so these
messages are false positives.
This bug is triggered when the command is executed from outside a
project on a folder inside it, causing an inconsistency between the
path to the detected project root and the relative path to the target
contents.
The fix is to normalize the target path using the project root before
processing the sources, which removes the incorrect messages.
---
The test attempts to emulate the behavior of the CLI as closely as
possible by patching some `pathlib.Path` methods and passing certain
reference paths to the context object and `black.get_sources`.
Before the associated fix was introduced, this test failed because
some of the captured files reported the presence of a symlink due to
an incorrectly formatted path. The test also asserts that only a single
file is reported as ignored, which is part of the expected behavior.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
Co-authored-by: Jordan Ephron <JEphron@users.noreply.github.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
The bug is in `get_leaves_inside_matching_brackets`; look at the third line below:
```python
assert xxxxxxxxx.xxxxxxxxx.xxxxxxxxx(
xxxxxxxxx
).xxxxxxxxxxxxxxxxxx(), (
"xxx {xxxxxxxxx} xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
)
```
Including the invisible parens, the third line is `).xxxxxxxxxxxxxxxxxx()), (`, i.e. it has a matched pair followed by an unmatched closing paren. This PR ensures the returned leaves are actually matched.
Fixes #3414.
Currently, empty and whitespace-only files (with or without newlines) are
not modified. In some discussions (issues and pull requests) the consensus
was to reformat whitespace-only files into empty or single-character
files, preserving line endings when possible. With that said, this
commit introduces the following behaviors:
* Empty files are left as is
* Whitespace-only files (no newline) reformat into empty files
* Whitespace-only files (1 or more newlines) reformat into a single
newline character
To implement these changes, we moved the initial check at
`format_file_contents` that raises `NothingChanged` if the
whitespace-stripped source is an empty string. In the case of *.ipynb
files, `format_ipynb_string` checks a similar condition on the
whitespace-stripped source. In the case of Python files, `format_str_once`
includes a check on the output that returns the correct newline character
if possible or an empty string otherwise.
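A rough sketch of the resulting behavior (assumed helper name; not the actual code paths described above):
```python
def normalize_whitespace_only(src: str) -> str:
    # Empty files are left as is; whitespace-only files collapse to a single
    # newline if they contained any newline, otherwise to an empty string.
    if not src or src.strip():
        return src  # empty, or real content handled by normal formatting
    return "\n" if "\n" in src else ""
```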
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Apply .gitignore files considering their location
When a .gitignore file contains the special rule to ignore every
subfolder's contents (`*/*`) and the file is located in a subfolder
relative to where the command is executed (root), the rule is
incorrectly applied and ignores every file at the same level as the
.gitignore file.
The reason for this is that the `gitignore` variable accumulates the
rules found in each .gitignore while traversing files and directories
recursively. This makes sense and, in general, works as expected. The
problem is that the gitignore rules are applied using the relative
path from root to the target directory as a reference. This is the cause
of the bug.
The implemented solution keeps track of every .gitignore file found
while traversing the targets, along with the absolute location of each
.gitignore file. Then, when matching files to the .gitignore rules,
each set of rules is compared against the path of the candidate target
file relative to the appropriate location.
To make this possible, we replaced the single `gitignore` object with a
dictionary of similar objects, where the corresponding key is the
absolute path to the folder that contains that .gitignore file. This
required changing the signature of the `get_sources` function. We also
introduce an `is_ignored` function that compares a file with every set
of rules. Finally, some tests required an update to pass the gitignore
object in the new format.
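A hedged sketch of the matching step (names and shapes are assumptions, not Black's exact code):
```python
from pathlib import Path
from typing import Dict

from pathspec import PathSpec


def is_ignored(path: Path, gitignores: Dict[Path, PathSpec]) -> bool:
    # Compare the candidate file against each .gitignore, using a path relative
    # to the directory that .gitignore lives in.
    for gitignore_dir, spec in gitignores.items():
        try:
            relative = path.resolve().relative_to(gitignore_dir)
        except ValueError:
            continue  # this .gitignore does not apply outside of its own folder
        if spec.match_file(str(relative)):
            return True
    return False
```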
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Test .gitignore with `*/*` is applied correctly
The test contains three cases: 1) when the .gitignore with the special
rule to ignore every subfolder and its contents (`*/*`) is in the root,
2) when the file is inside a subfolder relative to root (nested), and
3) when the target folder contains the .gitignore and root is a parent
folder of the target. In all of these cases, we compare the files that
are visible to Black with a known list of paths containing the
expected values.
Before the fix introduced in the previous commit, these tests failed
when the .gitignore file was nested (the second case). Now, the tests
pass in all cases.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Update CHANGES.md
Add an entry about the fixed bug and the changes introduced: ignore files by
considering the location of each .gitignore file and the relative path
of each target.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Small refactor to improve code readability
These are small changes to improve code readability:
rename a variable to a more descriptive name (from `exclude_is_None`
to `using_default_exclude`), use better syntax for the type
annotation of the `gitignore` variable (from a typing comment to a
Python-style annotation), and replace an if-else block with a
single dictionary definition (in this case we need to compare keys
instead of values, so the change works).
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Make nested function a top-level function
The function that matches a given path against every discovered .gitignore
file does not need to be a nested function and can be a top-level
function. The arguments did not change, but the naming of local
variables was improved for readability.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
When passing multiple src directories, the root gitignore was only
applied to the first processed source. The reason is that for the
first source `exclude` is `None`, but then the value gets overridden by
`re_compile_maybe_verbose(DEFAULT_EXCLUDES)`, so in the next iteration
where the source is a directory the condition is not met and
`gitignore` ends up as `None`.
To fix this problem, we store a boolean indicating whether `exclude` is
`None` and set the value of `exclude` to its default value if that's
the case. This makes sure that the flow enters the correct condition on
the following iterations and also keeps the original value if the condition
is not met.
Also, the value of `gitignore` is initialized as `None` and overridden
if necessary. The value of `root_gitignore` is always calculated to
avoid using additional variables (at the small cost of additional
computation).
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Add option to skip the first line in source file
This commit adds a CLI option to skip the first line in the source
files, just like the CPython command line allows [1]. When the
flag is enabled, using `-x` or `--skip-source-first-line`, the first line is
removed temporarily while the remaining contents are formatted. The
first line is added back before returning the formatted output.
[1]: https://docs.python.org/dev/using/cmdline.html#cmdoption-x
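A minimal sketch of the idea (using `black.format_str` for illustration; the actual implementation differs, as later commits in this series explain):
```python
import black


def format_skipping_first_line(src: str, mode: black.Mode) -> str:
    header, _, rest = src.partition("\n")          # set the first line aside
    formatted = black.format_str(rest, mode=mode)  # format everything after it
    return header + "\n" + formatted               # put the untouched header back
```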
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Add tests for `--skip-source-first-line` option
When the flag is disabled (the default), black formats the entire source
file, as in every line. On the other hand, if the flag is enabled by
using `-x` or `--skip-source-first-line`, the first line is retained
while the rest of the source is formatted, and then it is added back.
These tests use an otherwise empty Python file that contains invalid syntax in
its first line (`invalid_header.py`, at `miscellaneous/`). First,
Black is invoked without enabling the flag, which should result in an
exit code different from 0. When the flag is enabled, Black is
expected to return a successful exit code and the header is expected
to be retained (even if it's not valid Python syntax).
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Support skip source first line option for blackd
The recently added option can be passed as an acceptable header to
blackd. The arguments are passed in such a way that using the new
header will activate the skip-source-first-line behaviour as expected.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Add skip source first line option to blackd docs
The new option can be passed to blackd as a header. This commit
updates the blackd docs to include the new header.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Update CHANGES.md
Include the new Black option to skip the first line of source code in
the configuration section
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Update skip first line test including valid syntax
Including valid Python syntax helps us make sure that the file is still
valid after skipping the first line of the source file (which
contains invalid Python syntax).
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Skip first source line at `format_file_in_place`
Instead of skipping the first source line at `format_file_contents`,
do it before. This allows us to find the correct newline and encoding
from the actual source code (everything that's after the header).
This change is also applied to blackd: take the header before passing
the source to `format_file_contents` and put the header back once we
get the formatted result.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Test output newlines when skipping first line
When skipping the first line of source code, the reference newline must
be taken from the second line of the file instead of the first one, in
case the file mixes more than one kind of newline character.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Test that Blackd also skips first line correctly
Similarly to the Black tests, we first check that blackd fails when
the first line is invalid Python syntax, and then check that the result
is as expected when the flag is activated.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
* Use the content encoding to decode the header
When decoding the header to put it back at the top of the file
contents, use the same encoding used for the content. This should be a
better "guess" than using the default value.
Signed-off-by: Antonio Ossa Guerra <aaossa@uc.cl>
Previously _Black_ produced invalid code because the `# fmt: on` was used at a different block level.
While _Black_ requires `# fmt: off` and `# fmt: on` to be used at the same block level, incorrect usage shouldn't cause crashes.
The formatting behavior this PR introduces is: when `# fmt: on` is used at a different level, or there is no `# fmt: on` at all, formatting is turned off for the code below the initial `# fmt: off` block level. This also matches the current behavior when `# fmt: off` is used at the top level without a matching `# fmt: on`: it turns off formatting for everything below the `# fmt: off`.
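An illustrative shape of such input (indentation assumed, not taken from the linked issues):
```python
def example():
    # fmt: off
    matrix = [1 ,2,
        3 , 4]
    # ... from here on, the code below the `# fmt: off` block level stays unformatted ...
# fmt: on  <- different block level: treated like a missing `# fmt: on`
```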
- Fixes #2567
- Fixes #3184
- Fixes #2985
- Fixes #2882
- Fixes #2232
- Fixes #2140
- Fixes #1817
- Fixes #569
Fix a crash when formatting some dicts with parenthesis-wrapped long
string keys. When `LL[0]` is an atom string, we need to check the atom
node's siblings instead of `LL[0]` itself, e.g.:
```
dictsetmaker
  atom
    STRING '"This is a really long string that can\'t be expected to fit in one line and is used as a nested dict\'s key"'
  /atom
  COLON ':'
  atom
    LSQB ' ' '['
    listmaker
      STRING '"value"'
      COMMA ','
      STRING ' ' '"value"'
    /listmaker
    RSQB ']'
  /atom
  COMMA ','
/dictsetmaker
```
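For reference, the tree above corresponds roughly to a dict of this shape (reconstructed for illustration, not the exact test case):
```python
data = {
    "This is a really long string that can't be expected to fit in one line and is used as a nested dict's key": [
        "value",
        "value",
    ],
}
```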
* Move 311 tests to install aiohttp without C extensions
- Configure tox to install aiohttp without extensions
- i.e. use `AIOHTTP_NO_EXTENSIONS=1` for pip install
- This allows us to reenable blackd tests that use aiohttp testing helpers etc.
- Had to ignore `cgi` module deprecation warning
- Filed issue for aiohttp to fix: https://github.com/aio-libs/aiohttp/issues/6905
Test:
- `/tmp/tb/bin/tox -e 311`
* Fix formatting + linting
* Add latest aiohttp for loop fix + Try to exempt deprecation warning but failed - will ask for help
* Remove unnecessary warning ignore
Co-authored-by: Cooper Ry Lees <me@wcooperlees.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Solves https://github.com/psf/black/issues/2598 where Black wouldn't
use .gitignore at folder/.gitignore if you ran `black folder` for
example.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Adds parentheses around implicit string concatenations when they are inside
a list, set, or tuple, except when the concatenation is the only element
and there's no trailing comma.
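Roughly, the change looks like this (illustrative only; not guaranteed to be Black's exact output):
```python
# before
deps = ["a very long requirement " "split across an implicit concatenation", "another"]

# after (preview style, roughly): parentheses make the concatenation explicit
deps = [
    ("a very long requirement " "split across an implicit concatenation"),
    "another",
]
```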
Looking at the order of the transformers here, we need to "wrap in
parens" before string_split runs. So my solution is to introduce a
"collaboration" between StringSplitter and StringParenWrapper where the
splitter "skips" the split until the wrapper adds the parens (and then
the line after the paren is split by StringSplitter) in another pass.
I also considered an alternative approach, where I tried to add a
different "string paren wrapper" class that runs before string_split.
Then I found out it requires a different do_transform implementation
than StringParenWrapper.do_transform, since the latter assumes it runs
after the delimiter_split transform, so I stopped researching that
route.
Originally function calls were also included in this change, but given
that missing commas should usually result in a runtime error, and given
the scary amount of changes this caused on downstream code, they were
removed in later revisions.
`black.reformat_many` depends on a lot of slow-to-import modules. When
formatting just a single file, the time paid to import those modules
is entirely wasted. So I moved `black.reformat_many` and its helpers
to `black.concurrency`, which is now *only* imported if there's more
than one file to reformat. This way, running Black over a single file
is snappier.
Here are the numbers before and after this patch running `python -m
black --version`:
- interpreted: 411 ms +- 9 ms -> 342 ms +- 7 ms: 1.20x faster
- compiled: 365 ms +- 15 ms -> 304 ms +- 7 ms: 1.20x faster
Co-authored-by: Fabio Zadrozny <fabiofz@gmail.com>
There are a number of places this behaviour could be patched; for
instance, it's quite tempting to patch it in `get_sources`. However,
I believe we generally have the invariant that the project root contains all
files we want to format, in which case it seems prudent to keep that
invariant.
This also improves the accuracy of the "sources to be formatted" log
message with --stdin-filename.
Fixes GH-3207.
Fixes #2734: a standalone comment causes strings to be merged into one that is far too long (and requires two passes to do so).
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
- Had to exempt blackd tests for now due to aiohttp
- Skip by using `sys.version_info` tuple
- aiohttp does not compile in 3.11 yet - refer to #3230
- Add a deadsnakes ubuntu workflow to run 3.11-dev to ensure we don't regress
- Have it also format ourselves
Test:
- `tox -e 311`
Co-authored-by: Cooper Ry Lees <me@wcooperlees.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
uR is not a legal string prefix, so this test breaks (AssertionError:
cannot use --safe with this file; failed to parse source file AST:
invalid syntax) if it is changed to a case in which the file is actually
modified. I've changed the last test to have u alone, and added an R to
the test above instead.
... in the middle of an expression or code block by adding a missing return.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
When the Leaf node with `# fmt: skip` is a NEWLINE inside a `suite`
Node, the nodes to ignore should come from the siblings of the parent
`suite` Node.
There is also a special case for the ASYNC token, where we expand to
the grandparent Node that contains the ASYNC token.
This fixes GH-2646, GH-3126, GH-2680, GH-2421, GH-2339, and GH-2138.
The former was a regression I introduced a long time ago. To avoid
changing the stable style too much, the regression is only fixed if
--preview is enabled.
Annoyingly enough, as we currently always enforce a second format pass if
changes were made, there's no good way to prove the existence of the
docstring quote normalization instability issue. For posterity, here's
one failing example:
```diff
--- source
+++ first pass
@@ -1,7 +1,7 @@
 def some_function(self):
-    ''''<text here>
+    """ '<text here>
     <text here, since without another non-empty line black is stable>
-    '''
+    """
     pass
--- first pass
+++ second pass
@@ -1,7 +1,7 @@
 def some_function(self):
-    """ '<text here>
+    """'<text here>
     <text here, since without another non-empty line black is stable>
     """
     pass
```
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Doing so is invalid. Note this only fixes the preview style since the
logic putting closing docstring quotes on their own line if they violate
the line length limit is quite new.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Otherwise they'd be deleted, which was a regression in 22.1.0 (oops! my
bad!). Also, type comments are now tracked in the AST safety check on all
compatible platforms to error out if this happens again.
Overall the line rewriting code has been rewritten to do "the right
thing (tm)". I hope this fixes other potential bugs in the code (fwiw I
got to drop the bugfix in blib2to3.pytree.Leaf.clone since now bracket
metadata is properly copied over).
Fixes #2873
* Add run_self environment in tox
* Add run_self task as part of the lint CI flow
* Remove hard coded sources list
* Remove black from pre-commit
Co-authored-by: Cooper Lees <me@cooperlees.com>
This is a tricky one, as await is technically an expression and therefore
in certain situations requires brackets for operator precedence.
However, the vast majority of await usage is just `await some_coroutine(...)`
and similar in shape to return statements. Therefore this PR removes
redundant parens around these await expressions.
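Illustrative examples (hedged; the exact output may differ):
```python
# Redundant parentheses get removed:
result = await (fetch_user(user_id))  # -> result = await fetch_user(user_id)

# Parentheses stay where precedence genuinely needs them, e.g. when calling
# the awaited value:
handler = (await get_factory())()
```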
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Allows us to better control placement of return annotations by:
a) removing redundant parens
b) moving very long type annotations onto their own line
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
The 8.0.x series renamed its "die on LANG=C" function and the 8.1.x
series straight up deleted it.
Unfortunately this makes it hard for this test to type check cleanly, so we'll
just lint with click 8.1+ (the pre-commit hook configuration was changed
mostly to just evict any now-unsupported mypy environments).
aiohttp.test_utils.unittest_run_loop was deprecated since aiohttp 3.8
and aiohttp 4 (which isn't a thing quite yet) removes it. To maintain
compatibility with the full range of versions we declare to support,
test_blackd.py will now define a no-op replacement if it can't be
imported.
Also, mypy is painfully slow to use without a cache, let's reenable it.
It was causing stability issues because the first pass
could cause a "magic trailing comma" to appear, meaning
that the second pass might get a different result. It's
not critical.
Some things format differently (with extra parens)
Fixes #2651. Fixes #2754. Fixes #2518. Fixes #2321.
This adds a test that lists a number of cases of unstable formatting
that we have seen in the issue tracker. Checking it in will ensure
that we don't regress on these cases.
Since power operators almost always have the highest binding power in expressions, it's often more readable to hug it with its operands. The main exception to this is when its operands are non-trivial in which case the power operator will not hug, the rule for this is the following:
> For power ops, an operand is considered "simple" if it's only a NAME, numeric CONSTANT, or attribute access (chained attribute access is allowed), with or without a preceding unary operator.
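A hedged illustration of that rule:
```python
y = x**2               # NAME ** numeric constant: hugs
z = x**-y              # a preceding unary operator is still "simple"
area = shape.width**2  # attribute access counts as simple
w = foo(x) ** 2        # a call is not simple, so the spaces stay
v = data[i] ** 2       # neither is a subscript
```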
Fixes GH-538.
Closes GH-2095.
diff-shades results: https://gist.github.com/ichard26/ca6c6ad4bd1de5152d95418c8645354b
Co-authored-by: Diego <dpalma@evernote.com>
Co-authored-by: Felix Hildén <felix.hilden@gmail.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Closes #2360: I'd like to make passing SRC or `--code` mandatory and the arguments mutually exclusive. This will change our (partially already broken) promises of CLI behavior, but I'll comment below.
Fixes #2506
``XDG_CACHE_HOME`` does not work on Windows. To allow users to set a custom cache directory on all systems, I added a new environment variable ``BLACK_CACHE_DIR`` to set the cache directory. The default remains the same, so users will only notice a change if that environment variable is set.
The specific use case I have for this is that I need to run black in different processes at the same time. There is a race condition with the cache pickle file that made this rather difficult. A custom cache directory removes the race condition.
I created a ``get_cache_dir`` function in order to test the logic. This is only used to set the ``CACHE_DIR`` constant.
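A hedged sketch of that precedence (the real default may be computed differently; ``platformdirs`` is used here only for illustration):
```python
import os
from pathlib import Path

from platformdirs import user_cache_dir  # illustrative source of a platform default


def get_cache_dir() -> Path:
    # BLACK_CACHE_DIR, when set, wins on every platform; otherwise fall back to
    # a platform-specific default (XDG on Linux, %LocalAppData% on Windows, ...).
    env = os.environ.get("BLACK_CACHE_DIR")
    return Path(env) if env else Path(user_cache_dir("black"))
```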
Fixes #2742.
This PR adds the ability to configure additional python cell magics. This
will allow formatting cells in Jupyter Notebooks that are using custom (python)
magics.
Black will now echo the location it determined as the root path
for the project if `--verbose` is enabled by the user, since that is
what the SRC paths are chosen relative to, i.e. the absolute path
is `{root}/{src}`.
Closes #1880
*blib2to3's support was left untouched because: 1) I don't want to touch
parsing machinery, and 2) it'll allow us to provide a more useful error
message if someone does try to format Python 2 code.
error: cannot format <string>: ('EOF in multi-line statement', (2, 0))
▲ before ▼ after
error: cannot format <string>: Cannot parse: 2:0: EOF in multi-line statement
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
* Treat functions/classes in blocks as if they're nested
One curveball is that we still want two preceding newlines before blocks
that are probably logically disconnected. In other words:
```python
if condition:

    def foo():
        return "hi"

    # <- aside: this is the goal of this commit

else:

    def foo():
        return "cya"


# <- the two newlines spacing here should stay
# since this probably isn't related
with open("db.json", encoding="utf-8") as f:
    data = f.read()
```
Unfortunately that means we have to special-case specific clause types
instead of just being able to check for a colon leaf. The hack used here
is to check whether we're adding preceding newlines for a standalone or
dependent clause, "standalone" being a clause that doesn't need another
clause to be valid (e.g. if) and vice versa.
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Fixes https://github.com/psf/black/issues/2627: a non-Python cell magic such as `%%writeline` can legitimately contain "incorrect" indentation, but this causes `tokenize-rt` to return an error. To avoid this, `validate_cell` should detect cell magics early (just like it detects `TransformerManager` transformations).
A test was added too, in the shape of a "badly indented" `%%writefile` within `test_non_python_magics`.
Co-authored-by: Marco Edward Gorelli <marcogorelli@protonmail.com>
The implementation of the new backtracking logic depends heavily on deepcopying the current state of the parser before seeing one of the new keywords, which by default is a very expensive operation. On my system, formatting these 3 files takes 1.3 seconds.
```
$ touch tests/data/pattern_matching_*; time python -m black -tpy310 tests/data/pattern_matching_* 19ms
All done! ✨🍰✨
3 files left unchanged.
python -m black -tpy310 tests/data/pattern_matching_* 2,09s user 0,04s system 157% cpu 1,357 total
```
which can be made 3X faster if we integrate the existing copying logic (`clone`) into the deepcopy system:
```
$ touch tests/data/pattern_matching_*; time python -m black -tpy310 tests/data/pattern_matching_* 1ms
All done! ✨🍰✨
3 files left unchanged.
python -m black -tpy310 tests/data/pattern_matching_* 0,66s user 0,02s system 147% cpu 0,464 total
```
This still might have some potential, but that would be way trickier than this initial patch.
* Improve Python 2 only syntax detection
First of all, this fixes a mistake I made in the Python 2 deprecation PR:
using token.* to check for print/exec statements. It turns out that
for nodes with a type value higher than 256, their numeric type isn't
guaranteed to be constant. Using syms.* instead fixes this.
Also add support for the following cases:
print "hello, world!"
exec "print('hello, world!')"
def set_position((x, y), value):
pass
try:
pass
except Exception, err:
pass
raise RuntimeError, "I feel like crashing today :p"
`wow_these_really_did_exist`
10L
* Add octal support, more test cases, and fixup long ints
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
* Prepare for Python 2 deprecation
- Use BlackRunner and .stdout in command line test
So the next commit won't break this test. This is in its own commit so
we can just revert the deprecation commit when dropping Python 2
support completely.
* Deprecate Python 2 formatting support
Existing test was actually running a full black-primer
run which could be slow. This goes from 8 seconds to
0.4 seconds on my machine.
Needed to move to top level scope to leverage the caplog
feature of pytest in order to test that the command line
was parsing the bogus arguments and dumping to stderr.
* fix: allow tests to be run from the tests/ directory
* fix: try fixing windows build with MarcoGorelli's suggestion
* Windows hotfix + better respect test's spirit
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
If the individual failures are verbose, it's useful to have
the summary at the end. Otherwise, it can be really difficult
to figure out which projects have an issue.
It currently prints both ASTs - this also
adds the line diff, making it much easier to visualize
the changes as well. Not too verbose since it's only a diff.
The main goals of this commit include:
* improving consistency in how strict the test suite is -- Jelle has
  seen cases where a test did not fail due to an incomplete test setup
  even though it should've
* simplifying tests for both ease of creation and reading via
  parametrization and helpers
* reorganizing the test suite by grouping more tests
* dropping test suite dependencies that aren't strictly necessary
The test suite could definitely do with more refactoring, but this is a
good first pass. Anyway, it would've gotten too big to review effectively
if I continued on this PR.
Commit history before squash merge:
* Drop parameterized dep and refactor format tests
Since the test suite is already using pytest-only features we can drop
the parameterized test dependency in favour of pytest's own offering.
I also added a utility function called assert_format that makes it
even easier to verify Black formats some code correctly. We already
have great tooling if the case is very simple in test_format.py but
any sort of complication makes it hard to use. Also if you're writing
a non-standard test case, you have to be careful to include all of
the steps so issues don't go undetected. assert_format aims to
1) improve consistency, 2) avoid wasted CPU cycles, and 3) avoid
logical errors that hide issues.
Finally, quite a few tests were either moved and/or simplified with
the new setup.
* Move file collection tests
* Add assert_collected_sources helper function
Testing source collection involves a lot of repetitive boilerplate,
something that black.files.get_sources's signature does not help with.
So to cut down on boilerplate like `report=black.Report()` I added
a convenience function to tests/test_black.py which wraps
black.get_sources. Its signature is designed to be much more lax to
make it much easier to use. Somehow this leads to cutting 100 lines!
Also IMO the test cases are much easier to read since it's more
declarative than really procedural now.
* Run isort on some test files
* Move cache tests
* Use pytest-style asserts & add parametrization
* Drop now unnecessary test dependencies
*pytest-cases might be interesting for further refactoring but I
haven't been able to wrap my head around it for the time being. We
can always revisit anyway.
* re-implement simple CORS middleware for blackd
* remove aiohttp-cors from setup.py
* Remove aiohttp-cors from Pipfile.lock
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
Implementation stolen from PR davidhalter/parso#162. Thanks parso!
I could add support for these newer syntactical constructs in the
target version detection logic, but until I get diff-shades up
and running I don't feel very comfortable adding the code.
This fixes a bug where a trailing comma would be added to a
parenthesized return annotation changing its type to a tuple.
Here's one case where this bug shows up:
```
def spam() -> (
    this_is_a_long_type_annotation_which_should_NOT_get_a_trailing_comma
):
    pass
```
The root problem was that the type annotation was treated as if it was
a parameter & import list (is_body=True to linegen::bracket_split_build_line)
where a trailing comma is usually fine. Now there's another check in the
aforementioned function to make sure the body it's operating on isn't
a return annotation before truly adding a trailing comma.
* Add CPython repository into primer runs
- The CPython repo is probably the best one for black to test on, as the stdlib's unittests should use all syntax
- Limit to running in recent versions of the python runtime - e.g. today >= 3.9
- This allows us to parse more syntax
- Exclude all failing files for now
- We definitely have bugs to explore there - refer to #2407 for more details
- Some test files have syntax errors on purpose, so we will never be able to parse them
- Add new black command arguments logging in debug mode; very handy for seeing how CLI arguments are formatted
CPython now succeeds ignoring 16 files:
```
Oh no! 💥💔💥
1859 files would be reformatted, 148 files would be left unchanged.
```
Testing
- Ran locally with and without string processing - Very little runtime difference BUT 3 more failed files
```
time /tmp/tb/bin/black --experimental-string-processing --check . 2>&1 | tee /tmp/black_cpython_esp
...
Oh no! 💥💔💥
1859 files would be reformatted, 148 files would be left unchanged, 16 files would fail to reformat.
real 4m8.563s
user 16m21.735s
sys 0m6.000s
```
- Add a unittest for the new convenience config file flattening that allows long arguments to be broken up into an array/list of strings
Addresses #2407
---
Commit history before merge:
* Add new `timeout_seconds` support into primer.json
- If present, will set forked process limit to that value in seconds
- Otherwise, stay with default 10 minutes (600 seconds)
* Add new "base_path" concept to black-primer
- Rather than starting at the repo root, start at a configured path within the repository
- e.g. for cpython only run black on `Lib`
* Disable by default - It's too much for GitHub Actions. But let's leave config for others to use
* Minor tweak to _flatten_cli_args
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
we don't accidentally add backslashes to them when normalizing quotes
because that's invalid syntax!
The problem this commit fixes is that matches would eat too much,
blocking important matches from occurring. For example, here's one
f-string body:
{a}{b}{c}
I know there's no risk of introducing backslashes here, but the regex
already goes sideways with this. Throwing this example at regex101
I get:
```
{a}{b}{c} # The As and Bs are the two matches, and the upper
---- ---- # case letters are the groups with those matches.
aAaa bbBb
```
... we've missed the middle expression (so if any backslashes were
introduced there in a more complex example, we wouldn't bail out
even though we should -- hence the bug). As it stands, the regex
needs some sort of extra character (or the start/end of the body)
around the expressions, but that isn't always the case, as shown
above.
The fix implemented here is to turn the "eat a surrounding non-curly
bracket character" groups ie. `(?:[^{]|^)` and `(?:[^}]|$)` into
negative lookaheads and lookbehinds. This still guarantees the
already specified rules but without problematically eating extra
characters ^^
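A simplified, hedged illustration of the change (not the actual pattern used in Black):
```python
import re

body = "{a}{b}{c}"

# Consuming boundary groups: the neighbouring character is eaten, so
# back-to-back expressions leave nothing for the next match to consume.
consuming = re.compile(r"(?:[^{]|^)\{(.*?)\}(?:[^}]|$)")
print(consuming.findall(body))  # ['a', 'c'] -- the middle expression is missed

# Lookbehind/lookahead version: same boundary rules, nothing extra consumed.
assertion = re.compile(r"(?<!\{)\{(.*?)\}(?!\})")
print(assertion.findall(body))  # ['a', 'b', 'c']
```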
Fixes #2359.
This commit makes Black exit with a user-friendly error message if a
.gitignore file couldn't be parsed -- a massive improvement over an opaque
traceback!
To summarise, based on what was discussed in that issue:
- due to not being able to parse automagics (e.g. `pip install black`)
  without a running IPython kernel, cells with syntax which is parseable
  by neither `ast.parse` nor IPython will be skipped
- cells with multiline magics will be skipped
- trailing semicolons will be preserved, as they are often put there
  intentionally in Jupyter Notebooks to suppress unnecessary output
Commit history before merge (excluding merge commits):
* wip
* fixup tests
* skip tests if no IPython
* install test requirements in ipynb tests
* if --ipynb format all as ipynb
* wip
* add some whole-notebook tests
* docstrings
* skip multiline magics
* add test for nested cell magic
* remove ipynb_test.yml, put ipynb tests in tox.ini
* add changelog entry
* typo
* make token same length as magic it replaces
* only include .ipynb by default if jupyter dependencies are found
* remove logic from const
* fixup
* fixup
* re.compile
* noop
* clear up
* new_src -> dst
* early exit for non-python notebooks
* add non-python test notebook
* add repo with many notebooks to black-primer
* install extra dependencies for black-primer
* fix planetary computer examples url
* dont run on ipynb files by default
* add scikit-lego (Expected to change) to black-primer
* add ipynb-specific diff
* fixup
* run on all (including ipynb) by default
* remove --include .ipynb from scikit-lego black-primer
* use tokenize so as to mirror the exact logic in IPython.core.displayhooks quiet
* fixup
* 🎨
* clarify docstring
* add test for when comment is after trailing semicolon
* enumerate(reversed) instead of [::-1]
* clarify docstrings
* wip
* use jupyter and no_jupyter marks
* use THIS_DIR
* windows fixup
* perform safe check cell-by-cell for ipynb
* only perform safe check in ipynb if not fast
* remove redundant Optional
* 🎨
* use typeguard
* dont process cell containing transformed magic
* require typing extensions before 3.10 so as to have TypeGuard
* use dataclasses
* mention black[jupyter] in docs as well as in README
* add faq
* add message to assertion error
* add test for indented quieted cell
* use tokenize_rt else we cant roundtrip
* make a frozenset for tokens to ignore when looking for a trailing semicolon
* remove planetary code examples as recent commits result in changes
* use dataclasses which inherit from ast.NodeVisitor
* bump typing-extensions so that TypeGuard is available
* bump typing-extensions in Pipfile
* add test with notebook with empty metadata
* pipenv lock
* deprivatise validate_cell
* Update README.md
* Update docs/getting_started.md
* dont cache notebooks if jupyter dependencies arent found
* dont write to cache if jupyter deps are not installed
* add notebook which cant be parsed
* use clirunner
* remove other subprocess calls
* add docstring
* make verbose and quiet keyword only
* 🎨
* run second many test on directory, not on file
* test for warning message when running on directory
* early return from non-python cell magics
* move NothingChanged to report to avoid circular import
* remove circular import
* reinstate --ipynb flag
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
toml unfortunately has a maintainership problem right now. It's
evident from the fact that toml only supports TOML v0.5.0. TOML v1.0.0 has
been recently released and right now Black crashes hard on its usage.
tomli is a brand new parse-only TOML library. It supports TOML
v1.0.0. Although TBH we're switching to this one mostly because
pip is doing the same.
*The upper bound was included at the library maintainer's request.
Co-authored-by: Łukasz Langa <lukasz@langa.pl>
Co-authored-by: Taneli Hukkinen <3275109+hukkin@users.noreply.github.com>
Commit history before merge:
* Accept empty stdin (close #2337)
* Update tests/test_black.py
* Add changelog
* Assert Black reformats an empty string to an empty string (#2337) (#2346)
* fix
Click types have been moved to the click repo itself. See pallets/click#1856.
I've had some issues with typeshed types being outdated in another project,
so it might be good to avoid that here.
Commit history before merge:
* Get `click` types from main repo
* Fix mypy errors
* Require click v8 for type annotations
* Update Pipfile
* Add STDIN test to primer
- Check that our STDIN black support stays working
- Add asyncio.subprocess STDIN pipe via communicate
- We just check we format python code from primer's `lib.py`
Fixes #2310
`black.strings.get_string_prefix` used to lowercase the extracted
prefix before returning it. This is wrong because 1) it ignores the
fact we should leave R prefixes alone because of MagicPython, and 2)
there is dedicated prefix casing handling code that fixes issue 1.
`.lower` is too naive.
This was originally fixed in 20.8b0, but was reintroduced since 21.4b0.
I also added proper prefix normalization for docstrings by using the
`black.strings.normalize_string_prefix` helper.
Some more test strings were added to make sure strings with capitalized
prefixes aren't treated differently (actually happened with my original
patch, Jelle had to point it out to me).
This commit adds a short section discussing the non-processing of docstrings
besides spacing improvements, mentions comment moving and links to the
AST equivalence discussion. I also added a simple spacing test for good
measure.
Commit history before merge:
* Mention comment non-processing in documentation, add spacing test
* Mention special cases for comment spacing
* Add all special cases, improve wording
Not sure the fix is right. Here is what I found: the issue is connected
with the line `first.prefix = prefix[comment.consumed :]` in
`comments.py`. `first.prefix` is the prefix of the line that ends
with `# fmt: skip`, but `comment.consumed` is the length of the
`" # fmt: skip"` string. If the prefix length is greater than 14,
`first.prefix` will grow every time we apply formatting.
Fixes #2254
Closes #1246: This PR adds a new option (and automatically a toml entry, hooray for existing configuration management 🎉) to require a specific version of Black to be running.
For example: `black --required-version 20.8b -c "format = 'this'"`
Execution fails straight away if it doesn't match `__version__`.
PR #2286 did not fix the edge-cases (e.g. when the string is just long
enough to cause a line to be 89 characters long). This PR corrects that
mistake.
Resolves #2168 by disabling the insertion of a " " when the docstring is entirely empty.
Note that this PR is focussed only on the case of empty docstrings. In particular this does not make any changes to the behaviour that a " " is inserted if a non-empty docstring begins with the quoting character. That is, black still prefers:
""" "something" """
to:
""""something" """
and that:
""""Something""""
is not a legal docstring.
Commit history before merge:
Black now respects .gitignore files at all levels, not only the root .gitignore file
(it applies .gitignore rules like git does).
* Fix: typo
* Fix: respect .gitignore files in all levels.
* Add: CHANGELOG note.
* Fix: TypeError: unsupported operand type(s) for +: 'NoneType' and 'PathSpec'
* Update docs.
* Fix: no parent .gitignore
* Add a comment since the if expression is a bit hard to understand
* Update tests - cover the no-parent-.gitignore case.
* Use main's Pipfile.lock instead
The original changes in Pipfile.lock are whitespace only. The changes
turned the JSON's file indentation from 4 to 2. Effectively this
happened: `json.dumps(json.loads(old_pipfile_lock), indent=2) + "\n"`.
Just using main's Pipfile.lock instead of undoing the changes because
1) I don't know how to do that easily and quickly, and 2) there's a
merge conflict.
Co-authored-by: Richard Si <63936253+ichard26@users.noreply.github.com>
* Merge remote-tracking branch 'upstream/main' into i1730 …
conflicts for days ay?
We've depended on Click 7.x ever since we broke CI systems across the
world (oops lol) and flake8-mypy was purged a fair bit back: #1867
Also remove the primer tests import in tests/test_black.py because it's
annoying when just trying to actually target tests/test_black.py tests.
`pytest -k test_black.py` doesn't do what you expect due to that import.
There are three optimizations in this commit:
1. Don't check whether Black's output is stable or equivalent if no changes
   were made in the first place. It's not like passing the same code
   (for both source and actual) through black.assert_equivalent or
   black.assert_stable is useful. It's not a big deal for the smaller
   tests, but it eats a lot of time in tests/test_format.py since
   its test cases are big. This is also closer to how Black works IRL.
2. Use a smaller file for `test_root_logger_not_used_directly` since
   the logging it's checking happens during blib2to3's startup, so the
   file doesn't really matter.
3. If we're checking that a file is formatted (i.e. test_source_is_formatted),
   don't run Black over it again with `black.format_file_in_place`.
   `tests/test_format.py::TestSimpleFormat.check_file` is good enough.
* Move string-related utility to functions to strings.py, const.py
* Move Leaf/Node-related functionality to nodes.py
* Move comment-related functions to comments.py
* Move caching to cache.py and Mode/TargetVersion/Feature to mode.py
* Move some leftover functions to nodes.py, comments.py, strings.py
* Add missing files to source list for test runs
* Move line-related functionality into lines.py, brackets into brackets.py
* Move transformers to trans.py
* Move file handling, output, parsing, concurrency, debug, and report
* Move two more functions to nodes.py
* Add CHANGES
* Add numeric.py
* Add linegen.py
* More docstrings
* Include new files in tests
Co-authored-by: Jelle Zijlstra <jelle.zijlstra@gmail.com>
This is a follow-up to #2203 that uses a pytest marker instead of a bunch of
`skipUnless`. Similarly to the Python 2 tests, they run by default and
will crash on an unsuspecting contributor with missing dependencies. This is
by design; we WANT contributors to test everything. Unless we actually don't,
and then we can run:
`pytest --run-optional=no_blackd`
Relatedly, bump the required aiohttp to at least 3.6.0 to get rid of expected
failures on Python 3.8 (see 6b5eb7d465).