
Merge pull request #1022 from yarikoptic/enh-codespell

codespell: move configuration into pyproject.toml and fix "re-use" typo
Adina Wagner 7 months ago
parent commit 5386b40f2f

+ 7 - 5
.github/workflows/codespell.yml

@@ -7,14 +7,16 @@ on:
   pull_request:
     branches: [main]
 
+permissions:
+  contents: read
+
 jobs:
   codespell:
     name: Check for spelling errors
     runs-on: ubuntu-latest
 
     steps:
-      - uses: actions/checkout@v2
-      - uses: codespell-project/actions-codespell@master
-        with:
-          skip: "*.svg,.git,venvs,_build,versioneer.py,DL-*,.github"
-          ignore_words_list: "wit"
+      - name: Checkout
+        uses: actions/checkout@v3
+      - name: Codespell
+        uses: codespell-project/actions-codespell@v2
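
The `with:` inputs removed here do not disappear: the same settings move into pyproject.toml (last file in this diff), which codespell reads on its own. As a rough local equivalent of the old action invocation — a sketch only, with the skip list and ignore words taken verbatim from the removed inputs — the corresponding codespell CLI flags are:

   $ codespell --skip="*.svg,.git,venvs,_build,versioneer.py,DL-*,.github" --ignore-words-list="wit"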

+ 2 - 2
docs/basics/101-106-nesting.rst

@@ -32,8 +32,8 @@ superdataset history? This was the subdataset's own history.
 Apart from stand-alone histories of super- or subdatasets, this highlights another
 very important advantage that nesting provides: Note that the ``longnow`` dataset
 is a completely independent, standalone dataset that was once created and
-published. Nesting allows for a modular re-use of any other DataLad dataset,
-and this re-use is possible and simple precisely because all of the information
+published. Nesting allows for a modular reuse of any other DataLad dataset,
+and this reuse is possible and simple precisely because all of the information
 is kept within a (sub)dataset.
 
 But now let's also check out how the *superdataset's* (``DataLad-101``) history

+ 1 - 1
docs/basics/101-107-summary.rst

@@ -101,5 +101,5 @@ existing datasets:
 You have procedurally experienced how to install a dataset, and simultaneously you have
 learned a lot about the principles and features of DataLad datasets.
 Cloning datasets and getting their content allows you to consume published datasets.
-By nesting datasets within each other, you can re-use datasets in a modular fashion. While this may
+By nesting datasets within each other, you can reuse datasets in a modular fashion. While this may
 appear abstract, upcoming sections will demonstrate many examples of why this can be handy.

+ 2 - 2
docs/basics/101-127-yoda.rst

@@ -252,7 +252,7 @@ history of all of these components.
 
 Principle 1, therefore, encourages structuring data analysis
 projects in a clear and modular fashion that makes use of nested
-DataLad datasets, yielding comprehensible structures and re-usable
+DataLad datasets, yielding comprehensible structures and reusable
 components. Having each component version-controlled --
 regardless of size -- will aid in keeping directories clean and
 organized, instead of piling up different versions of code, data,
@@ -287,7 +287,7 @@ files that a colleague sent you via email, a plain :dlcmd:`save`
 with a helpful commit message goes a very long way to fulfill this principle
 on its own already.
 
-One core aspect of this principle is *linking* between re-usable data
+One core aspect of this principle is *linking* between reusable data
 resource units (i.e., DataLad subdatasets containing pure data). You will
 be happy to hear that this is achieved by simply installing datasets
 as subdatasets.
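
As a minimal sketch of what "installing datasets as subdatasets" looks like in practice (the URL and target path below are placeholders, not part of this change): cloning with the -d/--dataset option registers the clone as a subdataset of the dataset at the given path.

   $ datalad clone -d . https://example.com/reusable-data-dataset inputs/data   # placeholder URL and path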

+ 2 - 2
docs/basics/101-132-advancednesting.rst

@@ -17,8 +17,8 @@ completely stand-alone history:
    $ git log --oneline
 
 In principle, this is no news to you. From section :ref:`nesting` and the
-YODA principles you already know that nesting allows for a modular re-use of
-any other DataLad dataset, and that this re-use is possible and simple
+YODA principles you already know that nesting allows for a modular reuse of
+any other DataLad dataset, and that this reuse is possible and simple
 precisely because all of the information is kept within a (sub)dataset.
 
 What is new now, however, is that you applied changes to the dataset. While

+ 1 - 1
docs/usecases/supervision.rst

@@ -254,7 +254,7 @@ projects. It requires minimal effort, but comes with great benefit:
   of any external inputs of a project make it possible (when a project is completed)
   that a supervisor can efficiently test the integrity of the inputs, discard them
   (if unmodified), and only archive the outputs that are unique to the project --
-  which then can become a modular component for re-use in a future project.
+  which then can become a modular component for reuse in a future project.
 
 
 .. rubric:: Footnotes

+ 4 - 0
pyproject.toml

@@ -1,2 +1,6 @@
 [build-system]
 requires = ["setuptools >= 30.3.0", "wheel"]
+
+[tool.codespell]
+skip = '.git,*.pdf,*.svg,venvs,versioneer.py,DL-*,.github,_build'
+ignore-words-list = 'wit'
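
With the configuration living in [tool.codespell], a plain local run should pick up the same skip list and ignore words as CI, assuming a codespell version with TOML support (the bundled tomllib on Python 3.11+, or the toml extra on older Pythons):

   $ pip install "codespell[toml]"   # the [toml] extra is only needed on Python < 3.11
   $ codespell                       # reads [tool.codespell] from pyproject.toml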