There are many libraries for traversing directories, and you can also do it with the standard library alone. This particular library differs in that it offers:
- ⚗️ Filtering by file extensions, by text patterns in `.gitignore` format, and by custom callables.
- 🐍 Native support for both `Path` objects from the standard library and strings.
- ❌ Support for cancellation tokens.
- 👯‍♂️ Combining multiple crawling methods in one object.
You can install dirstree using pip:

```shell
pip install dirstree
```

You can also quickly try out this and other packages without having to install them, using instld.
It's very easy to work with the library in your own code:
- Create a crawler object, passing the path to the base directory and, if necessary, additional arguments.
- Iterate through it.
The simplest code example looks like this:

```python
from dirstree import Crawler

crawler = Crawler('.')

for file in crawler:
    print(file)
```

↑ Here we recursively output all files from the current directory (that is, including the contents of nested directories). At each iteration, we get a new `Path` object.
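For comparison, as mentioned above, a similar recursive listing can be done with nothing but the standard library (this snippet does not use dirstree):

```python
from pathlib import Path

# Recursively list all files (not directories) under the current directory.
for file in Path('.').rglob('*'):
    if file.is_file():
        print(file)
```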
When iterating through the files in a directory, you may not want to visit all of them, but only files of a certain type. To do this, ignore all the other files. How? There are 3 ways:
- Traverse only files with the specified extensions, such as `.txt`, `.doc`, or `.py`.
- Traverse only files whose paths match a specific text pattern.
- Use an arbitrary function to decide whether each specific path is needed or not.
To select a specific method, pass the corresponding parameter when creating the crawler object. Of course, all these methods can be combined with each other.
To set the file extensions you are interested in, use the `extensions` parameter:

```python
crawler = Crawler('.', extensions=['.txt'])  # Iterate only over .txt files.
```

Also, if you only need Python files, you can use a special class that traverses only them, without specifying extensions:
```python
from dirstree import PythonCrawler

crawler = PythonCrawler('.')  # Iterate only over .py files.
```

To specify which files and directories you do NOT want to iterate over, use the `exclude` parameter:
```python
crawler = Crawler('.', exclude=['.git', 'venv'])  # Exclude the ".git" and "venv" directories.
```

↑ Please note that the `.gitignore` format is used here.
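To get a feel for this kind of pattern matching, here is a rough standard-library approximation using `fnmatch` (real `.gitignore` semantics are richer, so treat this only as an illustration, not as dirstree's actual matcher):

```python
from fnmatch import fnmatch
from pathlib import PurePath

EXCLUDE = ['.git', 'venv', '*.pyc']

def is_excluded(path: str) -> bool:
    # A path is excluded if any of its parts matches one of the patterns.
    return any(
        fnmatch(part, pattern)
        for part in PurePath(path).parts
        for pattern in EXCLUDE
    )

print(is_excluded('.git/config'))     # True: the ".git" part matches.
print(is_excluded('src/module.pyc'))  # True: "*.pyc" matches the file name.
print(is_excluded('src/module.py'))   # False: no pattern matches.
```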
If you need a universal way to filter out unnecessary paths, pass your own function as the `filter` parameter:

```python
crawler = Crawler('.', filter=lambda path: len(str(path)) == 7)  # Iterate only over paths that are 7 characters long.
```

You can set an arbitrary condition under which file traversal will stop, using cancellation tokens from the cantok library.
There are 2 ways to do this ↓
- If you use the crawler as a one-time object for a single iteration, set the token when creating it:
```python
from cantok import TimeoutToken
from dirstree import Crawler

for path in Crawler('.', token=TimeoutToken(0.0001)):  # Limit the iteration time to 0.0001 seconds.
    print(path)
```

- If you plan to use the crawler object several times, use the `go()` method for iteration and pass a new token to it every time:

```python
crawler = Crawler('.')

for path in crawler.go(token=TimeoutToken(0.0001)):  # Limit the iteration time to 0.0001 seconds.
    print(path)
```

↑ Follow these rules to avoid accidentally "baking" an expired token into a crawler object.
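To see why this matters, here is a toy timeout token (a simplified stand-in, not cantok's real implementation): once its deadline passes, it stays cancelled forever, so a crawler that kept holding it would never iterate again.

```python
import time

class ToyTimeoutToken:
    """A simplified illustration of a timeout-based cancellation token."""

    def __init__(self, timeout: float) -> None:
        self.deadline = time.monotonic() + timeout

    def cancelled(self) -> bool:
        # Once the deadline has passed, the token is cancelled forever.
        return time.monotonic() >= self.deadline

token = ToyTimeoutToken(0.01)
print(token.cancelled())  # False: the deadline has not passed yet.
time.sleep(0.02)
print(token.cancelled())  # True: and it will stay True from now on.
```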
You can combine multiple crawler objects into one using the usual addition operator, like this:
```python
for path in Crawler('../dirstree') + Crawler('../cantok'):
    print(path)
```

↑ The paths that you iterate over will be automatically deduplicated.
↑ You can also impose arbitrary restrictions on each of the summed objects; all of them will be taken into account.
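Conceptually, the deduplication when summing crawlers works like merging several iterables through a "seen" set. A minimal sketch of the idea (not dirstree's actual code):

```python
from typing import Iterable, Iterator, TypeVar

T = TypeVar('T')

def merge_unique(*iterables: Iterable[T]) -> Iterator[T]:
    # Yield items from all iterables in order, skipping any item seen before.
    seen = set()
    for iterable in iterables:
        for item in iterable:
            if item not in seen:
                seen.add(item)
                yield item

print(list(merge_unique(['a.py', 'b.py'], ['b.py', 'c.py'])))
# ['a.py', 'b.py', 'c.py'] — the duplicate 'b.py' appears only once.
```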
You can also pass multiple paths to a single crawler object:
```python
for path in Crawler('../dirstree', '../cantok'):
    print(path)
```

↑ In this case, the paths are not deduplicated.