Collection of small Python utility functions and classes
MIT License
Collection of small Python utility functions and classes. Each one was created because I needed it and it didn't exist or I didn't like the existing implementations. 100% of code is used in real-world projects.
(And one day, the documentation is going to be actually good. In the meanwhile, don't hesitate to ask if something's not clear.)
pip install notalib
Or with poetry:
poetry add notalib
Version numbers follow the semver rules.
While I try to fix bugs, add new features, and review PRs when I have time, there are no promises and no set timeframes, even if a bug is critical. This is a project I do in my free time, free of charge.
If that's not enough for you or you have an urgent request, there are paid maintenance options (bugfixing, features, expedite PR review, 24h security responses). Contact me for prices: [email protected]
Also feel free to just send me money:
Donations are always appreciated, even if you send $10.
notalib.utf.BOM
: contains the byte string b'\xEF\xBB\xBF' (the UTF-8 byte order mark).

Iterates over your array in chunks of at most N elements.
from notalib.array import as_chunks
arr = [1,2,3,4,5]
for chunk in as_chunks(arr, 2):
print(chunk)
# [1,2]
# [3,4]
# [5]
Keeps iterable things like lists intact, turns single values into single-element lists. Useful for functions that can accept both.
ensure_iterable([1,2,3]) # --> [1,2,3]
ensure_iterable((1,2,3)) # --> (1,2,3)
ensure_iterable(1) # --> [1]
ensure_iterable('smth') # --> ['smth']
def my_function(one_or_multiple_args):
for arg in ensure_iterable(one_or_multiple_args):
...
my_function(['log', 'smog'])
my_function('dog')
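A minimal sketch of how such a helper could look (the name mirrors the docs, but this is not the library's actual code; note that strings are iterable yet treated as single values, matching the `'smth'` example above):

```python
from collections.abc import Iterable

def ensure_iterable(value):
    # Strings (and bytes) are technically iterable, but the docs treat
    # them as single values, so they get wrapped too.
    if isinstance(value, (str, bytes)) or not isinstance(value, Iterable):
        return [value]
    return value
```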
Batches data from the iterable into tuples of length n. The last batch may be shorter than n.
from notalib.array import batched
def generate_numbers():
for i in range(10):
yield i
batches = list(batched(generate_numbers(), 5)) # --> [(0, 1, 2, 3, 4), (5, 6, 7, 8, 9)]
batches = list(batched("Hello", 2)) # --> [('H', 'e'), ('l', 'l'), ('o',)]
Re-formats a date, parsing it as any of the input_formats and outputting it as output_format.
This function uses Arrow date formats. See Arrow docs for details.
Args:
s: The source date in one of the input_formats to be converted to target format.
input_formats: Source date representation formats.
output_format: The format in which the date will be output.
allow_empty: If true, `None` input will produce `None` output; otherwise a ValueError will be raised.
Example:
>>> normalize_date('12.07.2023', ('D.M.YYYY', 'DD.MM.YYYY'), 'YYYY-MM-DD', False)
'2023-07-12'
Returns:
Converted date string from any of the input formats to the specified output format.
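notalib itself uses Arrow format tokens; as a rough pure-stdlib illustration of the same idea (strptime directives standing in for Arrow tokens, and a hypothetical function name):

```python
from datetime import datetime

def normalize_date_sketch(s, input_formats, output_format, allow_empty=True):
    if s is None:
        if allow_empty:
            return None
        raise ValueError("Date string is required")
    # Try each input format in order; the first one that parses wins.
    for fmt in input_formats:
        try:
            return datetime.strptime(s, fmt).strftime(output_format)
        except ValueError:
            continue
    raise ValueError(f"{s!r} does not match any of the input formats")
```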
Removed in 2.0.0. Use get_week instead. If you want the "old" week numbering, use get_week with WeekNumbering.MATCH_YEAR and add 1 to the week number.
Returns a named tuple with the week number for the given date. Accepts Python dates and Arrow timestamps. The optional argument mode tells what to do if the week started in the previous year:
from notalib.date import get_week, WeekNumbering
from datetime import date
date1, date2 = date(2022, 12, 31), date(2023, 1, 1)
get_week(date1, WeekNumbering.NORMAL)
# Week(week=52, year=2022)
get_week(date1, WeekNumbering.MATCH_YEAR)
# Week(week=52, year=2022)
get_week(date2, WeekNumbering.NORMAL)
# Week(week=52, year=2022)
get_week(date2, WeekNumbering.MATCH_YEAR)
# Week(week=0, year=2023)
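The behaviour above can be approximated with date.isocalendar(). This is a sketch, not the library's actual code; how the late-December edge case (days spilling into next year's ISO week 1) is numbered is an assumption made here:

```python
from datetime import date
from enum import Enum
from typing import NamedTuple

class WeekNumbering(Enum):
    NORMAL = 'NORMAL'
    MATCH_YEAR = 'MATCH_YEAR'

class Week(NamedTuple):
    week: int
    year: int

def get_week_sketch(d: date, mode: WeekNumbering) -> Week:
    iso_year, iso_week, _ = d.isocalendar()
    if mode is WeekNumbering.NORMAL:
        # ISO numbering: the week belongs to the year it mostly falls in.
        return Week(iso_week, iso_year)
    # MATCH_YEAR: the reported year always matches the calendar year.
    if iso_year < d.year:
        return Week(0, d.year)  # partial week carried over from last year
    if iso_year > d.year:
        # Late December spilling into next ISO year: keep counting past
        # the year's last full ISO week (assumption made for this sketch).
        return Week(date(d.year, 12, 28).isocalendar()[1] + 1, d.year)
    return Week(iso_week, d.year)
```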
⚠️ Experimental API. Subject to change. Don't use in production. You've been warned.
Merges two dicts. Modifies its input; use copy.deepcopy to create a deep copy of the original dictionary if you need it. Accepts three arguments.
Filters a dictionary, removing any keys except for the ones you choose.
from notalib.dict import filter_dict
src = {
'Some': "BODY",
'once': "told me",
'the world': "is gonna roll me",
}
filter_dict(src, ["Some", "once"])
# {'Some': 'BODY', 'once': 'told me'}
filter_dict(src, [])
# {}
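Conceptually this is just a key-filtering dict comprehension; a sketch (hypothetical name, not the library's actual code):

```python
def filter_dict_sketch(src, keys):
    wanted = set(keys)
    # Keep the source dict's insertion order; drop keys not in `keys`.
    return {k: v for k, v in src.items() if k in wanted}
```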
⚠️ Experimental API. Subject to change. Don't use in production. You've been warned.
Converts standard timedelta object to specified formats.
Allowed formats: 's', 'ms'.
from notalib.timedelta import convert_timedelta
from datetime import timedelta
td = timedelta(seconds=1, milliseconds=23)
convert_timedelta(td, 's')
# 1.023
convert_timedelta(td, 'ms')
# 1023
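A plausible stdlib-only sketch of such a converter (not the library's actual code; timedelta floor division keeps the millisecond result exact, avoiding float rounding error):

```python
from datetime import timedelta

def convert_timedelta_sketch(td, fmt):
    if fmt == 's':
        return td.total_seconds()
    if fmt == 'ms':
        # Integer division of timedeltas is exact (no float error).
        return td // timedelta(milliseconds=1)
    raise ValueError(f"Unsupported format: {fmt!r}")
```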
Prints an HTML table, row by row, from the given data, using attrs or dictionary keys as columns.
Two ways to use it:
from notalib.hypertext import TablePrinter
t = TablePrinter(['a', 'b'])
t.header()
# '<table><thead><tr><th>a</th><th>b</th></tr></thead><tbody>'
t.entry({'a': 1, 'b': 2})
# '<tr><td>1</td><td>2</td></tr>\n'
t.entry({'a': 11, 'b': 22})
# '<tr><td>11</td><td>22</td></tr>\n'
t.footer()
# '</tbody></table>'
from notalib.hypertext import TablePrinter
t = TablePrinter(['a', 'b'])
list(t.iterator_over([ {'a': 11, 'b': 22} ]))
# ['<table><thead><tr><th>a</th><th>b</th></tr></thead><tbody>',
# '<tr><td>11</td><td>22</td></tr>\n',
# '</tbody></table>']
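Both usage styles can be pictured with this dict-only sketch (the real TablePrinter also supports attrs as columns; this is an illustration, not the library's code):

```python
class TablePrinterSketch:
    """Dict-only sketch of an HTML table printer."""

    def __init__(self, columns):
        self.columns = columns

    def header(self):
        cells = ''.join(f'<th>{c}</th>' for c in self.columns)
        return f'<table><thead><tr>{cells}</tr></thead><tbody>'

    def entry(self, row):
        # Cells come out in column order, regardless of dict order.
        cells = ''.join(f'<td>{row[c]}</td>' for c in self.columns)
        return f'<tr>{cells}</tr>\n'

    def footer(self):
        return '</tbody></table>'

    def iterator_over(self, rows):
        yield self.header()
        for row in rows:
            yield self.entry(row)
        yield self.footer()
```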
18023/2000000 294.8/sec Processing transaction ID#84378473 (2020-01-04)
The CLI progress indicator you've always dreamt of: shows current and total if available, measures current speed, can show your comments for each element, makes sure not to slow down your terminal with frequent updates. See this short demo.
Cheat sheet
## Basic usage
with polosa() as p:
p.tick()
# 467344 201.2/sec
## Specify total number of elements:
with polosa(total=1337) as p:
# 26/1337 1.2/sec
## Print something useful about every element:
p.tick(caption=my_order.time_created)
# 1723910/2000000 319231.2/sec 2020-01-01 15:37:00
Measures time spent on executing your code. Killer feature: it can be used as a reusable context.
timing = Timing()
...
with timing:
do_something()
# That's it, do something with the measurement
log(f'Operation took {timing.result} sec')
If you just want to print measurements into console, there's a shorthand:
timing = Timing(auto_print=True)
...
with timing:
do_something()
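Internally this can be as simple as a context manager around time.perf_counter(); a sketch, not the library's actual code:

```python
from time import perf_counter

class TimingSketch:
    def __init__(self, auto_print=False):
        self.auto_print = auto_print
        self.result = None

    def __enter__(self):
        # Re-entering resets the start time, which is what makes the
        # object reusable across several `with` blocks.
        self._start = perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.result = perf_counter() - self._start
        if self.auto_print:
            print(f'Done in {self.result} sec')
        return False
```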
⚠️ Experimental API. Subject to change. Don't use in production. You've been warned.
Returns the short description and hash of the last commit on the current branch. If called outside a git repo, or if there are no commits in the history, None is returned.
from notalib.git import get_current_commit
commit = get_current_commit()
# Commit(hash='db0e5c1de83f233abef823fd92490727f4ee9d50', short_description='Add timedelta module with convert_timedelta function')
⚠️ Experimental API. Subject to change. Don't use in production. You've been warned.
Returns the last tag's label and hash. If called outside a git repo, or if there are no tags in the history, None is returned.
from notalib.git import get_last_tag
tag = get_last_tag()
# Tag(hash='c4b6e06f57ab4773e2881d286804d4e3141b5195', label='v1.4.0')
Iterates over a byte buffer and yields chunks of a specified size.
with open("<file_path>", mode="rb") as file:
for chunk in file_iterator(file):
...
Replaces all types of null values in a DataFrame with the given value.
import pandas as pd

df = pd.DataFrame({'A': [pd.NA, pd.NaT, 'SomeVal', None]})
new_df = replace_null_objects(df, "Hello, notalib!")
new_df
# A
# 0 Hello, notalib!
# 1 Hello, notalib!
# 2 SomeVal
# 3 Hello, notalib!
Endpoints for easier authentication in APIs. Requires Django REST framework.
Provides endpoints:
GET /xauth/check — returns code 200 if the client is authenticated (or global permissions are set to AllowAny), 403 if not
POST /xauth/auth-post — authenticates a client; accepts two POST parameters, username and password; returns code 200 on success and 403 on failure
POST /xauth/logout — de-authenticates a client
How to use:
1. Add 'notalib.django_xauth' to INSTALLED_APPS.
2. Run manage.py migrate django_xauth (doesn't actually change your DB).
3. Add the routes: path('xauth/', include('notalib.django_xauth.urls')),
Spec-compliant HTTP 303 See Other redirect (Django only provides deprecated 301 and 302).
Spec-compliant HTTP 307 Temporary Redirect (Django only provides deprecated 301 and 302).
Streams all elements of an iterable object as a JSON array using the StreamingHttpResponse class. Unlike DRF's Response class, it can handle arrays of any size.
class SomeViewSet(...):
...
def list(self, request, *args, **kwargs):
...
return stream_json(data)
Streams part of a bytes IO buffer according to the Range header value, or the whole buffer content.
class SomeView(...):
def get(self, request, *args, **kwargs):
with open("<file_path>", mode="rb") as file:
...
return get_stream_bytes_response(file, request, content_type="<file_content_type>")
Deprecated since 2.2.0.
Required packages: clickhouse-sqlalchemy
Requires two django.settings variables:
A wrapper for SQLAlchemy's select with some useful postprocessing options:
execute — no postprocessing
execute_val — returns a single value
execute_list — returns a single column as a list
execute_kv — returns a dict; the first column becomes keys, the second column becomes values
execute_na — returns the number of affected rows
Usage example:
q = Query(
select([ SomeTable.c.notalib ])
)
q.execute_list()
# ["Example", "OOOOO", "my", "defence", ...]
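The postprocessing layer is independent of the SQL itself; the execute_list and execute_kv behaviours applied to raw result rows could be pictured like this (hypothetical helper names, not the library's code):

```python
def rows_to_list(rows):
    # execute_list-style: a single-column result flattened into a list.
    return [row[0] for row in rows]

def rows_to_kv(rows):
    # execute_kv-style: first column becomes keys, second becomes values.
    return {row[0]: row[1] for row in rows}
```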
Returns the number of mutations in progress for the specified table.
get_mutations_in_progress_count("SOME_DATABASE", "SOME_TABLE_IN_DATABASE")
# 5
Waits until all mutations for the given table are complete.
# blah blah blah, ALTER TABLE ... UPDATE ...
wait_result("SOME_DATABASE", "SOME_TABLE_IN_DATABASE")
# UPDATE complete, continue
⚠️ Experimental API. Subject to change. Don't use in production. You've been warned.
Required packages: tablib
⚠️ Experimental API. Subject to change. Don't use in production. You've been warned.
Simplifies the loading of datasets, even custom ones.
from notalib.tablib.shortcuts import load_dataset
with open("report.xlsx", mode='rb') as file:
ds = load_dataset(file)
### Or you can use a custom Dataset class and format
from tablib import Dataset
class MyDataset(Dataset):
pass
with open("report.csv", mode='rb') as file:
ds = load_dataset(file, 'csv', MyDataset)
⚠️ Experimental API. Subject to change. Don't use in production. You've been warned.
Extended tablib.Dataset class, which adds useful data processing methods.
Removes all duplicate rows from the ExtendedDataset object while maintaining the original order.
Removes rows with empty data in specified columns.
Removes rows in which all values are empty.
Applies the function to the values in the specified column.
Replaces empty values with a new one.
Returns a new dataset based on the specified header labels.
Returns a list of Boolean objects based on a given set of header labels.
Calculates the index of the header by its label.
Sets tags to rows and returns list of groups for filtering.
Renames header labels.