api = GhApi()
Pagination
Paged operations
Some GitHub API operations return their results one page at a time. For instance, there are many thousands of gists, but if we call list_public
we only see the first 30:
gists = api.gists.list_public()
len(gists)
30
That’s because this operation takes two optional parameters, per_page and page:
api.gists.list_public
gists.list_public(since, per_page, page): List public gists
This is a common pattern for list_* operations in the GitHub API. One way to get more results is to increase per_page:
len(api.gists.list_public(per_page=100))
100
However, per_page has a maximum of 100, so if you want more, you’ll have to pass page= to get pages beyond the first.
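For instance, here’s how you might grab the second page of 100 manually (a sketch, assuming the api object created above and a live connection to GitHub; page numbering starts at 1):
page2 = api.gists.list_public(per_page=100, page=2)  # second block of 100 gists
len(page2)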
An easy way to iterate through all pages is to use paged, which returns a generator.
paged
paged (oper, *args, per_page=30, max_pages=9999, **kwargs)
Convert operation oper(*args,**kwargs)
into an iterator
We’ll demonstrate this using the repos.list_for_org method:
api.repos.list_for_org
repos.list_for_org(org, type, sort, direction, per_page, page): List organization repositories
repos = api.repos.list_for_org(org='fastai')
len(repos),repos[0].name
(30, 'docs')
To convert this operation into a Python iterator, pass the operation itself, along with any arguments (either keyword or positional), to paged. Note how the function and arguments are passed separately:
repos = paged(api.repos.list_for_org, org='fastai')
Note that the object returned from paged is a generator. You can iterate through the repos generator in the normal way:
for page in repos: print(len(page), page[0].name)
30 docs
30 fastscript
25 wireguard-fast
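Positional arguments can be passed to paged in the same way, and the max_pages parameter shown in the signature above can be used to cap how many pages the generator will request. A sketch (not run here) that should stop after at most two pages:
# 'fastai' is passed positionally; max_pages=2 limits the generator to two requests
for page in paged(api.repos.list_for_org, 'fastai', max_pages=2):
    print(len(page), page[0].name)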
Link header (RFC 5988)
GitHub tells us how many pages are available using the link header. Unfortunately the pypi LinkHeader library appears to no longer be maintained, so we’ve put a refactored version of it here.
parse_link_hdr
parse_link_hdr (header)
Parse an RFC 5988 link header, returning a dict from rels to a tuple of URL and attrs dict
Here’s an example of a link header with just one link:
parse_link_hdr('<http://example.com>; rel="foo bar"; type=text/html')
{'foo bar': ('http://example.com', {'type': 'text/html'})}
links = parse_link_hdr('<http://example.com>; rel="foo bar"; type=text/html')
link = links['foo bar']
test_eq(link[0], 'http://example.com')
test_eq(link[1]['type'], 'text/html')
Let’s test it on the headers we received on our last call to GitHub. You can access the last call’s headers in recv_hdrs:
api.recv_hdrs['Link']
'<https://api.github.com/organizations/20547620/repos?per_page=30&page=4>; rel="prev", <https://api.github.com/organizations/20547620/repos?per_page=30&page=4>; rel="last", <https://api.github.com/organizations/20547620/repos?per_page=30&page=1>; rel="first"'
Here’s what happens when we parse that:
parse_link_hdr(api.recv_hdrs['Link'])
{'prev': ('https://api.github.com/organizations/20547620/repos?per_page=30&page=4',
{}),
'last': ('https://api.github.com/organizations/20547620/repos?per_page=30&page=4',
{}),
'first': ('https://api.github.com/organizations/20547620/repos?per_page=30&page=1',
{})}
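Each value in that dict is a URL plus any extra attributes, so you could pull the final page number out of the 'last' link yourself. The sketch below uses urllib from the standard library, and is roughly what GhApi.last_page, covered in the next section, does for you:
from urllib.parse import urlsplit, parse_qs
links = parse_link_hdr(api.recv_hdrs['Link'])
last_url = links['last'][0]                    # URL portion of the 'last' link
parse_qs(urlsplit(last_url).query)['page'][0]  # page number as a string, here '4'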
Getting pages in parallel
Rather than requesting each page one at a time, we can save some time by getting all the pages we need in parallel.
GhApi.last_page
GhApi.last_page ()
Parse RFC 5988 link header from most recent operation, and extract the last page
To help us know the number of pages needed, we can use last_page, which uses the link header we just looked at to grab the last page from GitHub.
We will need multiple pages to get all the repos in the github
organization, even if we get 100 at a time:
api.repos.list_for_org('github', per_page=100)
api.last_page()
4
pages
pages (oper, n_pages, *args, n_workers=None, per_page=100, **kwargs)
Get n_pages
pages from oper(*args,**kwargs)
pages by default passes per_page=100 to the operation.
Let’s look at some examples. To get all the pages for the repos in the github
organization in parallel, we can use this:
gh_repos = pages(api.repos.list_for_org, api.last_page(), 'github').concat()
len(gh_repos)
367
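If you’d rather limit how many requests run at once, the n_workers parameter in the signature above looks like the knob for that. A sketch (not run here), assuming n_workers simply caps the number of concurrent workers:
# same call as above, using the 4 pages reported by last_page,
# but with at most two concurrent requests (assumption about n_workers)
gh_repos = pages(api.repos.list_for_org, 4, 'github', n_workers=2).concat()
len(gh_repos)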
If you already know ahead of time the number of pages required, there’s no need to call last_page. For instance, the GitHub docs specify that we can get at most 3000 gists:
gists = pages(api.gists.list_public, 30).concat()
len(gists)
3000
GitHub ignores the per_page parameter for some API calls, such as listing public events, which it limits to 8 pages of 30 items per page. To retrieve all pages in these cases, you need to pass the lower per-page limit explicitly:
api.activity.list_public_events()
api.last_page()
8
evts = pages(api.activity.list_public_events, api.last_page(), per_page=30).concat()
len(evts)
232