Publish wheels to pypi #62

Closed

vfazio opened this issue Mar 6, 2024 · 55 comments

@vfazio
Contributor

vfazio commented Mar 6, 2024

There are a number of reasons to provide precompiled wheels:

  1. Not all distributions provide compiler support and thus cannot compile sdists
  2. For space constrained or "sealed" systems, distributors/admins may not deploy toolchains due to disk requirements or to prevent people from deploying unapproved binaries
  3. Performance-wise, it's cheaper to unpack a wheel vs downloading the sdist, building, and installing the result.
  4. Build time, whether spent on a maintainer's command line or in a CI pipeline, is a one-time cost and takes far less time and energy than requiring thousands of machines to each build and deploy the binaries (if they even can).
  5. Public wheels also mean people don't have to spin up their own PyPI server to host privately built wheels; they can simply use the upstream package.

Building wheels on the command line or via a pipeline is generally pretty straightforward. There is a utility called cibuildwheel that accomplishes most of this by using containers with very specific combinations of toolchains and utilities to build binaries that work across distributions, generally keyed to the major glibc version available.

From the command line, with an sdist:

vfazio@vfazio4 /tmp/tmp.nYZrtOYERx $ wget https://files.pythonhosted.org/packages/a8/56/730573fe8d03c4d32a31e7182d27317b0cef298c9170b5a2994e2248986f/gpiod-2.1.3.tar.gz
--2024-03-05 08:23:48--  https://files.pythonhosted.org/packages/a8/56/730573fe8d03c4d32a31e7182d27317b0cef298c9170b5a2994e2248986f/gpiod-2.1.3.tar.gz
Resolving files.pythonhosted.org (files.pythonhosted.org)... 199.232.96.223, 2a04:4e42:58::223
Connecting to files.pythonhosted.org (files.pythonhosted.org)|199.232.96.223|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 53092 (52K) [application/octet-stream]
Saving to: ‘gpiod-2.1.3.tar.gz’

gpiod-2.1.3.tar.gz                                 100%[==============================================================================================================>]  51.85K  --.-KB/s    in 0.02s   

2024-03-05 08:23:49 (2.54 MB/s) - ‘gpiod-2.1.3.tar.gz’ saved [53092/53092]

vfazio@vfazio4 /tmp/tmp.nYZrtOYERx $ tar -xf gpiod-2.1.3.tar.gz 
vfazio@vfazio4 /tmp/tmp.nYZrtOYERx $ cd gpiod-2.1.3/
vfazio@vfazio4 /tmp/tmp.nYZrtOYERx/gpiod-2.1.3 $ python3 -m venv venv
vfazio@vfazio4 /tmp/tmp.nYZrtOYERx/gpiod-2.1.3 $ . venv/bin/activate
(venv) vfazio@vfazio4 /tmp/tmp.nYZrtOYERx/gpiod-2.1.3 $ pip install cibuildwheel

</snip>

     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 69.5/69.5 KB 14.5 MB/s eta 0:00:00
Installing collected packages: typing-extensions, tomli, platformdirs, packaging, filelock, certifi, bracex, bashlex, cibuildwheel
Successfully installed bashlex-0.18 bracex-2.4 certifi-2024.2.2 cibuildwheel-2.16.5 filelock-3.13.1 packaging-23.2 platformdirs-4.2.0 tomli-2.0.1 typing-extensions-4.10.0


(venv) vfazio@vfazio4 /tmp/tmp.nYZrtOYERx/gpiod-2.1.3 $ CIBW_BUILD='cp3*' cibuildwheel --platform linux --archs x86_64,aarch64

     _ _       _ _   _       _           _
 ___|_| |_ _ _|_| |_| |_ _ _| |_ ___ ___| |
|  _| | . | | | | | . | | | |   | -_| -_| |
|___|_|___|___|_|_|___|_____|_|_|___|___|_|

cibuildwheel version 2.16.5

Build options:
  platform: linux
  architectures: aarch64, x86_64
  build_selector: 
    build_config: cp3*
    skip_config: 
    requires_python: >=3.9.0
    prerelease_pythons: False
  container_engine: docker
  output_dir: /tmp/tmp.nYZrtOYERx/gpiod-2.1.3/wheelhouse
  package_dir: /tmp/tmp.nYZrtOYERx/gpiod-2.1.3
  test_selector: 
    skip_config:
  before_all: 
  before_build: 
  before_test: 
  build_frontend: None
  build_verbosity: 0
  config_settings: 
  dependency_constraints: pinned
  environment: 
  manylinux_images: 
    x86_64: quay.io/pypa/manylinux2014_x86_64:2024-01-23-12ffabc
    i686: quay.io/pypa/manylinux2014_i686:2024-01-23-12ffabc
    pypy_x86_64: quay.io/pypa/manylinux2014_x86_64:2024-01-23-12ffabc
    aarch64: quay.io/pypa/manylinux2014_aarch64:2024-01-23-12ffabc
    ppc64le: quay.io/pypa/manylinux2014_ppc64le:2024-01-23-12ffabc
    s390x: quay.io/pypa/manylinux2014_s390x:2024-01-23-12ffabc
    pypy_aarch64: quay.io/pypa/manylinux2014_aarch64:2024-01-23-12ffabc
    pypy_i686: quay.io/pypa/manylinux2014_i686:2024-01-23-12ffabc
  musllinux_images: 
    x86_64: quay.io/pypa/musllinux_1_1_x86_64:2024-01-23-12ffabc
    i686: quay.io/pypa/musllinux_1_1_i686:2024-01-23-12ffabc
    aarch64: quay.io/pypa/musllinux_1_1_aarch64:2024-01-23-12ffabc
    ppc64le: quay.io/pypa/musllinux_1_1_ppc64le:2024-01-23-12ffabc
    s390x: quay.io/pypa/musllinux_1_1_s390x:2024-01-23-12ffabc
  repair_command: auditwheel repair -w {dest_dir} {wheel}
  test_command: 
  test_extras: 
  test_requires: 

Cache folder: /home/vfazio/.cache/cibuildwheel

Here we go!

</snip>

16 wheels produced in 10 minutes:
  gpiod-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl    96 kB
  gpiod-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl      95 kB
  gpiod-2.1.3-cp310-cp310-musllinux_1_1_aarch64.whl                           98 kB
  gpiod-2.1.3-cp310-cp310-musllinux_1_1_x86_64.whl                            98 kB
  gpiod-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl    97 kB
  gpiod-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl      97 kB
  gpiod-2.1.3-cp311-cp311-musllinux_1_1_aarch64.whl                          100 kB
  gpiod-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl                           100 kB
  gpiod-2.1.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl    97 kB
  gpiod-2.1.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl      96 kB
  gpiod-2.1.3-cp312-cp312-musllinux_1_1_aarch64.whl                          102 kB
  gpiod-2.1.3-cp312-cp312-musllinux_1_1_x86_64.whl                           102 kB
  gpiod-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl      95 kB
  gpiod-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl        95 kB
  gpiod-2.1.3-cp39-cp39-musllinux_1_1_aarch64.whl                             98 kB
  gpiod-2.1.3-cp39-cp39-musllinux_1_1_x86_64.whl                              97 kB

This build used the default target images for building wheels, which is probably fine as they are not EOL (see https://github.com/pypa/manylinux). The one suggestion I would make is to consider also generating manylinux_2_28 wheels, as the OS used to build manylinux2014 will technically be EOL in June, though it may still receive support from the provided docker images beyond its EOL date.

Having prebuilt wheels will likely also increase package adoption. I'm currently stuck on the old pure-python implementation (and thus the v1 kernel interface) until wheels are provided unless I package and deploy them to my own internal server.

@pdcastro

pdcastro commented Apr 5, 2024

Agreed! I have been pre-building my own wheels too, but instead of running my own PyPI server, I copy the wheel file to the target device (Raspberry Pi) and install it with the command:

pip install -r requirements.txt gpiod.whl

Where the requirements.txt file does not include the gpiod package (it includes other dependencies). I am sharing this in case it helps someone else who comes across this issue.

@brgl, let us know if you would like a contribution (PR) towards automating the building and publishing of wheels using GitHub Actions (free of charge for public repos). I have actually found a short guide in the cibuildwheel documentation: https://cibuildwheel.pypa.io/en/stable/deliver-to-pypi/

@vfazio
Contributor Author

vfazio commented Apr 5, 2024

I've spoken to @brgl about this, and he's generally in favor of doing something but hasn't had time to look into it. Maybe a quick PR to generate wheels is a good first step to show how it'd work. I'm not sure whether other kernel SCM mirrors have set a precedent of including GitLab or GitHub CI files.

Generally, releases are triggered on tags, and the Python bindings follow their own versioning cadence semi-independent of the libgpiod version, so maybe Python-binding-specific tags can live here for CI purposes.

@brgl
Owner

brgl commented Apr 8, 2024

It's evident by now that I don't and most likely will not have the time to address this. After the xz fiasco I'm also reluctant to have other people prepare binary releases of any kind. @pdcastro @vfazio Do you know what it would take (within the libgpiod repo) to make it possible to generate wheels for a number of popular architectures? Let's say x86-64, arm, aarch64, mips?

@vfazio
Contributor Author

vfazio commented Apr 8, 2024

@brgl I don't think it'd be too difficult. The first step is identifying how you're creating the sdists. Once that's determined, building the actual wheels is pretty straightforward. Then we'd need to set up a pipeline stage that uses secrets only you have access to, stored in GitHub, to perform the actual publish via twine or some other tool. This stage would only happen when you tag a new release for the bindings, so that would need to be defined too, but we should prioritize just getting things automatically building for now.

For architectures, wheels via cibuildwheel are limited to x86-64, i686, arm64, ppc64le, and s390x. I'm not sure why they don't have MIPS containers, and ARM is a hellscape due to armv7/armv6 differences; I don't see any tagged manylinux images for those available at a glance.

I think getting x86-64 and arm64 up should be the priorities since RPi and friends are now shipping 64-bit userland for armv8 boards. Piwheels can worry about building 32-bit ARM variants if they want.

My availability varies a lot based on fires at work but I can try to work on this sometime this or next week unless @pdcastro has time they can guarantee to work on it sooner.

@brgl
Owner

brgl commented Apr 8, 2024

I prepare them like this: LIBGPIOD_VERSION=2.1 python3 setup.py sdist, where LIBGPIOD_VERSION specifies which libgpiod tarball is to be downloaded from kernel.org (with its hash verified) and incorporated into the gpiod extension module statically.

@pdcastro

pdcastro commented Apr 8, 2024

Then we'd need to set up a pipeline stage that uses secrets only you have access to and are stored in github to perform the actual publish via twine or some other tool.

I haven’t set up GitHub Actions before, but I was doing some reading. Regarding “secrets” and the tool for publishing:

The pypa publish action (pypa/gh-action-pypi-publish) is used by the GitHub Actions “official” workflow template (python-publish.yml) that is recommended by GitHub when I visit the Actions tab of my fork of the libgpiod repo.

That particular template is several years old and still uses a password attribute, but the current pypa GitHub Action explicitly documents support of Trusted Publishing.

Meanwhile, twine (which I have not personally used before) does not seem to mention Trusted Publishing in its documentation. Instead it mentions username and password.

I would be inclined to try using pypa’s GitHub Action to publish the wheels to PyPI, with Trusted Publishing.
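For what it’s worth, twine does accept PyPI API tokens in place of a real username/password pair; a minimal sketch, with the token value as a placeholder:

export TWINE_USERNAME=__token__
export TWINE_PASSWORD=pypi-XXXX   # project-scoped API token from pypi.org
twine upload dist/*

Trusted Publishing would remove the need to store such a long-lived secret.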

@pdcastro

pdcastro commented Apr 8, 2024

This stage would only happen when you tag a new release for the bindings, so that would need to be defined too, but we should prioritize just getting things automatically building for now.

Let’s talk a bit about this. I gather that a GitHub Action workflow can be manually triggered and given some input parameters (?), and we could start with that, but let’s also talk about what @brgl would be comfortable with as the end result.

@vfazio previously mentioned that libgpiod’s Python binding releases “follow their own versioning cadence semi-independent of the libgpiod version, so maybe Python-binding-specific tags can live here for CI purposes.” Indeed, at the moment I gather that the latest gpiod release on PyPI is version 2.1.3. Not only does this not match the version number (tag) of libgpiod on GitHub and kernel.org (where the latest version is currently 2.1.1), but the latest version of gpiod on PyPI.org also doesn't match the library source code of the latest version of libgpiod on GitHub / kernel.org. With some source code diffing, I found that gpiod v2.1.3 on PyPI matches the source code of libgpiod v2.1.0 on GitHub (not v2.1.1, as I had expected).

I can think of several arguments to justify this arrangement. Perhaps the changes from libgpiod 2.1.0 to 2.1.1 are not relevant to the Python Bindings, and therefore a new release of the Python Bindings on PyPI.org would be “just noise” that might trigger users to update their app unnecessarily. Or there might be a scenario where a tiny change that only affects the PyPI release — say, PyPI metadata about the package itself — warrants a new release of the PyPI package but not a new release of libgpiod.

However, I suspect that making such considerations — deciding for sure whether some code change requires new releases to PyPI or not, what version number to use, and how/when to trigger the new release — is extra work for maintainers, compared to fully automated releases from a single source of truth. Also, in my view, the “principle of least surprise” and the conceptual simplicity of “same version number everywhere” outweigh the benefits of different version numbers.

In short, my opinion is that whenever a new tag or release happened on GitHub, this should automatically cause a new release to PyPI.org, using the same version number. Not just the wheels, but the source as well, README, PyPI metadata (that is stored in the repo)... All the maintainer needs to do is create a tag on kernel.org / GitHub. The rest is automated through GitHub Actions.

That’s my opinion but we don't have to do that. We could keep the versioning as it is, with manual releases to PyPI. It’s up to @brgl.

If we were to keep things as they are, it would be useful to discuss what we envision the trigger to be, in GitHub, to cause wheels to be published to PyPI. GitHub documents the full list of possible events, like a tag being created through GitHub’s web UI or pushed through a git command line.

For example, I thought that a tag following a name pattern like python-wheel-2.1.3 could be associated with a particular libgpiod commit on GitHub: tag python-wheel-2.1.3 would point to the same commit as libgpiod tag v2.1.0. Creating this tag would cause a GitHub action to build the wheels out of libgpiod v2.1.0 and upload them to the PyPI gpiod release version 2.1.3.
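As a concrete sketch of that scheme (the tag name is hypothetical):

git tag python-wheel-2.1.3 v2.1.0^{}   # tag the commit that libgpiod tag v2.1.0 points to
git push origin python-wheel-2.1.3     # pushing the tag would trigger the wheel-building workflow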

What do you guys think?

@pdcastro

pdcastro commented Apr 8, 2024

For architectures, wheels via cibuildwheel are limited to x86-64, i686, arm64, ppc64le, and s390x. ... Piwheels can worry about building 32-bit ARM variants if they want.

Indeed it seems that piwheels has already been producing 32-bit ARM wheels for the gpiod package:
https://www.piwheels.org/project/gpiod/

I haven’t tested those wheels. For a project of mine (Raspberry Pi 2, armv7l), I have been building armv7l wheels using just Docker Desktop, with a docker build --platform 'linux/arm/v7' command and the following Dockerfile:

# https://hub.docker.com/_/python
FROM python:3.12.0-alpine3.18

# Install dependencies
RUN apk add autoconf autoconf-archive automake bash curl g++ git libtool \
  linux-headers make patchelf pkgconfig
RUN pip3 install auditwheel

# libgpiod
WORKDIR /usr/src/libgpiod
ARG BRANCH=v2.0.2
RUN cd /usr/src && git clone --branch $BRANCH --depth 1 --no-tags \
  https://git.kernel.org/pub/scm/libs/libgpiod/libgpiod.git

RUN ./autogen.sh --enable-tools=no --enable-bindings-python && make
RUN cd bindings/python && python setup.py sdist bdist_wheel
# $(uname -m) is either 'armv7l' (32 bits) or 'aarch64' (64 bits)
RUN cd bindings/python/dist && LD_LIBRARY_PATH="/usr/src/libgpiod/lib/.libs" \
  auditwheel repair --plat "musllinux_1_2_$(uname -m)" *.whl

This builds the wheels while building the Docker image. Then I take the wheels out and discard the image. It may be possible to change it such that the wheels are built when the image is executed with docker run --platform 'linux/arm/v7', rather than while the image is built. In this case, it might be doable through GitHub Actions as well. I read about a docker/setup-qemu-action GitHub Action that enables building/running images/containers for arm, but I haven’t tried it yet.
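One way to do the “take the wheels out” step, with illustrative container/image names (the wheelhouse path assumes auditwheel’s default output directory, relative to the Dockerfile above):

docker build --platform linux/arm/v7 -t gpiod-armv7-wheels .
docker create --name wheels-tmp gpiod-armv7-wheels
docker cp wheels-tmp:/usr/src/libgpiod/bindings/python/dist/wheelhouse ./wheelhouse
docker rm wheels-tmp && docker rmi gpiod-armv7-wheels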

“After the xz fiasco I'm also reluctant to have other people prepare binary releases of any kind.”

This might be a reason to prefer building the 32-bit ARM wheels through GitHub Actions as well, if we can make it work through Docker for example, rather than leaving it to piwheels.org. It’s an option.

And yes, we would start with cibuildwheel for the architectures that it supports. There is a chance that cibuildwheel will support building 32-bit ARM wheels in the not-too-distant future too, given pypa/cibuildwheel#1421 and pypa/manylinux#1455.

@pdcastro

pdcastro commented Apr 8, 2024

My availability varies a lot based on fires at work but I can try to work on this sometime this or next week unless @pdcastro has time they can guarantee to work on it sooner.

I can have a go, yes. Advice / suggestions / contributions are also very welcome, of course. 👍

@brgl
Owner

brgl commented Apr 9, 2024

Is there any chance of me being able to do it locally on my laptop with the right toolchains installed?

@brgl
Owner

brgl commented Apr 9, 2024

@vfazio previously mentioned that libgpiod’s Python binding releases “follow their own versioning cadence semi-independent of the libgpiod version, so maybe Python-binding-specific tags can live here for CI purposes.” Indeed, at the moment I gather that the latest gpiod release on PyPI is version 2.1.3. Not only does this not match the version number (tag) of libgpiod on GitHub and kernel.org (where the latest version is currently 2.1.1), but the latest version of gpiod on PyPI.org also doesn't match the library source code of the latest version of libgpiod on GitHub / kernel.org. With some source code diffing, I found that gpiod v2.1.3 on PyPI matches the source code of libgpiod v2.1.0 on GitHub (not v2.1.1, as I had expected).

Yes. I detached the versioning of libgpiod and the Python bindings for various reasons. There's gpiod.api_version in the bindings that allows you to inspect which libgpiod version the bindings were linked against.
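For example, from a shell on a system with the bindings installed:

python3 -c "import gpiod; print(gpiod.api_version)"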

@vfazio
Contributor Author

vfazio commented Apr 9, 2024

Is there any chance of me being able to do it locally on my laptop with the right toolchains installed?

@brgl yes, this is super simple and takes roughly 10 minutes; please see the example in the initial issue description. It does not require you to install any toolchains: the toolchains are in the docker containers used by cibuildwheel.

Unless you're asking if this can be done without cibuildwheel. It automates a lot of the steps and is maintained by pypa, so it has a lot of eyes on its code. In order to generate wheels, you'd want to use the manylinux containers they (pypa) publish, which have the right mix of toolchains and libraries to conform to the PEP definitions for the wheel labels. You'd build the wheel as usual and then run auditwheel, which essentially vendors in runtime libraries and modifies rpath to point to those libraries, if I recall correctly. Cibuildwheel just performs all of this for you and iterates through the defined list of architectures and platforms, making it easier, so it's not technically required.
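A rough manual sketch of what one cibuildwheel step boils down to (the image tag and interpreter path are illustrative):

docker run -it --rm -v "$PWD":/io -w /io quay.io/pypa/manylinux2014_x86_64 bash
# then, inside the container:
/opt/python/cp312-cp312/bin/python -m pip wheel . -w dist/
auditwheel repair -w wheelhouse/ dist/*.whl   # vendors shared libs and retags the wheel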

I understand the abundance of caution, the whole community is on high alert after Jia Tan's shenanigans in xz and everything is getting extra scrutiny.

@brgl
Owner

brgl commented Apr 9, 2024

Yeah, I was thinking about something that'd use toolchains from my computer and no containers. I'm currently trying to finish the RFC version of the DBus API I want to send for review. Then I'm leaving for EOSS and coming back to work on April 22nd. I'll try to give cibuildwheel a spin then.

@vfazio
Contributor Author

vfazio commented Apr 9, 2024

Thanks for being open to this request. It's a lot to take in but would be extremely helpful to us. I appreciate your and Kent's work on gpiod. I haven't had a good chance to contribute to the project yet, but hope to in the future.

@pdcastro

pdcastro commented Apr 9, 2024

I was thinking about something that'd use toolchains from my computer and no containers.

Why “no containers,” if I may ask? Would you also be opposed to a GitHub Actions workflow that involved containers? GitHub Actions jobs run in a virtual machine at a minimum; see Standard GitHub-hosted runners for public repositories. The use of containers by GitHub Actions might be useful to emulate a 32-bit ARM architecture, for example (to be confirmed).

@vfazio
Contributor Author

vfazio commented Apr 12, 2024

I took a couple of hours to work up the CI on my fork. The results can be seen here: https://github.com/vfazio/libgpiod/actions/runs/8654825182

This example workflow triggers on tags named "bindings-python-*" (in case you end up adding other bindings and want to trigger CI builds for those).

It checks out the repo from the tag, moves into the bindings directory, and then triggers an sdist build via python3 -m build -s. The sdist is then handed off to the wheel stage, which uses cibuildwheel to compile the wheels.

It builds wheels for x86_64 and aarch64 for both manylinux2014 and manylinux_2_28. Pip should detect the highest compatible version and use that when wheels are available (trying 2_28 first, then falling back to 2014), and build from the sdist if no compatible wheel is found.
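To confirm the selection order on a given machine, pip can print the platform tags it will accept, most preferred first (pip debug is marked experimental, but it is handy here):

python3 -m pip debug --verbose | grep manylinux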

@pdcastro

@vfazio, I had got that far too: 🙂

Given that it seems to be easier for you:

  • Building wheels on the command line or via a pipeline is generally pretty straightforward.
  • this is super simple and takes roughly 10 minutes
  • I took a couple of hours to work up the CI on my fork

And given that the work seems to have become urgent for you:

  • I can try to work on this sometime this or next week unless @pdcastro has time they can guarantee to work on it sooner.

It may be better that I step out of the way and let you finish it.

@vfazio
Contributor Author

vfazio commented Apr 12, 2024

I wouldn't say "urgent". There's not much I can do since I do not control the pypi repo. But I did have some extra time today after getting off work and wanted to putz around with GitHub Actions a bit. Seemed like a good learning opportunity.

Bart may still want to avoid using CI to have finer control over what is used to build these. I'm guessing there is still some concern about some sort of injection attack.

Some of this can be allayed by verifying the containers and the outputs and sticking with explicit, non-moving tags for GitHub Actions. Auto-bumping utilities like Dependabot will likely be avoided, as will major version tags for GHA, since those actually move silently. I thought about using hashes, and still can, but thought it was a bit overkill.

@pdcastro

There's not much I can do since I do not control the pypi repo.

What I had planned to do next was:

  • Create a temporary PyPI repo to test the publishing of wheels.
  • Implement PyPI publishing in GitHub Actions with the trusted publishing mechanism mentioned in previous comments.
  • Create a PR that if Bartosz merged, the above would “just work” as much as possible.
  • Implement the building of 32-bit armv7l wheels in GitHub Actions using just Docker+QEMU. This is fundamentally what cibuildwheel already does for aarch64, but using custom Alpine (musllinux) and Debian-like (manylinux) Dockerfiles (similar to the one I shared in a previous comment).
  • Create a PR for that too.

However, ending up with competing / overlapping PRs doesn’t sound to me like the best use of time. @vfazio, I now assume that you are going to continue the work. If that’s not the case, let me know.

@pdcastro

Hey @vfazio, have you had a chance to prepare a pull request to move this issue forward? I don’t see related PRs in this repo at this time. If you have found yourself without the spare time, or for any other reason, I could still contribute the PRs mentioned in my previous comment.

@vfazio
Contributor Author

vfazio commented May 26, 2024

Hey @vfazio, have you had a chance to prepare a pull request to move this issue forward? I don’t see related PRs in this repo at this time. If you have found yourself without the spare time, or for any other reason, I could still contribute the PRs mentioned in my previous comment.

It's less about time and more about respecting Bart's wishes regarding how to build and deploy wheels.

My initial branch to do this work was a proof of concept to show that it could work. However, Bart has mentioned he would prefer to build these on his own, presumably to minimize the chance of some injection attack.

If his thoughts have changed on this, I will be glad to make a PR.

I would not, however, plan to build 32-bit ARM wheels, and would stick to the more formalized 64-bit ARM wheels and whatever bitness of x86 he wants to target. I'm not too worried about libc variants.

The biggest problem with 32-bit ARM wheels is the v6/v7 targets. Piwheels has already built 32-bit wheels for RPi, so users that want those can use that index. I think most NXP ARM targets are running 64-bit userland.

@pdcastro

However, Bart has mentioned he would prefer to build these on his own, presumably to minimize the chance of some injection attack.

Ha. You may be referring to the following comments:

After the xz fiasco I'm also reluctant to have other people prepare binary releases of any kind. (link)
Is there any chance of me being able to do it locally on my laptop with the right toolchains installed? (link)
Yeah, I was thinking about something that'd use toolchains from my computer and no containers. (link)

I took these comments to mean that Bartosz wanted to understand / learn the process of building wheels on his laptop, but ultimately the wheels would be built through CI automation such as GitHub Actions.

@brgl, if you were thinking of building the binary wheels on your laptop and uploading them to GitHub on every release, let me ask you to reconsider, for the following reasons:

@vfazio
Contributor Author

vfazio commented May 26, 2024

I'd prefer to keep this simple and friendly and make the barrier to entry as low as possible. If Bartosz doesn't want to use CI, then it can still be done with cibuildwheel as I've demonstrated. My concern is that trying to force the maintainer to do it one way or the other is likely to result in no wheels at all. If accepting that they're not built and published via CI means we at least get wheels, I'm OK with it.

Otherwise, if we can remove the concern from CI and containers, then that is also a path we've both proven out.

I'm here to help however I can, so please let me know if I can be of further assistance.

@brgl
Owner

brgl commented May 27, 2024

@vfazio @pdcastro I don't want to publish the wheels on GitHub (not sure if using GitHub Actions implies that). In fact, I'd prefer to not use GitHub infrastructure at all. I can use cibuildwheel with docker on my local machine, but I will upload the wheels directly to PyPI. If you can make a PR that allows me to easily launch a cibuildwheel pipeline locally, I would gladly test it.

@vfazio
Contributor Author

vfazio commented May 27, 2024

If this is the case, I think the steps I used in my first post are sufficient, as they do not use any GitHub infrastructure and depend solely on cibuildwheel and docker.

While there are tools that allow you to run GitHub Actions locally, there isn't much value in them other than for debugging pipelines to make sure they work before pushing them. If the build process for gpiod were hairier, I could maybe see a use case, but it's really no different than writing a script to make sure the proper steps are taken for a release (download the correct tarball, unpack, set up a venv, install cibuildwheel, run it with a fixed argument list, publish to PyPI); see the sketch below.
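Roughly, such a script might look like this (the sdist URL and version are placeholders, and the publish step is intentionally left out):

#!/bin/bash
set -euo pipefail
version=2.1.3
sdist_url="https://files.pythonhosted.org/..."   # real sdist URL for the release goes here
wget -O "gpiod-${version}.tar.gz" "${sdist_url}"
tar -xf "gpiod-${version}.tar.gz" && cd "gpiod-${version}"
python3 -m venv venv && . venv/bin/activate
pip install cibuildwheel
CIBW_BUILD='cp3*' cibuildwheel --platform linux --archs x86_64,aarch64
# wheels land in ./wheelhouse, ready to upload with twine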

@brgl
Owner

brgl commented May 27, 2024

@vfazio sounds good, please send the patches to the linux-gpio mailing list once you have them.

@vfazio
Contributor Author

vfazio commented May 27, 2024

@brgl I'm not sure there's much of a patch to submit. Do you want me to write and submit some bash script you can run on your machine to perform this build? I can't embed your PyPI creds in it. It may make sense for it to just be a script you have yourself that you use, similar to when you publish sdists?

@pdcastro

It may make sense for it to just be a script you have yourself that you use, similar to when you publish sdists?

If there will be a script, I think it should be committed to the repo. Bartosz had mentioned that he uses the setup.py script to create the sdist:

LIBGPIOD_VERSION=2.1 python3 setup.py sdist (where LIBGPIOD_VERSION specifies which libgpiod tarball is to be downloaded from kernel.org)

Previously I came across the setup.py bdist_wheel command to build wheels, but I understand that in this case we should use cibuildwheel instead. In the same directory as setup.py in this repo, there is also a build_tests.py script. Perhaps a new script called build_wheels.py could be added, which would programmatically call cibuildwheel. Such a build_wheels.py script would accept command line options to publish the wheels to PyPI.

PyPI credentials would be fetched from outside the script of course, e.g. through environment variables.

If the user was supposed to run the cibuildwheel command line directly (rather than cibuildwheel being called programmatically by a script), then this should at least be documented in the README.

Even if a CI won’t be used, my intention is to make the process as automated / automatable as possible.

@vfazio
Contributor Author

vfazio commented May 27, 2024

That script creates the sdist but doesn't publish it. I can write some small script and place it in that directory that generates artifacts for publishing. This would include generating the sdist as well, since that's an input for the wheels.

I'll leave the publishing omitted since that's not currently handled by similar scripts.

There will be some additional dependencies that will need to be met on the machine building the wheels:

  • docker
  • cibuildwheel
  • qemu-user-static with fixed binary registration.

I don't know what host distro you generally use, but I can do some distro-agnostic checks for these before kicking off a build; something like the sketch below.
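Roughly the kind of checks I mean (sketch only; the qemu check just runs a foreign-architecture container and inspects the reported machine):

command -v docker >/dev/null || { echo "docker is required" >&2; exit 1; }
command -v cibuildwheel >/dev/null || { echo "cibuildwheel is required" >&2; exit 1; }
docker run --rm --platform linux/arm64 alpine uname -m   # expect aarch64 if binfmt registration works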

If I have time, I'll try to get this done this week.

@brgl
Owner

brgl commented May 28, 2024

I work on ubuntu.

@vfazio
Contributor Author

vfazio commented May 31, 2024

@brgl I have something that "works" if you want to give it a spin here: vfazio@1e019bd

I'll try to do more testing and submit it to the mailing list in the next few days.

@brgl
Owner

brgl commented May 31, 2024

Thanks, I will give it a try.

@brgl
Owner

brgl commented May 31, 2024

@vfazio this went surprisingly well right from the start:

$ ls dist/
gpiod-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl  gpiod-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl    gpiod-2.1.3-cp312-cp312-musllinux_1_2_aarch64.whl                       gpiod-2.1.3-cp39-cp39-musllinux_1_2_x86_64.whl
gpiod-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl    gpiod-2.1.3-cp311-cp311-musllinux_1_2_aarch64.whl                         gpiod-2.1.3-cp312-cp312-musllinux_1_2_x86_64.whl                        gpiod-2.1.3.tar.gz
gpiod-2.1.3-cp310-cp310-musllinux_1_2_aarch64.whl                         gpiod-2.1.3-cp311-cp311-musllinux_1_2_x86_64.whl                          gpiod-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
gpiod-2.1.3-cp310-cp310-musllinux_1_2_x86_64.whl                          gpiod-2.1.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl  gpiod-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
gpiod-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl  gpiod-2.1.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl    gpiod-2.1.3-cp39-cp39-musllinux_1_2_aarch64.whl

Do these wheels support glibc too?

@brgl
Owner

brgl commented May 31, 2024

@vfazio Let's say we get this upstream and make a new release of the python bindings. I would typically do twine upload -r libgpiod libgpiod-x.y.z.tar.gz --verbose to upload the source package. How would I upload the wheels?

@vfazio
Contributor Author

vfazio commented May 31, 2024

@vfazio this went surprisingly well right from the start:

$ ls dist/
gpiod-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl  gpiod-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl    gpiod-2.1.3-cp312-cp312-musllinux_1_2_aarch64.whl                       gpiod-2.1.3-cp39-cp39-musllinux_1_2_x86_64.whl
gpiod-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl    gpiod-2.1.3-cp311-cp311-musllinux_1_2_aarch64.whl                         gpiod-2.1.3-cp312-cp312-musllinux_1_2_x86_64.whl                        gpiod-2.1.3.tar.gz
gpiod-2.1.3-cp310-cp310-musllinux_1_2_aarch64.whl                         gpiod-2.1.3-cp311-cp311-musllinux_1_2_x86_64.whl                          gpiod-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
gpiod-2.1.3-cp310-cp310-musllinux_1_2_x86_64.whl                          gpiod-2.1.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl  gpiod-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
gpiod-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl  gpiod-2.1.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl    gpiod-2.1.3-cp39-cp39-musllinux_1_2_aarch64.whl

Do these wheels support glibc too?

The manylinux wheels should cover glibc. You can create a virtualenv, pip install <path to correct wheel>, then test the bindings via REPL or script or whatever.

@vfazio
Contributor Author

vfazio commented May 31, 2024

@vfazio Let's say we get this upstream and make a new release of the python bindings. I would typically do twine upload -r libgpiod libgpiod-x.y.z.tar.gz --verbose to upload the source package. How would I upload the wheels?

You would point your twine command to the dist directory and it takes care of the rest. It may be good to test uploading them to the test PyPI server before pushing to the production PyPI server.

twine upload dist/* would upload the sdist and wheels in the directory.
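For that TestPyPI dry run, the packaging guide's flow uses twine's predefined testpypi repository alias:

twine upload --repository testpypi dist/*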

While I have the script building wheels for musl libc installations as well, I do not have a great way of testing those. I don't have a board on Alpine I can conveniently test with... though maybe I can find a bootable live CD somewhere and run it through QEMU.

I plan to test the aarch64 glibc wheels on rpi today.

@brgl
Owner

brgl commented May 31, 2024

And how will the user running pip install gpiod get the right wheel?

@vfazio
Contributor Author

vfazio commented May 31, 2024

And how will the user running pip install gpiod get the right wheel?

This is a function of how pip works. More information here: https://packaging.pypa.io/en/stable/tags.html

When you pip install gpiod, it will search the configured indexes and find the list of published wheels that meet the version constraints; then, within that narrowed list, it will search for compatible wheels based on the host architecture, the Python version, etc., according to the interpreter's supported platform tags. If no wheel is found, it will fall back to the sdist and build the wheel using the toolchain local to the host.

One of the reasons I suggested pushing to the test pypi server was to make sure this worked as expected: https://packaging.python.org/en/latest/guides/using-testpypi/
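The guide boils down to installing against the test index to confirm resolution, e.g. (--no-deps because dependencies may not exist on TestPyPI):

pip install --index-url https://test.pypi.org/simple/ --no-deps gpiod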

@brgl
Owner

brgl commented May 31, 2024

Thank you. I like it, it looks good and works fine. I would like to make it part of libgpiod.

@vfazio
Contributor Author

vfazio commented May 31, 2024

Thank you. I like it, it looks good and works fine. I would like to make it part of libgpiod.

Awesome! I will do a few minor tweaks to comments and then do some verification on the generated wheels to make sure they function as expected (unless you've had a chance to test those out) and then submit the patch.

@vfazio
Contributor Author

vfazio commented May 31, 2024

@brgl I've tested these wheels a bit with CPython 3.12 on glibc- and musl-libc-based systems.

Build steps:

vfazio@vfazio4 ~/development/libgpiod $ docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v `pwd`:/work -w /work -v /tmp/tmp.ImEzOTRb8S:/outputs     4d1b0edbe159be1e2ea98d549af47eeef848a997d0e88a97a69b5de1d111889d     ./bindings/python/generate_pypi_artifacts.sh -v 2.1 -o /outputs -s /work/bindings/python -c


16 wheels produced in 5 minutes:
  gpiod-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl   96 kB
  gpiod-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl     96 kB
  gpiod-2.1.3-cp310-cp310-musllinux_1_2_aarch64.whl                          94 kB
  gpiod-2.1.3-cp310-cp310-musllinux_1_2_x86_64.whl                           93 kB
  gpiod-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl   97 kB
  gpiod-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl     97 kB
  gpiod-2.1.3-cp311-cp311-musllinux_1_2_aarch64.whl                          95 kB
  gpiod-2.1.3-cp311-cp311-musllinux_1_2_x86_64.whl                           95 kB
  gpiod-2.1.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl   97 kB
  gpiod-2.1.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl     97 kB
  gpiod-2.1.3-cp312-cp312-musllinux_1_2_aarch64.whl                          95 kB
  gpiod-2.1.3-cp312-cp312-musllinux_1_2_x86_64.whl                           94 kB
  gpiod-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl     95 kB
  gpiod-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl       95 kB
  gpiod-2.1.3-cp39-cp39-musllinux_1_2_aarch64.whl                            93 kB
  gpiod-2.1.3-cp39-cp39-musllinux_1_2_x86_64.whl                             92 kB

Hashes for generated outputs:
65bf39ace98694ddd1bbf4515567a495ec8ac4d6b82673fd9189af69fddb1ef6  /outputs/dist/gpiod-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
f6fec99c8821a9b991c8e1dabe93f31cb5c1c28e7649e3f9ff392d86794a8c03  /outputs/dist/gpiod-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
0cd0595f42ba89bd322d33f0f31e006143d5eb4c24964b8574e1af023b64d0d3  /outputs/dist/gpiod-2.1.3-cp310-cp310-musllinux_1_2_aarch64.whl
eba8dd08137d80b272952f7d2ff6988b734ca941a92d57b0333d1a996b92c0bc  /outputs/dist/gpiod-2.1.3-cp310-cp310-musllinux_1_2_x86_64.whl
98555fc749316b5ae592a711b1831b8e03419ae94ae9e4433946e34cb5424f1e  /outputs/dist/gpiod-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
92de44aa4a65362eb19d4cdc3b4ce7453d706b8dd93cace13ec9c109c740b60b  /outputs/dist/gpiod-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
37c764401a230ac12cdde85d2dc90521ff956fdd8424a16919e602aed2c6d684  /outputs/dist/gpiod-2.1.3-cp311-cp311-musllinux_1_2_aarch64.whl
0ce4d8a79592b5d0076066d5b142686becdb80f4eaecf86c4fd9e3c2ea3748e2  /outputs/dist/gpiod-2.1.3-cp311-cp311-musllinux_1_2_x86_64.whl
1f8845ffbe1508eb28bc83122072391dffa9c23f9b988c02cb2725f98a233e04  /outputs/dist/gpiod-2.1.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
6dbb6d63ae2e3c19106d114d34bbb40ed83ef780de48fcb4d155646693f4f03b  /outputs/dist/gpiod-2.1.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
e0416db87f5c76460bdaa447767c0a14d02e4983812d604843aac398b8a10482  /outputs/dist/gpiod-2.1.3-cp312-cp312-musllinux_1_2_aarch64.whl
6d9886499eb84d8c0b5522a3ca3e7bd84770bcfe13f30c620beec2ddfc6be4c9  /outputs/dist/gpiod-2.1.3-cp312-cp312-musllinux_1_2_x86_64.whl
0487ec7eb563202066784634017f72ea6ab69bbee79c5d9c99b569c5cd278d6c  /outputs/dist/gpiod-2.1.3-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
12a4d72b3faeb65a5af07a74d1b727b3c7e294d65b3e07d6980455c4cb4ba99c  /outputs/dist/gpiod-2.1.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
382e3f697774d368d52756818e73f1745eb9a6834e60811064306432d41c1568  /outputs/dist/gpiod-2.1.3-cp39-cp39-musllinux_1_2_aarch64.whl
11185628c408440fddf40f03be4a362d9390e705dbf353a8a8d9bb2aaf382f29  /outputs/dist/gpiod-2.1.3-cp39-cp39-musllinux_1_2_x86_64.whl
abf1cd50fee0746cdcb3c340a7e574120fc021d9aba07c8ff25ea96611cc937a  /outputs/dist/gpiod-2.1.3.tar.gz

I booted an RPi CM3+ on an IO board with Alpine Linux:

(venv) localhost:/tmp/tmp.ajBIdH# cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.20.0
PRETTY_NAME="Alpine Linux v3.20"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"

scp file over

vfazio@vfazio4 /tmp/tmp.ImEzOTRb8S $ scp dist/gpiod-2.1.3-cp312-cp312-musllinux_1_2_aarch64.whl [email protected]:~
[email protected]'s password: 
gpiod-2.1.3-cp312-cp312-musllinux_1_2_aarch64 100%   95KB   8.7MB/s   00:00    

Show that the package works by running a couple of examples copied from the libgpiod repo. For this test, I jumpered GPIO 40 to GPIO 41.

(venv) localhost:/tmp/tmp.ajBIdH# python3 -c "import gpiod; print(gpiod.__version__)"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'gpiod'


(venv) localhost:/tmp/tmp.ajBIdH# sha256sum ~/gpiod-2.1.3-cp312-cp312-musllinux_1_2_aarch64.whl 
e0416db87f5c76460bdaa447767c0a14d02e4983812d604843aac398b8a10482  /root/gpiod-2.1.3-cp312-cp312-musllinux_1_2_aarch64.whl


(venv) localhost:/tmp/tmp.ajBIdH# pip install /root/gpiod-2.1.3-cp312-cp312-musllinux_1_2_aarch64.whl
Processing /root/gpiod-2.1.3-cp312-cp312-musllinux_1_2_aarch64.whl
Installing collected packages: gpiod
Successfully installed gpiod-2.1.3


(venv) localhost:/tmp/tmp.ajBIdH# python3 -c "import gpiod; print(gpiod.__version__)"
2.1.3


(venv) localhost:/tmp/tmp.ajBIdH# gpioset -b -m time -s 5 gpiochip0 40=0 
(venv) localhost:/tmp/tmp.ajBIdH# python3 example.py 
41=Value.INACTIVE
(venv) localhost:/tmp/tmp.ajBIdH# python3 example.py 
41=Value.INACTIVE
(venv) localhost:/tmp/tmp.ajBIdH# gpioset -b -m time -s 5 gpiochip0 40=1
(venv) localhost:/tmp/tmp.ajBIdH# python3 example.py 
41=Value.ACTIVE
(venv) localhost:/tmp/tmp.ajBIdH# python3 example.py 
41=Value.ACTIVE

# Test edge events by toggling values on another terminal
(venv) localhost:/tmp/tmp.ajBIdH# python3 example2.py 
line: 41  type: Rising   event #1
line: 41  type: Falling  event #2

I reused the same setup with a custom Debian-based system as well to exercise the manylinux/glibc wheel:

root@rpi-87fa00:~# sha256sum gpiod-2.1.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
1f8845ffbe1508eb28bc83122072391dffa9c23f9b988c02cb2725f98a233e04  gpiod-2.1.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl


(venv) root@rpi-87fa00:/var/tmp/tmp.6jStcnqofv# python3 -c "import gpiod; print(gpiod.__version__)"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'gpiod'
(venv) root@rpi-87fa00:/var/tmp/tmp.6jStcnqofv# pip install /root/gpiod-2.1.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl 
Looking in indexes: https://nexus.xes-mad.com/repository/upstream-pypi/simple, https://nexus.xes-mad.com/repository/xes-pypi/simple
Processing /root/gpiod-2.1.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
Installing collected packages: gpiod
Successfully installed gpiod-2.1.3
(venv) root@rpi-87fa00:/var/tmp/tmp.6jStcnqofv# python3 -c "import gpiod; print(gpiod.__version__)"
2.1.3

# pin toggled from another terminal
(venv) root@rpi-87fa00:/var/tmp/tmp.6jStcnqofv# python example2.py
line: 41  type: Rising   event #1
line: 41  type: Falling  event #2
line: 41  type: Rising   event #3
line: 41  type: Falling  event #4
line: 41  type: Rising   event #5
line: 41  type: Falling  event #6
line: 41  type: Rising   event #7


(venv) root@rpi-87fa00:/var/tmp/tmp.6jStcnqofv# gpioset -t 0 -c 0 40=active
(venv) root@rpi-87fa00:/var/tmp/tmp.6jStcnqofv# python example.py
41=Value.ACTIVE
(venv) root@rpi-87fa00:/var/tmp/tmp.6jStcnqofv# gpioset -t 0 -c 0 40=inactive
(venv) root@rpi-87fa00:/var/tmp/tmp.6jStcnqofv# python example.py
41=Value.INACTIVE

@brgl
Owner

brgl commented May 31, 2024

Awesome, LGTM. I planned on doing a new python release anyway.

@vfazio
Contributor Author

vfazio commented May 31, 2024

I submitted this and I see it reflected in a few mailing list mirrors (https://www.spinics.net/lists/linux-gpio/msg100124.html), but I don't see it in the Linaro patchwork instance. I apparently forgot to CC myself, so I wasn't sure if it actually went through.

@vfazio
Contributor Author

vfazio commented Jun 6, 2024

v2 submitted: https://www.spinics.net/lists/linux-gpio/msg100346.html. This time I was less dumb in the submission, though I was dumb in my reply because it was HTML-formatted and bounced off the list. I didn't necessarily want to resend it and spam you, however.

@vfazio
Contributor Author

vfazio commented Jun 11, 2024

@brgl are there guidelines for contributions? I looked at CONTRIBUTING.md and didn't see any guidelines on REUSE lint or ShellCheck or the like. Is there a list of tools that should be installed and run for each submission?

@brgl
Owner

brgl commented Jun 11, 2024

I need to add these points to the README, thanks for bringing this up.

@vfazio
Contributor Author

vfazio commented Jun 11, 2024

I have no idea what license to use for this script, I assume GPLv2 is fine to stay in line with the rest of the repo? Looks like that's what gpio-tools-test.bash uses.

@brgl
Owner

brgl commented Jun 11, 2024

Yes, GPL-2.0 works fine.

@vfazio
Contributor Author

vfazio commented Jun 12, 2024

Thanks for the direction! I submitted: https://www.spinics.net/lists/linux-gpio/msg100479.html

@brgl
Owner

brgl commented Jun 12, 2024

Thanks, I applied it. So next step: new Python release and this time I'll upload the wheels next to the source dist.

@vfazio
Contributor Author

vfazio commented Jun 12, 2024

I'll try to test the wheels once I see them published.

@vfazio
Contributor Author

vfazio commented Jun 18, 2024

gpiod 2.2.0 seems to work.

@brgl
Owner

brgl commented Jun 20, 2024

Can we close this now?

@vfazio
Contributor Author

vfazio commented Jun 20, 2024

Closing as resolved as part of the 2.2.0 release. Hopefully future releases follow the same pattern.

If other platforms need support, a separate issue can be created.

vfazio closed this as completed Jun 20, 2024
@brgl
Owner

brgl commented Jun 20, 2024

Yes, definitely. Now that this has been exercised, I'll just follow the same procedure every time.
