
Can't acquire root on common container distros #2043

Open
vtbassmatt opened this issue Jan 10, 2019 · 77 comments

@vtbassmatt
Member

Agent Version and Platform

Version of your agent? 2.x series

OS of the machine running the agent? Linux

Azure DevOps Type and Version

any Azure DevOps account

What's not working?

(copied from docs repo: https://github.com/MicrosoftDocs/vsts-docs/issues/2939) - reported by @njsmith:
The example here demonstrates using the container: feature with the ubuntu:16.04 image. Which is great! This is exactly what I want to do, though with ubuntu:18.10 to test my software on the latest versions of everything (in particular openssl 1.1.1).

And the container: feature is pretty slick: it goes to a lot of trouble to map things into the container in a clever way, and set up a non-root user to run as, while granting that user sudo permissions, etc.

But... the standard images maintained by dockerhub, like ubuntu:16.04 and ubuntu:18.10 or debian:testing, don't have sudo installed. Which means that if you use them with container:, you actually cannot get root inside the container. It's impossible.

I guess the container: feature is useful for folks who are already maintaining some kind of development-environment images for their own use, but this makes it a complete non-starter for my use case, where I just want to use pipelines normally, but test on a different distro. I guess in theory I could maintain my own image that is just the official ubuntu:18.10 + sudo installed, but there's no way maintaining an image like that is worth it for this.

Instead I've had to give up on using container: and am instead writing things like:

      - bash: |
          set -ex
          sudo docker run -v $PWD:/t ubuntu:rolling /bin/bash -c "set -ex; cd /t; apt update; apt install -y python3.7-dev python3-virtualenv git build-essential; python3.7 -m virtualenv -p python3.7 venv; source venv/bin/activate; source ci/ci.sh"

This is workable, but it's really a shame to lose all the slick container: features just because of this.

It would be really nice if the container: feature could somehow make sure it was possible to get root inside the container. For example, there could be a config key to request running as root, either for the whole job or just for a specific script or bash task. Or the container setup phase could mount in a volume containing a suid sudo or gosu. Or anything, really...


The LinuxBrew folks are also facing a similar challenge. See https://github.com/Linuxbrew/brew/issues/746#issuecomment-452873130

@sjackman

In our case the default user of the container is linuxbrew with UID 1000. The Docker image does have sudo installed, and the linuxbrew user has passwordless access to sudo. Pipelines attempts to run useradd -m -u 1001 vsts_azpcontainer, which fails because the linuxbrew user does not have permission to run useradd. Running sudo useradd -m -u 1001 vsts_azpcontainer would succeed. I suggest Pipelines run sudo useradd -m -u 1001 vsts_azpcontainer if the current user is non-root and /usr/bin/sudo exists, and otherwise run useradd -m -u 1001 vsts_azpcontainer.
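
A sketch of that fallback logic, assuming the agent drives container setup through docker exec (the container name and UID here are illustrative, not the agent's actual code):

# Hypothetical agent-side setup step: use sudo only when the container's
# default user is non-root and sudo is actually present.
if [ "$(docker exec mycontainer id -u)" = "0" ] || ! docker exec mycontainer test -x /usr/bin/sudo; then
  docker exec mycontainer useradd -m -u 1001 vsts_azpcontainer
else
  docker exec mycontainer sudo useradd -m -u 1001 vsts_azpcontainer
fi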

@sjackman

Most Docker containers have a default user of root. Our Linuxbrew/brew container is a bit unusual in that regard: the default user is linuxbrew (UID 1000). We do this because Homebrew refuses to run as root.

@vtbassmatt
Member Author

@TingluoHuang thoughts on how we handle this? I wonder if we could try without sudo (as we currently do), and if that fails, try with sudo?

@njsmith

njsmith commented Jan 14, 2019

There are really two issues here that are mostly unrelated.

For the problem the LinuxBrew folks are hitting, where the agent initialization is assuming that plain docker exec will have root, I think the solution is just for the agent initialization code to use docker exec -u 0:0 to override any USER directive in the dockerfile. Docker has root to start off with; there's no point in going root -> some other user -> sudo back to root.
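
For instance, the agent's user-creation step could be forced to run as root regardless of the image's USER directive (container name is illustrative):

# Run the setup command as uid 0 / gid 0, overriding any USER directive.
docker exec -u 0:0 mycontainer useradd -m -u 1001 vsts_azpcontainer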

For the problem I'm having, where there's no way for my user code to get root without sudo, the best solution I can think of is to add a way to mark particular tasks as being executed as root. Then it would be the agent's job to make this happen. For example, it might use sudo when running in a regular environment, and docker exec -u 0:0 or some other user-switching helper when running in an arbitrary container. Usage might look like:

- bash: "apt update && apt install -y some package"
  runAsRoot: true

@iMichka

iMichka commented Jan 16, 2019

On the other hand, why is azure-pipelines even trying to execute anything inside the running container? I think other CI providers do not do anything like that. There could also be an option to disable that "feature"?

@vtbassmatt
Member Author

vtbassmatt commented Jan 16, 2019

The feature is explicitly for running build steps inside a container. To do that, we need to make sure the container can read the files mapped into the workspace and, crucially, that the agent can also read anything created by the build step.

There could also be an option to disable that "feature"?

That exists - we don't make anyone use the feature 😁

@sjackman

Good point. We're running our first task that creates the artifacts in the Linuxbrew/brew Docker image. @vtbassmatt Once the artifacts are created, can we run the task: PublishBuildArtifacts@1 in a different image? And if so, which image do you recommend?
See our usage of task: PublishBuildArtifacts@1 in for example https://github.com/Linuxbrew/homebrew-extra/pull/46/files

@vtbassmatt
Member Author

We run the whole job in one container. We investigated doing per-step container support, and even had a working proof of concept at one point, but didn't pursue finishing it.

@sjackman

Thanks for the explanation. In that case, we do need to run PublishBuildArtifacts@1 in our Linuxbrew/brew Docker image that's being used to create the artifacts.

@danwalmsley

I also am unable to use this feature, because I'm trying to use debootstrap, which can only run as root.

https://unix.stackexchange.com/questions/214828/is-it-possible-to-run-debootstrap-within-a-fakeroot-environment

anyone found a workaround?

@njsmith

njsmith commented Mar 27, 2019

@danwalmsley I'm guessing that any container that's full-featured enough to have debootstrap is also full-featured enough to have sudo :-). So the workaround is to run sudo debootstrap. This issue is specifically about containers that are missing sudo, and you can't install it, because to install sudo you need to use sudo...

@danwalmsley

@njsmith you were right, I was able to use sudo and it worked. thanks

@esteve

esteve commented May 6, 2019

@danwalmsley if it's of any use, I managed to install sudo by running docker exec -u 0 inside the container:

https://github.com/ApexAI/performance_test/blob/master/azure-pipelines.yml#L9-L17

Containers in Azure are configured so that you can run Docker inside them, so I just exported the Docker executable as a volume and then accessed the running container as root via docker exec. The only requirement is to name the container (by passing --name NAME in options), so you can access it via docker exec. The other thing is to not overwrite the sudo config files that the Azure agent generates, though I think it'd be better if the agent wrote them to a separate file in /etc/sudoers.d/ instead of editing /etc/sudoers.
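
Condensed, the trick looks something like this (a sketch based on the linked pipeline, assuming the host's Docker daemon is reachable from inside the job container as described above; image name, container name, and packages are illustrative):

container:
  image: ubuntu:18.10
  options: '--name mycontainer -v /usr/bin/docker:/tmp/docker:ro'

steps:
# From inside the job container, use the mounted docker client (which talks
# to the host daemon) to re-enter this same container as root and install sudo.
- script: /tmp/docker exec -u 0 mycontainer sh -c "apt-get update && apt-get install -y sudo"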

@fatherlinux

Red Hat might consider creating a custom version of Universal Base Image which comes pre-configured for CI/CD pipelines with sudo installed. We are tracking it here, but don't have any plans yet.

Today, I would recommend that people build a small layered image (no matter which Linux distro), tuck it in quay.io or Docker Hub, and pull from there. Maintaining a small layered image shouldn't be that difficult.

Also, is it possible to have the first step in Azure Pipelines just install sudo? (I am assuming not.) Sorry, I have never used Azure Pipelines and don't have time to test, but I am the product manager for UBI, so I find this use case interesting from a base image perspective.

To add a hair more background, there is constant tension when building base images (no matter which Linux distro). If we (the Red Hat UBI team in this case, but the same goes for any base image maintainer/architect) add more packages for every use case on the planet, then base images will eventually be 1GB+. Partners, customers, and cloud providers all need "special" things in their base images, and this use case is similar.

@njsmith

njsmith commented May 23, 2019

Today, I would recommend people to build a small layered image (no matter which Linux distro), tuck it in quay.io or dockerhub, and pull from there.

This is probably where we'll end up eventually if this isn't fixed, but having to create a separate repo and maintain and update custom containers is a lot of extra operational complexity for your average open source project, compared to just writing a few lines of yaml... Suddenly we have to figure out credential handling for our container repository, etc.

Also, is it possible to have the first step in Azure Pipelines just install sudo (I am assuming not).

Azure pipelines already starts by injecting some custom software into the image (a copy of nodejs that it uses to run the pipelines agent, which lets it run further commands inside the agent). If they injected a copy of sudo as well, in some well-known location, that would pretty much solve this for any container image.

@pombredanne

pombredanne commented Jun 10, 2019

@njsmith after some trial and error, I got sudo installed in some containers this way:
https://dev.azure.com/nexB/license-expression/_build/results?buildId=79
https://github.com/nexB/license-expression/blob/3fe3f9359c34b6e6e31e6b3454e450ca8e9e9d6e/azure-pipelines.yml#L80

This is incredibly hackish, as it involves first running a docker command line as root that runs docker-in-docker to install sudo (or something along those lines). Somehow it works, and I am able to get sudo-less containers (such as the official Ubuntu, Debian and CentOS images) to gain sudo access, such that I can then install a Python interpreter and eventually run the tests at last.

It looks like this was first crafted by @esteve for https://github.com/ApexAI/performance_test/blame/6ae8375fa1e3111cb6fa60bdd1d42b9b9f370372/azure-pipelines.yml#L11

There are variations in https://github.com/quamotion/lz4.nativebinaries/blob/4030ff9d97259b05df84c080d494971b62931363/azure-pipelines.yml#L77 and https://github.com/dlemstra/Magick.Native/blob/3c83b2e7d06ded8f052bb5b282c5a79e27b2d6b7/azure-pipelines.yml

@alanwales

Are there any plans to address this issue?

I am trying to run PowerShell Core, which comes with the Docker image 'mcr.microsoft.com/dotnet/core/sdk:3.0.100-preview8-buster', but I am also not able to change permissions on /usr/bin/pwsh to allow execution, as the user 'vsts_azpcontainer' has too limited permissions and I can't switch to root.

I see a lot of complex workarounds above by some motivated and creative people, but fundamentally it should be simple to execute container workloads like this. If I pull the image locally and run it then everything works as it should but when I run it on Azure Devops nothing works. Isn't this somehow breaking the docker ideal that "it works on my machine" should disappear?

The suggestion above 'runAsRoot: true' would be great even to have as an option at the container level for people like me who just want to run some simple dotnet core tasks without spending a lot of time working out which chown, chmod, installing sudo or whatever else needs to be done to do this.

@pombredanne

pombredanne commented Aug 16, 2019

@alanwales if sudo is not installed in a container, there are not many ways to work around this. The simplest is to use your own image with sudo pre-installed (though at least for me, since the main use case is installing extra packages, it would be simpler to have a custom image with the packages I need baked in directly).

That said, there is no reason that this should be so complicated here.

@alanwales

Maybe we have a different interpretation of simple, but to quote the original poster of this issue: “there's no way maintaining an image like that is worth it for this”. I just want to be able to use the officially provided .NET Core SDK image as it was designed to be used and how it works in Docker natively. Now that yaml pipelines are widely available, I can imagine this issue will come up more often, so it would be great to prioritize it higher or close it with an explanation, thanks.

@ruffsl

ruffsl commented Aug 25, 2019

Looking at the Initialize containers step, what is the reason for Azure to add a new sudo user with the same UID/GID as the user running the Azure agent process? Is the intent to retain file permissions for the _work directory created by the agent before mounting it into the container for the respective job? I'm guessing this is so the agent can always clean the mounted _work directory before the next job?

However, if the user running the Azure agent process has access to the Docker daemon, and is given sudo access inside the container, why couldn't the agent just leave the user it execs into the container as whatever the Dockerfile declares, and instead use sudo chown/rm to persist/clean the agent workspace between jobs, as sketched below? Are the Azure cloud agents using a rootless Docker install, such that the host user may not have sudo? I'm not sure if that kind of setup is permissible, e.g. the agent process being able to mount docker.sock but not having sudo access to clean up leftover workspace volumes.
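
In other words, something like this instead of the useradd dance (a sketch of the alternative being proposed; container name, UID/GID, and paths are illustrative):

# Leave the Dockerfile's default user alone; fix ownership as root afterwards.
docker exec -u 0:0 jobcontainer chown -R 1001:117 /__w   # hand results back to the agent's user
docker exec -u 0:0 jobcontainer rm -rf /__w/_temp        # clean the workspace between jobs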

I really wish this related ticket had some closure; then perhaps this one could be resolved by having the agent target a newer version of Docker, or machines with newer Linux kernels:

Add ability to mount volume as user other than root #2259
moby/moby#2259

@ruffsl

ruffsl commented Aug 25, 2019

I think I found the file that relates to most of the issues here.
Looking through the comments also explains a few of the things I described earlier today.

// Check whether we are inside a container.
// Our container feature requires to map working directory from host to the container.
// If we are already inside a container, we will not able to find out the real working direcotry path on the host.

Ah, yep, the Azure agent is not happy about running container jobs while inside a container itself, although I feel like using Docker volumes as opposed to host mounts would help avoid having to resolve absolute paths to mount volumes from the host filesystem. Then the Azure agent could run as root in its own container, so it could clean the workspace left from any job without changing the container's Dockerfile default user. I guess the agent might also want to peek into the Docker image to determine the default user, so it could prepare the permissions on workspace files so that the default user could use them.

Here is where the Temp, Tools, Tasks, and Externals folders are being added as host volumes in the container:

// Create an user with same uid as the agent run as user inside the container.
// All command execute in docker will run as Root by default,
// this will cause the agent on the host machine doesn't have permission to any new file/folder created inside the container.
// So, we create a user account with same UID inside the container and let all docker exec command run as that user.

I'm not sure docker exec runs as root by default; it's just that the default user for most official images is already root, so many derivative images never change this. I think exec just uses the user declared by the last USER directive in the image, same as run commands.
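
This is easy to verify (image and container names are illustrative):

# Dockerfile: the USER directive sets the default user for run and exec alike.
#   FROM ubuntu:18.04
#   RUN useradd -m app
#   USER app
docker build -t user-demo .
docker run -d --name demo user-demo sleep 600
docker exec demo id -un          # prints "app", not "root"
docker exec -u 0:0 demo id -un   # prints "root"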

@ruffsl

ruffsl commented Aug 25, 2019

@TingluoHuang, I guess it's been a while since you added #1005, but do you think it would be possible to achieve the same workspace setup as now by first copying the _work workspace from the host to a standalone Docker volume, and then attaching that volume to the job container, so we can avoid changing the expected default user from the Dockerfile?

https://docs.docker.com/engine/reference/commandline/volume_create/
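
A rough sketch of that flow with the docker CLI (volume name, host path, image, and mount point are all illustrative):

# Seed a named volume with the host's _work directory...
docker volume create azp_work
docker run --rm -v /home/agent/_work:/src:ro -v azp_work:/dst alpine cp -a /src/. /dst/
# ...then attach the volume to the job container instead of a host bind mount.
docker run -d --name jobcontainer -v azp_work:/__w ubuntu:18.04 sleep infinity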

@ruffsl

ruffsl commented Aug 26, 2019

Given the Container Operation Provider tries to reconcile the permissions between the user in the container and the user on the host, a hacky workaround could be to set AGENT_ALLOW_RUNASROOT and launch the agent as the root user on the host. See #1878 for an example.
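
On a self-hosted agent, that looks roughly like this (a sketch; the agent install path is illustrative):

# The agent refuses to run as root unless explicitly allowed.
export AGENT_ALLOW_RUNASROOT=1
cd /opt/azp-agent
sudo -E ./run.sh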

Although this doesn't really help if you're using a cloud-hosted agent rather than a local agent you could escalate on, since the user hosting the agent is vsts (1001:117) on the Azure cloud.

https://dev.azure.com/ApexAI/performance_test/_build/results?buildId=124&view=logs&jobId=85036c68-25ef-5143-bf73-15187243f4ec&taskId=154fe597-c0fe-4278-8bb4-8aab264c0ef8&lineStart=94&lineEnd=95&colStart=1&colEnd=1

@vtbassmatt
Member Author

@jtpetty this is a good one to noodle on as we think about how to evolve container support.

@ruffsl

ruffsl commented Aug 28, 2019

@vtbassmatt, it looks like the beta for GitHub Actions is also based on the Azure agent runner. Would there be an existing issue or repo where I could suggest changes to the workspace strategy? I can understand how having a reserved directory path simplifies the filesystem mounting on the agent backend, but it's a bit opinionated/constraining on where users can make stateful changes in the container. I.e. it adds a lot of boilerplate shuttling things back and forth from the workspace to elsewhere in the filesystem.

I suppose one could nest logs/builds/caches in the workspace, then symlink to where they are expected in the container filesystem, but that doesn't seem as transparent. It'd be cool to see a pattern like CircleCI's, where uploading assets, submitting test results, and caching directories can be done anywhere in the container filesystem, not just in the reserved azure/github workspace folder. microsoft/azure-pipelines-tasks#10870 (comment)

@vtbassmatt
Member Author

@ruffsl yes, the GitHub runner is based on the Azure Pipelines agent code. It's not a slam-dunk for the agent to back-port runner changes, though. The runner doesn't have to be backwards compatible with existing Azure Pipelines customers. I still hope we can evolve our container support to be a little more industry standard.

@coveritytest

coveritytest commented Nov 22, 2022

Sigh, the approach to install sudo does not work anymore on ubuntu-latest since today with a debian:latest container:

/tmp/docker: /lib/x86_64-linux-gnu/libc.so.6: version 'GLIBC_2.32' not found (required by /tmp/docker)
/tmp/docker: /lib/x86_64-linux-gnu/libc.so.6: version 'GLIBC_2.34' not found (required by /tmp/docker)

@kamronbatman

kamronbatman commented Nov 22, 2022

Sigh, the approach to install sudo does not work anymore on ubuntu-latest since today with a debian:latest container:

/tmp/docker: /lib/x86_64-linux-gnu/libc.so.6: version 'GLIBC_2.32' not found (required by /tmp/docker) /tmp/docker: /lib/x86_64-linux-gnu/libc.so.6: version 'GLIBC_2.34' not found (required by /tmp/docker)

I went from testing 12 flavors/versions of Linux and Windows a year ago to just Windows, for this exact reason.
It doesn't work at all, and it seems that, for security reasons or whatever, they can't make it work.

@coveritytest

It did work here until today; I guess something has changed in ubuntu-latest.

@kamronbatman

Yeah, the same thing that killed Fedora and Debian support a few months ago. I suppose I could roll/maintain my own sudo-installed Docker images and host them on Docker Hub, but that is a shitload of work just to build/test my open source project.

If anyone knows of ready-to-run images that are regularly maintained, that would be amazing.

@xkszltl

xkszltl commented Nov 22, 2022

Debian 11 and Ubuntu 20.04 are on libc6 2.31, and Ubuntu 22.04 is on 2.35, so I guess latest has been moved to 22.04?
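
One way to confirm the mismatch (assuming the mounted client is the host's /usr/bin/docker and binutils is installed on the host):

# glibc available inside the job container
docker run --rm debian:latest ldd --version | head -n1
# glibc versions the host's docker client requires
objdump -T /usr/bin/docker | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n3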

@coveritytest

@xkszltl Thanks, you're right. debian:bullseye indeed works with ubuntu-20.04.

@dipanshuchaubey

I am also facing this issue. I simply need to add an external deb repo, which is impossible to do on the hosted agent, and subsequently impossible to do in a Docker container due to this behavior. Thanks to the many suggestions above, this is the solution that worked for me:

  - job: myjob
    pool:
      vmImage: 'ubuntu-20.04'
    container:
      image: debian:bullseye
      options: '--name mycontainer -v /usr/bin/docker:/tmp/docker:ro'
    steps:
    # Hack to install sudo into the container.
    # The sudoers file already exists and subsequently apt-get will wait infinitely for input
    # (despite futile attempts to use `-y` or `yes | apt-get`)
    - script: |
        /tmp/docker exec -t -u root mycontainer mv /etc/sudoers /etc/sudoers.bak
        /tmp/docker exec -t -u root mycontainer apt-get -qq update
        /tmp/docker exec -t -u root mycontainer apt-get -qq install sudo
        /tmp/docker exec -t -u root mycontainer mv /etc/sudoers.bak /etc/sudoers
    - script: |
        sudo ...

This one worked like a charm.
In my case the container was running CentOS, so I had to use yum instead of apt.

Like this:

- script: |
          /tmp/docker exec -t -u root test mv /etc/sudoers /etc/sudoers.bak
          /tmp/docker exec -t -u root test yum install -y sudo
          /tmp/docker exec -t -u root test mv /etc/sudoers.bak /etc/sudoers

@tksrc

tksrc commented Mar 7, 2023

Still a problem in 2023.

@ravindras85

Is there any workaround or solution for this issue?

I am getting the below error in my Azure DevOps pipeline, even when using the workaround suggested by vsalvino:

Try to create a user with UID '1001' inside the container
##[error]Docker-exec executed: useradd -m -u 1001 vsts_azpcontainer; container id: 80b38c59aadfb7ce4b9691e58d6b3a8cd80eaff479dda6d2fb; exit code: 1; command output: useradd: Permission denied., useradd: cannot lock /etc/passwd; try again later.

@GrumpyMeow

It might be interesting to know that it's possible to run tasks on the host...

- script: |
   ....
  target: host

@kamronbatman

It might be interesting to know that it's possible to run tasks on the host...

- script: |
   ....
  target: host

The host is not debian/fedora/alpine/etc.

FWIW, I stopped using azure pipelines for anything except Windows. Good luck everyone!

@ejohnson-dotnet

I have found a workaround that seems to do the job. Add this command-line task at the beginning. It gets the Docker container id, then runs a docker exec command to run apt and install sudo. It runs this on the host, not in the container; the key is the target: host line. The DPkg option is needed because the agent has already edited the /etc/sudoers file and the sudo package ships its own version; the option keeps the existing file.

# Azure container jobs run as non-root user which has sudo permissions, but sudo is missing.  Install sudo package...
- task: CmdLine@2
  target: host
  inputs:
    script: |
      cid=`docker container ls -q`
      echo "ID = $cid"
      /usr/bin/docker exec  $cid su -c "apt -y update && apt -o DPkg::Options::="--force-confold" -y install sudo"

This other way should also work: give a name to the Docker container, and use that name in the command.

pool:
  vmImage: ubuntu-latest

container: 
  image: debian:10
  options: --name mycontainer

steps:
# Azure container jobs run as non-root user which has sudo permissions, but sudo is missing.  Install sudo package...
- task: CmdLine@2
  target: host
  inputs:
    script: |
      /usr/bin/docker exec  mycontainer su -c "apt -y update && apt -o DPkg::Options::="--force-confold" -y install sudo"

@aditigaur4

Hello, this is still a problem for us, and we are trying to use already-available images. Is there any solution planned for this? If there are available images where a script works as root, there is no reason it should not work in Azure Pipelines. Expecting users to maintain images with sudo installed is bizarre.

@GrumpyMeow

With the replies above it is possible to install sudo in the running container. This would then enable you to use sudo later in your scripts.

@Clockwork-Muse

@ejohnson-dotnet
Are you able to provide a more in-depth example? I have this:

stages:
  - stage: build_site
    displayName: Build Site
    dependsOn: []
    jobs:
      - job: build_site
        dependsOn: []
        pool:
          vmImage: ubuntu-latest
        container:
          image: docker.io/swacli/static-web-apps-cli:latest
          options: --name swa_$(Build.BuildId)
        steps:
            # Azure container jobs run as non-root user which has sudo permissions, but sudo is missing.  Install sudo package...
          - bash: |
/usr/bin/docker exec swa_$(Build.BuildId) su -c "apt -y update && apt -o DPkg::Options::="--force-confold" -y install sudo"
            target: host
          - bash: |
              swa --version
              swa build $(Build.Repository.LocalPath)

... but it still fails. From the logs, it appears that the container is pulled and it attempts to set it up completely before running any tasks at all.
It's possible this is more a problem with the particular image I'm using, but then that means there are a number of cases where this isn't a viable workaround.

@ejohnson-dotnet

ejohnson-dotnet commented Aug 23, 2023

This is a full example that works for me.


trigger: none

pool:
  vmImage: ubuntu-latest

# Run this in Debian 10 container
container:
  image: debian:10

steps:

# Azure container jobs run as non-root user which has sudo permissions, but sudo is missing.  Install sudo package...
- task: CmdLine@2
  displayName: Install sudo package
  target: host
  inputs:
    script: |
      cid=`docker container ls -q`
      echo "ID = $cid"
      /usr/bin/docker exec  $cid su -c "apt-get -y update && apt-get -o DPkg::Options::="--force-confold" -y install sudo"

- task: CmdLine@2
  displayName: Install all build tools
  inputs:
    script: |
      sudo apt-get update
      sudo apt-get -y install build-essential git make libssl-dev wget

Put this entire snippet in a .yml file, set it up as a pipeline, and it should work. Yes, it does pull the container and set it up completely before any tasks run, but if you examine the logs, you will see first "Initialize job", then "Initialize containers". In the logs for initializing the container, you can see exactly what Microsoft is doing to set up the Docker container. Compare that with your task to install sudo.

Try using - task: CmdLine@2 instead of - bash. Maybe - bash does not understand the target: host option; target: host is the key to making this work.

@Clockwork-Muse

@ejohnson-dotnet - Ah, it turns out my problem is image-specific.

@Zegorax

Zegorax commented Feb 1, 2024

Dear Microsoft team,

Is there any ETA for this? It's really a bummer that a feature as simple as this one cannot be implemented in 4 years.

Would it be possible to prioritize it?

@Zegorax

Zegorax commented Feb 12, 2024

So, today after losing a lot of time trying to figure out why I couldn't execute a command in a container running in an Azure Pipeline, I came back to this root cause again.

Even when using a Microsoft image, in this case mcr.microsoft.com/mssql-tools, this issue prevents the correct usage of the image. Looking at https://github.com/microsoft/mssql-docker/blob/master/linux/mssql-tools/Dockerfile.debian11, we can see that the PATH is updated during the image build for the root account.

Unfortunately, it means that since the pipeline uses another user, the sqlcmd command is not available to any other user that doesn't pick up what's inside that .bashrc.

In this specific case, there are two solutions. 1: use the full path to the command; it's okay but might not work due to other permission issues. 2: copy the .bashrc to the user. But again, that means for any image you would need to analyze exactly what the original maintainer intended, and replicate it for another user.
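
For reference, solution 1 in pipeline form (a sketch; /opt/mssql-tools/bin is the directory the linked Dockerfile adds to root's PATH, and the server/credential variables are illustrative):

container: mcr.microsoft.com/mssql-tools

steps:
# Call sqlcmd by its absolute path instead of relying on root's PATH from the image.
- script: /opt/mssql-tools/bin/sqlcmd -S $(SQL_SERVER) -U $(SQL_USER) -P $(SQL_PASSWORD) -Q "SELECT 1"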

In the meantime, we will either abandon Azure Pipelines altogether or at least not use the container feature. It's really sad that such basic functionality is not present here.

@vtbassmatt Could you please give us an update as to whether this is actually being taken care of? If yes, could you give us an ETA?

@zioalex

zioalex commented Mar 19, 2024

@msftgits @msftdata can you please explain why sudo is not made available to users?
This is a very simple thing to do, and it is blocking proper usage of container jobs.


This issue has had no activity in 180 days. Please comment if it is not actually stale.

@github-actions github-actions bot added the stale label Sep 15, 2024
@Zegorax

Zegorax commented Sep 15, 2024

Definitely not stale :)

@github-actions github-actions bot removed the stale label Sep 15, 2024
@tpidor

tpidor commented Oct 21, 2024

Not sure if this is relevant here, but FYI: at https://bargsten.org/wissen/non-root-container-jobs-in-azure-pipeline/ they've tried to enhance security by not running the container agent as root.

I tried this in a Dockerfile before switching to a nonroot user:

RUN useradd -m -u 1001 vsts_VSTSContainer && \
    groupadd azure_pipelines_sudo && \
    usermod -a -G azure_pipelines_sudo vsts_VSTSContainer && \
    echo '%azure_pipelines_sudo ALL=(ALL:ALL) NOPASSWD:ALL' >> /etc/sudoers && \
    ln -sf /usr/bin/true /usr/bin/su  && \
    ln -sf /usr/bin/true /usr/sbin/groupadd  && \
    ln -sf /usr/bin/true /usr/sbin/usermod

# switch to nonroot
USER nonroot

It managed to successfully run the "Initialize containers" job task and was able to run some commands as nonroot inside the containerized agent.

@Clockwork-Muse

It managed to successfully run the "Initialize containers" job task and was able to run some commands as nonroot inside the containerized agent.

The usual reason people want sudo/"root" access is for one-off commands on an existing image, most often apt/yum/etc.
The whole point is to use an existing image, without needing to create/upload your own. Writing your own Dockerfile defeats the purpose (besides the maintenance headaches).
