
Snap errors in LXD due to bad AppArmor profiles
Closed, Wontfix (Public)


I have an installation of LXD on Solus done with Snap (version 4.0.0 of LXD). Recently, I noticed I was unable to start LXD services (snap.lxd.daemon.unix.socket and snap.lxd.daemon.service) and noted the following errors in my logs:

AVC apparmor="DENIED" operation="open" profile="snap.lxd.lxc" name="/sys/kernel/mm/transparent_hugepage/hpage_pmd_size" pid=74998 comm="snap-exec" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
AVC apparmor="DENIED" operation="exec" profile="snap.lxd.lxc" name="/usr/bin/aa-exec" pid=74998 comm="lxc" requested_mask="x" denied_mask="x" fsuid=0 ouid=0
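For anyone trying to confirm they are hitting the same problem, a quick way to pull these denials out of the logs is sketched below (this assumes systemd-journald; the grep patterns are simply what matches the lines above):

```shell
# Sketch: list today's kernel-log AppArmor denials related to the LXD snap.
# Adjust the filters as needed for your setup.
journalctl -k --since today | grep 'apparmor="DENIED"' | grep 'snap.lxd'
```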

I've been able to solve this by manually editing the AppArmor profiles for snap.lxd.daemon and snap.lxd.lxc (located in /var/lib/snapd/apparmor/profiles/) and replacing the line '/usr/sbin/aa-exec ux,' with '/usr/bin/aa-exec ux,'.

After reloading the profiles (apparmor_parser -r /var/lib/snapd/apparmor/profiles/* -v), both services start fine.
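For reference, the whole workaround can be sketched as a few shell commands. The sed invocation is my phrasing of the manual edit described above, and the profile file names assume the default snapd layout under /var/lib/snapd/apparmor/profiles/:

```shell
# Point the profiles at Solus's aa-exec location (/usr/bin instead of /usr/sbin).
sudo sed -i 's|/usr/sbin/aa-exec|/usr/bin/aa-exec|' \
    /var/lib/snapd/apparmor/profiles/snap.lxd.daemon \
    /var/lib/snapd/apparmor/profiles/snap.lxd.lxc

# Reload the profiles so the kernel picks up the change.
sudo apparmor_parser -r /var/lib/snapd/apparmor/profiles/* -v

# Restart the LXD units.
sudo systemctl restart snap.lxd.daemon.unix.socket snap.lxd.daemon.service
```

Note that snapd may regenerate these profiles on refresh, so the edit might need to be reapplied after a snap update.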

In Solus, aa-exec is located in /usr/bin, and the AppArmor profile seems to work fine with version 3.x of the snap (I have another workstation running 3.20 without issue), but version 4.0 seems to be less tolerant of this error.

Event Timeline

clauded created this task. Apr 20 2020, 4:21 PM
DataDrake closed this task as Wontfix. Apr 20 2020, 10:43 PM
DataDrake edited projects, added Upstream Issue; removed Lacks Project.
DataDrake added a subscriber: DataDrake.

We don't provide the AppArmor profiles used by Snaps. If there is something wrong with them, you'll need to take it up with the folks packaging them.

Sorry to dig up closed tickets, but I thought it was worth mentioning, as I have also run into this bug.
As per the discussion on GitHub, it appears that this issue isn't caused by bad AppArmor profiles within the snaps, but rather by a change in snapd itself, described in the following PR:

This change is in snapd 2.43 but is not in 2.39.

I also noticed the following ticket here for snapd.

So I understand there is a bit of a clash with some snap requirements on Solus, and it might not currently be feasible to move to 2.43, but perhaps this bug can be reclassified [and reopened], as it doesn't seem to be a bug in the upstream snap packages. Perhaps there is a workaround we can get into the Solus version of 2.39?

There is no workaround. Newer versions of Snap require newer versions of systemd. Until we are able to update systemd to 245, it's a moot point.

JoshStrobl added a subscriber: JoshStrobl. Edited May 19 2020, 6:17 PM

No, this has nothing to do with systemd 245. The reason we reverted from 245 was due to EFI changes for default boot loaders (see here; those changes would cause breakage during the booting process, since old, outdated kernels with potentially missing modules would be selected), not anything related to the cgroup hierarchy. That already exists and is something we already set to legacy via the kernel's cmdline. This has to do with the fact that multiple pieces of software do not support cgroups v2, as highlighted by T8609. None of the issues referenced in T8609 have been resolved, and as such we cannot drop our use of legacy cgroups in favor of cgroups v2 without imposing unnecessary and undesired breakage and configuration requirements on users.
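For anyone curious which hierarchy their own system is actually running, a quick generic check (standard kernel and systemd interfaces, nothing Solus-specific) looks like:

```shell
# Sketch: check which cgroup hierarchy is active.
# "cgroup2fs" means the unified v2 hierarchy; "tmpfs" means a legacy/hybrid
# v1 layout with per-controller mounts underneath.
stat -fc %T /sys/fs/cgroup

# The kernel cmdline shows whether legacy mode was forced at boot
# (look for systemd.unified_cgroup_hierarchy=0).
cat /proc/cmdline
```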

Sorry. I was 99% sure v2 was required by 245 when I wrote that. Had completely forgotten about EFI changes.

Thanks for the updates!

I have no problem with this at all. The workaround from @clauded is very simple and works. I just wanted it documented for anyone else having the issue: until snapd gets updated on Solus [pending resolution of the aforementioned issues], this workaround will be needed, and the issue isn't something that needs fixing in the snap packages themselves.

Would we be able to change the status from 'Closed, Wontfix' to something else? It is still an outstanding issue that is fixed upstream, just not yet able to be applied here.