
Mounting multiple EFS in CodeBuild projects has been failing for a few days #114

Open
martin31821 opened this issue Dec 14, 2021 · 4 comments

@martin31821

We have been successfully using CodeBuild with multiple EFS mounts in the past. For roughly the past three days, mounting more than one EFS appears to fail silently.

Within the AWS console the EFS file systems all report as available, and CodeBuild also lists them as mounted. However, running an ls on the mount points within the CodeBuild instance itself reveals that only the first of these mounts shows up.

Removing the successfully mounted EFS from the CodeBuild configuration and rerunning the project will then mount the next EFS (in alphabetical order, I assume); however, mounting any additional EFS volumes will still fail.

We tried recreating the CodeBuild project and one of the offending volumes, without any luck so far.

I think it might be related to #112, but I'm not sure, since we hit the issue consistently.
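
A rough sketch of how we run this check (everything here is illustrative: the project name, role, image, and mount paths are placeholders, and the vpc_config plus file_system_locations blocks from the real project are omitted), using a throwaway project whose buildspec simply lists what is actually mounted:

resource "aws_codebuild_project" "mount_check" {
  # Placeholder name and role; vpc_config and the file_system_locations
  # blocks from the real project must be added for the EFS mounts to apply.
  name         = "efs-mount-check"
  service_role = var.codebuild_role_arn

  artifacts {
    type = "NO_ARTIFACTS"
  }

  environment {
    compute_type = "BUILD_GENERAL1_SMALL"
    image        = "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
    type         = "LINUX_CONTAINER"
  }

  source {
    type      = "NO_SOURCE"
    buildspec = <<-EOT
      version: 0.2
      phases:
        build:
          commands:
            # every configured EFS should appear here, but only the first one does
            - mount | grep nfs
            - ls /mnt/efs_one /mnt/efs_two
      EOT
  }
}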

@Cappuccinuo
Contributor

Thanks for the feedback. We'll have someone take a look and get back to you when we have more information.

@Cappuccinuo
Contributor

Are you mounting the file system with a TLS mount or just a plain mount? Also, can you explain how the mount is executed in your project (either code or a description is fine)?

@Jasper-Ben

Jasper-Ben commented Apr 6, 2022

👋 @Cappuccinuo
I am working with @martin31821. Our failing Terraform code looked like this (last tested Dec 15, 2021, after which we consolidated into a single EFS to circumvent this bug; a sketch of that consolidation is at the end of this comment):

codebuild.tf:

resource "aws_codebuild_project" "this" {

...

    file_system_locations {
    identifier    = "SSTATE_DIR"
    location      = "${var.sstate_fs.dns_name}:/"
    mount_point   = "/mnt/yocto_cache/sstate_cache"
    mount_options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
    type          = "EFS"
  }

  file_system_locations {
    identifier    = "SSTATE_DIR_RELEASE"
    location      = "${var.sstate_release_fs.dns_name}:/"
    mount_point   = "/mnt/yocto_cache/sstate_release_cache"
    mount_options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
    type          = "EFS"
  }

  file_system_locations {
    identifier    = "DL_DIR"
    location      = "${var.dldir_fs.dns_name}:/"
    mount_point   = "/mnt/yocto_cache/dl_dir"
    mount_options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
    type          = "EFS"
  }
}

efs.tf:

resource "aws_efs_file_system" "sstate_fs" {
  encrypted        = true
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"
}

resource "aws_efs_file_system" "sstate_release_fs" {
  encrypted        = true
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"
}

resource "aws_efs_file_system" "dldir_fs" {
  encrypted        = true
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"
}

resource "aws_efs_mount_target" "sstate_fs_a" {
  file_system_id  = aws_efs_file_system.sstate_fs.id
  subnet_id       = module.vpc.vpc_private_subnets[0]
  security_groups = [aws_security_group.security_group.id]
}

resource "aws_efs_mount_target" "sstate_release_fs_a" {
  file_system_id  = aws_efs_file_system.sstate_release_fs.id
  subnet_id       = module.vpc.vpc_private_subnets[0]
  security_groups = [aws_security_group.security_group.id]
}

resource "aws_efs_mount_target" "dldir_fs_a" {
  file_system_id  = aws_efs_file_system.dldir_fs.id
  subnet_id       = module.vpc.vpc_private_subnets[0]
  security_groups = [aws_security_group.security_group.id]
}

security_group.tf:

resource "aws_security_group" "security_group" {
  name        = "${var.prefix}-codebuild-sg"
  description = "${var.prefix}-codebuild-sg"
  vpc_id      = module.vpc.vpc_id

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 2049
    to_port     = 2049
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Please don't hesitate to ask for further details 🙂
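
For reference, a rough sketch of the single-EFS consolidation we ended up with (resource names are illustrative and details differ in our real code): the three caches become subdirectories of one file system, so the project only needs a single mount.

resource "aws_efs_file_system" "yocto_cache" {
  encrypted        = true
  performance_mode = "generalPurpose"
  throughput_mode  = "bursting"
}

resource "aws_efs_mount_target" "yocto_cache_a" {
  file_system_id  = aws_efs_file_system.yocto_cache.id
  subnet_id       = module.vpc.vpc_private_subnets[0]
  security_groups = [aws_security_group.security_group.id]
}

# In aws_codebuild_project.this the three file_system_locations blocks
# collapse into one; sstate_cache, sstate_release_cache and dl_dir then
# live as subdirectories below the single mount point:
#
#   file_system_locations {
#     identifier    = "YOCTO_CACHE"
#     location      = "${aws_efs_file_system.yocto_cache.dns_name}:/"
#     mount_point   = "/mnt/yocto_cache"
#     mount_options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
#     type          = "EFS"
#   }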

@Jasper-Ben

P.S.: the Terraform AWS provider version was 3.69.0.

@RyanStan added the bug label May 15, 2023