
Possibility to mount multiple EFS volumes in different regions? #101

Open
bangau1 opened this issue May 28, 2021 · 8 comments · May be fixed by #171
Comments


bangau1 commented May 28, 2021

Hi,

I have a use case for EFS in my EKS cluster where the EFS volumes to be mounted can be in different regions (not all in the same region). I installed the EFS CSI driver, but it turns out it relies on efs-utils.

According to the efs-utils documentation here, the only way to set the EFS region is to change the region config in efs-utils.conf. This limitation (a single shared region configuration) doesn't fit my use case: I want to be able to mount EFS volumes from different regions while sharing the same efs-csi-driver installation.

Is it possible to override the region configuration using an env var or a CLI flag?
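
For illustration, something like the following would cover my use case (a hypothetical region mount option; it doesn't exist in efs-utils today):

    sudo mount -t efs -o tls,region=us-west-2 fs-xxxx /mnt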

bangau1 commented May 28, 2021

Upon checking the relevant code that decides the region value here:

def get_target_region(config):
    def _fatal_error(message):
        fatal_error('Error retrieving region. Please set the "region" parameter '
                    'in the efs-utils configuration file.', message)

    try:
        return config.get(CONFIG_SECTION, 'region')
    except NoOptionError:
        pass

    try:
        return get_region_from_instance_metadata(config)
    except Exception as e:
        metadata_exception = e
        logging.warning('Region not found in config file and metadata service call failed, falling back '
                        'to legacy "dns_name_format" check')

    try:
        region = get_region_from_legacy_dns_format(config)
        sys.stdout.write('Warning: region obtained from "dns_name_format" field. Please set the "region" '
                         'parameter in the efs-utils configuration file.')
        return region
    except Exception:
        logging.warning('Legacy check for region in "dns_name_format" failed')
        _fatal_error(metadata_exception)

I see that it's possible to retrieve the region directly from the EFS FQDN passed to the mount command. I'm wondering why the region from the DNS format gets the lowest priority. From a dev/user perspective, a passed FQDN (with the region embedded in it) is more explicit than the region from the shared efs-utils.conf file or an AWS EC2 metadata call, so it should be the first priority.
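
For example (IDs made up), when mounting by the full DNS name the region is already right there in the string the user passed, assuming the mount helper accepts the FQDN form:

    sudo mount -t efs -o tls fs-deadbeef.efs.us-east-1.amazonaws.com /mnt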

Wdyt @Cappuccinuo ?

bangau1 commented May 28, 2021

Also, if I understand the code correctly, I noticed a weird (wrong?) implementation of getting the target region from the legacy DNS format:

def get_region_from_legacy_dns_format(config):
    """
    For backwards compatibility check dns_name_format to obtain the target region. This functionality
    should only be used if region is not present in the config file and metadata calls fail.
    """
    dns_name_format = config.get(CONFIG_SECTION, 'dns_name_format')

    if '{region}' not in dns_name_format:
        split_dns_name_format = dns_name_format.split('.')

        if '{dns_name_suffix}' in dns_name_format:
            return split_dns_name_format[-2]
        elif 'amazonaws.com' in dns_name_format:
            return split_dns_name_format[-3]

    raise Exception('Region not found in dns_name_format')

There is no case handling when the {region} placeholder is still present in the DNS format. I believe we should refactor get_target_region(config) into get_target_region(config, other_required_params), for example get_target_region(config, fs_id), where the user can pass an fs_id parameter in FQDN format so that we can evaluate the DNS format's regex capture group. Wdyt @Cappuccinuo? If this is the right approach, I don't mind submitting a PR.
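
Here is a rough sketch of the idea (not actual efs-utils code; it assumes mount_efs's existing re and logging imports and the CONFIG_SECTION constant, and get_region_from_dns_name is a hypothetical helper):

def get_region_from_dns_name(config, dns_name):
    """Hypothetical helper: derive the region from a fully-qualified DNS name
    by turning dns_name_format into a regex with a capture group for {region}."""
    dns_name_format = config.get(CONFIG_SECTION, 'dns_name_format')
    # '{az}.{fs_id}.efs.{region}.{dns_name_suffix}' becomes
    # r'.+?\..+?\.efs\.(?P<region>[a-z0-9-]+)\..+?'
    pattern = dns_name_format.replace('.', r'\.')
    pattern = pattern.replace('{region}', r'(?P<region>[a-z0-9-]+)')
    pattern = re.sub(r'\{[^}]+\}', '.+?', pattern)
    match = re.match(pattern + '$', dns_name)
    if match:
        return match.group('region')
    # A real implementation would also need to handle DNS names that omit
    # the optional {az} part.
    raise Exception('Region not found in DNS name "%s"' % dns_name)

get_target_region(config, dns_name=None) would then try this helper first whenever the user passed an FQDN, and fall through to the existing config / metadata / legacy checks otherwise.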

@Cappuccinuo
Contributor

Hey @bangau1 ,

1. get_region_from_legacy_dns_format checks whether {region} has been replaced by a concrete region in dns_name_format = {az}.{fs_id}.efs.{region}.{dns_name_suffix}, e.g. dns_name_format = {az}.{fs_id}.efs.us-east-1.{dns_name_suffix}. We don't recommend configuring the region info there; instead, modify the region in the following lines of the config file:

#The region of the file system when mounting from on-premises or cross region.
#region = us-east-1

2. To mount a file system in another region, you need to:
    a) Set up VPC peering so that your EC2 instance in VPC A can talk to the EFS in the other VPC.
    b) Once the VPC peering is set up, change the region in the config file to the EFS region.
    c) Then mount your file system by passing the mounttargetip option in the mount command with an IP address of one of the file system's mount targets. (This requires efs-utils v1.31.1+.)
    d) The current mount target IP fallback logic assumes you are using a file system in the same region, and we recommend using a file system in the same AZ so that you get the best file system performance. That is why the mount target IP address must be specified in this case.

e.g. fs-deadbeef is in region us-east-1, the instance with private IP address 192.2.8.155 is in region us-east-2, and we are mounting via one of the mount target IP addresses of fs-deadbeef:

[ec2-user@ip-192-2-8-155 ~]$ sudo mount -t efs -o tls,mounttargetip=172.33.21.155 fs-deadbeef /mnt
[ec2-user@ip-192-2-8-155 ~]$ df
Filesystem            1K-blocks    Used        Available Use% Mounted on
devtmpfs                 492676       0           492676   0% /dev
tmpfs                    503448       0           503448   0% /dev/shm
tmpfs                    503448     548           502900   1% /run
tmpfs                    503448       0           503448   0% /sys/fs/cgroup
/dev/xvda1              8376300 1494688          6881612  18% /
tmpfs                    100692       0           100692   0% /run/user/1000
tmpfs                    100692       0           100692   0% /run/user/0
127.0.0.1:/    9007199254739968       0 9007199254739968   0% /mnt

and with efs-utils.conf configured as:

[mount]
dns_name_format = {az}.{fs_id}.efs.{region}.{dns_name_suffix}
dns_name_suffix = amazonaws.com
#The region of the file system when mounting from on-premises or cross region.
region = us-east-1

@hazard595

@Cappuccinuo I have two disks that I need to mount, one of which is in another region. Setting that region in the config file then breaks mounting the other disk. A CLI flag, as @bangau1 mentioned, would be pretty helpful in this case.

@Cappuccinuo
Contributor

Thanks @Shaan95. If you have two disks in two different regions, it makes sense to have a region CLI flag to differentiate them. We will have someone take a look at this.


CodyKank commented Mar 1, 2022

Is it still the case that efs-utils only works with EFS in one region? I have TGW peering with another region and account, where I need to mount EFS from two regions on one EC2 instance. In theory, it's possible to get around this single-region limitation by using 'normal' NFS mounting instead of the efs-utils package and hitting the mount target IPs, but I'd like to use efs-utils if at all possible.
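
For reference, the plain-NFS workaround looks like this (using the standard NFS mount options from the EFS docs and the example mount target IP from earlier in this thread):

    sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 172.33.21.155:/ /mnt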

@techdaddies-kevin

Yes, it's still the case. We've been trying to work around this for a full day now without modifying the mount.efs code. Unfortunately, falling back to NFS doesn't work if you need to use an access point (as in our case).


briantist commented Dec 25, 2022

One of the most confusing things about having to set the region in the config file is that it completely ignores the region of your AWS profile.

If I have an AWS config file with two profiles, say efs-east with region = us-east-1 and efs-west with region = us-west-1, I would hope I could skip setting a region in the efs-utils config and just use the awsprofile option, but that doesn't work.

If the EC2 instance I'm mounting from is in us-east-1 then this works:

mount -t efs -o tls,iam,awsprofile=efs-east,accesspoint=fsap-xxxx fs-xxxx /mnt/east

But if I update it to use the access point ID and FS ID in west, it doesn't work, even if I use awsprofile=efs-west.

It only works with mounttargetip=<west mount IP>; it behaves the same whether I use the efs-east or efs-west profile, and only if the region in the config is set to the region where the EFS volume is.
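
In other words, the only cross-region invocation that works for me looks roughly like this (reconstructed from the above, with region = us-west-1 set in the efs-utils config):

mount -t efs -o tls,iam,awsprofile=efs-west,accesspoint=fsap-<westid>,mounttargetip=<west mount IP> fs-<westid> /mnt/west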

What would be much nicer is being able to have two commands that at least look congruent:

mount -t efs -o tls,iam,awsprofile=efs-east,accesspoint=fsap-<eastid> fs-<eastid> /mnt/east
mount -t efs -o tls,iam,awsprofile=efs-west,accesspoint=fsap-<westid> fs-<westid> /mnt/west

(also the suggestions in #63)

tombriden added commits to tombriden/efs-utils that referenced this issue on Jun 14 and Jun 15, 2023