
NetApp stops allowing mounts for NFS volumes

Posted on October 8, 2016 in Technology

A strange issue occurred with our NetApp FAS8040: I was no longer able to mount new NFS volumes from one of our vservers. I got the following error when attempting to mount the new volume.
root@backup01:~# mount -t nfs 10.192.16.32:/vol_ds_db01 /mnt
mount.nfs: access denied by server while mounting 10.192.16.32:/vol_ds_db01
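
Not from the original troubleshooting, but for anyone hitting the same error: a couple of checks worth running first, to confirm the volume is actually junctioned and exported where you expect. The vserver name svmesxi02 is an assumption inferred from the root volume name that appears later in the post, and showmount only lists the full export set if showmount support is enabled on the SVM.
root@backup01:~# showmount -e 10.192.16.32
nac01-lax::> volume show -vserver svmesxi02 -volume vol_ds_db01 -fields junction-path,policy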

I verified that I had used the correct settings when creating the volume. I even tried changing the export policy to allow 0.0.0.0/0 to see if it really was a permissions issue, and did the same for the root volume (a sketch of that export-policy check follows the error below). I then got the following error.
root@backup01:~# mount -t nfs 10.192.16.32:/vol_ds_db01 /mnt
mount.nfs: mounting 10.192.16.32:/vol_ds_db01 failed, reason given by server: No such file or directory
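
For reference, this is roughly what that export-policy check looks like in the clustered Data ONTAP CLI. It is a sketch rather than the exact commands used at the time; the vserver and policy names are assumptions for this environment.
nac01-lax::> vserver export-policy rule show -vserver svmesxi02 -policyname default
nac01-lax::> vserver export-policy rule modify -vserver svmesxi02 -policyname default -ruleindex 1 -clientmatch 0.0.0.0/0 -rorule any -rwrule any -superuser any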

After seeing the error change from "access denied" to "No such file or directory," I knew this was not an export policy issue. I then contacted my friend Chris, a NetApp guru, to ask if he had ever seen anything like this before. We got on a WebEx session, and within three minutes he had isolated the root cause. He looked at the SnapMirror relationships for the load-sharing mirrors of the root volumes and noticed that the root volume of the vserver that could not mount new NFS volumes had no SnapMirror schedule, and its last snapshot was over a month old.
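The field listing below looks like the output of snapmirror show with the -instance flag; the exact invocation wasn't captured at this point, but it is presumably the same command used later in the post to verify the fix:
nac01-lax::> snapmirror show -destination-path *svmesxi02_root_m1 -instance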
Source Path: nac01-lax://esxi02-lax/svmesxi02_root
Destination Path: nac01-lax://esxi02-lax/svmesxi02_root_m1
Relationship Type: LS
Relationship Group Type: -
SnapMirror Schedule: -
SnapMirror Policy Type: -
SnapMirror Policy: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Relationship Status: Idle
File Restore File Count: -
File Restore File List: -
Transfer Snapshot: -
Snapshot Progress: -
Total Progress: -
Network Compression Ratio: -
Snapshot Checkpoint: -
Newest Snapshot: snapmirror.5e7fc060-6b5e-11e5-8bab-00a0987fbbe8_7_2147484676.2016-08-26_161443
Newest Snapshot Timestamp: 08/26 16:14:44

What was even stranger was that the load-sharing mirrors for all of the root volumes showed as healthy.
nac01-lax::> snapmirror show
                                                     Progress
Source           Destination  Mirror  Relationship   Total             Last
Path        Type Path         State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
esxi02-hq:vol_ds03
            DP   esxi02-lax:esxi02_hq_vol_ds03_mirror
                              Snapmirrored
                                      Idle           -         true    -
nac01-lax://esxi01-lax/svmesxi01_root
            LS   nac01-lax://esxi01-lax/svmesxi01_root_m1
                              Snapmirrored
                                      Idle           -         true    -
                 nac01-lax://esxi01-lax/svmesxi01_root_m2
                              Snapmirrored
                                      Idle           -         true    -
nac01-lax://esxi02-lax/svmesxi02_root
            LS   nac01-lax://esxi02-lax/svmesxi02_root_m1
                              Snapmirrored
                                      Idle           -         true    -
                 nac01-lax://esxi02-lax/svmesxi02_root_m2
                              Snapmirrored
                                      Idle           -         true    -
5 entries were displayed.
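
The Healthy column still shows true for the stale relationship, so the gap only stands out if you query the schedule and lag fields directly. A hedged sketch of a check that would have surfaced the missing schedule and the month-old snapshot (the field names are assumptions based on the standard clustered ONTAP CLI; snapmirror show -fields ? lists the valid names on your version):
nac01-lax::> snapmirror show -type LS -fields schedule,lag-time,newest-snapshot-timestamp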

A manual sync of the root volume's load-sharing mirror set was performed to bring it back in sync.
nac01-lax::> snapmirror update-ls-set -source-path esxi02-lax:svmesxi02_root
[Job 5569] Job is queued: snapmirror update-ls-set for source "nac01-lax://esxi02-lax/svmesxi02_root".
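
To confirm the queued update actually finished before moving on, one could check the job it created (job show is a standard clustered ONTAP command; the ID comes from the output above). The relationship itself is verified with snapmirror show -instance further down.
nac01-lax::> job show -id 5569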

The missing SnapMirror schedule was then added and set to sync every five minutes.
nac01-lax::> snapmirror modify -destination-path esxi02-lax:svmesxi02_root_m1 -schedule 5min
[Job 5570] Job succeeded: SnapMirror: done

nac01-lax::> snapmirror modify -destination-path esxi02-lax:svmesxi02_root_m2 -schedule 5min
[Job 5571] Job succeeded: SnapMirror: done
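
One thing to note before reusing these commands elsewhere: 5min is the name of a cron job schedule, not a literal interval, and it has to exist on the cluster. It is a built-in schedule on the systems I have worked with, but it can be listed, or created if missing, with the job schedule cron commands (the create line below is a sketch; adjust the minute list as needed):
nac01-lax::> job schedule cron show
nac01-lax::> job schedule cron create -name 5min -minute 0,5,10,15,20,25,30,35,40,45,50,55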

Now the root volume's load-sharing mirror was in sync with the other two mirrors and had a schedule to keep it that way.
nac01-lax::> snapmirror show -destination-path *svmesxi02_root_m1 -instance

Source Path: nac01-lax://esxi02-lax/svmesxi02_root
Destination Path: nac01-lax://esxi02-lax/svmesxi02_root_m1
Relationship Type: LS
Relationship Group Type: none
SnapMirror Schedule: 5min
SnapMirror Policy Type: -
SnapMirror Policy: -
Tries Limit: 8
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Relationship Status: Idle
File Restore File Count: -
File Restore File List: -
Transfer Snapshot: -
Snapshot Progress: -
Total Progress: -
Network Compression Ratio: -
Snapshot Checkpoint: -
Newest Snapshot: snapmirror.5e7fc060-6b5e-11e5-8bab-00a0987fbbe8_7_2147484676.2016-10-07_145754
Newest Snapshot Timestamp: 10/07 14:57:58

I was then able to mount the new NFS volume without any errors.

root@backup01:~# mount -t nfs 10.192.16.32:/vol_ds_db01 /mnt

root@backup01:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 2.3G 4.0K 2.3G 1% /dev
tmpfs 465M 832K 464M 1% /run
/dev/dm-0 6.5G 1.5G 4.6G 25% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
none 5.0M 4.0K 5.0M 1% /run/lock
none 2.3G 0 2.3G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/sda1 464M 68M 368M 16% /boot
/dev/mapper/vg0-home 3.7G 7.7M 3.5G 1% /home
/dev/mapper/vg0-var 3.7G 577M 3.0G 17% /var
/dev/sdb 5.0T 33M 5.0T 1% /storage01
/dev/sdc 32T 271G 32T 1% /storage02
10.192.16.32:/vol_ds_db01 6.7T 192K 6.7T 1% /mnt

So the root cause was a root volume that was no longer syncing with its load-sharing mirrors, which prevented any new NFS volumes on that vserver from being mounted. That fits the way ONTAP handles load-sharing mirrors of a vserver root volume: client reads of the namespace are served from the mirror copies, so a junction created after the last mirror update is not visible to clients until the load-sharing set is updated. I still need to figure out why the SnapMirror schedule disappeared for only that one vserver and not the others. I hope this helps out anyone else who runs into this issue.
