Some things to consider when configuring NDMP transfers for Clustered ONTAP 8.2. I came across these during a disaster recovery test, where no design documentation (outlining design decisions) for the PROD environment was available. The backup platform in use was Symantec Backup Exec 2012.
vServer or Node Level NDMP
In Clustered ONTAP, NDMP is available either within the context of a vServer (configurable with vServer admin privileges) or within a node (configurable with cluster admin privileges). I believe the decision to use one or the other depends on any associated API calls from the platform controlling the NDMP transfer (e.g. if it also needs to create a volume, clone a volume, etc), but the default approach in this case was at a node level.
NDMP behaves differently depending on whether it is run at the vServer or node level. The output below demonstrates some of the differences between the two approaches:
CLUSTER01::> system services ndmp show -instance

                        Node: CLUSTER01-03
        NDMP Service Enabled: false
   Allow Clear Text Password: true
                NDMP User ID: root

                        Node: CLUSTER01-04
        NDMP Service Enabled: false
   Allow Clear Text Password: true
                NDMP User ID: root
2 entries were displayed.

CLUSTER01::> vserver services ndmp show -vserver vsv01

                       Vserver: vsv01
                  NDMP version: 4
                  Ignore ctime: false
             Enable offset map: true
            Enable tcp nodelay: false
               TCP window size: 32768
               Data port range: all
             Enable backup log: true
    Enable per qtree exclusion: false
           Authentication type: challenge
        Enable NDMP on vserver: false
      Preferred interface role: intercluster, data
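Note that "NDMP Service Enabled" is false on both nodes above. Before any node-level transfers can run, the service itself needs to be switched on per node. A sketch of how I'd expect that to look (node name is from the example cluster here):

CLUSTER01::> system services ndmp on -node CLUSTER01-03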
Node Level specifics
Only one User ID can be specified. By default this is the root user.
This can be modified quite easily. Note that the NDMP user does not need to be an existing ONTAP user ID (I recall reading that if the NDMP user shares its name with an ONTAP user, the passwords need to be different). My recommendation is to avoid creating an ONTAP user with the same name.
CLUSTER01::> system services ndmp modify -user-id newuser -node CLUSTER01-03
Please enter password:
Confirm password:
The existing password (for the root user) can be modified with the following command:

CLUSTER01::> system services ndmp password -node CLUSTER01-03
Please enter password:
Confirm password:
A node-management LIF must be created for node-level NDMP transfers.
Where possible, ensure this LIF is on a different subnet from existing data LIFs (we ran into some confusion when this LIF shared a subnet with a data LIF).
As a general practice, I try to ensure different subnets for each of the following:
- Out Of Band node management
- NDMP (node management)
- Cluster Management
LIF creation example:
CLUSTER01::> network interface create -vserver CLUSTER01-03 -lif ndmp -role node-mgmt -home-node CLUSTER01-03 -home-port e0j -address 192.168.80.10 -netmask 255.255.255.0 -status-admin up
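Once created, it's worth confirming the LIF is up and homed where expected. Assuming the names from the create command above:

CLUSTER01::> network interface show -vserver CLUSTER01-03 -lif ndmp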
Node Scope Mode
This one confused me a little, and the NetApp documentation didn't seem to provide much assistance. Even after sorting out several networking challenges (largely outside of the filer), we were unable to get NDMP communications flowing until this was set to on:
CLUSTER01::> system services ndmp node-scope-mode on
I believe this determines the set of privileges the NDMP client has once connected. With node-scope-mode disabled, we were unable to establish a connection; once we enabled it, everything worked seamlessly.
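If you need to confirm which mode the cluster is currently in before changing anything, the same command family should report it (hedging slightly, as I've only used the on/off forms myself):

CLUSTER01::> system services ndmp node-scope-mode status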
vServer level specifics
I haven't experimented much at the vServer level, however the main difference I noticed here was that multiple NDMP user IDs can be configured for each vServer.
First, a local ONTAP user account needs to be created. I'm unsure exactly what role it requires; I'm guessing ontapi may be sufficient.
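A sketch of creating that account, with the caveat that both the application and role here are my assumptions rather than something I've verified for NDMP (the user name ndmpuser matches the generate-password example below):

CLUSTER01::> security login create -vserver vsv01 -username ndmpuser -application ssh -authmethod password -role vsadmin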
Secondly, the NDMP password needs to be configured. The NetApp documentation suggests this needs to be different from the user's normal ONTAP password:
CLUSTER01::> vserver services ndmp generate-password -vserver vsv01 -user ndmpuser
A LIF of type "data" needs to be created for NDMP transfers within a vServer.
CLUSTER01::> network interface create -vserver vsv01 -lif vsv01-ndmp -role data -home-node CLUSTER01-03 -home-port e0j -address 192.168.90.10 -netmask 255.255.255.0 -status-admin up
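The earlier show output also lists "Enable NDMP on vserver: false", so as at node level, the service presumably needs to be switched on for the vServer before transfers will work:

CLUSTER01::> vserver services ndmp on -vserver vsv01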
If I come across a need to configure NDMP within a vserver, I'll add further observations in here.