It is possible to move a complete GPFS filesystem between clusters with the mmexportfs and mmimportfs commands. I’ve found this useful for preparing and populating a filesystem with data before shipping it to a remote site.
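At a high level the procedure looks like this (a minimal sketch using the host and filesystem names from the walkthrough below; it assumes the LUNs backing the NSDs are already visible to the destination cluster, and the scp target path is only an example):

On the source cluster:

# mmumount mss3 -a
# mmexportfs mss3 -o mss3.gpfs
# scp mss3.gpfs bar1.example.com:.

On the destination cluster:

# mmimportfs mss3 -i mss3.gpfs
# mmfsck mss3 -y
# mmmount mss3

If the NSD server names recorded in the export file do not exist in the destination cluster, the import resets them; they can be reassigned afterwards with mmchnsd (see further down).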
Preparing the mss3 filesystem to be exported and relocated to another GPFS cluster, attached to the GPFS NSD server named bar1.example.com:
# mmlsdisk mss3
disk         driver   sector failure holds    holds                            storage
name         type       size   group metadata data  status        availability pool
------------ -------- ------ ------- -------- ----- ------------- ------------ ------------
bar1_nsd1    nsd         512       1 Yes      Yes   ready         up           system
bar1_nsd2    nsd         512       1 Yes      Yes   ready         up           system
bar1_nsd3    nsd         512       1 Yes      Yes   ready         up           system
bar1_nsd4    nsd         512       1 Yes      Yes   ready         up           system

# mmumount mss3 -a
Wed May 29 12:49:56 MST 2013: mmumount: Unmounting file systems ...

# mmexportfs mss3 -o mss3.gpfs
mmexportfs: Processing file system mss3 ...
mmexportfs: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

# scp mss3.gpfs bar1.example.com:.
mss3.gpfs                                   100% 3379     3.3KB/s   00:00
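At this point the filesystem and its disks should have been removed from the source cluster's configuration. A quick sanity check on the source side (commands only; the exact messages vary by GPFS release):

# mmlsfs mss3    (should now report that mss3 is not known to this cluster)
# mmlsnsd        (the bar1_nsd* disks should no longer be listed)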
Importing the mss3 filesystem on the bar1 host:
# mmimportfs mss3 -i mss3.gpfs
mmimportfs: Processing file system mss3 ...
mmimportfs: Processing disk bar1_nsd1
mmimportfs: Incorrect node foo3.example.com specified for command.
mmimportfs: Processing disk bar1_nsd2
mmimportfs: Incorrect node foo3.example.com specified for command.
mmimportfs: Processing disk bar1_nsd3
mmimportfs: Incorrect node foo3.example.com specified for command.
mmimportfs: Processing disk bar1_nsd4
mmimportfs: Incorrect node foo3.example.com specified for command.
mmimportfs: Committing the changes ...
mmimportfs: The following file systems were successfully imported:
        mss3
mmimportfs: The NSD servers for the following disks from file system mss3
  were reset or not defined:
        bar1_nsd1
        bar1_nsd2
        bar1_nsd3
        bar1_nsd4
mmimportfs: Use the mmchnsd command to assign NSD servers as needed.

# mmlsnsd -X

 Disk name    NSD volume ID      Device       Devtype  Node name           Remarks
---------------------------------------------------------------------------------------------------
 bar1_nsd1    8CFC1C0B519A7DF7   /dev/sdc     generic  leda1.example.com
 bar1_nsd2    8CFC1C0B519A7DF8   /dev/sdd     generic  leda1.example.com
 bar1_nsd3    8CFC1C0B519A7DF9   /dev/sde     generic  leda1.example.com
 bar1_nsd4    8CFC1C0B519A7DFA   /dev/sdf     generic  leda1.example.com

# mmlsfs mss3
flag                value                    description
------------------- ------------------------ -----------------------------------
 -f                 65536                    Minimum fragment size in bytes
 -i                 512                      Inode size in bytes
 -I                 32768                    Indirect block size in bytes
 -m                 1                        Default number of metadata replicas
 -M                 2                        Maximum number of metadata replicas
 -r                 1                        Default number of data replicas
 -R                 2                        Maximum number of data replicas
 -j                 cluster                  Block allocation type
 -D                 nfs4                     File locking semantics in effect
 -k                 all                      ACL semantics in effect
 -n                 32                       Estimated number of nodes that will mount file system
 -B                 2097152                  Block size
 -Q                 user;group;fileset       Quotas enforced
                    none                     Default quotas enabled
 --filesetdf        Yes                      Fileset df enabled?
 -V                 13.01 (3.5.0.0)          File system version
 --create-time      Mon May 20 12:53:43 2013 File system creation time
 -u                 Yes                      Support for large LUNs?
 -z                 No                       Is DMAPI enabled?
 -L                 16777216                 Logfile size
 -E                 No                       Exact mtime mount option
 -S                 Yes                      Suppress atime mount option
 -K                 no                       Strict replica allocation option
 --fastea           Yes                      Fast external attributes enabled?
 --inode-limit      122081280                Maximum number of inodes
 -P                 system                   Disk storage pools in file system
 -d                 bar1_nsd1;leda1_nsd2;leda1_nsd3;leda1_nsd4  Disks in file system
 --perfileset-quota yes                      Per-fileset quota enforcement
 -A                 yes                      Automatic mount option
 -o                 none                     Additional mount options
 -T                 /net/mss3                Default mount point
 --mount-priority   0                        Mount priority

# mmfsck mss3 -y -v
Checking "mss3"
fsckFlags 0xA
Stripe group manager
needNewLogs 0
nThreads 16
commited nodes 1
clientTerm 0
fsckReady 1
fsckCreated 0
% pool allowed 50
tuner off
threshold 0.20
Disks 4
Bytes per metadata subblock 65536
Sectors per metadata subblock 128
Bytes per data subblock 65536
Sectors per data subblock 128
Sectors per indirect block 64
Subblocks per block 32
Subblocks per indirect block 1
Inodes 9715712
Inode size 512
singleINum -1
Fsck manager nodes 1
Inodes per fsck manager 9715712
Inode regions 257
maxInodesPerSegment 261120
Segments per inode region 1
Bytes per inode segment 2097152
nInode0Files 1
Regions per pass of pool system 1863
fsckStatus 2
PA size 155451392
PA map size 155451392
Inodes per inode block 4096
Data ptrs per inode 16
Indirect ptrs per inode 16
Data ptrs per indirect 1363
User files exposed some
Meta files exposed some
User files ill replicated some
Meta files ill replicated some
User files unbalanced some
Meta files unbalanced some
Current Global snapshots 0
Max Global snapshots 256
checkFilesets 1
checkFilesetsV2 1
  5 % complete on Wed May 29 17:21:48 2013
Checking inodes
Regions 0 to 1862 of total 1863 in storage pool "system".
 10 % complete on Wed May 29 17:22:48 2013
 16 % complete on Wed May 29 17:23:48 2013
 22 % complete on Wed May 29 17:24:48 2013
 45 % complete on Wed May 29 17:25:32 2013
Checking inode map file
 50 % complete on Wed May 29 17:25:34 2013
 52 % complete on Wed May 29 17:25:34 2013
 55 % complete on Wed May 29 17:25:34 2013
Checking directories and files
Scanning directory inodes : Pass 1 of 1
Node 10.0.0.228 (bar1) starting inode scan 0 to 9715711
Scanning directory entries : Pass 1 of 1
Node 10.0.0.228 (bar1) starting inode scan 0 to 9715711
 62 % complete on Wed May 29 17:26:34 2013
Verifying file link counts : Pass 1 of 1
Node 10.0.0.228 (bar1) starting inode scan 0 to 9715711
Scanning directories for cycle
 83 % complete on Wed May 29 17:29:42 2013
Checking log files
Checking extended attributes file
Checking allocation summary file
Checking policy file
Checking filesets metadata
Checking file reference counts
 97 % complete on Wed May 29 17:29:42 2013
Checking file system replication status
100 % complete on Wed May 29 17:29:42 2013

      9715712 inodes
      7036803 allocated
            0 repairable
            0 repaired
            0 damaged
            0 deallocated
            0 orphaned
            0 attached

   1953234944 subblocks
   1346682767 allocated
            0 unreferenced
            0 deletable
            0 deallocated

     45679055 addresses
            0 suspended

File system is clean.

# mmmount mss3
Thu May 30 10:09:34 MST 2013: mmmount: Mounting file systems ...

# df -h /net/mss3/
Filesystem            Size  Used Avail Use% Mounted on
/dev/mss3             117T   81T   36T  70% /net/mss3
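The import warned that the NSD server assignments were reset, because the server recorded in the export file (foo3.example.com) is not a member of this cluster, so the disks are found through their local device paths instead. If they should be served over the network by a dedicated NSD server, they can be reassigned with mmchnsd. A hedged sketch, assuming bar1.example.com is the intended server, that the DiskName:ServerList descriptor form applies, and that mss3 is unmounted while the change is made (GPFS releases of this vintage require the filesystem to be unmounted):

# mmumount mss3 -a
# mmchnsd "bar1_nsd1:bar1.example.com;bar1_nsd2:bar1.example.com;bar1_nsd3:bar1.example.com;bar1_nsd4:bar1.example.com"
# mmmount mss3 -a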
2014-06-13 at 04:00
Hi, I am new to this kind of storage work. I am trying to migrate data from FC to SATA in a GPFS 3.1 IBM storage system. When I did a trial copy of an 8 GB ISO image using rsync from FC to SATA, it only copied at 32 MB/s, but the actual speed should be somewhere around 2 GB/s. Please help me with this. Also, is it possible to automate setting and modifying quotas for all users by writing a script on this storage? Thanks in advance for your reply.
2014-06-13 at 04:22
I forgot to say I am using RAID 5 systems.