How to load XRAID disk sets
Ensure no one has left files open on the disks: run lsof in the directories where the data is located.
You may also want to check whether anyone is using the machine; use the “w” command.
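The two checks above can be sketched as follows. The path /exports/xraid01 is only an example, and note that lsof exits non-zero when it finds no open files, so that case is reported rather than treated as an error.

```shell
# Pre-flight check before touching the XRAID (example path; adjust as needed).
DATA_DIR=/exports/xraid01
w 2>/dev/null || who          # who is logged in and what they are doing
# lsof returns non-zero when no files are open; report that case explicitly
lsof +D "$DATA_DIR" 2>/dev/null || echo "no open files under $DATA_DIR"
```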
If necessary, disable NFS on the computer to which the destination xraid is attached. I don't know how to do this safely on cuppa02, because stopping NFS there takes down /nfs/apps and /home as well.
Stop the disks and unmount the file systems.
> sudo umount /exports/xraid0X/*
If any of the xraid file systems cannot be unmounted, you will have to figure out which process is still holding files open before proceeding.
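The unmount step can be looped, written here as a dry run that just prints each command (xraid01 stands in for the xraid0X placeholder above; drop the echo to execute for real):

```shell
# Dry run of the unmount step: prints each command instead of executing it.
# Replace xraid01 with the right xraid0X directory and drop 'echo' to run it.
for fs in /exports/xraid01/*; do
  echo "sudo umount $fs"
  # if a real umount fails, identify the blocking process with:
  #   sudo lsof +D "$fs"     (or: sudo fuser -vm "$fs")
done
```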
Power down the xraid (this could be done in software from the comfort of your office through “XRAID Admin Tools”, which aren't yet installed).
Remove the disks and put them back in the appropriate cases, in the correct order!
Insert the new disks, #1 on the left, #7 on the right. Check the contacts on the disks (and in the XRAID, if possible) before inserting them into the chassis. Disks slide in and then require a final push; you should hear them lock (thump!) into place, at which point, with the handle depressed, they will sit flush with the chassis.
Power on the XRAID. This should be done in the cluster room so you can watch for alarms/red lights. The power button is on the BACK of the chassis. Hold it in for a couple of seconds.
If all disks come up green, you can go back to your office! Otherwise, open XRAID Admin Tools and identify the source of the problem. If a disk has failed, it may be worth power-cycling and re-inserting it; sometimes the disk itself is fine but not properly seated in the chassis. If a spare disk is required, you should be able to insert it in place of the failed one (even while the chassis is running) and the array will automatically rebuild onto the new disk. If using an OLD disk, or re-inserting an improperly seated disk, you may have to explicitly make it available to the array, because it will already contain RAID information. See Rebuilding XRAIDs.
Once the XRAID is running, you can reload the SCSI devices and mount their file systems. There should be no reason to reboot the host node.
> sudo /nfs/apps/vlbi/refresh_xraid
> sudo mount /exports/xraid/?_?
where ?_? is l_1, l_2, l_3, r_1, r_2 or r_3 as required (left and right refer to the disk banks in a chassis).
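The mount step can be looped over all six file systems. This is written as a dry run that only prints the commands (drop the echo to execute them); it assumes bash brace expansion.

```shell
# Mount all six xraid file systems (l_1..l_3, r_1..r_3); dry run via 'echo'.
for mp in /exports/xraid/{l,r}_{1,2,3}; do
  echo "sudo mount $mp"
done
```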
We believe that this will mount the first left device in l_1 and so forth. However you should check this by ensuring that the data you expect to be on the device is in fact there before you try doing anything with it! ABSOLUTELY NO WARRANTY and all that.
If you are concerned, try mounting /dev/sd?1 on the exports directory where you want it, and make sure the corresponding left/right set of lights comes on when you run the command. I believe devices always come up in the same order within a disk set, but disk sets are not always recognised in the same order (left-right, right-left, alternating) within a chassis.
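One way to make the "check the data is really there" advice concrete is a marker-file test. The marker name below is hypothetical; use a file or directory you know lives on that disk set.

```shell
# Verify a freshly mounted file system holds the disk set you expect.
# 'experiment_v252a' is a hypothetical marker; use a name you know is on the set.
MP=/exports/xraid/l_1
MARKER=experiment_v252a
if [ -e "$MP/$MARKER" ]; then
  echo "$MP looks like the expected disk set"
else
  echo "WARNING: $MARKER not found under $MP - check the device order"
fi
```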
Note: this is no longer recommended. The correlator will start datastream processes on the correct local host node, provided this is specified in the corresponding line of the machines file (apparently this was not always the case in the past).
It is now possible to mount the xraids over NFS as any user, e.g. with mount /nfs/xraid01/l_1. To make life easier, there are shell scripts to do this in /home/corr/LBA/scripts/: mountxraids.sh and unmountxraids.sh.
On cuppa01, type cssh cuppa to get a multi-window command prompt, so the scripts can be run simultaneously on all cuppas.
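If the scripts are not available, the NFS mounts can be done by hand with a loop like the one below (a sketch of what mountxraids.sh presumably does; the script's actual contents have not been checked). It is a dry run: drop the echo to execute.

```shell
# Mount the xraid file systems over NFS on this node; dry run via 'echo'.
# xraid01 is an example; repeat for xraid02 etc. as needed.
for mp in /nfs/xraid01/{l,r}_{1,2,3}; do
  echo "mount $mp"
done
```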
To change disks, after unmounting from NFS, it seems to be necessary to restart the nfs server on the local host node:
> sudo /etc/init.d/nfs-kernel-server restart
in order to unmount the xraid file systems etc. without getting “device busy” errors. Be a bit careful doing this: try to check first that no one is running anything. If other people are logged on, they may get stale file handle problems.
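The disk-change teardown on the host node can be put together as follows (a dry-run sketch; the init script and paths are taken from the text above, with xraid01 as the example directory):

```shell
# Order of operations when changing disks, printed rather than executed:
echo "sudo /etc/init.d/nfs-kernel-server restart"   # free stale NFS handles first
for fs in /exports/xraid01/*; do                    # then local unmounts succeed
  echo "sudo umount $fs"
done
```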