A failed disk on a Solaris server can generally be removed using one of these two commands:
1. luxadm (most fibre channel disks)
2. cfgadm (most SAS and SCSI disks)
Make sure the disk has been removed from volume manager control (SVM, VxVM, or ZFS) before you remove it from OS control.
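For example, if the disk is part of a ZFS mirror, a minimal sketch (the pool name datapool and the device name are hypothetical):
# zpool offline datapool c1t1d0
# zpool detach datapool c1t1d0
Under Solaris Volume Manager, detach and clear the affected submirror instead (metadevice names are hypothetical):
# metadetach d10 d12
# metaclear d12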
Using luxadm
1. Remove the disk using the luxadm command:
# /usr/sbin/luxadm remove_device /dev/rdsk/c1t1d0s2
2. If the command fails to remove the disk, physically remove the disk and then take the device path offline:
# luxadm -e offline /dev/rdsk/c1t1d0s2
If the disk is multipathed, run the same command on the second path as well. The picld daemon notifies the system of the disk removal.
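For example, if the second path to the same disk appears under a hypothetical c2 controller:
# luxadm -e offline /dev/rdsk/c2t1d0s2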
3. Clean up the device links for the removed disk under /dev:
# devfsadm -Cv
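To confirm the stale links are gone, list the old device name (the target below is the example disk from step 1); the command should report no matching files:
# ls /dev/rdsk/c1t1d0*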
Using cfgadm
1. Use the cfgadm command to display all the disks in the server:
# cfgadm -al
Ap_Id                Type         Receptacle   Occupant     Condition
c0                   scsi-bus     connected    configured   unknown
c0::dsk/c0t0d0       CD-ROM       connected    configured   unknown
c1                   scsi-bus     connected    configured   unknown
c1::dsk/c1t0d0       disk         connected    configured   unknown
c1::dsk/c1t1d0       disk         connected    configured   unknown
c1::dsk/c1t2d0       disk         connected    configured   unknown
c1::dsk/c1t3d0       disk         connected    configured   unknown
c2                   scsi-bus     connected    configured   unknown
c2::dsk/c2t2d0       disk         connected    configured   unknown
.....
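To narrow the output to the suspect disk, you can filter the listing (using the example disk below):
# cfgadm -al | grep c1t3d0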
2. Once you have identified the disk to be removed, unconfigure it. In some cases you may have to pass -f along with -c to forcibly unconfigure the disk, as shown after the command below.
# cfgadm -c unconfigure c1::dsk/c1t3d0
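If the unconfigure operation fails because the device is busy, retry it with the force flag:
# cfgadm -f -c unconfigure c1::dsk/c1t3d0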
3. Verify the status of the disk with cfgadm -al. It should now show as unavailable and unconfigured:
c1::dsk/c1t3d0       unavailable  connected    unconfigured unknown
You can safely remove the disk from the server now.
Installing the new disk
Insert the new disk into the drive slot of the server and run the command below. This step is the same for both of the methods above.
# devfsadm
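If you prefer to limit the scan to disk devices only, devfsadm also accepts a device class:
# devfsadm -c disk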
You should now see the new disk's device links in the OS (substitute the new disk's controller, target, and device numbers for c#t#d#):
# ls /dev/rdsk/c#t#d#*
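Finally, hand the new disk back to your volume manager. A minimal ZFS sketch (the pool name datapool and the device name are hypothetical):
# zpool replace datapool c1t1d0
Under SVM, recreate the metadevice and reattach it to its mirror (metadevice names are again hypothetical):
# metainit d12 1 1 c1t1d0s0
# metattach d10 d12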