After a long time it is finally time to migrate an old machine to a slightly less old one. Reconfiguring /dev is not much of a problem with Solaris 10; the only thing not to forget is to rename the network interfaces, and with that everything is settled.
The original mirror, which is on ZFS, got corrupted for some reason, but honestly there is no time to find out what happened.
The current status of the mirror is the following:
root@odin # zpool status
  pool: solpool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://www.sun.com/msg/ZFS-8000-4J
 scrub: none requested
config:

        NAME           STATE     READ WRITE CKSUM
        solpool        DEGRADED     0     0     0
          mirror-0     DEGRADED     0     0     0
            c0t11d0s0  UNAVAIL      0     0     0  corrupted data
            c8t0d0s0   ONLINE       0     0     0

errors: No known data errors
root@odin #
The disks installed in the machine are the following:
root@odin # echo | format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c8t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@1c,600000/scsi@2/sd@0,0
       1. c8t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@1c,600000/scsi@2/sd@1,0
Specify disk (enter its number): Specify disk (enter its number):
The first thing to do is remove the disk that no longer exists in this machine, c0t11d0s0, with the following command:
root@odin # zpool detach solpool c0t11d0s0
root@odin # zpool status
  pool: solpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        solpool       ONLINE       0     0     0
          c8t0d0s0    ONLINE       0     0     0

errors: No known data errors
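Since the status message itself suggests 'zpool replace', an alternative to detach-plus-attach would have been a single replace. A minimal sketch; the wrapper function `replace_disk` is hypothetical, only the `zpool replace` invocation inside it is the real command:

```shell
#!/bin/sh
# One-step alternative to detach + attach: zpool replace rebuilds
# the failed mirror member onto the new disk in a single command.
replace_disk() {
    pool=$1; old=$2; new=$3
    zpool replace "$pool" "$old" "$new"
}

# Rebuild the failed c0t11d0s0 onto the new disk c8t1d0s0.
replace_disk solpool c0t11d0s0 c8t1d0s0
```

Here detach was used instead because the old device no longer exists in the machine at all, so there was nothing left to "replace" in place.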
With that done, the proper disk is attached:
root@odin # zpool attach solpool c8t0d0s0 c8t1d0s0
Please be sure to invoke installboot(1M) to make 'c8t1d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
root@odin #
Next, run the command that makes the new disk bootable:
root@odin # pwd
/usr/platform/SUNW,Sun-Fire-V210/lib/fs/zfs
root@odin # installboot bootblk /dev/rdsk/c8t1d0s0
root@odin #
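The bootblk file lives under a platform-specific directory, and `uname -i` prints that platform name (SUNW,Sun-Fire-V210 in this transcript), so the same step can be written without hard-coding the path. A sketch assuming a SPARC Solaris 10 system; the function name is made up for illustration:

```shell
#!/bin/sh
# Install the ZFS boot block on the newly attached disk so the
# machine can boot from either half of the mirror.
install_zfs_bootblk() {
    disk=$1
    # `uname -i` returns the platform name, e.g. SUNW,Sun-Fire-V210,
    # which is also the directory name under /usr/platform.
    bootblk="/usr/platform/$(uname -i)/lib/fs/zfs/bootblk"
    installboot "$bootblk" "/dev/rdsk/$disk"
}

install_zfs_bootblk c8t1d0s0
```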
Now all that remains is to monitor the progress of the resynchronization.
root@odin # zpool status -v
  pool: solpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h6m, 36.16% done, 0h12m to go
config:

        NAME          STATE     READ WRITE CKSUM
        solpool       ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c8t0d0s0  ONLINE       0     0     0
            c8t1d0s0  ONLINE       0     0     0  8.20G resilvered

errors: No known data errors
root@odin #
According to the output above it should finish in about 12 more minutes.
A little over 12 minutes later, the screen looks like this:
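Rather than re-running zpool status by hand, the progress line can be polled in a loop. A minimal sketch; the wrapper function `watch_resilver` is made up, only the `zpool status` call and the "resilver in progress" string come from the transcript:

```shell
#!/bin/sh
# Print the resilver progress line of a pool every 60 seconds
# until `zpool status` no longer reports one.
watch_resilver() {
    pool=$1
    # grep both prints the matching line and signals, via its exit
    # status, whether a resilver is still running.
    while zpool status "$pool" | grep 'resilver in progress'; do
        sleep 60
    done
    echo "resilver finished on $pool"
}

watch_resilver solpool
```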
root@odin # zpool status -xv
  pool: solpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h18m, 98.98% done, 0h0m to go
config:

        NAME          STATE     READ WRITE CKSUM
        solpool       ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c8t0d0s0  ONLINE       0     0     0
            c8t1d0s0  ONLINE       0     0     0  22.5G resilvered

errors: No known data errors
Finally:
root@odin # zpool status -xv
all pools are healthy
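That terse "all pools are healthy" output makes `zpool status -x` handy for unattended monitoring, for example from cron. A sketch only; the function name and the alert handling are made up, the real behavior of `-x` is as shown in the transcript:

```shell
#!/bin/sh
# Cron-friendly health check: `zpool status -x` prints exactly
# "all pools are healthy" when nothing is wrong, so any other
# output is worth an alert (here just echoed; real paging or
# mailing would go in its place).
check_pools() {
    out=$(zpool status -x 2>/dev/null)
    if [ "$out" = "all pools are healthy" ]; then
        echo "OK"
    else
        echo "ZFS problem detected: $out"
    fi
}

check_pools
```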
To get the necessary practice, step two, for the scenario in which the other disk fails, would be the following:
root@odin # zpool status
  pool: solpool
 state: ONLINE
 scrub: resilver completed after 0h19m with 0 errors on Tue Sep 25 11:36:03 2012
config:

        NAME          STATE     READ WRITE CKSUM
        solpool       ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c8t0d0s0  ONLINE       0     0     0
            c8t1d0s0  ONLINE       0     0     0  22.7G resilvered

errors: No known data errors
root@odin # zpool detach solpool c8t0d0
cannot detach c8t0d0: no such device in pool
root@odin # zpool detach solpool c8t0d0s0
root@odin # zpool status
  pool: solpool
 state: ONLINE
 scrub: resilver completed after 0h19m with 0 errors on Tue Sep 25 11:36:03 2012
config:

        NAME          STATE     READ WRITE CKSUM
        solpool       ONLINE       0     0     0
          c8t1d0s0    ONLINE       0     0     0  22.7G resilvered

errors: No known data errors
root@odin # zpool attach c8t1d0 c8t0d0
missing <new_device> specification
usage:
        attach [-f] <pool> <device> <new-device>
root@odin # zpool attach c8t1d0s0 c8t0d0s0
missing <new_device> specification
usage:
        attach [-f] <pool> <device> <new-device>
root@odin # zpool attach solpool c8t1d0s0 c8t0d0s0
Please be sure to invoke installboot(1M) to make 'c8t0d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
root@odin # zpool status -xv
  pool: solpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 1.18% done, 0h41m to go
config:

        NAME          STATE     READ WRITE CKSUM
        solpool       ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c8t1d0s0  ONLINE       0     0     0
            c8t0d0s0  ONLINE       0     0     0  275M resilvered

errors: No known data errors
root@odin #
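The whole second pass can be condensed into one sequence. A sketch only; the wrapper function `swap_mirror_side` is made up for illustration, and note that detach needs the slice name (c8t0d0s0): plain c8t0d0 is rejected with "no such device in pool", as seen above.

```shell
#!/bin/sh
# Pull one side of the mirror out, re-attach it behind the
# surviving disk, and make it bootable again.
swap_mirror_side() {
    pool=$1; keep=$2; redo=$3
    zpool detach "$pool" "$redo"
    zpool attach "$pool" "$keep" "$redo"
    installboot "/usr/platform/$(uname -i)/lib/fs/zfs/bootblk" \
        "/dev/rdsk/$redo"
}

# Keep c8t1d0s0, rebuild c8t0d0s0 behind it.
swap_mirror_side solpool c8t1d0s0 c8t0d0s0
```

As with the first pass, wait for the resilver to finish before rebooting.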