Friday, September 28, 2012
Mounting an ISO or DVD on an LDOM
Now it's time to install the OS on the LDOM. Here are the brief steps to mount an ISO image on the domain.
root@t5120 # ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 8 2G 0.2% 21h 17m
ldg0 active -n---- 5000 8 512M 0.1% 15h 40m
ldg1 active -t---- 5001 8 512M 12% 21h 15m
ldg2 active -t---- 5002 8 512M 12% 21h 15m
root@t5120 #
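The STATE column of `ldm list` can be pulled out programmatically. A minimal sketch over the output captured above — on a live system you would pipe `ldm list` itself, and the `domain_state` helper name is my own:

```shell
# Sample `ldm list` output captured above; a live system would pipe
# `ldm list` directly instead of using this variable.
ldm_list_sample='NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 8 2G 0.2% 21h 17m
ldg0 active -n---- 5000 8 512M 0.1% 15h 40m
ldg1 active -t---- 5001 8 512M 12% 21h 15m'

# Hypothetical helper: print the STATE column for one domain.
domain_state() {
    printf '%s\n' "$ldm_list_sample" | awk -v d="$1" '$1 == d { print $2 }'
}

domain_state ldg1   # prints "active"
```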
First, stop the domain on which the OS will be installed, which is ldg1:
root@t5120 # ldm stop ldg1
LDom ldg1 stopped
root@t5120 # ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 8 2G 0.3% 21h 17m
ldg0 active -n---- 5000 8 512M 0.1% 15h 41m
ldg1 bound ------ 5001 8 512M
ldg2 active -t---- 5002 8 512M 12% 21h 15m
root@t5120 #
Now unbind the domain with the command:
root@t5120 # ldm unbind ldg1
root@t5120 # ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 8 2G 2.0% 21h 19m
ldg0 active -n---- 5000 8 512M 0.1% 15h 42m
ldg2 active -t---- 5002 8 512M 12% 21h 17m
ldg1 inactive ------ 8 512M
root@t5120 #
Now the ISO can be added from the primary domain:
root@t5120 # ldm add-vdsdev /export/home/itc/install/V27764-01.iso iso_vol@primary-vds0
root@t5120 # ldm add-vdisk vdisk_iso iso_vol@primary-vds0 ldg1
root@t5120 #
root@t5120 # ldm bind ldg1
root@t5120 # ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 8 2G 2.2% 21h 35m
ldg1 bound ------ 5000 8 512M
ldg2 active -t---- 5002 8 512M 12% 21h 32m
ldg0 inactive ------ 8 512M
root@t5120 #
root@t5120 # ldm start ldg1
LDom ldg1 started
root@t5120 # ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 8 2G 0.2% 21h 35m
ldg1 active -t---- 5000 8 512M 4.6% 2s
ldg2 active -t---- 5002 8 512M 12% 21h 33m
ldg0 inactive ------ 8 512M
root@t5120 #
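The whole sequence so far (stop, unbind, export the ISO, add the vdisk, bind, start) can be sketched as a single function. This is an assumption-laden sketch, not tested against a real hypervisor: the names (`iso_vol`, `vdisk_iso`, `primary-vds0`) are the ones used above, and `LDM` is overridable so the sequence can be previewed with `LDM=echo` before touching anything.

```shell
LDM=${LDM:-ldm}   # override with LDM=echo for a dry run

attach_iso() {
    # $1 = path to the ISO image, $2 = target domain
    iso=$1 dom=$2
    [ -f "$iso" ] || { echo "ISO not found: $iso" >&2; return 1; }
    "$LDM" stop "$dom"
    "$LDM" unbind "$dom"
    "$LDM" add-vdsdev "$iso" iso_vol@primary-vds0
    "$LDM" add-vdisk vdisk_iso iso_vol@primary-vds0 "$dom"
    "$LDM" bind "$dom"
    "$LDM" start "$dom"
}
```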
We connect to the domain's console and verify its state at the OpenBoot prompt:
{0} ok banner
SPARC Enterprise T5120, No Keyboard
Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved.
OpenBoot 4.33.0.b, 512 MB memory available, Serial #83525636.
Ethernet address 0:14:4f:fa:80:4, Host ID: 84fa8004.
{0} ok devalias
vdisk_iso /virtual-devices@100/channel-devices@200/disk@1
vdisk0 /virtual-devices@100/channel-devices@200/disk@0
vnet0 /virtual-devices@100/channel-devices@200/network@0
net /virtual-devices@100/channel-devices@200/network@0
disk /virtual-devices@100/channel-devices@200/disk@0
virtual-console /virtual-devices/console@1
name aliases
{0} ok
At this point the domain can be booted from the virtual ISO. Note that the guest has no `cdrom` devalias, so `boot cdrom` falls back to the default boot device (the still-empty `disk@0`) and fails; booting from the `vdisk_iso` alias works:
{0} ok boot cdrom
Boot device: /virtual-devices@100/channel-devices@200/disk@0 File and args: cdrom
Bad magic number in disk label
ERROR: /virtual-devices@100/channel-devices@200/disk@0: Can't open disk label package
ERROR: boot-read fail
Can't open boot device
{0} ok boot vdisk_iso
Boot device: /virtual-devices@100/channel-devices@200/disk@1 File and args:
SunOS Release 5.10 Version Generic_147440-01 64-bit
Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.
Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface vnet0...
To remove the ISO, the following order must be respected:
1. Stop the domain
2. Unbind the domain
3. Remove the disk with the command:
ldm remove-vdisk [-f] disk-name ldom
root@t5120 # ldm remove-vdisk vdisk_iso ldg1
4. Remove the device exported to the domain with:
ldm remove-vdsdev [-f] volume-name@service-name
root@t5120 # ldm remove-vdsdev iso_vol@primary-vds0
root@t5120 #
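The four removal steps above can likewise be sketched as one function, with the same dry-run escape hatch (`LDM=echo`); the names assume the session above.

```shell
LDM=${LDM:-ldm}   # override with LDM=echo for a dry run

detach_iso() {
    # $1 = domain the ISO disk was given to
    dom=$1
    "$LDM" stop "$dom"                         # 1. stop the domain
    "$LDM" unbind "$dom"                       # 2. unbind it
    "$LDM" remove-vdisk vdisk_iso "$dom"       # 3. remove the virtual disk
    "$LDM" remove-vdsdev iso_vol@primary-vds0  # 4. remove the exported device
}
```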
That's all!
Thursday, September 27, 2012
Mounting a USB stick on Solaris 10
Memory sticks of up to 4 GB never gave any trouble, even coming from different manufacturers. The classic procedure was the following: run the volcheck command directly on Solaris 10; on Solaris 9 the extra step of stopping and restarting the volmgt daemon was needed, and that was always enough.
root@t5120 # volcheck
root@t5120 # /etc/init.d/volmgt stop
root@t5120 # /etc/init.d/volmgt start
volume management starting.
root@t5120 #
Now I have an 8 GB memory stick that, according to the manufacturer, supports USB 3.0, while for now almost all servers only have USB 2.0.
Small but significant problem: the procedure above no longer gets anywhere, because the device simply cannot be accessed.
In the end, as with everything, you just have to work around it and find a solution; here is a procedure that works:
Step 1: devfsadm -C
Step 2: mount -F pcfs /dev/dsk/cXt0d0s0:c /mnt
Voilà! Everything is mounted and accessible, and the data can now be reached.
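The two recovery steps can be wrapped as a sketch, again with the privileged commands overridable for a dry run. The device argument (e.g. `c5t0d0`) is a placeholder, and the `:c` suffix selects the FAT partition as above:

```shell
DEVFSADM=${DEVFSADM:-devfsadm}   # override with echo for a dry run
MOUNT=${MOUNT:-mount}

mount_usb() {
    # $1 = disk device name without the slice, e.g. c5t0d0 (hypothetical)
    "$DEVFSADM" -C                              # rebuild stale /dev entries
    "$MOUNT" -F pcfs "/dev/dsk/${1}s0:c" /mnt   # mount the FAT filesystem
}
```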
Tuesday, September 25, 2012
ZFS mirror
After a long time it's time to migrate an old machine to a slightly less old one. Reconfiguring /dev is not much of a problem on Solaris 10; the only thing not to forget is to rename the network interfaces, and with that everything is settled.
The original mirror, which is on ZFS, got corrupted for some reason, but honestly there is no time to find out what happened.
The current status of the mirror is the following:
root@odin # zpool status
pool: solpool
state: DEGRADED
status: One or more devices could not be used because the label is missing or
invalid. Sufficient replicas exist for the pool to continue
functioning in a degraded state.
action: Replace the device using 'zpool replace'.
see: http://www.sun.com/msg/ZFS-8000-4J
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
solpool DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
c0t11d0s0 UNAVAIL 0 0 0 corrupted data
c8t0d0s0 ONLINE 0 0 0
errors: No known data errors
root@odin #
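Devices that are not ONLINE can be picked out of captured `zpool status` output. A minimal sketch over the status above — the `failed_devices` helper and the device-name pattern are my own:

```shell
# Captured config section from the `zpool status` output above.
zpool_status_sample='NAME STATE READ WRITE CKSUM
solpool DEGRADED 0 0 0
mirror-0 DEGRADED 0 0 0
c0t11d0s0 UNAVAIL 0 0 0 corrupted data
c8t0d0s0 ONLINE 0 0 0'

failed_devices() {
    # print cXtYdZ devices whose STATE column is anything but ONLINE
    printf '%s\n' "$zpool_status_sample" |
        awk '$1 ~ /^c[0-9]+t[0-9]+d[0-9]+/ && $2 != "ONLINE" { print $1 }'
}

failed_devices   # prints "c0t11d0s0"
```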
The disks installed in the machine are the following:
root@odin # echo | format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c8t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/scsi@2/sd@0,0
1. c8t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
/pci@1c,600000/scsi@2/sd@1,0
Specify disk (enter its number): Specify disk (enter its number):
The first thing to do is detach the disk that does not exist in this machine, c0t11d0s0, with the following command:
root@odin # zpool detach solpool c0t11d0s0
root@odin # zpool status
pool: solpool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
solpool ONLINE 0 0 0
c8t0d0s0 ONLINE 0 0 0
errors: No known data errors
With that done, attach the disk that should be there:
root@odin # zpool attach solpool c8t0d0s0 c8t1d0s0
Please be sure to invoke installboot(1M) to make 'c8t1d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
root@odin #
Run the command to make the new disk bootable:
root@odin # pwd
/usr/platform/SUNW,Sun-Fire-V210/lib/fs/zfs
root@odin # installboot bootblk /dev/rdsk/c8t1d0s0
root@odin #
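The same step can be done without cd'ing into the platform directory — building the bootblk path from the platform name is a common idiom (normally via `$(uname -i)`; it is hard-coded here only so the sketch is self-contained):

```shell
platform=SUNW,Sun-Fire-V210     # on the machine itself: platform=$(uname -i)
bootblk="/usr/platform/$platform/lib/fs/zfs/bootblk"
echo installboot "$bootblk" /dev/rdsk/c8t1d0s0   # echoed here, not run
```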
With that, all that remains is to monitor the resync status:
root@odin # zpool status -v
pool: solpool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h6m, 36.16% done, 0h12m to go
config:
NAME STATE READ WRITE CKSUM
solpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c8t0d0s0 ONLINE 0 0 0
c8t1d0s0 ONLINE 0 0 0 8.20G resilvered
errors: No known data errors
root@odin #
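The percent done and the ETA can be scraped from the scrub line; a small sketch using the line above as sample input (the `resilver_progress` name is my own):

```shell
scrub_line=' scrub: resilver in progress for 0h6m, 36.16% done, 0h12m to go'

resilver_progress() {
    # prints "<percent> <time-to-go>" from a "resilver in progress" line
    awk '{ for (i = 1; i <= NF; i++) if ($i ~ /%$/) pct = $i
           print pct, $(NF-2) }'
}

printf '%s\n' "$scrub_line" | resilver_progress   # prints "36.16% 0h12m"
```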
According to the output above it should finish in about 12 more minutes.
A little over 12 minutes went by, and this is what the screen shows:
root@odin # zpool status -xv
pool: solpool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h18m, 98.98% done, 0h0m to go
config:
NAME STATE READ WRITE CKSUM
solpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c8t0d0s0 ONLINE 0 0 0
c8t1d0s0 ONLINE 0 0 0 22.5G resilvered
errors: No known data errors
Finally:
root@odin # zpool status -xv
all pools are healthy
To get in the necessary practice, step two, for the scenario where the other disk fails, would be the following (note the failed attempts: detach needs the slice name, and attach needs the pool name):
root@odin # zpool status
pool: solpool
state: ONLINE
scrub: resilver completed after 0h19m with 0 errors on Tue Sep 25 11:36:03 2012
config:
NAME STATE READ WRITE CKSUM
solpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c8t0d0s0 ONLINE 0 0 0
c8t1d0s0 ONLINE 0 0 0 22.7G resilvered
errors: No known data errors
root@odin # zpool detach solpool c8t0d0
cannot detach c8t0d0: no such device in pool
root@odin # zpool detach solpool c8t0d0s0
root@odin # zpool status
pool: solpool
state: ONLINE
scrub: resilver completed after 0h19m with 0 errors on Tue Sep 25 11:36:03 2012
config:
NAME STATE READ WRITE CKSUM
solpool ONLINE 0 0 0
c8t1d0s0 ONLINE 0 0 0 22.7G resilvered
errors: No known data errors
root@odin # zpool attach c8t1d0 c8t0d0
missing <new_device> specification
usage:
attach [-f] <pool> <device> <new-device>
root@odin # zpool attach c8t1d0s0 c8t0d0s0
missing <new_device> specification
usage:
attach [-f] <pool> <device> <new-device>
root@odin # zpool attach solpool c8t1d0s0 c8t0d0s0
Please be sure to invoke installboot(1M) to make 'c8t0d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
root@odin # zpool status -xv
pool: solpool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h0m, 1.18% done, 0h41m to go
config:
NAME STATE READ WRITE CKSUM
solpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c8t1d0s0 ONLINE 0 0 0
c8t0d0s0 ONLINE 0 0 0 275M resilvered
errors: No known data errors
root@odin #
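The replacement drill boils down to detach plus attach, always with slice names and the pool name (the missing pieces in the failed attempts above). A sketch with `ZPOOL` overridable for a dry run:

```shell
ZPOOL=${ZPOOL:-zpool}   # override with ZPOOL=echo for a dry run

replace_mirror_half() {
    # $1 = pool, $2 = slice to detach, $3 = surviving slice, $4 = new slice
    pool=$1 old=$2 keep=$3 new=$4
    "$ZPOOL" detach "$pool" "$old"
    "$ZPOOL" attach "$pool" "$keep" "$new"
    # remember installboot(1M) on the new slice, then wait for the resilver
}
```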