Marangani
Tuesday, August 19, 2014
Solaris 10 Live Upgrade: from UFS to ZFS
Migrating from UFS to ZFS
First, create the pool that will hold the OS:
zpool create rpool c0t0d0s0
There are two options: pass c0t0d0 so that slice s0 is used by default, or specify a particular sX slice explicitly.
Next, copy the current boot environment into rpool:
lucreate -c c0t0d0 -n BEdeZFS -p rpool
If you want a separate dataset, use the -D option, for example:
lucreate -c c0t0d0 -n BEdeZFS -p rpool -D /var
Upgrading
Once the alternate boot environment (ABE) exists, proceed as follows.
Mount the installation image and upgrade the Live Upgrade packages:
# path/Solaris_x/Tools/Installers/liveupgrade20 -nodisplay -noconsole
Then run the upgrade itself:
luupgrade -u -n BEdeZFS -s /mnt
With the upgrade done, the next step is patching with the EIS bundle.
Mount the EIS image:
cd /media/eis-dvd/sun/install
./setup-standard.sh
Then mount the ABE:
lumount BEdeZFS
With the ABE mounted and the EIS tools set up, run the setup against the alternate root:
./setup-standard.sh -R /.alt.BEdeZFS
cd /media/eis-dvd/sun
patch-EIS -R /.alt.BEdeZFS /var/tmp
luumount BEdeZFS
At this point everything is done; all that remains is to activate the new BE and reboot:
luactivate BEdeZFS
init 6
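The steps above can be sketched as a single command sequence. This is a hedged summary, not a script to paste blindly: the device, BE name, and mount point are the examples used in this post, and the EIS patching steps are omitted.

```shell
# Sketch of the UFS-to-ZFS Live Upgrade flow described above.
# c0t0d0, BEdeZFS, and /mnt are examples from this post -- adjust for your system.
zpool create rpool c0t0d0s0              # root pool on slice s0
lucreate -c c0t0d0 -n BEdeZFS -p rpool   # copy the current BE into rpool
luupgrade -u -n BEdeZFS -s /mnt          # upgrade the ABE from the mounted media
luactivate BEdeZFS                       # make the new BE active on next boot
init 6                                   # orderly reboot
```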
Wednesday, July 9, 2014
Configuring IPv4 on Solaris 11
Check which network profile is active; to set the IP statically, the "DefaultFixed" profile must be enabled:
netadm list
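If DefaultFixed is not already the active profile, it can be enabled with netadm before configuring the address (run as root):

```shell
# Switch to the fixed (manual) network configuration profile.
netadm enable -p ncp DefaultFixed
# Confirm DefaultFixed is now the active NCP.
netadm list
```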
root@solaris:~# ipadm create-ip net0
root@solaris:~# ipadm show-if
IFNAME     CLASS      STATE    ACTIVE OVER
lo0        loopback   ok       yes    ---
net0       ip         down     no     ---
root@solaris:~# ipadm create-addr -T static -a 10.163.198.20/24 net0/acme
root@solaris:~# ipadm show-if
IFNAME     CLASS      STATE    ACTIVE OVER
lo0        loopback   ok       yes    ---
net0       ip         ok       yes    ---
root@solaris:~# ipadm show-addr
ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
net0/acme         static   ok           10.163.198.20/24
lo0/v6            static   ok           ::1/128
Listing 1. Configuring a Static IP Address
We can then add a persistent default route:
root@solaris:~# route -p add default 10.163.198.1
add net default: gateway 10.163.198.1
add persistent net default: gateway 10.163.198.1
Tuesday, December 17, 2013
Solaris 8 in a Zone
First, add the Solaris Legacy Containers packages:
root@ue250 # cd solarislegacycontainers/
root@ue250 # ls
1.0 1.0.1 README
root@ue250 # cd 1.0.1/
root@ue250 # ls
Legal Product
root@ue250 # pkgadd -d .
pkgadd: ERROR: no packages were found in </export/home/itc/solarislegacycontainers/1.0.1>
root@ue250 # pwd
/export/home/itc/solarislegacycontainers/1.0.1
root@ue250 # ls -F
Legal/ Product/
root@ue250 # cd Product/
root@ue250 # ls
SUNWs8brandk SUNWs9brandk
root@ue250 # pkgadd -d .
The following packages are available:
1 SUNWs8brandk Solaris 8 Containers: solaris8 brand support RTU
(sparc) 11.10.0,REV=2008.09.20.18.50
2 SUNWs9brandk Solaris 9 Containers: solaris9 brand support RTU
(sparc) 11.10.0,REV=2008.09.20.18.50
Select package(s) you wish to process (or 'all' to process
all packages). (default: all) [?,??,q]: all
Processing package instance <SUNWs8brandk> from </export/home/itc/solarislegacycontainers/1.0.1/Product>
Solaris 8 Containers: solaris8 brand support RTU(sparc) 11.10.0,REV=2008.09.20.18.50
Copyright 2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Using </> as the package base directory.
## Processing package information.
## Processing system information.
8 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
The following files are already installed on the system and are being
used by another package:
/usr/share/man/man5/solaris8.5
Do you want to install these conflicting files [y,n,?,q] y
## Checking for setuid/setgid programs.
Installing Solaris 8 Containers: solaris8 brand support RTU as <SUNWs8brandk>
## Installing part 1 of 1.
/usr/lib/brand/solaris8/files/patches/109147-44.zip
/usr/lib/brand/solaris8/files/patches/109221-01.zip
/usr/lib/brand/solaris8/files/patches/111023-03.zip
/usr/lib/brand/solaris8/files/patches/111431-01.zip
/usr/lib/brand/solaris8/files/patches/112050-04.zip
/usr/lib/brand/solaris8/files/patches/112605-04.zip
/usr/lib/brand/solaris8/files/patches/order
/usr/share/man/man5/solaris8.5
[ verifying class <none> ]
Installation of <SUNWs8brandk> was successful.
Processing package instance <SUNWs9brandk> from </export/home/itc/solarislegacycontainers/1.0.1/Product>
Solaris 9 Containers: solaris9 brand support RTU(sparc) 11.10.0,REV=2008.09.20.18.50
Copyright 2008 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
Using </> as the package base directory.
## Processing package information.
## Processing system information.
8 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.
Installing Solaris 9 Containers: solaris9 brand support RTU as <SUNWs9brandk>
## Installing part 1 of 1.
/usr/lib/brand/solaris9/files/patches/112963-32.zip
/usr/lib/brand/solaris9/files/patches/115986-03.zip
/usr/lib/brand/solaris9/files/patches/order
/usr/share/man/man5/solaris9.5
[ verifying class <none> ]
Installation of <SUNWs9brandk> was successful.
root@ue250 #
Now configure the zone:
root@ue250 # zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / native shared
root@ue250 # zonecfg -z solaris8
solaris8: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:solaris8> create -t SUNWsolaris8
zonecfg:solaris8> set zonepath=/zonas/solaris8
zonecfg:solaris8> set autoboot=true
zonecfg:solaris8> add net
zonecfg:solaris8:net> set physical=ce3
zonecfg:solaris8:net> set address=192.168.200.251
zonecfg:solaris8:net> set defrouter=192.168.200.100
zonecfg:solaris8:net> end
zonecfg:solaris8> verify
zonecfg:solaris8> commit
zonecfg:solaris8> exit
root@ue250 #
Now give the zonepath directory the required permissions:
root@ue250 # mkdir -p /zonas/solaris8
root@ue250 # chmod -R 700 /zonas/solaris8/
root@ue250 #
With that in place, install the zone from the backup, which is a *.dmp (ufsdump) archive:
root@ue250 # zoneadm -z solaris8 install -p -v -a /export/home/itc/Ultra60Entel/raiz.dmp
Log File: /var/tmp/solaris8.install.1503.log
Product: Solaris 8 Containers 1.0
Installer: solaris8 brand installer 1.3
Zone: solaris8
Path: /zonas/solaris8
Source: /export/home/itc/Ultra60Entel/raiz.dmp
Media Type: ufsdump archive
Installing: This may take several minutes...
Wait for the process to finish.
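Once the install finishes, the zone can be booted and checked with the standard zoneadm and zlogin tools (the zone name here follows this post):

```shell
# Boot the newly installed solaris8-branded zone and verify its state.
zoneadm -z solaris8 boot
zoneadm list -cv          # the solaris8 zone should now show as "running"
zlogin -C solaris8        # attach to the zone console (detach with ~.)
```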
Thursday, November 28, 2013
zfs grow
The rpool here was mirrored onto a larger disk (c8t3d0) and resilvered; after detaching the old, smaller disk and setting autoexpand=on, the pool grows to the new disk's full size:
root@solaris:/etc/inet# zpool status rpool
pool: rpool
state: ONLINE
scan: resilvered 6.19G in 0h11m with 0 errors on Thu Nov 28 11:15:04 2013
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
c8t0d0 ONLINE 0 0 0
c8t3d0 ONLINE 0 0 0
errors: No known data errors
root@solaris:/etc/inet# zpool detach rpool c8t0d0
root@solaris:/etc/inet# df -h -F zfs
Filesystem Size Used Available Capacity Mounted on
rpool/ROOT/solaris 7.6G 3.8G 1.3G 75% /
rpool/ROOT/solaris/var
7.6G 392M 1.3G 23% /var
rpool/VARSHARE 7.6G 74K 1.3G 1% /var/share
mypool 39G 28G 11G 73% /datos
rpool/export 7.6G 32K 1.3G 1% /export
rpool/export/home 7.6G 32K 1.3G 1% /export/home
rpool/export/home/itc
7.6G 768K 1.3G 1% /export/home/itc
rpool 7.6G 4.9M 1.3G 1% /rpool
root@solaris:/etc/inet# zpool set autoexpand=on rpool
root@solaris:/etc/inet# df -h -F zfs
Filesystem Size Used Available Capacity Mounted on
rpool/ROOT/solaris 39G 3.8G 33G 11% /
rpool/ROOT/solaris/var
39G 392M 33G 2% /var
rpool/VARSHARE 39G 74K 33G 1% /var/share
mypool 39G 28G 11G 73% /datos
rpool/export 39G 32K 33G 1% /export
rpool/export/home 39G 32K 33G 1% /export/home
rpool/export/home/itc
39G 768K 33G 1% /export/home/itc
rpool 39G 4.9M 33G 1% /rpool
root@solaris:/etc/inet#
Tuesday, December 4, 2012
setenforce
SELinux enforcing mode on Linux is toggled with setenforce (0 = permissive, 1 = enforcing):
setenforce 0 | 1
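A minimal sketch of toggling the mode (requires root on an SELinux-enabled Linux system):

```shell
getenforce        # prints the current mode: Enforcing, Permissive, or Disabled
setenforce 0      # switch to permissive mode until the next boot
setenforce 1      # back to enforcing
```

Note that setenforce is not persistent; for a permanent change, edit the SELINUX= line in /etc/selinux/config.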
A related snippet: labeling a new disk with a GPT partition table using parted:
# parted /dev/sdb
GNU Parted 2.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Error: /dev/sdb: unrecognised disk label
(parted) mklabel gpt
(parted) print
Model: Unknown (unknown)
Disk /dev/sdb: 5909GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags
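The same labeling can be done non-interactively with parted's -s (script) flag, which is handy when provisioning disks from a script; /dev/sdb and the partition bounds below are examples:

```shell
# Create a GPT label and one partition spanning the disk, without prompts.
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 0% 100%
parted -s /dev/sdb print       # verify the resulting partition table
```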
Thursday, November 8, 2012
Console connection to a blade through the CMM
Sun Blade X6250 Server Module
# ssh -l root blade_6000_cmm
Password:
Sun(TM) Integrated Lights Out Manager
Version 2.0.3.2
Copyright 2007 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
-> cd BL3
/CH/BL3
-> show
/CH/BL3
Targets:
SP
SEEPROM
Properties:
type = Blade
fru_part_number = 501-7376-02
fru_serial_number = 0000000-7001
fru_name = ASSY,BD,WOLF,X6250
Commands:
cd
show
-> start /CH/BL3/SP/cli
Are you sure you want to start /CH/BL3/SP/cli (y/n)? y
start: Connecting to /CH/BL3/SP/cli as user root
start: Change the "user" property to connect as a different user
root@10.11.2.123's password:
Sun Microsystems Embedded Lights Out Manager
Copyright 2006 Sun Microsystems, Inc. All rights reserved.
Firmware Version: 4.0.51
SMASH Version: v1.0.0
Hostname: SUNSP001B242D495D
IP address: 10.12.9.207
MAC address: 00:1B:24:2D:49:5D
-> show
/
Targets:
SP
SYS
CH
Properties:
Target Commands:
show
cd
-> start /SP/AgentInfo/Console
console activate successful
press ESC+( to terminate session...
x6250a console login:
Tuesday, November 6, 2012
Repairing a corrupt boot archive on SPARC with SVM
After an abrupt power-off it is common for the boot archive to end up corrupt. There is a standard procedure: boot the machine in failsafe mode and repair it manually from there. But what happens when root is mirrored with SVM?
Here are the summarized steps for this task:
1.- From the ok prompt:
ok boot -F failsafe
2.- Mount one of the submirrors read-only, so the SVM configuration file can be copied into the failsafe environment:
mount -o ro /dev/dsk/cxtxdxs0 /a
3.- Copy the SVM configuration:
cp /a/kernel/drv/md.conf /kernel/drv
4.- Force the md driver to re-read the copied file:
update_drv -f md
5.- If everything has gone well, metastat can now be run to verify the result of the previous steps.
6.- With that, the metadevice that corresponds to / is fully identified; if, for example, it were d10:
metasync d10
7.- Wait until the status is Okay, that is, until both submirrors are in sync.
8.- Mount the metadevice on /a:
mount /dev/md/dsk/d10 /a
9.- Rebuild the boot archive with bootadm, verbose and with -R pointing at the alternate root mounted on /a:
bootadm update-archive -v -R /a
10.- Unmount the /a directory.
11.- Finally, give it an orderly reboot with init 6.
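The numbered steps above can be sketched as one sequence, run from failsafe mode. The disk slice and metadevice d10 are the placeholders/examples from this post, and an explicit umount is added between the two mounts of /a:

```shell
# Failsafe-mode repair of the boot archive when / is an SVM mirror.
# c0t0d0s0 and d10 are examples -- use your actual slice and metadevice.
mount -o ro /dev/dsk/c0t0d0s0 /a          # one submirror, read-only
cp /a/kernel/drv/md.conf /kernel/drv      # bring in the SVM configuration
umount /a
update_drv -f md                          # re-read md.conf
metastat                                  # verify the metadevices are visible
metasync d10                              # resync the root mirror
mount /dev/md/dsk/d10 /a                  # mount root via the metadevice
bootadm update-archive -v -R /a           # rebuild the boot archive
umount /a
init 6                                    # orderly reboot
```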