Sunday, January 19, 2014

Cloning FreeBSD Jails on ZFS, provisioning Oracle 11g R2

English Version: Cloning FreeBSD Jails with ZFS as a method for provisioning Oracle 11g R2

An increasingly pressing need in today's data centers is the ability to deploy services and applications rapidly, almost immediately.

It is more and more common for system administrators and DBAs to maintain several versions of the same service or application for use as production, testing, development, or reporting environments.

In this post I will show, with a practical case, how the combination of ZFS and FreeBSD Jails can simplify these tasks to the point of making them almost trivial.


clonejailz.sh

I have created this script to automate the process of cloning a FreeBSD Jail on ZFS.

The requirements a Jail must meet in order to be cloned with this script are the following (a sketch of a conforming layout follows the list):

  • The Jail must have all of its filesystems defined on a single ZFS pool.
  • The Jail's configuration resides in a jail.conf file, and the filesystems belonging to the Jail are defined in a separate fstab file.
  • The Jail's path must have the format $JAIL_ROOT/$JAIL_NAME, where the JAIL_ROOT variable defines the root directory of the Jails.
  • The dataset where the Jail resides takes the form ZPOOL/$JAIL_ROOT/$JAIL_NAME and is the root of all other filesystems associated with the Jail.
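As an illustration, the Base Jail used later in this post (debora, on the pool fbsdzpool1, with JAIL_ROOT=/jailz) could have been laid out like this; a minimal sketch, not taken from the original setup:

zfs create -o mountpoint=/jailz fbsdzpool1/jailz   # the JAIL_ROOT container
zfs create fbsdzpool1/jailz/debora                 # the Base Jail's root dataset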

To install the clonejailz.sh script, copy the file to the path given by the JAIL_BIN variable, which is defined in the clonejailz.rc file.

The clonejailz.rc file may live in three different locations; the script sources each one that exists, in the order listed, so settings in a later file override those in an earlier one:

  • In the /usr/local/etc directory.
  • In the same directory as the clonejailz.sh script.
  • In the user's HOME directory, taking the form $HOME/.clonejailz.rc.

The clonejailz.sh script is invoked with the following arguments; a concrete invocation follows the list:
# clonejailz.sh bjname=base_jail_name njname=new_jail_name jipadr=new_jail_ip [njpool=new_jail_pool] [script=script_to_run_inside_the_new_jail]

  • bjname : The name of the Base Jail, that is, the Jail that is going to be cloned.
  • njname : The name of the Jail that will be created as a copy of the Base Jail.
  • jipadr : The IP address assigned to the newly created Jail.
  • njpool : The name of the ZFS pool where the new Jail will reside. If it is not defined, the Jail is created as a ZFS clone of the Base Jail in the same ZFS pool.
  • script : Optionally, a script that performs additional configuration actions for the new Jail. This script is executed inside the newly created Jail.
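For instance, the invocation used in the worked example later in this post:

# /jailz/bin/clonejailz.sh bjname=debora njpool=datapool0 njname=oratest1 jipadr=127.0.0.100 script=cloneorahome.sh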

This Jail cloning system consists of the following elements.


File 1: clonejailz.rc


This is the file that defines the main parameters governing the behavior of clonejailz.sh.

// Begin file clonejailz.rc
#
# http://devil-detail.blogspot.com.es/
#

# The path of the Jail must have the format $JAIL_ROOT/$JAIL_NAME,
# hence the JAIL_ROOT variable defines the root directory for the Jails.
# Each Jail must have all of its filesystems defined on a single ZFS pool.
# The dataset where the Jail resides takes the form ZPOOL/$JAIL_ROOT/$JAIL_NAME
# and is the root of all other filesystems associated with the Jail.
#
JAIL_ROOT=/jailz


# JAIL_BIN holds the path where clonejailz.sh and the post-cloning scripts reside.
#
JAIL_BIN=${JAIL_ROOT}/bin

# The configuration of each Jail resides in jail.conf and the
# filesystems owned by the Jail are defined in a fstab file.
# JAIL_ETC defines where the jail.conf file and fstab.JAIL_NAME reside.
#
JAIL_ETC=${JAIL_ROOT}/etc

#
# JAIL_CONF is the name and complete path of the jail configuration file.
#
JAIL_CONF=${JAIL_ETC}/jail.conf

# The snapshots created by this script have the format $CLONE_PREFIX$JAIL_NAME
#
CLONE_PREFIX=clone_for_

# An alternative set of values for these variables may be the following:
# JAIL_ROOT=/home
# JAIL_BIN=/usr/local/sbin
# JAIL_ETC=/usr/local/etc
# JAIL_CONF=${JAIL_ETC}/jail.conf
// End file clonejailz.rc
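For reference, a jail.conf entry that follows these conventions might look like the block below. This exact entry does not appear in the original post; it is reconstructed from the debora Jail of the example (path is left unquoted and mount.fstab quoted, matching the way clonejailz.sh parses those lines):

debora {
    path = /jailz/debora;
    host.hostname = debora;
    ip4.addr = 127.0.0.25;
    mount.fstab = "/jailz/etc/fstab.debora";
}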

 

File 2: clonejailz.sh


This is the script that performs the cloning of the Jail.

// Begin file clonejailz.sh
#!/bin/sh
#
# http://devil-detail.blogspot.com.es/
#

# Locate and source clonejailz.rc to define the global settings.

[ -s /usr/local/etc/clonejailz.rc ] && . /usr/local/etc/clonejailz.rc
[ -s "${0%/*}/clonejailz.rc" ] && . ${0%/*}/clonejailz.rc
[ -s "$HOME/.clonejailz.rc" ] && . $HOME/.clonejailz.rc

# Initialize script variables.

BJ_NAME=""
NJ_NAME=""
NJ_IP=""
NJ_ZPOOL=""
JAIL_SCRIPT=""
BJ_DATASET=""
NJ_DATASET=""
NJ_MOUNTP=""
TMP_CFG=""

# Declare Functions.

# Print the usage message and parameter descriptions.

print_help () {
 echo "Usage: clonejailz.sh bjname=base_jail_name njname=new_jail_name
              jipadr=new_jail_ip [njpool=new_jail_pool]
              [script=script_to_run_inside_the_new_jail]
              [help]"
echo ""
echo "bjname : Name of the base Jail. Mandatory."
echo "njname : Name of the new Jail. Mandatory."
echo "jipadr : Ip adress of the new Jail. Mandatory."
echo "njpool : ZFS pool where will reside the new Jail. Optional."
echo "script : Script to run inside the new Jail. Optional."
echo "help   : Print this message."
return 0
}

# Validate conditions.

check_params () {
result=1

if [ -w "${JAIL_CONF}" ]
then
  check_njname && check_bjname && check_njpool && check_jipadr && check_script
  result=$?
else
  echo "${JAIL_CONF} : Can not writable."
fi
 
return $result
}

# Check that the destination pool for the cloned Jail exists.

check_njpool () {
result=1
if [ "${NJ_ZPOOL}" = "" ]
then
  result=0
else
  npool=$( zpool list | grep ${NJ_ZPOOL} | wc -l )
  if [ ${npool} -eq 0 ]
  then
    echo "${NJ_ZPOOL} : The Pool does not exist."
  else
    result=0
  fi
fi

return $result
}

# Check the Base Jail Name.

check_bjname () {
result=1
if [ "${BJ_NAME}" = "" ]
then
  echo "bjname is mandatory."
else
  cfile=$( grep ${BJ_NAME} ${JAIL_CONF} | wc -l )

  if [ $cfile -eq 0 ]
  then
    echo "${BJ_NAME} : Does not exist in ${JAIL_CONF}"
  else
    isactivejail=$( jls | grep ${BJ_NAME} | wc -l )

    if [ ${isactivejail} -eq 0 ]
    then
      result=0
    else
      echo "${BJ_NAME} : must be inactive."
    fi
  fi
fi

return $result
}

# Check that the New Jail does not exist in jail.conf and that its mount point does not already exist.

check_njname () {
result=1

if [ "${NJ_NAME}" = "" ]
then
  echo "njname is mandatory."
else
  cfile=$( grep ${NJ_NAME} ${JAIL_CONF} | wc -l )

  if [ $cfile -eq 0 ]
  then
    if [ -d ${NJ_MOUNTP} ]
    then
      echo "${NJ_MOUNTP} : Already exist."
    else
      result=0
    fi
  else
    echo "${NJ_NAME} : Already exist in ${JAIL_CONF}"
  fi
fi
return $result
}

# Validate the IPv4 address for the New Jail.

check_jipadr () {
result=1
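# POSIX ERE: exactly four dot-separated octets, each in the range 0-255.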
IPregex="^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$"

if [ "${NJ_IP}" = "" ]
then
  echo "jipadr is mandatory."
else
  if ( echo $NJ_IP | grep -qs -E $IPregex )
  then
    result=0
  else
    echo " ${NJ_IP}: Invalid ipv4 format."
  fi
fi
return $result
}

# Check the execute permission of the supplied script, if one was given.

check_script () {
result=1
if [ "${JAIL_SCRIPT}" = "" ]
then
  result=0
else
  if [ -x "${JAIL_BIN}/${JAIL_SCRIPT}" ]
  then
    result=0
  else
    echo "${JAIL_BIN}/${JAIL_SCRIPT} is not an executable script."
  fi
fi
return $result
}

# Create a new entry in jail.conf for the new Jail and a fstab file in JAIL_ETC.

clone_jail_conf () {
  result=1
  # Identify the fstab file of the Base Jail.
  oldfstab=$( grep fstab ${TMP_CFG} | cut -f2 -d'=' | cut -f1 -d';' |cut -f2 -d '"' )

  if [ -r "${oldfstab}" ]
  then
    # Define the fstab file name for the new Jail.
    newfstab=${JAIL_ETC}/fstab.${NJ_NAME}
    # Create the new fstab file from the original fstab file.
    sed -E 's/'"${BJ_NAME}"'/'"${NJ_NAME}"'/g' < ${oldfstab} > ${newfstab}
  fi

  # Create the new entry in jail.conf for the new Jail.
  sed -E '
         s/[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/'"${NJ_IP}"'/g
         s/'"${BJ_NAME}"'/'"${NJ_NAME}"'/g
  ' < ${TMP_CFG}  >> ${JAIL_CONF}

  result=$?
  [ $result -ne 0 ] && echo "An error has occurred during jail.conf actualization."
  return $result
}

# Make a ZFS clone of the BASE JAIL, or send it to a new ZFS POOL if one was defined.

clone_zfs() {
  result=1
  snapshot_name=${BJ_DATASET}@${CLONE_PREFIX}${NJ_NAME}
 

  # First of all, create a snapshot of the ZFS filesystem where the Base Jail resides.

  zfs snapshot ${snapshot_name}
  result=$?
  if [ ${result} -eq 0 ]
  then
    if [ "${NJ_ZPOOL}" = "" ]
    then
      # If no destination POOL was defined for the new JAIL,
      # then perform a clone within the POOL where the BASE JAIL resides.
      NJ_DATASET=$( echo ${BJ_DATASET} | cut -f1 -d '/' )${NJ_MOUNTP}
      zfs clone ${snapshot_name} ${NJ_DATASET}
      result=$?
    else
      # If a destination POOL was given, make a copy of the BASE JAIL,
      # transfer it to the NEW POOL and finally rename it to the NEW JAIL.
      NJ_DATASET=${NJ_ZPOOL}${NJ_MOUNTP}
      zpool_root_fs=${NJ_ZPOOL}${JAIL_ROOT}

      zfs create ${zpool_root_fs}
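
      # zfs receive -e names the received filesystem after the last element
      # of the sent snapshot name; -u keeps it unmounted during the transfer.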

      zfs send ${snapshot_name} | zfs receive -euv ${zpool_root_fs}

      zfs rename ${zpool_root_fs}/${BJ_NAME} ${NJ_DATASET}
      zfs set mountpoint=${NJ_MOUNTP} ${NJ_DATASET}
      result=$?
    fi
  else
    echo "${snapshot_name} : cannot be created."
  fi

  [ $result -ne 0 ] && echo "${BJ_DATASET}: cannot be cloned."
  return $result
}

# Start the New Jail, add its IP address to the hosts file and execute the supplied script.

jail_setup() {
  result=1

  # Start the New Jail.

  jail -f ${JAIL_CONF} -c ${NJ_NAME}
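
  # jls -h prints a header line, so tail -1 keeps only the JID value.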

  jid=$( jls -j ${NJ_NAME} -h jid | tail -1 )
  result=$?

  if [ $result -eq 0 ]
  then
    # Add the hostname and the IP to the hosts file.

    echo "${NJ_IP}   ${NJ_NAME}" >> ${NJ_MOUNTP}/etc/hosts

    if [ "${JAIL_SCRIPT}" != "" ]
    then
      # If a script was supplied, it is executed inside the Jail.
      tmp_jail=${NJ_MOUNTP}/tmp

      cp ${JAIL_BIN}/${JAIL_SCRIPT} ${tmp_jail}

      jexec $jid /tmp/${JAIL_SCRIPT}
    fi

    # Stop the Jail.

    jail -f ${JAIL_CONF} -r ${NJ_NAME}
    result=$?
  else
    echo " Invalid JID."
  fi

  if [ $result -eq 0 ]
  then
    echo "${NJ_NAME} was successfully created."
  else
    echo "${NJ_NAME} was created with errors."
  fi
  return $result
}

# The script begins by parsing the input parameters.

result=1
if [ $# -gt 0 ]
then
  while [ $# -gt 0 ]
  do
    case "${1}" in
        bjname=*) BJ_NAME=${1#bjname=}  ;;
        njname=*) NJ_NAME=${1#njname=}   ;;
        jipadr=*) NJ_IP=${1#jipadr=}     ;;
        njpool=*) NJ_ZPOOL=${1#njpool=}  ;;
        script=*) JAIL_SCRIPT=${1#script=}     ;;
        help) print_help ; exit 1;;
      *)  break;;
    esac
    shift
  done
else
  print_help
  exit $result
fi

NJ_MOUNTP=${JAIL_ROOT}/${NJ_NAME}

# Next, validate the parameters.

if ( check_params )
then
  # If the script parameters were OK, go ahead.

  TMP_CFG=/tmp/tmp.${NJ_NAME}

  # Make a backup of the jail.conf file.
  now=$( date +"%m%d%Y%H%M%S" )

  cp  -p ${JAIL_CONF}  ${JAIL_CONF}.${now}
  

  # Create a temp file with a copy of the Base Jail configuration.

  sed -En '/'"$BJ_NAME"' *{/,/}/ p' < ${JAIL_CONF} > ${TMP_CFG}

  bjpath=$( grep path ${TMP_CFG} | cut -f2 -d'=' | cut  -f1 -d';' )

  BJ_DATASET=$( zfs list -H -o name ${bjpath} )

  result=$?

  if [ ${result} -eq 0 ]
  then
     # Finally proceed with the cloning process.

     clone_zfs && clone_jail_conf && jail_setup
     result=$?
  else
    echo "${BJ_NAME} It has no associated ZFS filesystem."
 fi
fi
exit $result
// End file clonejailz.sh
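To make the ZFS mechanics of clone_zfs easier to follow, here is a sketch of the equivalent manual commands for its two paths, using the names of the worked example below (debora cloned to oratest1, with datapool0 as the optional destination pool):

# Same-pool path: an instant, space-efficient ZFS clone.
zfs snapshot fbsdzpool1/jailz/debora@clone_for_oratest1
zfs clone fbsdzpool1/jailz/debora@clone_for_oratest1 fbsdzpool1/jailz/oratest1

# Cross-pool path: a full copy sent to the other pool, then renamed.
zfs snapshot fbsdzpool1/jailz/debora@clone_for_oratest1
zfs create datapool0/jailz
zfs send fbsdzpool1/jailz/debora@clone_for_oratest1 | zfs receive -euv datapool0/jailz
zfs rename datapool0/jailz/debora datapool0/jailz/oratest1
zfs set mountpoint=/jailz/oratest1 datapool0/jailz/oratest1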

File 3: cloneorahome.sh


As an optional final step of the cloning process, a script can be executed to perform additional configuration tasks inside the Jail. The script= parameter indicates the name of the script to run; clonejailz.sh starts the newly created jail, copies the script into its /tmp directory and executes it.

Below is an example script that rebuilds the oraInventory from the ORACLE_HOME directory; a recommended practice when cloning the directory holding an Oracle RDBMS installation.


// Begin file cloneorahome.sh
#!/bin/sh

su - oracle -c '
ORACLE_BASE=/oracle
ORACLE_HOME=/oracle/product/11.2.0
ORACLE_SID=ORATEST
NLS_LANG=American_america.WE8ISO8859P15
ORA_NLS11=${ORACLE_HOME}/nls/data
PATH=$PATH:$ORACLE_HOME/bin

export PATH
export ORACLE_BASE
export ORACLE_HOME
export ORACLE_SID
export NLS_LANG
export ORA_NLS11

rm -r /oracle/oraInventory/ContentsXML
rm -r /oracle/oraInventory/logs

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/clone/bin/clone.pl ORACLE_BASE="/oracle" ORACLE_HOME="/oracle/product/11.2.0" OSDBA_GROUP=dba OSOPER_GROUP=oper INVENTORY_LOCATION=/oracle/oraInventory -defaultHomeName -O -ignorePrereq -jreloc /usr/lib/jvm/java-6-openjdk

'>> /var/log/cloneorahome.log 2>&1
/oracle/product/11.2.0/root.sh >> /var/log/cloneorahome.log 2>&1
// End file cloneorahome.sh
 

Example usage of clonejailz.sh


To illustrate the use of this script with a practical example, we take as a starting point my Oracle 11gR2 on FreeBSD environment and create a ZFS pool on a partition of my test laptop.


We are going to define an environment, cloned from the debora Jail, which we will call oratest1.
  
First we set up the ZFS pool on which the cloned Jail will be created. For this we select partition 12, previously labeled freedsk0; the gnop device advertises 4096-byte sectors so that the pool is created 4K-aligned.

root@morsa:/root # gnop create -S 4096 /dev/gpt/freedsk0
root@morsa:/root # zpool create -m none datapool0 /dev/gpt/freedsk0.nop
root@morsa:/root # zpool status
  pool: datapool0
 state: ONLINE
  scan: none requested
config:

        NAME                STATE     READ WRITE CKSUM
        datapool0           ONLINE       0     0     0
          gpt/freedsk0.nop  ONLINE       0     0     0

errors: No known data errors

  pool: fbsdzpool1
 state: ONLINE
  scan: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        fbsdzpool1      ONLINE       0     0     0
          gpt/freedsk1  ONLINE       0     0     0

errors: No known data errors
 
We run the clonejailz.sh script with the following parameters:

root@morsa:/root #/jailz/bin/clonejailz.sh bjname=debora njpool=datapool0 njname=oratest1 jipadr=127.0.0.100  script=cloneorahome.sh
receiving full stream of fbsdzpool1/jailz/debora@clone_for_oratest1 into datapool0/jailz/debora@clone_for_oratest1
received 7.56GB stream in 392 seconds (19.7MB/sec)
oratest1: created
stty: standard input: Inappropriate ioctl for device
Starting periodic command scheduler: crond.
stty: standard input: Inappropriate ioctl for device
Asking all remaining processes to terminate...done.
All processes ended within 1 seconds....done.
/etc/rc0.d/S31umountnfs.sh: line 45: /etc/mtab: No such file or directory
Deconfiguring network interfaces...done.
Cleaning up ifupdown....
Unmounting temporary filesystems...umount: tmpfs: must be superuser to umount
umount: tmpfs: must be superuser to umount
umount: tmpfs: must be superuser to umount
umount: tmpfs: must be superuser to umount
failed.
Deactivating swap...failed.
Unmounting local filesystems...umount2: Operation not permitted

oratest1: removed
oratest1 was successfully created.

Inside the newly created jail -oratest1- we review the /var/log/cloneorahome.log log file generated by the cloneorahome.sh script.

// Begin file /jailz/oratest1/var/log/cloneorahome.log
./runInstaller -clone -waitForCompletion  "ORACLE_BASE=/oracle" "ORACLE_HOME=/oracle/product/11.2.0" "oracle_install_OSDBA=dba" "oracle_install_OSOPER=oper" "INVENTORY_LOCATION=/oracle/oraInventory" -defaultHomeName   -ignorePrereq  -jreloc  /usr/lib/jvm/java-6-openjdk  -defaultHomeName -silent -noConfig -nowait
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 4096 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-01-01_09-07-58PM. Please wait ...Oracle Universal Installer, Version 11.2.0.1.0 Production
Copyright (C) 1999, 2009, Oracle. All rights reserved.

You can find the log of this install session at:
 /oracle/oraInventory/logs/cloneActions2014-01-01_09-07-58PM.log

.................................................................................................... 100% Done.

Installation in progress (Wednesday, January 1, 2014 9:08:20 PM CET)
............................................................................                                                    76% Done.
Install successful

Linking in progress (Wednesday, January 1, 2014 9:08:31 PM CET)
Link successful

Setup in progress (Wednesday, January 1, 2014 9:09:19 PM CET)
Setup successful

End of install phases.(Wednesday, January 1, 2014 9:11:08 PM CET)
Starting to execute configuration assistants
The following configuration assistants have not been run. This can happen because Oracle Universal Installer was invoked with the -noConfig option.
--------------------------------------
The "/oracle/product/11.2.0/cfgtoollogs/configToolFailedCommands" script contains all commands that failed, were skipped or were cancelled. This file may be used to run these configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.
The "/oracle/product/11.2.0/cfgtoollogs/configToolAllCommands" script contains all commands to be executed by the configuration assistants. This file may be used to run the configuration assistants outside of OUI. Note that you may have to update this script with passwords (if any) before executing the same.

--------------------------------------
WARNING:
The following configuration scripts need to be executed as the "root" user.
/oracle/product/11.2.0/root.sh
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts
   
The cloning of OraHome1 was successful.
Please check '/oracle/oraInventory/logs/cloneActions2014-01-01_09-07-58PM.log' for more details.
Check /oracle/product/11.2.0/install/root_oratest1_2014-01-01_21-11-09.log for the output of root script
// End file /jailz/oratest1/var/log/cloneorahome.log

To verify that the jails work correctly, we start debora and oratest1 with their corresponding databases.

root@morsa:/root # sysctl kern.ipc.shmmax=214743648
root@morsa:/root # jail -f /jailz/etc/jail.conf -c debora 
debora: created
Starting periodic command scheduler: crond.
root@morsa:/root # jail -f /jailz/etc/jail.conf -c oratest1 

oratest1: created
Starting periodic command scheduler: crond.

root@morsa:/root # jls
   JID  IP Address      Hostname                      Path
     2  127.0.0.25      debora                        /jailz/debora
     3  127.0.0.100     oratest1                      /jailz/oratest1



From the ttyv1 console -Alt+F2- we connect to the debora Jail and start the ORATEST database.

root@morsa:/root # jexec debora /bin/sh
sh-3.2# uname -a
Linux debora 2.6.16 FreeBSD 9.1-RELEASE-p4 #0: Mon Jun 17 11:42:37 UTC 2013 i686
GNU/Linux
sh-3.2# su - oracle
oracle@debora:~$ . ./ORATEST.sh
oracle@debora:~$ sqlplus /nolog
SQL*Plus: Release 11.2.0.1.0 Production on Wed Jan 1 21:44:09 2014
Copyright (c) 1982, 2009, Oracle. All rights reserved.
SQL> conn / as sysdba
Connected.
SQL> startup
ORACLE instance started.
Total System Global Area 1071333376 bytes
Fixed Size 1341312 bytes
Variable Size 750782592 bytes
Database Buffers 314572800 bytes
Redo Buffers 4636672 bytes
Database mounted.
Database opened.
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
1 ORATEST
debora
11.2.0.1.0 01-JAN-14 OPEN NO 1 STOPPED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO

From the ttyv2 console -Alt+F3- we connect to the oratest1 Jail and start the ORATEST database.

root@morsa:/root # jexec 3 /bin/sh
sh-3.2# uname -n
oratest1
sh-3.2# su - oracle
oracle@oratest1:~$ . ./ORATEST.sh
oracle@oratest1:~$ sqlplus /nolog
SQL*Plus: Release 11.2.0.1.0 Production on Wed Jan 1 21:44:09 2014
Copyright (c) 1982, 2009, Oracle. All rights reserved.
SQL> conn / as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 1071333376 bytes
Fixed Size 1341312 bytes
Variable Size 750782592 bytes
Database Buffers 314572800 bytes
Redo Buffers 4636672 bytes
Database mounted.
Database opened.
SQL> select * from v$instance;
INSTANCE_NUMBER INSTANCE_NAME
--------------- ----------------
HOST_NAME
----------------------------------------------------------------
VERSION STARTUP_T STATUS PAR THREAD# ARCHIVE LOG_SWITCH_WAIT
----------------- --------- ------------ --- ---------- ------- ---------------
LOGINS SHU DATABASE_STATUS INSTANCE_ROLE ACTIVE_ST BLO
---------- --- ----------------- ------------------ --------- ---
1 ORATEST
oratest1
11.2.0.1.0 01-JAN-14 OPEN NO 1 STOPPED
ALLOWED NO ACTIVE PRIMARY_INSTANCE NORMAL NO

Done.

Final considerations

ZFS and FreeBSD Jails are technologies that allow us to easily duplicate an environment (with ZFS snapshots) and to run the newly created environment in isolation (with Jails).

Naturally, there are alternatives to the combination of ZFS and Jails; any filesystem that supports snapshots can serve this purpose.

Other execution environments besides FreeBSD Jails can also be used, such as LXC and OpenVZ on Linux. Among UNIX systems we can mention AIX Workload Partitions and Solaris Zones.

In any case, the combination of ZFS and FreeBSD Jails is a great solution for fast and easy provisioning of services, as I have just demonstrated.
