Oracle Database Appliance Assessment Report

System Health Score is 99 out of 100

Cluster Summary

Cluster Name: odacluster01-c
OS Version: LINUX X86-64 OELRHEL 5 2.6.32-300.11.1.el5uek
CRS Home - Version: /u01/app/11.2.0.3/grid - 11.2.0.3.0
DB Home - Version - Names: /u01/app/oracle/product/11.2.0.3/dbhome_1 - 11.2.0.3.0 - 8
Number of nodes: 2
   Database Servers: 2
odachk Version: 2.1.5_20120524
Collection: odachk_MCLDB_081512_092757.zip
Collection Date: 15-Aug-2012 09:30:35


Findings Needing Attention

FAIL, WARNING, ERROR, and INFO findings should be evaluated. An INFO status is considered a significant finding, and its details should be reviewed in light of your environment.

Database Server

Status | Type | Message | Status On
WARNING | OS Check | One or more warnings for network and bonding interface checks | All Database Servers
WARNING | Database Check | Local listener init parameter is not set to local node VIP | odahost-01:MCL6DB


Findings Passed

Database Server

Status | Type | Message | Status On
PASS | Database Check | Remote listener is set to SCAN name | All Databases
PASS | Database Check | Value of remote_listener parameter is able to tnsping | All Databases
PASS | SQL Parameter Check | Database parameter parallel_execution_message_size is set to the recommended value | All Instances
PASS | OS Check | The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr) | All Database Servers
PASS | OS Check | pam_limits configured properly for shell limits | All Database Servers
PASS | SQL Parameter Check | Database parameter DB_BLOCK_CHECKSUM is set to recommended value | All Instances
PASS | SQL Parameter Check | Database parameter DB_BLOCK_CHECKING is set to the recommended value | All Instances
PASS | SQL Parameter Check | ASM parameter MEMORY_TARGET is set according to recommended value | All Instances
PASS | SQL Check | All bigfile tablespaces have non-default maxbytes values set | All Databases
PASS | OS Check | OS Disk Storage checks successful | All Database Servers
PASS | OS Check | System component checks successful | All Database Servers
PASS | OS Check | Shared storage checks successful | All Database Servers
PASS | OS Check | All software and firmware versions are up to date with OAK repository | All Database Servers
PASS | Database Check | Database parameter db_create_online_log_dest_n is set to recommended value | All Databases
PASS | Database Check | Database parameter db_recovery_file_dest_size is set to recommended value | All Databases
PASS | SQL Parameter Check | Database parameter GLOBAL_NAMES is set to recommended value | All Instances
PASS | SQL Parameter Check | Database parameter DB_LOST_WRITE_PROTECT is set to recommended value | All Instances
PASS | OS Check | Shell limit soft nproc for DB is configured according to recommendation | All Database Servers
PASS | OS Check | Shell limit hard stack for DB is configured according to recommendation | All Database Servers
PASS | OS Check | Shell limit hard nofile for DB is configured according to recommendation | All Database Servers
PASS | OS Check | Shell limit hard nproc for DB is configured according to recommendation | All Database Servers
PASS | OS Check | Shell limit hard nproc for GI is configured according to recommendation | All Database Servers
PASS | OS Check | Shell limit hard nofile for GI is configured according to recommendation | All Database Servers
PASS | OS Check | Shell limit soft nproc for GI is configured according to recommendation | All Database Servers
PASS | OS Check | Shell limit soft nofile for GI is configured according to recommendation | All Database Servers
PASS | ASM Check | All disk groups have compatible.asm parameter set to recommended values | All ASM Instances
PASS | ASM Check | All disk groups have allocation unit size set to 4MB | All ASM Instances
PASS | OS Check | OSWatcher is running | All Database Servers
PASS | OS Check | ohasd log ownership is correct (root root) | All Database Servers
PASS | OS Check | ohasd/orarootagent_root log ownership is correct (root root) | All Database Servers
PASS | OS Check | crsd/orarootagent_root log ownership is correct (root root) | All Database Servers
PASS | OS Check | crsd log ownership is correct (root root) | All Database Servers
PASS | OS Check | NIC bonding mode is not set to Broadcast(3) for public network | All Database Servers
PASS | OS Check | NIC bonding is configured for public network (VIP) | All Database Servers
PASS | OS Check | CRS version is higher than or equal to ASM version | All Database Servers
PASS | Database Check | Local listener init parameter is set to local node VIP | odahost-01:tstdb1, odahost-01:MCL2DB, odahost-01:MCL3DB, odahost-01:MCL4DB, odahost-01:MCL5DB ... more
PASS | OS Check | All voting disks are online | All Database Servers
PASS | OS Check | ip_local_port_range is configured according to recommendation | All Database Servers
PASS | OS Check | Linux swap configuration meets or exceeds recommendation | All Database Servers
PASS | OS Check | $ORACLE_HOME/bin/oradism ownership is root | All Database Servers
PASS | OS Check | $ORACLE_HOME/bin/oradism setuid bit is set | All Database Servers
PASS | SQL Check | Failover method (SELECT) and failover mode (BASIC) are configured properly | All Databases
PASS | OS Check | Kernel parameter net.core.rmem_max OK | All Database Servers
PASS | SQL Check | All tablespaces are using automatic segment space management | All Databases
PASS | SQL Check | All tablespaces are locally managed tablespaces | All Databases
PASS | OS Check | Kernel parameter SEMMNS OK | All Database Servers
PASS | OS Check | Kernel parameter SEMMSL OK | All Database Servers
PASS | OS Check | Kernel parameter SEMMNI OK | All Database Servers
PASS | OS Check | Kernel parameter SEMOPM OK | All Database Servers
PASS | OS Check | None of the hostnames contains an underscore character | All Database Servers
PASS | OS Check | net.core.rmem_default is configured properly | All Database Servers
PASS | OS Check | net.core.wmem_max is configured properly | All Database Servers
PASS | OS Check | net.core.wmem_default is configured properly | All Database Servers
PASS | SQL Check | SYS.AUDSES$ sequence cache size >= 10,000 | All Databases
PASS | SQL Check | SYS.IDGEN1$ sequence cache size >= 1,000 | All Databases

Cluster Wide

Status | Type | Message | Status On
PASS | Cluster Wide Check | RDBMS home /u01/app/oracle/product/11.2.0.3/dbhome_1 has same number of patches installed across the cluster | Cluster Wide
PASS | Cluster Wide Check | RDBMS software version matches across cluster | Cluster Wide
PASS | Cluster Wide Check | All nodes are using same NTP server across cluster | Cluster Wide
PASS | Cluster Wide Check | Time zone matches for root user across cluster | Cluster Wide
PASS | Cluster Wide Check | Time zone matches for GI/CRS software owner across cluster | Cluster Wide
PASS | Cluster Wide Check | OS kernel version (uname -r) matches across cluster | Cluster Wide
PASS | Cluster Wide Check | Clusterware active version matches across cluster | Cluster Wide
PASS | Cluster Wide Check | Time zone matches for current user across cluster | Cluster Wide
PASS | Cluster Wide Check | Public network interface names are the same across cluster | Cluster Wide
PASS | Cluster Wide Check | RDBMS software owner UID matches across cluster | Cluster Wide
PASS | Cluster Wide Check | Private interconnect interface names are the same across cluster | Cluster Wide


Best Practices and Other Recommendations

Best Practices and Other Recommendations are generally items documented in various sources that could be overlooked. odachk assesses them and calls attention to any findings.



RDBMS software version comparison

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential database or application instability due to a version mismatch between
related database homes. If the versions of related RDBMS homes on the cluster
nodes do not match, an incompatibility could exist that makes problems difficult
to diagnose, or bugs fixed in the later RDBMS version may still be present on
some nodes but not others.

Action / Repair:

Related database homes are expected to have the same RDBMS version on every node
of the cluster. If the versions of related RDBMS homes do not match, it is assumed
that a mistake has been made and overlooked. The purpose of this check is to bring
the situation to the customer's attention for action and remedy.
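As a manual cross-check outside odachk, the installed version of a database home can be compared across nodes with opatch. A minimal sketch, assuming ssh equivalence for the software owner; the node names and home path are taken from this report:

    # Run as the RDBMS software owner; compare the output line across nodes
    for node in odahost-01 odahost-02; do
        ssh "$node" '/u01/app/oracle/product/11.2.0.3/dbhome_1/OPatch/opatch lsinventory | grep -i "Oracle Database 11g"'
    done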
 
Needs attention on: -
Passed on: Cluster Wide

Status on Cluster Wide: PASS => RDBMS software version matches across cluster.


odahost-01 = 112030
odahost-02 = 112030

Same NTP server across cluster

Success Factor: MAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Make sure machine clocks are synchronized on all nodes to the same NTP source.

NOTE: raccheck expects the NTP time source to be the same across the cluster, based on the NTP server IP address. In cases where the customer is using a fault-tolerant configuration for NTP servers, and the customer is certain that the configuration is correct and the same time source is being used, a finding for this check can be ignored.

Implement NTP (Network Time Protocol) on all nodes.
This prevents evictions and helps facilitate problem diagnosis.

Also use the -x option (i.e., ntpd -x, xntpd -x) if available, to prevent time from moving backwards in large steps. Slewing spreads a correction across many small changes so that it does not impact CRS. Enterprise Linux: see /etc/sysconfig/ntpd; Solaris: set "slewalways yes" and "disable pll" in /etc/inet/ntp.conf.
For example:
       # Drop root to id 'ntp:ntp' by default.
       OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
       # Set to 'yes' to sync hw clock after successful ntpdate
       SYNC_HWCLOCK=no
       # Additional options for ntpdate
       NTPDATE_OPTIONS=""

Time servers operate in a pyramid structure in which the top of the NTP stack is usually an external time source (such as a GPS clock). Time then trickles down through the network switch stack to the connected servers.
With this NTP stack acting as the NTP server, keeping all the RAC nodes as clients of it in slewing mode limits time changes to a minute amount.

Changes in global time that reconcile atomic-clock accuracy with the Earth's rotational wobble are thus accounted for with minimal effect. This is sometimes referred to as the "leap second" event (for example, one second was inserted between UTC 12/31/2008 23:59:59 and 01/01/2009 00:00:00).
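A quick manual spot check that ntpd is running with slewing enabled and is synchronized to the intended source (assuming a Linux node running ntpd):

    # Confirm the -x (slew) option is present on the running daemon
    ps -ef | grep '[n]tpd'
    # List configured peers; the '*' prefix marks the currently selected time source
    ntpq -p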

 
Needs attention on: -
Passed on: Cluster Wide

Status on Cluster Wide: PASS => All nodes are using same NTP server across cluster


odahost-01 = 10.15.2.1
odahost-02 = 10.15.2.1

Root time zone

Success Factor: MAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Make sure machine clocks are synchronized on all nodes to the same NTP source.
Implement NTP (Network Time Protocol) on all nodes.
This prevents evictions and helps facilitate problem diagnosis.

Also use the -x option (i.e., ntpd -x, xntpd -x) if available, to prevent time from moving backwards in large steps. Slewing spreads a correction across many small changes so that it does not impact CRS. Enterprise Linux: see /etc/sysconfig/ntpd; Solaris: set "slewalways yes" and "disable pll" in /etc/inet/ntp.conf.
For example:
       # Drop root to id 'ntp:ntp' by default.
       OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
       # Set to 'yes' to sync hw clock after successful ntpdate
       SYNC_HWCLOCK=no
       # Additional options for ntpdate
       NTPDATE_OPTIONS=""

Time servers operate in a pyramid structure in which the top of the NTP stack is usually an external time source (such as a GPS clock). Time then trickles down through the network switch stack to the connected servers.
With this NTP stack acting as the NTP server, keeping all the RAC nodes as clients of it in slewing mode limits time changes to a minute amount.

Changes in global time that reconcile atomic-clock accuracy with the Earth's rotational wobble are thus accounted for with minimal effect. This is sometimes referred to as the "leap second" event (for example, one second was inserted between UTC 12/31/2008 23:59:59 and 01/01/2009 00:00:00).

More information can be found in MOS Note 759143.1,
"NTP leap second event causing Oracle Clusterware node reboot",
which is linked to this success factor.

 
Needs attention on: -
Passed on: Cluster Wide

Status on Cluster Wide: PASS => Time zone matches for root user across cluster


odahost-01 = ==
odahost-02 = ==

GI/CRS software owner time zone

Success Factor: MAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Benefit / Impact:

Clusterware deployment requirement

Risk:

Potential cluster instability

Action / Repair:

Oracle Clusterware requires the same time zone setting on all cluster nodes. During installation, the installation process picks up the time zone setting of the Grid installation owner on the node where OUI runs, and uses that on all nodes as the default TZ setting for all processes managed by Oracle Clusterware. This default is used for databases, Oracle ASM, and any other managed processes.

If for whatever reason the time zones have gotten out of sync, the configuration should be corrected. Consult with Oracle Support about the proper method for correcting the time zones.
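A manual spot check is to compare the time zone seen by the Grid software owner on each node; the owner name 'grid' below is an assumption, so substitute your actual GI/CRS software owner:

    for node in odahost-01 odahost-02; do
        ssh grid@"$node" 'echo "TZ=$TZ"; date +%Z'
    done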
 
Needs attention on: -
Passed on: Cluster Wide

Status on Cluster Wide: PASS => Time zone matches for GI/CRS software owner across cluster


odahost-01 = ==
odahost-02 = ==

Kernel version comparison across cluster

Success Factor: GENERIC OS DATA COLLECTIONS
Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential cluster instability due to a kernel version mismatch on cluster nodes.
If the kernel versions do not match, an incompatibility could exist that makes
problems difficult to diagnose, or bugs fixed in the later kernel may still be
present on some nodes but not others.

Action / Repair:

Unless a rolling upgrade of cluster node kernels is in progress, the kernel
versions are expected to match across the cluster. If they do not, it is assumed
that a mistake has been made and overlooked. The purpose of this check is to
bring the situation to the customer's attention for action and remedy.
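A manual equivalent of this check is to compare the uname -r output across the nodes (assuming ssh equivalence between them):

    for node in odahost-01 odahost-02; do
        echo -n "$node: "; ssh "$node" 'uname -r'
    done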
 
Needs attention on: -
Passed on: Cluster Wide

Status on Cluster Wide: PASS => OS Kernel version(uname -r) matches across cluster.


odahost-01 = 2632-300111el5uek
odahost-02 = 2632-300111el5uek

Clusterware version comparison

Success Factor: GENERIC OS DATA COLLECTIONS
Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential cluster instability due to a clusterware version mismatch on cluster nodes.
If the clusterware versions do not match, an incompatibility could exist that makes
problems difficult to diagnose, or bugs fixed in the later clusterware version may
still be present on some nodes but not others.

Action / Repair:

Unless a rolling upgrade of the clusterware is in progress, the clusterware versions
are expected to match across the cluster. If they do not, it is assumed that a
mistake has been made and overlooked. The purpose of this check is to bring the
situation to the customer's attention for action and remedy.
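The active and per-node software versions of the clusterware can be confirmed manually with crsctl; the grid home path below is taken from this report's cluster summary:

    # Active version is a cluster-wide value; software version is reported per node
    /u01/app/11.2.0.3/grid/bin/crsctl query crs activeversion
    /u01/app/11.2.0.3/grid/bin/crsctl query crs softwareversion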
 
Needs attention on: -
Passed on: Cluster Wide

Status on Cluster Wide: PASS => Clusterware active version matches across cluster.


odahost-01 = 112030
odahost-02 = 112030

Timezone for current user

Success Factor: MAKE SURE MACHINE CLOCKS ARE SYNCHRONIZED ON ALL NODES USING NTP
Recommendation
 Benefit / Impact:

Clusterware deployment requirement

Risk:

Potential cluster instability

Action / Repair:

Oracle Clusterware requires the same time zone setting on all cluster nodes. During installation, the installation process picks up the time zone setting of the Grid installation owner on the node where OUI runs, and uses that on all nodes as the default TZ setting for all processes managed by Oracle Clusterware. This default is used for databases, Oracle ASM, and any other managed processes.

If for whatever reason the time zones have gotten out of sync, the configuration should be corrected. Consult with Oracle Support about the proper method for correcting the time zones.
 
Needs attention on: -
Passed on: Cluster Wide

Status on Cluster Wide: PASS => Timezone matches for current user across cluster.


odahost-01 = CEST
odahost-02 = CEST

GI/CRS - Public interface name check (VIP)

Success Factor: MAKE SURE NETWORK INTERFACES HAVE THE SAME NAME ON ALL NODES
Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential application instability due to incorrectly named network interfaces used for node VIP.

Action / Repair:

Oracle Clusterware requires that the network interfaces used for the public
network (the node VIP) be named the same on all nodes of the cluster.
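The interface names registered with the clusterware can be listed on each node with oifcfg from the grid home (path per this report); the same output also covers the private interconnect check later in this report:

    # Lists each interface with its subnet and role (public or cluster_interconnect)
    /u01/app/11.2.0.3/grid/bin/oifcfg getif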
 
Needs attention on: -
Passed on: Cluster Wide

Status on Cluster Wide: PASS => Public network interface names are the same across cluster


odahost-01 = bond0
odahost-02 = bond0

RDBMS software owner UID across cluster

Success Factor: ENSURE EACH ORACLE/ASM USER HAS A UNIQUE UID ACROSS THE CLUSTER
Recommendation
 Benefit / Impact:

Availability, stability

Risk:

Potential OCR logical corruptions and permission problems accessing OCR keys, which are difficult to diagnose, when multiple O/S users share the same UID.

Action / Repair:

For GI/CRS, ASM and RDBMS software owners ensure one unique user ID with a single name is in use across the cluster.
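A manual spot check, assuming the RDBMS software owner is named 'oracle' (substitute your actual owner) and ssh equivalence between nodes:

    # The uid/gid fields should be identical on every node
    for node in odahost-01 odahost-02; do
        echo -n "$node: "; ssh "$node" 'id oracle'
    done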
 
Needs attention on: -
Passed on: Cluster Wide

Status on Cluster Wide: PASS => RDBMS software owner UID matches across cluster


odahost-01 = 1001
odahost-02 = 1001

GI/CRS - Private interconnect interface name check

Success Factor: MAKE SURE NETWORK INTERFACES HAVE THE SAME NAME ON ALL NODES
Recommendation
 Benefit / Impact:

Stability, Availability, Standardization

Risk:

Potential cluster or application instability due to incorrectly named network interfaces.

Action / Repair:

Oracle Clusterware requires that the network interfaces used for the cluster
interconnect be named the same on all nodes of the cluster (the oifcfg sketch in
the public interface check above lists these interfaces as well).
 
Needs attention on: -
Passed on: Cluster Wide

Status on Cluster Wide: PASS => Private interconnect interface names are the same across cluster


odahost-01 = eth1
odahost-02 = eth1

Remote listener set to scan name

Success Factor: SQL DATA COLLECTIONS AND CHECKS
Recommendation
 For Oracle Database 11g Release 2, the REMOTE_LISTENER parameter should be set to the SCAN. This allows the instances to register with the SCAN Listeners to provide information on what services are being provided by the instance, the current load, and a recommendation on how many incoming connections should be directed to the
instance.
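To verify this manually, compare the SCAN name reported by the clusterware with the remote_listener parameter of an instance. A minimal sketch, assuming the environment is set for one of the instances and SYSDBA access is available:

    # SCAN configuration as known to the clusterware
    srvctl config scan
    # remote_listener as set in the instance
    echo 'show parameter remote_listener' | sqlplus -s / as sysdba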
 
Needs attention on: -
Passed on: odahost-01:tstdb1, odahost-01:MCL2DB, odahost-01:MCL3DB, odahost-01:MCL4DB, odahost-01:MCL5DB, odahost-01:MCL7DB, odahost-01:MCLDB, odahost-02:tstdb1, odahost-02:MCL2DB, odahost-02:MCL3DB, odahost-02:MCL4DB, odahost-02:MCL5DB, odahost-02:MCL7DB, odahost-02:MCLDB

Status on odahost-01:tstdb1: PASS => Remote listener is set to SCAN name


DATA FROM odahost-01 - tstdb1 DATABASE - REMOTE LISTENER SET TO SCAN NAME



remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01

Status on odahost-01:MCL2DB: PASS => Remote listener is set to SCAN name


DATA FROM odahost-01 - MCL2DB DATABASE - REMOTE LISTENER SET TO SCAN NAME



remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01

Status on odahost-01:MCL3DB: PASS => Remote listener is set to SCAN name


DATA FROM odahost-01 - MCL3DB DATABASE - REMOTE LISTENER SET TO SCAN NAME



remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01

Status on odahost-01:MCL4DB: PASS => Remote listener is set to SCAN name


DATA FROM odahost-01 - MCL4DB DATABASE - REMOTE LISTENER SET TO SCAN NAME



remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01

Status on odahost-01:MCL5DB: PASS => Remote listener is set to SCAN name


DATA FROM odahost-01 - MCL5DB DATABASE - REMOTE LISTENER SET TO SCAN NAME



remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01

Status on odahost-01:MCL7DB: PASS => Remote listener is set to SCAN name


DATA FROM odahost-01 - MCL7DB DATABASE - REMOTE LISTENER SET TO SCAN NAME



remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01

Status on odahost-01:MCLDB: PASS => Remote listener is set to SCAN name


DATA FROM odahost-01 - MCLDB DATABASE - REMOTE LISTENER SET TO SCAN NAME



remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01

Status on odahost-02:tstdb1: PASS => Remote listener is set to SCAN name


remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01

Status on odahost-02:MCL2DB: PASS => Remote listener is set to SCAN name


remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01

Status on odahost-02:MCL3DB: PASS => Remote listener is set to SCAN name


remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01

Status on odahost-02:MCL4DB: PASS => Remote listener is set to SCAN name


remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01

Status on odahost-02:MCL5DB: PASS => Remote listener is set to SCAN name


remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01

Status on odahost-02:MCL7DB: PASS => Remote listener is set to SCAN name


remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01

Status on odahost-02:MCLDB: PASS => Remote listener is set to SCAN name


remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01

tnsping to remote listener parameter

Success Factor: SQL DATA COLLECTIONS AND CHECKS
Recommendation
 If the value of the remote_listener parameter is set to a TNS alias that cannot be pinged, instances will not cross-register and the load will not be balanced across the cluster. In case of a node or instance failure, connections may not fail over to the surviving node. See the linked references for more information about remote_listener, load balancing, and failover.
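The check can be reproduced by hand: read remote_listener from the instance and pass its value to tnsping. A sketch assuming SYSDBA access; gv-oradbc-t01 is the SCAN name reported for this cluster:

    echo 'show parameter remote_listener' | sqlplus -s / as sysdba
    tnsping gv-oradbc-t01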

 
Needs attention on: -
Passed on: odahost-01:tstdb1, odahost-01:MCL2DB, odahost-01:MCL3DB, odahost-01:MCL4DB, odahost-01:MCL5DB, odahost-01:MCL7DB, odahost-01:MCLDB, odahost-02:tstdb1, odahost-02:MCL2DB, odahost-02:MCL3DB, odahost-02:MCL4DB, odahost-02:MCL5DB, odahost-02:MCL7DB, odahost-02:MCLDB

Status on odahost-01:tstdb1: PASS => Value of remote_listener parameter is able to tnsping


DATA FROM odahost-01 - tstdb1 DATABASE - TNSPING TO REMOTE LISTENER PARAMETER




TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-AUG-2012 09:34:53

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.11)(PORT=1522))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.13)(PORT=1522)))
OK (10 msec)

Status on odahost-01:MCL2DB: PASS => Value of remote_listener parameter is able to tnsping


DATA FROM odahost-01 - MCL2DB DATABASE - TNSPING TO REMOTE LISTENER PARAMETER




TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-AUG-2012 09:34:53

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.11)(PORT=1522))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.13)(PORT=1522)))
OK (0 msec)

Status on odahost-01:MCL3DB: PASS => Value of remote_listener parameter is able to tnsping


DATA FROM odahost-01 - MCL3DB DATABASE - TNSPING TO REMOTE LISTENER PARAMETER




TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-AUG-2012 09:34:54

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.11)(PORT=1522))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.13)(PORT=1522)))
OK (0 msec)

Status on odahost-01:MCL4DB: PASS => Value of remote_listener parameter is able to tnsping


DATA FROM odahost-01 - MCL4DB DATABASE - TNSPING TO REMOTE LISTENER PARAMETER




TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-AUG-2012 09:34:54

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.11)(PORT=1522))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.13)(PORT=1522)))
OK (0 msec)

Status on odahost-01:MCL5DB: PASS => Value of remote_listener parameter is able to tnsping


DATA FROM odahost-01 - MCL5DB DATABASE - TNSPING TO REMOTE LISTENER PARAMETER




TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-AUG-2012 09:34:54

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.11)(PORT=1522))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.13)(PORT=1522)))
OK (0 msec)

Status on odahost-01:MCL7DB: PASS => Value of remote_listener parameter is able to tnsping


DATA FROM odahost-01 - MCL7DB DATABASE - TNSPING TO REMOTE LISTENER PARAMETER




TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-AUG-2012 09:34:55

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.11)(PORT=1522))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.13)(PORT=1522)))
OK (0 msec)

Status on odahost-01:MCLDB: PASS => Value of remote_listener parameter is able to tnsping


DATA FROM odahost-01 - MCLDB DATABASE - TNSPING TO REMOTE LISTENER PARAMETER




TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-AUG-2012 09:34:56

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.11)(PORT=1522))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.13)(PORT=1522)))
OK (0 msec)

Status on odahost-02:tstdb1: PASS => Value of remote_listener parameter is able to tnsping



TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-AUG-2012 09:40:02

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.13)(PORT=1522))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.11)(PORT=1522)))
OK (0 msec)

Status on odahost-02:MCL2DB: PASS => Value of remote_listener parameter is able to tnsping



TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-AUG-2012 09:40:03

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.13)(PORT=1522))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.11)(PORT=1522)))
OK (0 msec)

Status on odahost-02:MCL3DB: PASS => Value of remote_listener parameter is able to tnsping



TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-AUG-2012 09:40:04

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.13)(PORT=1522))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.11)(PORT=1522)))
OK (0 msec)

Status on odahost-02:MCL4DB: PASS => Value of remote_listener parameter is able to tnsping



TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-AUG-2012 09:40:05

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.13)(PORT=1522))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.11)(PORT=1522)))
OK (0 msec)

Status on odahost-02:MCL5DB: PASS => Value of remote_listener parameter is able to tnsping



TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-AUG-2012 09:40:06

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.13)(PORT=1522))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.11)(PORT=1522)))
OK (0 msec)

Status on odahost-02:MCL7DB: PASS => Value of remote_listener parameter is able to tnsping



TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-AUG-2012 09:40:07

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.13)(PORT=1522))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.11)(PORT=1522)))
OK (0 msec)

Status on odahost-02:MCLDB: PASS => Value of remote_listener parameter is able to tnsping



TNS Ping Utility for Linux: Version 11.2.0.3.0 - Production on 15-AUG-2012 09:40:07

Copyright (c) 1997, 2011, Oracle.  All rights reserved.

Used parameter files:

Used HOSTNAME adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.13)(PORT=1522))(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.11)(PORT=1522)))
OK (0 msec)

Check for parameter parallel_execution_message_size

Success Factor: CONFIGURE PARALLEL_EXECUTION_MESSAGE_SIZE FOR BETTER PARALLELISM PERFORMANCE
Recommendation
 Critical

Benefit / Impact: 

Experience and testing have shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized.
The parameters are common to all database instances. The impact of setting these parameters is minimal.
The performance related settings provide guidance to maintain highest stability without sacrificing performance. Changing the default performance settings can be done after careful performance evaluation and clear understanding of the performance impact.

Risk: 

If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.

Action / Repair: 

PARALLEL_EXECUTION_MESSAGE_SIZE = 16384 improves parallel query performance.
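parallel_execution_message_size is a static parameter, so a change only takes effect after the instances are restarted. A minimal sketch, assuming SYSDBA access and an spfile:

    # check the current value, then stage the recommended value for all instances
    # (requires a rolling restart to take effect)
    echo 'show parameter parallel_execution_message_size' | sqlplus -s / as sysdba
    echo "alter system set parallel_execution_message_size=16384 scope=spfile sid='*';" | sqlplus -s / as sysdba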
 
Needs attention on: -
Passed on: tstdb11, MCL2DB1, MCL3DB1, MCL4DB1, MCL5DB1, MCL6DB, MCL7DB1, MCLDB1, tstdb12, MCL2DB2, MCL3DB2, MCL4DB2, MCL5DB2, MCL7DB2, MCLDB2

Status on tstdb11: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

tstdb11.parallel_execution_message_size = 16384                               

Status on MCL2DB1: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

MCL2DB1.parallel_execution_message_size = 16384                               

Status on MCL3DB1: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

MCL3DB1.parallel_execution_message_size = 16384                               

Status on MCL4DB1: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

MCL4DB1.parallel_execution_message_size = 16384                               

Status on MCL5DB1: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

MCL5DB1.parallel_execution_message_size = 16384                               

Status on MCL6DB: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

MCL6DB.parallel_execution_message_size = 16384                                

Status on MCL7DB1: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

MCL7DB1.parallel_execution_message_size = 16384                               

Status on MCLDB1: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

MCLDB1.parallel_execution_message_size = 16384                                

Status on tstdb12: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

tstdb12.parallel_execution_message_size = 16384                               

Status on MCL2DB2: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

MCL2DB2.parallel_execution_message_size = 16384                               

Status on MCL3DB2: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

MCL3DB2.parallel_execution_message_size = 16384                               

Status on MCL4DB2: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

MCL4DB2.parallel_execution_message_size = 16384                               

Status on MCL5DB2: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

MCL5DB2.parallel_execution_message_size = 16384                               

Status on MCL7DB2: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

MCL7DB2.parallel_execution_message_size = 16384                               

Status on MCLDB2: PASS => Database Parameter parallel_execution_message_size is set to the recommended value

MCLDB2.parallel_execution_message_size = 16384                                

Network and bonding interfaces warning status

Success Factor: ORACLE DATABASE APPLIANCE (ODA)
Recommendation
 All network and bonding interface checks are expected to be successful
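The link and bonding state behind this check can be inspected directly on each node; a manual sketch, assuming standard Linux bonding (bond0 and eth6 are taken from the output below):

    # Bonding mode, active slave, and per-slave link status
    cat /proc/net/bonding/bond0
    # Link state of an individual interface, e.g. one flagged with 'No Link detected'
    ethtool eth6 | grep 'Link detected'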
 
Needs attention on: odahost-01, odahost-02
Passed on: -

Status on odahost-01: WARNING => One or more warnings for network and bonding interface checks


DATA FROM odahost-01 FOR NETWORK AND BONDING INTERFACES STATUS



INFO: Doing oak network checks
RESULT: Detected active link for interface eth0 with link speed 1000Mb/s
RESULT: Detected active link for interface eth1 with link speed 1000Mb/s
RESULT: Detected active link for interface eth2 with link speed 1000Mb/s
RESULT: Detected active link for interface eth3 with link speed 1000Mb/s
RESULT: Detected active link for interface eth4 with link speed 1000Mb/s
RESULT: Detected active link for interface eth5 with link speed 1000Mb/s
WARNING: No Link detected for interface eth6
WARNING: No Link detected for interface eth7
WARNING: No Link detected for interface eth8
WARNING: No Link detected for interface eth9
INFO: Checking bonding interface status
RESULT: Bond interface bond0 is up configured in mode:fault-tolerance (active-backup) with current active interface as eth2
Slave1 interface is eth2 with status:up Link fail count=0 Maccaddr:00:21:28:d6:14:4c
Slave2 interface is eth3 with status:up Link fail count=0 Maccaddr:00:21:28:d6:14:4d
RESULT: Bond interface bond1 is up configured in mode:fault-tolerance (active-backup) with current active interface as eth4
...More

Status on odahost-02: WARNING => One or more warnings for network and bonding interface checks


DATA FROM odahost-02 FOR NETWORK AND BONDING INTERFACES STATUS



INFO: Doing oak network checks
RESULT: Detected active link for interface eth0 with link speed 1000Mb/s
RESULT: Detected active link for interface eth1 with link speed 1000Mb/s
RESULT: Detected active link for interface eth2 with link speed 1000Mb/s
RESULT: Detected active link for interface eth3 with link speed 1000Mb/s
RESULT: Detected active link for interface eth4 with link speed 1000Mb/s
RESULT: Detected active link for interface eth5 with link speed 1000Mb/s
WARNING: No Link detected for interface eth6
WARNING: No Link detected for interface eth7
WARNING: No Link detected for interface eth8
WARNING: No Link detected for interface eth9
INFO: Checking bonding interface status
RESULT: Bond interface bond0 is up configured in mode:fault-tolerance (active-backup) with current active interface as eth2
Slave1 interface is eth2 with status:up Link fail count=0 Maccaddr:00:21:28:d6:14:6a
Slave2 interface is eth3 with status:up Link fail count=0 Maccaddr:00:21:28:d6:14:6b
RESULT: Bond interface bond1 is up configured in mode:fault-tolerance (active-backup) with current active interface as eth4
...More

maximum parallel asynch io

Success Factor: ORACLE DATABASE APPLIANCE (ODA)
Recommendation
 A message in the alert.log similar to the one below indicates that /proc/sys/fs/aio-max-nr is too low. Set it to at least 1048576 proactively, and increase it further if you still see a similar message. A problem in this area could lead to availability issues.

Warning: OS async I/O limit 128 is lower than recovery batch 1024
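Checking and raising the limit is straightforward with sysctl; a minimal sketch (run as root), using the 1048576 floor mentioned above:

    # current value
    sysctl -n fs.aio-max-nr
    # raise it for the running kernel ...
    sysctl -w fs.aio-max-nr=1048576
    # ... and persist it across reboots
    echo 'fs.aio-max-nr = 1048576' >> /etc/sysctl.conf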
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)


DATA FROM odahost-01 - MCLDB DATABASE - MAXIMUM PARALLEL ASYNCH IO



aio-max-nr = 3145728

Status on odahost-02: PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr)


aio-max-nr = 3145728

pam_limits check

Success Factor: ORACLE DATABASE APPLIANCE (ODA)
Recommendation
 This is required to make the shell limits work properly and applies to 10g and 11g.  From the 11g documentation:

Add the following line to the /etc/pam.d/login file, if it does not already exist:

session    required     pam_limits.so
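To verify the entry is present and effective, grep the PAM configuration and then check the limits seen by a fresh login session; a manual sketch:

    grep pam_limits /etc/pam.d/login
    # in a new login session as the database software owner, the values
    # from /etc/security/limits.conf should now be in effect
    ulimit -a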

 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => pam_limits configured properly for shell limits


DATA FROM odahost-01 - MCLDB DATABASE - PAM_LIMITS CHECK



#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        required      pam_deny.so

account     required      pam_unix.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=3
password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

...More

Status on odahost-02: PASS => pam_limits configured properly for shell limits


#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        required      pam_deny.so

account     required      pam_unix.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=3
password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so

Check for parameter db_block_checksum

Success Factor: ORACLE DATABASE APPLIANCE (ODA)
Recommendation
 Critical

Benefit / Impact: 

Experience and testing have shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized.
The parameters are common to all database instances. The impact of setting these parameters is minimal.
The performance related settings provide guidance to maintain highest stability without sacrificing performance. Changing the default performance settings can be done after careful performance evaluation and clear understanding of the performance impact.

Risk: 

If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.

Action / Repair: 

DB_BLOCK_CHECKSUM = TYPICAL aids in block corruption detection. Enable it for both primary and standby databases.
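DB_BLOCK_CHECKSUM is dynamic, so it can be changed without a restart. A minimal sketch, assuming SYSDBA access; note that this cluster is already running FULL, which is stronger than the TYPICAL recommendation quoted above:

    # check the current value, then apply the documented recommendation
    # on all instances, effective immediately
    echo 'show parameter db_block_checksum' | sqlplus -s / as sysdba
    echo "alter system set db_block_checksum=TYPICAL scope=both sid='*';" | sqlplus -s / as sysdba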
 
Needs attention on: -
Passed on: tstdb11, MCL2DB1, MCL3DB1, MCL4DB1, MCL5DB1, MCL6DB, MCL7DB1, MCLDB1, tstdb12, MCL2DB2, MCL3DB2, MCL4DB2, MCL5DB2, MCL7DB2, MCLDB2

Status on tstdb11: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

tstdb11.db_block_checksum = FULL                                              

Status on MCL2DB1: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

MCL2DB1.db_block_checksum = FULL                                              

Status on MCL3DB1: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

MCL3DB1.db_block_checksum = FULL                                              

Status on MCL4DB1: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

MCL4DB1.db_block_checksum = FULL                                              

Status on MCL5DB1: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

MCL5DB1.db_block_checksum = FULL                                              

Status on MCL6DB: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

MCL6DB.db_block_checksum = FULL                                               

Status on MCL7DB1: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

MCL7DB1.db_block_checksum = FULL                                              

Status on MCLDB1: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

MCLDB1.db_block_checksum = FULL                                               

Status on tstdb12: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

tstdb12.db_block_checksum = FULL                                              

Status on MCL2DB2: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

MCL2DB2.db_block_checksum = FULL                                              

Status on MCL3DB2: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

MCL3DB2.db_block_checksum = FULL                                              

Status on MCL4DB2: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

MCL4DB2.db_block_checksum = FULL                                              

Status on MCL5DB2: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

MCL5DB2.db_block_checksum = FULL                                              

Status on MCL7DB2: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

MCL7DB2.db_block_checksum = FULL                                              

Status on MCLDB2: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value

MCLDB2.db_block_checksum = FULL                                               

Check for parameter db_block_checking

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 Critical

Benefit / Impact:

Experience and testing have shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized. The parameters are common to all database instances. The impact of setting these parameters is minimal. The performance related settings provide guidance to maintain highest stability without sacrificing performance. Changing the default performance settings can be done after careful performance evaluation and clear understanding of the performance impact.

Risk:

If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.

Action / Repair:

For higher data corruption detection and prevention, enable this setting, but note that the performance impact varies per workload.
Evaluate the performance impact before enabling it.

See the referenced MOS Note for more details.
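As with DB_BLOCK_CHECKSUM, this parameter is dynamic; a sketch assuming SYSDBA access, to be applied only after the performance evaluation described above (FULL is the value deployed on this cluster):

    echo 'show parameter db_block_checking' | sqlplus -s / as sysdba
    echo "alter system set db_block_checking=FULL scope=both sid='*';" | sqlplus -s / as sysdba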
 
Needs attention on: -
Passed on: tstdb11, MCL2DB1, MCL3DB1, MCL4DB1, MCL5DB1, MCL6DB, MCL7DB1, MCLDB1, tstdb12, MCL2DB2, MCL3DB2, MCL4DB2, MCL5DB2, MCL7DB2, MCLDB2

Status on tstdb11: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

tstdb11.db_block_checking = FULL                                              

Status on MCL2DB1: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

MCL2DB1.db_block_checking = FULL                                              

Status on MCL3DB1: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

MCL3DB1.db_block_checking = FULL                                              

Status on MCL4DB1: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

MCL4DB1.db_block_checking = FULL                                              

Status on MCL5DB1: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

MCL5DB1.db_block_checking = FULL                                              

Status on MCL6DB: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

MCL6DB.db_block_checking = FULL                                               

Status on MCL7DB1: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

MCL7DB1.db_block_checking = FULL                                              

Status on MCLDB1: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

MCLDB1.db_block_checking = FULL                                               

Status on tstdb12: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

tstdb12.db_block_checking = FULL                                              

Status on MCL2DB2: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

MCL2DB2.db_block_checking = FULL                                              

Status on MCL3DB2: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

MCL3DB2.db_block_checking = FULL                                              

Status on MCL4DB2: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

MCL4DB2.db_block_checking = FULL                                              

Status on MCL5DB2: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

MCL5DB2.db_block_checking = FULL                                              

Status on MCL7DB2: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

MCL7DB2.db_block_checking = FULL                                              

Status on MCLDB2: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value

MCLDB2.db_block_checking = FULL                                               

Check for parameter memory_target

Success Factor: ORACLE DATABASE APPLIANCE (ODA)
Recommendation
 Critical

Benefit / Impact: 

Experience and testing have shown that certain ASM initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these ASM initialization parameters as recommended, known problems may be avoided and performance maximized.
The parameters are specific to the ASM instances. Unless otherwise specified, the value is for V2, X2-2, and X2-8 Database Machines. The impact of setting these parameters is minimal.

Risk: 

If the ASM initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.

Action / Repair: 

ASM MEMORY_TARGET of 1040M avoids issues with 11.2.0.1 to 11.2.0.2 upgrade. This is the initial deployment setting for Exadata.
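The current ASM setting can be confirmed from either ASM instance; a minimal sketch, assuming the environment is set for the local ASM instance and SYSASM access is available:

    echo 'show parameter memory_target' | sqlplus -s / as sysasm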
 
Needs attention on: -
Passed on: +ASM1, +ASM2

Status on +ASM1: PASS => ASM parameter MEMORY_TARGET is set according to recommended value.

+ASM1.memory_target = 1073741824                                                

Status on +ASM2: PASS => ASM parameter MEMORY_TARGET is set according to recommended value.

+ASM2.memory_target = 1073741824                                                

Verify all "BIGFILE" tablespaces have non-default "MAXBYTES" values set

Success Factor: SQL DATA COLLECTIONS AND CHECKS
Recommendation
 Benefit / Impact:

"MAXBYTES" is the SQL attribute that expresses the "MAXSIZE" value used in the DDL command that sets "AUTOEXTEND" to "ON". By default, for a bigfile tablespace, the value is "3.5184E+13" (35184372064256). The benefit of setting "MAXBYTES" to a non-default value for "BIGFILE" tablespaces is that a runaway operation or heavy simultaneous use (e.g., of a temp tablespace) cannot take up all the space in a diskgroup.

The impact of verifying that "MAXBYTES" is set to a non-default value is minimal. The impact of setting the "MAXSIZE" attribute to a non-default value varies depending on whether it is done during database creation, when a file is added to a tablespace, or on an existing file.

Risk:

The risk of running out of space in a diskgroup varies by application and cannot be quantified here. A diskgroup running out of space may impact the entire database as well as ASM operations (e.g., rebalance operations).

Action / Repair:

To obtain a list of file numbers and bigfile tablespaces that have the "MAXBYTES" attribute at the default value, enter the following sqlplus command logged into the database as sysdba:
select file_id, a.tablespace_name, autoextensible, maxbytes
  from (select file_id, tablespace_name, autoextensible, maxbytes
          from dba_data_files
         where autoextensible = 'YES' and maxbytes = 35184372064256) a,
       (select tablespace_name from dba_tablespaces where bigfile = 'YES') b
 where a.tablespace_name = b.tablespace_name
union
select file_id, a.tablespace_name, autoextensible, maxbytes
  from (select file_id, tablespace_name, autoextensible, maxbytes
          from dba_temp_files
         where autoextensible = 'YES' and maxbytes = 35184372064256) a,
       (select tablespace_name from dba_tablespaces where bigfile = 'YES') b
 where a.tablespace_name = b.tablespace_name;

The output should be: no rows returned

If you see output similar to:

   FILE_ID TABLESPACE_NAME                AUT   MAXBYTES
---------- ------------------------------ --- ----------
         1 TEMP                           YES 3.5184E+13
         3 UNDOTBS1                       YES 3.5184E+13
         4 UNDOTBS2                       YES 3.5184E+13

Investigate and correct the condition, for example as sketched below.
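For any row returned, a bounded MAXSIZE can be set on the bigfile tablespace. An illustrative sketch, assuming SYSDBA access; the tablespace name TEMP and the 100G cap are placeholders to be sized for your diskgroup:

    # TEMP and 100G are illustrative only; pick a cap that fits the diskgroup
    echo 'alter tablespace temp autoextend on maxsize 100G;' | sqlplus -s / as sysdba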
 
Needs attention on: -
Passed on: tstdb1, MCL2DB, MCL3DB, MCL4DB, MCL5DB, MCL6DB, MCL7DB, MCLDB

Status on tstdb1: PASS => All bigfile tablespaces have non-default maxbytes values set


DATA FOR tstdb1 FOR VERIFY ALL "BIGFILE" TABLESPACES HAVE NON-DEFAULT "MAXBYTES" VALUES SET




No rows were returned, so the SQL check passed.


Status on MCL2DB: PASS => All bigfile tablespaces have non-default maxbytes values set


DATA FOR MCL2DB FOR VERIFY ALL "BIGFILE" TABLESPACES HAVE NON-DEFAULT "MAXBYTES" VALUES SET




No rows were returned, so the SQL check passed.


Status on MCL3DB: PASS => All bigfile tablespaces have non-default maxbytes values set


DATA FOR MCL3DB FOR VERIFY ALL "BIGFILE" TABLESPACES HAVE NON-DEFAULT "MAXBYTES" VALUES SET




No rows were returned, so the SQL check passed.


Status on MCL4DB: PASS => All bigfile tablespaces have non-default maxbytes values set


DATA FOR MCL4DB FOR VERIFY ALL "BIGFILE" TABLESPACES HAVE NON-DEFAULT "MAXBYTES" VALUES SET




No rows were returned, so the SQL check passed.


Status on MCL5DB: PASS => All bigfile tablespaces have non-default maxbytes values set


DATA FOR MCL5DB FOR VERIFY ALL "BIGFILE" TABLESPACES HAVE NON-DEFAULT "MAXBYTES" VALUES SET




No rows were returned, so the SQL check passed.


Status on MCL6DB: PASS => All bigfile tablespaces have non-default maxbytes values set


DATA FOR MCL6DB FOR VERIFY ALL "BIGFILE" TABLESPACES HAVE NON-DEFAULT "MAXBYTES" VALUES SET




No rows were returned, so the SQL check passed.


Status on MCL7DB: PASS => All bigfile tablespaces have non-default maxbytes values set


DATA FOR MCL7DB FOR VERIFY ALL "BIGFILE" TABLESPACES HAVE NON-DEFAULT "MAXBYTES" VALUES SET




No rows were returned, so the SQL check passed.


Status on MCLDB: PASS => All bigfile tablespaces have non-default maxbytes values set


DATA FOR MCLDB FOR VERIFY ALL "BIGFILE" TABLESPACES HAVE NON-DEFAULT "MAXBYTES" VALUES SET




No rows were returned, so the SQL check passed.


OS Disk Storage Status

Success Factor: ORACLE DATABASE APPLIANCE (ODA)
Recommendation
 All OS Disk Storage checks are expected to be successful
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => OS Disk Storage checks successful


DATA FROM odahost-01 FOR OS DISK STORAGE STATUS



INFO: Checking Operating System Storage
SUCCESS: The OS disks have the boot stamp
RESULT: Raid device /dev/md0 found clean
RESULT: Raid device /dev/md1 found clean
RESULT: Physical Volume   /dev/md1 in VolGroupSys has 270213.84M out of total 499994.59M
RESULT: Volumegroup   VolGroupSys consist of 1 physical volumes,contains 4 logical volumes, has 0 volume snaps with total size of 499994.59M and free space of 270213.84M
RESULT: Logical Volume   LogVolOpt in VolGroupSys Volume group is of size 60.00G
RESULT: Logical Volume   LogVolRoot in VolGroupSys Volume group is of size 30.00G
RESULT: Logical Volume   LogVolSwap in VolGroupSys Volume group is of size 24.00G
RESULT: Logical Volume   LogVolU01 in VolGroupSys Volume group is of size 100.00G
RESULT: Device /dev/mapper/VolGroupSys-LogVolRoot is mounted on / of type ext3 in (rw)
RESULT: Device /dev/md0 is mounted on /boot of type ext3 in (rw)
RESULT: Device /dev/mapper/VolGroupSys-LogVolOpt is mounted on /opt of type ext3 in (rw)
RESULT: Device /dev/mapper/VolGroupSys-LogVolU01 is mounted on /u01 of type ext3 in (rw)
RESULT: / has 25042 MB free out of total 29758 MB
RESULT: /boot has 42 MB free out of total 99 MB
...More

Status on odahost-02: PASS => OS Disk Storage checks successful


DATA FROM odahost-02 FOR OS DISK STORAGE STATUS



INFO: Checking Operating System Storage
SUCCESS: The OS disks have the boot stamp
RESULT: Raid device /dev/md0 found clean
RESULT: Raid device /dev/md1 found clean
RESULT: Physical Volume   /dev/md1 in VolGroupSys has 270213.84M out of total 499994.59M
RESULT: Volumegroup   VolGroupSys consist of 1 physical volumes,contains 4 logical volumes, has 0 volume snaps with total size of 499994.59M and free space of 270213.84M
RESULT: Logical Volume   LogVolOpt in VolGroupSys Volume group is of size 60.00G
RESULT: Logical Volume   LogVolRoot in VolGroupSys Volume group is of size 30.00G
RESULT: Logical Volume   LogVolSwap in VolGroupSys Volume group is of size 24.00G
RESULT: Logical Volume   LogVolU01 in VolGroupSys Volume group is of size 100.00G
RESULT: Device /dev/mapper/VolGroupSys-LogVolRoot is mounted on / of type ext3 in (rw)
RESULT: Device /dev/md0 is mounted on /boot of type ext3 in (rw)
RESULT: Device /dev/mapper/VolGroupSys-LogVolOpt is mounted on /opt of type ext3 in (rw)
RESULT: Device /dev/mapper/VolGroupSys-LogVolU01 is mounted on /u01 of type ext3 in (rw)
RESULT: / has 25036 MB free out of total 29758 MB
RESULT: /boot has 42 MB free out of total 99 MB
...More

System Component Status

Success Factor: ORACLE DATABASE APPLIANCE (ODA)
Recommendation
 All system component checks are expected to be successful
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => System component checks successful


DATA FROM odahost-01 FOR SYSTEM COMPONENT STATUS



INFO: oak system information and Validations
RESULT: System Software inventory details
Reading the metadata. It takes a while...
System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
2.3.0.0.0
Controller                05.00.29.00               Up-to-date
Expander                  0342                      Up-to-date
SSD_SHARED                E125                      Up-to-date
HDD_LOCAL                 SA03                      Up-to-date
HDD_SHARED                0B25                      Up-to-date
ILOM                      3.0.16.22 r73911          Up-to-date
BIOS                      12010309                  Up-to-date
IPMI                      1.8.10.4                  Up-to-date
HMP                       2.2.4                     Up-to-date
OAK                       2.3.0.0.0                 Up-to-date
...More

Status on odahost-02: PASS => System component checks successful


DATA FROM odahost-02 FOR SYSTEM COMPONENT STATUS



INFO: oak system information and Validations
RESULT: System Software inventory details
Reading the metadata. It takes a while...
System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
2.3.0.0.0
Controller                05.00.29.00               Up-to-date
Expander                  0342                      Up-to-date
SSD_SHARED                E125                      Up-to-date
HDD_LOCAL                 SA03                      Up-to-date
HDD_SHARED                0B25                      Up-to-date
ILOM                      3.0.16.22 r73911          Up-to-date
BIOS                      12010309                  Up-to-date
IPMI                      1.8.10.4                  Up-to-date
HMP                       2.2.4                     Up-to-date
OAK                       2.3.0.0.0                 Up-to-date
...More
Top

Shared Storage Status

Success Factor: ORACLE DATABASE APPLIANCE (ODA)
Recommendation
 All shared storage checks are expected to be successful
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Shared storage checks successful


DATA FROM odahost-01 FOR VALIDATE SHARED STORAGE



INFO: Checking Shared Storage
RESULT: Disk HDD_E0_S00_966615931 path1 status active device sdc with status active path2 status enabled device sdam with status active
SUCCESS: HDD_E0_S00_966615931 has both the paths up and current active path is sdc
RESULT: Disk HDD_E0_S01_966589563 path1 status active device sdm with status active path2 status enabled device sdaw with status active
SUCCESS: HDD_E0_S01_966589563 has both the paths up and current active path is sdm
RESULT: Disk HDD_E0_S04_966044031 path1 status active device sdd with status active path2 status enabled device sdan with status active
SUCCESS: HDD_E0_S04_966044031 has both the paths up and current active path is sdd
RESULT: Disk HDD_E0_S05_966615123 path1 status active device sdn with status active path2 status enabled device sdax with status active
SUCCESS: HDD_E0_S05_966615123 has both the paths up and current active path is sdn
RESULT: Disk HDD_E0_S08_967037407 path1 status active device sde with status active path2 status enabled device sdao with status active
SUCCESS: HDD_E0_S08_967037407 has both the paths up and current active path is sde
RESULT: Disk HDD_E0_S09_966788687 path1 status active device sdk with status active path2 status enabled device sdau with status active
SUCCESS: HDD_E0_S09_966788687 has both the paths up and current active path is sdk
RESULT: Disk HDD_E0_S12_966579103 path1 status active device sdf with status active path2 status enabled device sdap with status active
SUCCESS: HDD_E0_S12_966579103 has both the paths up and current active path is sdf
RESULT: Disk HDD_E0_S13_967038227 path1 status active device sdl with status active path2 status enabled device sdav with status active
...More

Status on odahost-02: PASS => Shared storage checks successful


DATA FROM odahost-02 FOR VALIDATE SHARED STORAGE



INFO: Checking Shared Storage
RESULT: Disk HDD_E0_S00_966615931 path1 status active device sdc with status active path2 status enabled device sdam with status active
SUCCESS: HDD_E0_S00_966615931 has both the paths up and current active path is sdc
RESULT: Disk HDD_E0_S01_966589563 path1 status active device sdm with status active path2 status enabled device sdaw with status active
SUCCESS: HDD_E0_S01_966589563 has both the paths up and current active path is sdm
RESULT: Disk HDD_E0_S04_966044031 path1 status active device sdd with status active path2 status enabled device sdan with status active
SUCCESS: HDD_E0_S04_966044031 has both the paths up and current active path is sdd
RESULT: Disk HDD_E0_S05_966615123 path1 status active device sdn with status active path2 status enabled device sdax with status active
SUCCESS: HDD_E0_S05_966615123 has both the paths up and current active path is sdn
RESULT: Disk HDD_E0_S08_967037407 path1 status active device sde with status active path2 status enabled device sdao with status active
SUCCESS: HDD_E0_S08_967037407 has both the paths up and current active path is sde
RESULT: Disk HDD_E0_S09_966788687 path1 status active device sdk with status active path2 status enabled device sdau with status active
SUCCESS: HDD_E0_S09_966788687 has both the paths up and current active path is sdk
RESULT: Disk HDD_E0_S12_966579103 path1 status active device sdf with status active path2 status enabled device sdap with status active
SUCCESS: HDD_E0_S12_966579103 has both the paths up and current active path is sdf
RESULT: Disk HDD_E0_S13_967038227 path1 status active device sdl with status active path2 status enabled device sdav with status active
...More
Top

Software and Firmware Versions

Success Factor: ORACLE DATABASE APPLIANCE (ODA)
Recommendation
 All installed software versions are intended to match the proposed patch versions in the OAK repository; the Installed Version must match the Supported Version in the data reported below for each node.
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => All software and firmware versions are up to date with OAK repository.


DATA FROM odahost-01 FOR FIRMWARE AND SOFTWARE VERSIONS



Reading the metadata. It takes a while...
Failed to connect: Busy

System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
2.3.0.0.0
BIOS                      Unknown                   12010309
13919095)
DB_HOME {
[ OraDb11203_home1 ]      11.2.0.3.3(13923374,      No-update
13919095)
[ tstdb1 ]              11.2.0.2.5(13343424,      No-update
13343447)
}
ASR                       Unknown                   3.7

Status on odahost-02: PASS => All software and firmware versions are up to date with OAK repository.


DATA FROM odahost-02 FOR FIRMWARE AND SOFTWARE VERSIONS



Reading the metadata. It takes a while...
Failed to connect: Busy

System Version  Component Name            Installed Version         Supported Version
--------------  ---------------           ------------------        -----------------
2.3.0.0.0
BIOS                      Unknown                   12010309
13919095)
DB_HOME {
[ OraDb11203_home1 ]      11.2.0.3.3(13923374,      No-update
13919095)
[ tstdb1 ]              11.2.0.2.5(13343424,      No-update
13343447)
}
Top

High Redundancy Redolog files

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 Benefit / Impact: 

Experience and testing have shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized. The parameters are common to all database instances. The impact of setting these parameters is minimal. The performance-related settings provide guidance to maintain the highest stability without sacrificing performance. Changing the default performance settings should be done only after careful performance evaluation and with a clear understanding of the performance impact.

Risk: 

If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value. 

Action / Repair: 

Ensure that db_create_online_log_dest_n is configured to use a high redundancy disk group.

A high redundancy diskgroup optimizes availability.

If a high redundancy disk group is available, use the first high redundancy ASM disk group for all your Online Redo Logs or Standby Redo Logs. Use only one log member to minimize performance impact.

If a high redundancy disk group is not available, multiplex redo log members across DATA and RECO ASM disk groups for additional protection.
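
As a sketch (not the report's own collection SQL), the following query summarizes the member count and ASM disk group placement for each online redo log group:

-- Sketch: one row per redo log group, with member count and the number of
-- distinct ASM disk groups (the leading +NAME of each member path) in use.
SELECT l.group#,
       COUNT(*) AS members,
       COUNT(DISTINCT REGEXP_SUBSTR(f.member, '^\+[^/]+')) AS diskgroups
  FROM v$log     l
  JOIN v$logfile f ON f.group# = l.group#
 GROUP BY l.group#
 ORDER BY l.group#;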
 
Needs attention on: -
Passed on: odahost-01:tstdb1, odahost-01:MCL2DB, odahost-01:MCL3DB, odahost-01:MCL4DB, odahost-01:MCL5DB, odahost-01:MCL6DB, odahost-01:MCL7DB, odahost-01:MCLDB, odahost-02:tstdb1, odahost-02:MCL2DB, odahost-02:MCL3DB, odahost-02:MCL4DB, odahost-02:MCL5DB, odahost-02:MCL7DB, odahost-02:MCLDB

Status on odahost-01:tstdb1: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


DATA FROM odahost-01 - tstdb1 DATABASE - HIGH REDUNDANCY REDOLOG FILES



High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1

Status on odahost-01:MCL2DB: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


DATA FROM odahost-01 - MCL2DB DATABASE - HIGH REDUNDANCY REDOLOG FILES



High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1

Status on odahost-01:MCL3DB: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


DATA FROM odahost-01 - MCL3DB DATABASE - HIGH REDUNDANCY REDOLOG FILES



High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1

Status on odahost-01:MCL4DB: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


DATA FROM odahost-01 - MCL4DB DATABASE - HIGH REDUNDANCY REDOLOG FILES



High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1

Status on odahost-01:MCL5DB: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


DATA FROM odahost-01 - MCL5DB DATABASE - HIGH REDUNDANCY REDOLOG FILES



High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1

Status on odahost-01:MCL6DB: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


DATA FROM odahost-01 - MCL6DB DATABASE - HIGH REDUNDANCY REDOLOG FILES



High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1

Status on odahost-01:MCL7DB: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


DATA FROM odahost-01 - MCL7DB DATABASE - HIGH REDUNDANCY REDOLOG FILES



High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1

Status on odahost-01:MCLDB: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


DATA FROM odahost-01 - MCLDB DATABASE - HIGH REDUNDANCY REDOLOG FILES



High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1

Status on odahost-02:tstdb1: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1

Status on odahost-02:MCL2DB: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1

Status on odahost-02:MCL3DB: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1

Status on odahost-02:MCL4DB: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1

Status on odahost-02:MCL5DB: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1

Status on odahost-02:MCL7DB: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1

Status on odahost-02:MCLDB: PASS => Database parameter Db_create_online_log_dest_n is set to recommended value


High redundancy disk groups = 	  3
Number of redo log groups with more than 1 member = 	     0
Number of diskgroup where redo log members are multiplexed = 		       1
Top

db_recovery_file_dest_size

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 Benefit / Impact: 

Experience and testing have shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized. The parameters are common to all database instances. The impact of setting these parameters is minimal. The performance-related settings provide guidance to maintain the highest stability without sacrificing performance. Changing the default performance settings should be done only after careful performance evaluation and with a clear understanding of the performance impact.

Risk: 

If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value. 

Action / Repair: 

Ensure db_recovery_file_dest_size <= 90% of the Recovery Area diskgroup TOTAL_MB size
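
A quick way to compare the two values (a sketch, assuming the recovery area lives in the RECO disk group, as it does in this report):

-- Sketch: 90% of the RECO disk group total versus the current parameter.
SELECT ROUND(g.total_mb * 0.9 / 1024) AS reco_90pct_gb,
       (SELECT ROUND(TO_NUMBER(p.value) / 1024 / 1024 / 1024)
          FROM v$parameter p
         WHERE p.name = 'db_recovery_file_dest_size') AS dest_size_gb
  FROM v$asm_diskgroup g
 WHERE g.name = 'RECO';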

 
Needs attention on: -
Passed on: odahost-01:tstdb1, odahost-01:MCL2DB, odahost-01:MCL3DB, odahost-01:MCL4DB, odahost-01:MCL5DB, odahost-01:MCL6DB, odahost-01:MCL7DB, odahost-01:MCLDB, odahost-02:tstdb1, odahost-02:MCL2DB, odahost-02:MCL3DB, odahost-02:MCL4DB, odahost-02:MCL5DB, odahost-02:MCL7DB, odahost-02:MCLDB

Status on odahost-01:tstdb1: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


DATA FROM odahost-01 - tstdb1 DATABASE - DB_RECOVERY_FILE_DEST_SIZE



90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			 1800GB

Status on odahost-01:MCL2DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


DATA FROM odahost-01 - MCL2DB DATABASE - DB_RECOVERY_FILE_DEST_SIZE



90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			   10GB

Status on odahost-01:MCL3DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


DATA FROM odahost-01 - MCL3DB DATABASE - DB_RECOVERY_FILE_DEST_SIZE



90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			   10GB

Status on odahost-01:MCL4DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


DATA FROM odahost-01 - MCL4DB DATABASE - DB_RECOVERY_FILE_DEST_SIZE



90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			   10GB

Status on odahost-01:MCL5DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


DATA FROM odahost-01 - MCL5DB DATABASE - DB_RECOVERY_FILE_DEST_SIZE



90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			   10GB

Status on odahost-01:MCL6DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


DATA FROM odahost-01 - MCL6DB DATABASE - DB_RECOVERY_FILE_DEST_SIZE



90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			   10GB

Status on odahost-01:MCL7DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


DATA FROM odahost-01 - MCL7DB DATABASE - DB_RECOVERY_FILE_DEST_SIZE



90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			   10GB

Status on odahost-01:MCLDB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


DATA FROM odahost-01 - MCLDB DATABASE - DB_RECOVERY_FILE_DEST_SIZE



90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			   10GB

Status on odahost-02:tstdb1: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			 1800GB

Status on odahost-02:MCL2DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			   10GB

Status on odahost-02:MCL3DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			   10GB

Status on odahost-02:MCL4DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			   10GB

Status on odahost-02:MCL5DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			   10GB

Status on odahost-02:MCL7DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			   10GB

Status on odahost-02:MCLDB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value


90% of RECO Total Space = 			 5740GB
db_recovery_file_dest_size= 			   10GB
Top

Check for parameter global_names

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 Critical

Benefit / Impact: 

Experience and testing have shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized. The parameters are common to all database instances. The impact of setting these parameters is minimal. The performance-related settings provide guidance to maintain the highest stability without sacrificing performance. Changing the default performance settings should be done only after careful performance evaluation and with a clear understanding of the performance impact.

Risk: 

If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.

Action / Repair: 

GLOBAL_NAMES = TRUE is a security optimization: it enforces that a database link name must match the global name of the remote database it connects to.
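
To verify and, if needed, set the parameter on all instances (a minimal sketch):

-- Current setting
SELECT value FROM v$parameter WHERE name = 'global_names';

-- Set cluster-wide; takes effect immediately and persists in the spfile
ALTER SYSTEM SET global_names = TRUE SCOPE = BOTH SID = '*';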
 
Needs attention on: -
Passed on: tstdb11, MCL2DB1, MCL3DB1, MCL4DB1, MCL5DB1, MCL6DB, MCL7DB1, MCLDB1, tstdb12, MCL2DB2, MCL3DB2, MCL4DB2, MCL5DB2, MCL7DB2, MCLDB2

Status on tstdb11: PASS => Database parameter GLOBAL_NAMES is set to recommended value

tstdb11.global_names = TRUE                                                   

Status on MCL2DB1: PASS => Database parameter GLOBAL_NAMES is set to recommended value

MCL2DB1.global_names = TRUE                                                   

Status on MCL3DB1: PASS => Database parameter GLOBAL_NAMES is set to recommended value

MCL3DB1.global_names = TRUE                                                   

Status on MCL4DB1: PASS => Database parameter GLOBAL_NAMES is set to recommended value

MCL4DB1.global_names = TRUE                                                   

Status on MCL5DB1: PASS => Database parameter GLOBAL_NAMES is set to recommended value

MCL5DB1.global_names = TRUE                                                   

Status on MCL6DB: PASS => Database parameter GLOBAL_NAMES is set to recommended value

MCL6DB.global_names = TRUE                                                    

Status on MCL7DB1: PASS => Database parameter GLOBAL_NAMES is set to recommended value

MCL7DB1.global_names = TRUE                                                   

Status on MCLDB1: PASS => Database parameter GLOBAL_NAMES is set to recommended value

MCLDB1.global_names = TRUE                                                    

Status on tstdb12: PASS => Database parameter GLOBAL_NAMES is set to recommended value

tstdb12.global_names = TRUE                                                   

Status on MCL2DB2: PASS => Database parameter GLOBAL_NAMES is set to recommended value

MCL2DB2.global_names = TRUE                                                   

Status on MCL3DB2: PASS => Database parameter GLOBAL_NAMES is set to recommended value

MCL3DB2.global_names = TRUE                                                   

Status on MCL4DB2: PASS => Database parameter GLOBAL_NAMES is set to recommended value

MCL4DB2.global_names = TRUE                                                   

Status on MCL5DB2: PASS => Database parameter GLOBAL_NAMES is set to recommended value

MCL5DB2.global_names = TRUE                                                   

Status on MCL7DB2: PASS => Database parameter GLOBAL_NAMES is set to recommended value

MCL7DB2.global_names = TRUE                                                   

Status on MCLDB2: PASS => Database parameter GLOBAL_NAMES is set to recommended value

MCLDB2.global_names = TRUE                                                    
Top

Check for parameter db_lost_write_protect

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 Critical

Benefit / Impact: 

Experience and testing have shown that certain database initialization parameters should be set at specific values. These are the best practice values set at deployment time. By setting these database initialization parameters as recommended, known problems may be avoided and performance maximized. The parameters are common to all database instances. The impact of setting these parameters is minimal. The performance-related settings provide guidance to maintain the highest stability without sacrificing performance. Changing the default performance settings should be done only after careful performance evaluation and with a clear understanding of the performance impact.

Risk: 

If the database initialization parameters are not set as recommended, a variety of issues may be encountered, depending upon which initialization parameter is not set as recommended, and the actual set value.

Action / Repair: 

This is important for data block lost write detection and repair. Enable for
primary and standby databases.

Refer to MOS notes 1265884.1 and 1302539.1, including the section on how to address ORA-752 on the standby database.
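
A minimal sketch for enabling the recommended TYPICAL setting on all instances of a database:

-- Enable lost write detection (recommended for primary and standby)
ALTER SYSTEM SET db_lost_write_protect = TYPICAL SCOPE = BOTH SID = '*';

-- Verify on every instance
SELECT inst_id, value FROM gv$parameter WHERE name = 'db_lost_write_protect';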
 
Needs attention on: -
Passed on: tstdb11, MCL2DB1, MCL3DB1, MCL4DB1, MCL5DB1, MCL6DB, MCL7DB1, MCLDB1, tstdb12, MCL2DB2, MCL3DB2, MCL4DB2, MCL5DB2, MCL7DB2, MCLDB2

Status on tstdb11: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

tstdb11.db_lost_write_protect = TYPICAL                                       

Status on MCL2DB1: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

MCL2DB1.db_lost_write_protect = TYPICAL                                       

Status on MCL3DB1: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

MCL3DB1.db_lost_write_protect = TYPICAL                                       

Status on MCL4DB1: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

MCL4DB1.db_lost_write_protect = TYPICAL                                       

Status on MCL5DB1: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

MCL5DB1.db_lost_write_protect = TYPICAL                                       

Status on MCL6DB: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

MCL6DB.db_lost_write_protect = TYPICAL                                        

Status on MCL7DB1: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

MCL7DB1.db_lost_write_protect = TYPICAL                                       

Status on MCLDB1: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

MCLDB1.db_lost_write_protect = TYPICAL                                        

Status on tstdb12: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

tstdb12.db_lost_write_protect = TYPICAL                                       

Status on MCL2DB2: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

MCL2DB2.db_lost_write_protect = TYPICAL                                       

Status on MCL3DB2: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

MCL3DB2.db_lost_write_protect = TYPICAL                                       

Status on MCL4DB2: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

MCL4DB2.db_lost_write_protect = TYPICAL                                       

Status on MCL5DB2: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

MCL5DB2.db_lost_write_protect = TYPICAL                                       

Status on MCL7DB2: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

MCL7DB2.db_lost_write_protect = TYPICAL                                       

Status on MCLDB2: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value

MCLDB2.db_lost_write_protect = TYPICAL                                        
Top

DB shell limits soft nproc

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 The soft nproc shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 2047.
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Shell limit soft nproc for DB is configured according to recommendation


DATA FROM odahost-01 - MCLDB DATABASE - DB SHELL LIMITS SOFT NPROC



oracle soft nproc 131072

Status on odahost-02: PASS => Shell limit soft nproc for DB is configured according to recommendation


oracle soft nproc 131072
Top

DB shell limits hard stack

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 Benefit / Impact:

The documented value is in the /etc/security/limits.conf file, as described in the 11gR2 Grid Infrastructure Installation Guide, section 2.15.3 Setting Resource Limits for the Oracle Software Installation Users.

If the /etc/security/limits.conf file is not configured as described in the documentation, log in to the system as the database software owner (e.g., oracle) and check the hard stack configuration as described below.

Risk:

The hard stack shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 10240.  As long as the hard stack limit is 10240 or above, the configuration should be OK.

Action / Repair:

Change the DB software install owner's hard stack shell limit if needed, and verify it as that user:

$ ulimit -Hs
10240


 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Shell limit hard stack for DB is configured according to recommendation


DATA FROM odahost-01 - MCLDB DATABASE - DB SHELL LIMITS HARD STACK



oracle hard stack unlimited

Status on odahost-02: PASS => Shell limit hard stack for DB is configured according to recommendation


oracle hard stack unlimited
Top

DB shell limits hard nofile

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 Benefit / Impact:

Documented value, cluster stability

The hard nofile shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 65536.

Risk:

Resource starvation (file descriptors) leading to node instability

Action / Repair:

Change DB software install owner hard nofile shell limit
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Shell limit hard nofile for DB is configured according to recommendation


DATA FROM odahost-01 - MCLDB DATABASE - DB SHELL LIMITS HARD NOFILE



oracle hard nofile 131072

Status on odahost-02: PASS => Shell limit hard nofile for DB is configured according to recommendation


oracle hard nofile 131072
Top

DB shell limits hard nproc

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 Benefit / Impact:

Documented value, cluster stability

The hard nproc shell limit for the Oracle DB software install owner as defined in /etc/security/limits.conf should be >= 16384.

Risk:

Resource starvation (processes) leading to node instability

Action / Repair:

Change DB software install owner hard nproc shell limit

 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Shell limit hard nproc for DB is configured according to recommendation


DATA FROM odahost-01 - MCLDB DATABASE - DB SHELL LIMITS HARD NPROC



oracle hard nproc 131072

Status on odahost-02: PASS => Shell limit hard nproc for DB is configured according to recommendation


oracle hard nproc 131072
Top

GI shell limits hard nproc

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 The hard nproc shell limit for the Oracle GI software install owner as defined in /etc/security/limits.conf should be >= 16384.
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Shell limit hard nproc for GI is configured according to recommendation


DATA FROM odahost-01 - MCLDB DATABASE - GI SHELL LIMITS HARD NPROC




grid   soft   nofile    131072
grid   hard   nofile    131072
grid   soft   nproc    131072
grid   hard   nproc    131072
grid   soft   core    unlimited
grid   hard   core    unlimited
grid   soft   memlock	72000000
grid   hard   memlock	72000000

oracle   soft   nofile    131072

oracle   hard   nofile    131072

oracle   soft   nproc    131072

...More

Status on odahost-02: PASS => Shell limit hard nproc for GI is configured according to recommendation



grid   soft   nofile    131072
grid   hard   nofile    131072
grid   soft   nproc    131072
grid   hard   nproc    131072
grid   soft   core    unlimited
grid   hard   core    unlimited
grid   soft   memlock	72000000
grid   hard   memlock	72000000

oracle   soft   nofile    131072

oracle   hard   nofile    131072

oracle   soft   nproc    131072

oracle   hard   nproc    131072

oracle   soft   core    unlimited

...More
Top

GI shell limits hard nofile

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 The hard nofile shell limit for the Oracle GI software install owner as defined in /etc/security/limits.conf should be >= 65536.
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Shell limit hard nofile for GI is configured according to recommendation


DATA FROM odahost-01 - MCLDB DATABASE - GI SHELL LIMITS HARD NOFILE




grid   soft   nofile    131072
grid   hard   nofile    131072
grid   soft   nproc    131072
grid   hard   nproc    131072
grid   soft   core    unlimited
grid   hard   core    unlimited
grid   soft   memlock	72000000
grid   hard   memlock	72000000

oracle   soft   nofile    131072

oracle   hard   nofile    131072

oracle   soft   nproc    131072

...More

Status on odahost-02: PASS => Shell limit hard nofile for GI is configured according to recommendation



grid   soft   nofile    131072
grid   hard   nofile    131072
grid   soft   nproc    131072
grid   hard   nproc    131072
grid   soft   core    unlimited
grid   hard   core    unlimited
grid   soft   memlock	72000000
grid   hard   memlock	72000000

oracle   soft   nofile    131072

oracle   hard   nofile    131072

oracle   soft   nproc    131072

oracle   hard   nproc    131072

oracle   soft   core    unlimited

...More
Top

GI shell limits soft nproc

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 The soft nproc shell limit for the Oracle GI software install owner as defined in /etc/security/limits.conf should be >= 2047.
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Shell limit soft nproc for GI is configured according to recommendation


DATA FROM odahost-01 - MCLDB DATABASE - GI SHELL LIMITS SOFT NPROC




grid   soft   nofile    131072
grid   hard   nofile    131072
grid   soft   nproc    131072
grid   hard   nproc    131072
grid   soft   core    unlimited
grid   hard   core    unlimited
grid   soft   memlock	72000000
grid   hard   memlock	72000000

oracle   soft   nofile    131072

oracle   hard   nofile    131072

oracle   soft   nproc    131072

...More

Status on odahost-02: PASS => Shell limit soft nproc for GI is configured according to recommendation



grid   soft   nofile    131072
grid   hard   nofile    131072
grid   soft   nproc    131072
grid   hard   nproc    131072
grid   soft   core    unlimited
grid   hard   core    unlimited
grid   soft   memlock	72000000
grid   hard   memlock	72000000

oracle   soft   nofile    131072

oracle   hard   nofile    131072

oracle   soft   nproc    131072

oracle   hard   nproc    131072

oracle   soft   core    unlimited

...More
Top

GI shell limits soft nofile

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 The soft nofile shell limit for the Oracle GI software install owner as defined in /etc/security/limits.conf should be >= 1024.
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Shell limit soft nofile for GI is configured according to recommendation


DATA FROM odahost-01 - MCLDB DATABASE - GI SHELL LIMITS SOFT NOFILE




grid   soft   nofile    131072
grid   hard   nofile    131072
grid   soft   nproc    131072
grid   hard   nproc    131072
grid   soft   core    unlimited
grid   hard   core    unlimited
grid   soft   memlock	72000000
grid   hard   memlock	72000000

oracle   soft   nofile    131072

oracle   hard   nofile    131072

oracle   soft   nproc    131072

...More

Status on odahost-02: PASS => Shell limit soft nofile for GI is configured according to recommendation



grid   soft   nofile    131072
grid   hard   nofile    131072
grid   soft   nproc    131072
grid   hard   nproc    131072
grid   soft   core    unlimited
grid   hard   core    unlimited
grid   soft   memlock	72000000
grid   hard   memlock	72000000

oracle   soft   nofile    131072

oracle   hard   nofile    131072

oracle   soft   nproc    131072

oracle   hard   nproc    131072

oracle   soft   core    unlimited

...More
Top

ASM disk group compatible.asm parameter

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 Benefit / Impact:

The components in the I/O stack are tightly integrated in Exadata. You must use the proper versions of software both on the storage servers and the database servers. Setting compatible attributes defines available functionality. Setting CELL.SMART_SCAN_CAPABLE enables the offloading of certain query work to the storage servers. Setting AU_SIZE maximizes available disk technology and throughput by reading 4MB of data before performing a disk seek to a new sector location.
There is minimal impact to verify and configure these settings.

Risk:

If these attributes are not set as directed, performance will be sub-optimal.

Action / Repair:

For the ASM disk group containing Oracle Exadata Storage Server grid disks, 
verify the attribute settings as follows: 

     * COMPATIBLE.ASM attribute is set to 11.2.0.2 or higher. 
     * COMPATIBLE.RDBMS attribute is set to the minimum Oracle database software version in use. 
     * CELL.SMART_SCAN_CAPABLE attribute is TRUE. 
     * AU_SIZE attribute is 4M. 
If these attributes are not set properly, correct the condition.
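
To review these attributes across disk groups (a sketch using the standard ASM views; the cell.* attribute exists only on Exadata storage):

-- Sketch: current values of the attributes named above, per disk group.
SELECT g.name AS disk_group,
       a.name AS attribute,
       a.value
  FROM v$asm_diskgroup g
  JOIN v$asm_attribute a ON a.group_number = g.group_number
 WHERE a.name IN ('compatible.asm', 'compatible.rdbms',
                  'cell.smart_scan_capable', 'au_size')
 ORDER BY g.name, a.name;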
 
Needs attention on: -
Passed on: odahost-01

Status on odahost-01: PASS => All disk groups have compatible.asm parameter set to recommended values


DATA FROM odahost-01 - MCLDB DATABASE - ASM DISK GROUP COMPATIBLE.ASM PARAMETER



ASM DATA.compatible.asm = 11.2.0.2.0
ASM RECO.compatible.asm = 11.2.0.2.0
ASM REDO.compatible.asm = 11.2.0.2.0
Top

ASM allocation unit size for all disk groups

Success Factor: DBMACHINE X2-2 AND X2-8 AUDIT CHECKS
Recommendation
 Benefit / Impact:

In order to achieve fast disk scan rates with today's disk technology, it is important that segments be laid out on disk with at least 4MB of contiguous disk space. This allows disk scans to read 4MB of data from disk before having to perform a seek to another location on disk and therefore ensures that most of the time during a scan is spent transferring data from disk.

Risk:

Time could be spent seeking between disk locations for data. 

Action / Repair:

To ensure that segments are laid out with 4MB of contiguous data on disk, you will need to set the ASM allocation unit (AU) size to 4MB and ensure that data file extents are at least 4MB in size. The ASM allocation unit can be specified when a disk group is created. For Exadata, we recommend setting the AU size to 4MB. The ASM allocation unit size (AU_SIZE) can be set at disk group creation time, as in the following example:

CREATE diskgroup data normal redundancy 
DISK 'o/*/DATA*'
ATTRIBUTE 
          'AU_SIZE' = '4M',
          'cell.smart_scan_capable'='TRUE',
          'compatible.rdbms'='11.2.0.0', 
          'compatible.asm'='11.2.0.0';
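
To check the allocation unit size of existing disk groups (a sketch):

-- Sketch: AU size per mounted disk group, in bytes (4194304 = 4MB).
SELECT name, allocation_unit_size
  FROM v$asm_diskgroup;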
 
Needs attention on: -
Passed on: odahost-01

Status on odahost-01: PASS => All disk groups have allocation unit size set to 4MB


DATA FROM odahost-01 - MCLDB DATABASE - ASM ALLOCATION UNIT SIZE FOR ALL DISK GROUPS



ASM DATA.au_size = 4194304
ASM RECO.au_size = 4194304
ASM REDO.au_size = 4194304
Top

OSWatcher status

Success Factor: INSTALL AND RUN OSWATCHER PROACTIVELY FOR OS RESOURCE UTILIZATION DIAGNOSIBILITY
Recommendation
 Operating System Watcher  (OSW) is a collection of UNIX shell scripts intended to collect and archive operating system and network metrics to aid diagnosing performance issues. OSW is designed to run continuously and to write the metrics to ASCII files which are saved to an archive directory. The amount of archived data saved and frequency of collection are based on user parameters set when starting OSW.
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => OSWatcher is running


DATA FROM odahost-01 - MCLDB DATABASE - OSWATCHER STATUS



root      7114 12582  0 09:33 ?        00:00:00 /usr/bin/ksh ./mpsub.sh archive/oswmpstat/odahost-01_mpstat_12.08.15.0900.dat mpstat 1 3 0
root      7124 12582  0 09:33 ?        00:00:00 /usr/bin/ksh ./oswlnxio.sh archive/oswiostat/odahost-01_iostat_12.08.15.0900.dat
root      7128 12582  0 09:33 ?        00:00:00 /usr/bin/ksh ./oswlnxtop.sh archive/oswtop/odahost-01_top_12.08.15.0900.dat
root     12582     1  0 Aug14 ?        00:00:24 /usr/bin/ksh ./OSWatcher.sh 10 504 gzip
root     12816 12582  0 Aug14 ?        00:00:05 /usr/bin/ksh ./OSWatcherFM.sh 504

Status on odahost-02: PASS => OSWatcher is running


root     12597     1  0 Aug14 ?        00:00:19 /usr/bin/ksh ./OSWatcher.sh 10 504 gzip
root     12829 12597  0 Aug14 ?        00:00:04 /usr/bin/ksh ./OSWatcherFM.sh 504
Top

ohasd Log File Ownership

Success Factor: VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics may not be available when needed by Support.  These logs are rotated periodically to keep them from growing unmanageably large, and if the ownership of the files is incorrect when it is time to rotate the logs, that operation could fail.  While that doesn't affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics.  So it would be wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => ohasd Log Ownership is Correct (root root)


DATA FROM odahost-01 - MCLDB DATABASE - OHASD LOG FILE OWNERSHIP



total 7768
-rw-r--r-- 1 root root 7936570 Aug 15 09:33 ohasd.log
-rw-r--r-- 1 root root     772 Aug 14 10:12 ohasdOUT.log

Status on odahost-02: PASS => ohasd Log Ownership is Correct (root root)


total 6532
-rw-r--r-- 1 root root 6670908 Aug 15 09:38 ohasd.log
-rw-r--r-- 1 root root    1158 Aug 14 14:21 ohasdOUT.log
Top

ohasd/orarootagent_root Log File Ownership

Success Factor: VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics may not be available when needed by Support.  These logs are rotated periodically to keep them from growing unmanageably large, and if the ownership of the files is incorrect when it is time to rotate the logs, that operation could fail.  While that doesn't affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics.  So it would be wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
 
Links
  • Oracle Bug 9837321 - Ownership of crsd traces gets changed from root by patching script
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM odahost-01 - MCLDB DATABASE - OHASD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP



total 8036
-rw-r--r-- 1 root root 8211222 Aug 15 09:33 orarootagent_root.log
-rw-r--r-- 1 root root       0 Aug 13 15:26 orarootagent_rootOUT.log
-rw-r--r-- 1 root root       6 Aug 14 10:12 orarootagent_root.pid

Status on odahost-02: PASS => ohasd/orarootagent_root Log Ownership is Correct (root root)


total 7980
-rw-r--r-- 1 root root 8151219 Aug 15 09:38 orarootagent_root.log
-rw-r--r-- 1 root root       0 Aug 13 15:34 orarootagent_rootOUT.log
-rw-r--r-- 1 root root       6 Aug 14 14:21 orarootagent_root.pid
Top

crsd/orarootagent_root Log File Ownership

Success Factor: VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 Due to Bug 9837321, or if for any other reason the ownership of certain clusterware-related log files is changed incorrectly, important diagnostics may not be available when needed by Support.  These logs are rotated periodically to keep them from growing unmanageably large, and if the ownership of the files is incorrect when it is time to rotate the logs, that operation could fail.  While that doesn't affect the operation of the clusterware itself, it does affect the logging and therefore problem diagnostics.  So it would be wise to verify that the ownership of the following files is root:root:

$ls -l $GRID_HOME/log/`hostname`/crsd/*
$ls -l $GRID_HOME/log/`hostname`/ohasd/*
$ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
$ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*

If any of those files' ownership is NOT root:root then you should change the ownership of the files individually or as follows (as root):

# chown root:root $GRID_HOME/log/`hostname`/crsd/*
# chown root:root $GRID_HOME/log/`hostname`/ohasd/*
# chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/*
# chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/*
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


DATA FROM odahost-01 - MCLDB DATABASE - CRSD/ORAROOTAGENT_ROOT LOG FILE OWNERSHIP



total 15244
-rw-r--r-- 1 root root 10498797 Aug 14 19:45 orarootagent_root.l01
-rw-r--r-- 1 root root  5074264 Aug 15 09:33 orarootagent_root.log
-rw-r--r-- 1 root root        0 Aug 13 15:45 orarootagent_rootOUT.log
-rw-r--r-- 1 root root        6 Aug 14 14:59 orarootagent_root.pid

Status on odahost-02: PASS => crsd/orarootagent_root Log Ownership is Correct (root root)


total 14616
-rw-r--r-- 1 root root 10503593 Aug 14 21:36 orarootagent_root.l01
-rw-r--r-- 1 root root  4425937 Aug 15 09:38 orarootagent_root.log
-rw-r--r-- 1 root root        0 Aug 13 15:43 orarootagent_rootOUT.log
-rw-r--r-- 1 root root        5 Aug 14 15:00 orarootagent_root.pid
Top

crsd Log File Ownership

Success Factor: VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY
Recommendation
 CRSD trace files should be owned by "root:root", but due to Bug 9837321 the application of a patch may have changed the trace file ownership for patching and not changed it back.
 
Links
  • Oracle Bug 9837321 - Ownership of crsd traces gets changed from root by patching script
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => crsd Log Ownership is Correct (root root)


DATA FROM odahost-01 - MCLDB DATABASE - CRSD LOG FILE OWNERSHIP



total 20164
-rw-r--r-- 1 root root 10525494 Aug 14 11:59 crsd.l01
-rw-r--r-- 1 root root 10081114 Aug 15 09:33 crsd.log
-rw-r--r-- 1 root root      756 Aug 14 14:59 crsdOUT.log

Status on odahost-02: PASS => crsd Log Ownership is Correct (root root)


total 6744
-rw-r--r-- 1 root root 6886449 Aug 15 09:38 crsd.log
-rw-r--r-- 1 root root     756 Aug 14 15:00 crsdOUT.log
Top

NIC Bonding Mode Public

Success Factor: LINUX BONDING DRIVER MODE 3 IS NOT RECOMMENDED FOR THE RAC INTERCONNECT
Recommendation
 Even though the mode 3 option is available for the Linux bonding driver, testing has proven that it duplicates all UDP packets and transmits them on every path.  This increases CPU overhead for processing data from the interconnect, thereby making the interconnect less efficient.  Mode 3 is not needed for fault tolerance, as other modes are available for that purpose which do not duplicate the packets.  There is a separate Success Factor which discusses Linux NIC bonding.

A couple of relevant bugs:

Bug 7238620 - ORA-600 [2032]

REDISCOVERY INFORMATION:
If you are using a RAC IPC module over an unreliable protocol,
like ipc_g link targets, and your network is duplicating packets
at a high rate, you may have hit this bug.

WORKAROUND:
Ensure network is not duplicating any packets.

Bug 9081436 - GC CR REQUEST WAIT CAUSING SESSIONS TO WAIT

This bug is a side effect of the fix for Bug 7238620, which allowed an invalid/corrupt packet to make it through to higher layers in Oracle code instead of being discarded and re-requested.  The bad packet was not discarded and re-requested because the new code attempts to ignore duplicate packets.  That is now fixed: Oracle will still discard duplicate packets, but it also remains on the lookout for bad/corrupt packets, and if any are received they are thrown away and re-requested rather than being allowed through, where they could potentially corrupt or overwrite other buffers in memory.

So while there have been some enhancements to deal with duplicate packets more effectively, it still is not a good idea to generate a large number of duplicate packets that are just going to be thrown away and which impose additional overhead on the interconnect and CPUs.  If you must use mode 3 bonding, then you should at least have the patches for the two bugs mentioned.
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => NIC bonding mode is not set to Broadcast(3) for public network


DATA FROM odahost-01 - MCLDB DATABASE - NIC BONDING MODE PUBLIC



Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:21:28:d6:14:4c
Slave queue ID: 0

...More

Status on odahost-02: PASS => NIC bonding mode is not set to Broadcast(3) for public network


Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth2
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:21:28:d6:14:6a
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:21:28:d6:14:6b
...More
Top

VIP NIC bonding config.

Success Factor: CONFIGURE NIC BONDING FOR 10G VIP (LINUX)
Recommendation
 To avoid a single point of failure for VIPs, Oracle highly recommends configuring a redundant network for VIPs using NIC bonding.  See the referenced note for more information on how to configure bonding on Linux.
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => NIC bonding is configured for public network (VIP)


DATA FROM odahost-01 - MCLDB DATABASE - VIP NIC BONDING CONFIG.



bond0     Link encap:Ethernet  HWaddr 00:21:28:D6:14:4C
inet addr:172.24.192.10  Bcast:172.24.192.31  Mask:255.255.255.224
inet6 addr: fe80::221:28ff:fed6:144c/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
RX packets:523004 errors:0 dropped:0 overruns:0 frame:0
TX packets:550805 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:169655490 (161.7 MiB)  TX bytes:204334663 (194.8 MiB)


Status on odahost-02: PASS => NIC bonding is configured for public network (VIP)


bond0     Link encap:Ethernet  HWaddr 00:21:28:D6:14:6A
inet addr:172.24.192.12  Bcast:172.24.192.31  Mask:255.255.255.224
inet6 addr: fe80::221:28ff:fed6:146a/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
RX packets:319154 errors:0 dropped:0 overruns:0 frame:0
TX packets:319670 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:91229743 (87.0 MiB)  TX bytes:124328809 (118.5 MiB)

Top

CRS and ASM version comparison

Success Factor: GENERIC OS DATA COLLECTIONS
Recommendation
 You should always run a CRS version equal to or higher than the ASM version.  Running a higher ASM version than CRS is an unsupported configuration and may run into issues.
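
To compare the two (a sketch): the CRS active version comes from "crsctl query crs activeversion", while the ASM software version is visible in v$instance when connected to the ASM instance:

-- Sketch: software version of the instance you are connected to.
SELECT instance_name, version FROM v$instance;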
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => CRS version is higher or equal to ASM version.


DATA FROM odahost-01 - MCLDB DATABASE - CRS AND ASM VERSION COMPARISON



CRS_ACTIVE_VERSION = 112030
ASM Version = 112030

Status on odahost-02: PASS => CRS version is higher or equal to ASM version.


CRS_ACTIVE_VERSION = 112030
ASM Version = 112030
Top

Local listener set to node VIP

Success Factor: SQL DATA COLLECTIONS AND CHECKS
Recommendation
 The LOCAL_LISTENER parameter should be set to the node VIP. If you need fully qualified domain names, ensure that LOCAL_LISTENER is set to the fully qualified domain name (node-vip.mycompany.com). By default, a local listener is created during cluster configuration that runs out of the Grid Infrastructure home and listens on the specified port (default 1521) of the node VIP.
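
For the instance flagged below (MCL6DB on odahost-01), a fix along these lines would apply, substituting the VIP address, port, and instance name shown in the collected data (a sketch, not a verified command for this system):

-- Sketch: point LOCAL_LISTENER at the local node VIP for one instance.
ALTER SYSTEM SET local_listener =
  '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.14)(PORT=1522))))'
  SCOPE = BOTH SID = 'MCL6DB';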
 
Needs attention on: odahost-01:MCL6DB
Passed on: odahost-01:tstdb1, odahost-01:MCL2DB, odahost-01:MCL3DB, odahost-01:MCL4DB, odahost-01:MCL5DB, odahost-01:MCL7DB, odahost-01:MCLDB, odahost-02:tstdb1, odahost-02:MCL2DB, odahost-02:MCL3DB, odahost-02:MCL4DB, odahost-02:MCL5DB, odahost-02:MCL7DB, odahost-02:MCLDB

Status on odahost-01:tstdb1: PASS => Local listener init parameter is set to local node VIP


DATA FROM odahost-01 - tstdb1 DATABASE - LOCAL LISTENER SET TO NODE VIP



Local Listener= (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.14)(PORT=1522)))) VIP Names=odahost-01-vip VIP IPs=172.24.192.14

Status on odahost-01:MCL2DB: PASS => Local listener init parameter is set to local node VIP


DATA FROM odahost-01 - MCL2DB DATABASE - LOCAL LISTENER SET TO NODE VIP



Local Listener= (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.14)(PORT=1522)))) VIP Names=odahost-01-vip VIP IPs=172.24.192.14

Status on odahost-01:MCL3DB: PASS => Local listener init parameter is set to local node VIP


DATA FROM odahost-01 - MCL3DB DATABASE - LOCAL LISTENER SET TO NODE VIP



Local Listener= (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.14)(PORT=1522)))) VIP Names=odahost-01-vip VIP IPs=172.24.192.14

Status on odahost-01:MCL4DB: PASS => Local listener init parameter is set to local node VIP


DATA FROM odahost-01 - MCL4DB DATABASE - LOCAL LISTENER SET TO NODE VIP



Local Listener= (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.14)(PORT=1522)))) VIP Names=odahost-01-vip VIP IPs=172.24.192.14

Status on odahost-01:MCL5DB: PASS => Local listener init parameter is set to local node VIP


DATA FROM odahost-01 - MCL5DB DATABASE - LOCAL LISTENER SET TO NODE VIP



Local Listener= (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.14)(PORT=1522)))) VIP Names=odahost-01-vip VIP IPs=172.24.192.14

Status on odahost-01:MCL6DB: WARNING => Local listener init parameter is not set to local node VIP


DATA FROM odahost-01 - MCL6DB DATABASE - LOCAL LISTENER SET TO NODE VIP



Local Listener=LISTENER_MCL6DB VIP Names=odahost-01-vip VIP IPs=172.24.192.14

Status on odahost-01:MCL7DB: PASS => Local listener init parameter is set to local node VIP


DATA FROM odahost-01 - MCL7DB DATABASE - LOCAL LISTENER SET TO NODE VIP



Local Listener= (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.14)(PORT=1522)))) VIP Names=odahost-01-vip VIP IPs=172.24.192.14

Status on odahost-01:MCLDB: PASS => Local listener init parameter is set to local node VIP


DATA FROM odahost-01 - MCLDB DATABASE - LOCAL LISTENER SET TO NODE VIP



Local Listener= (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.14)(PORT=1522)))) VIP Names=odahost-01-vip VIP IPs=172.24.192.14

Status on odahost-02:tstdb1: PASS => Local listener init parameter is set to local node VIP


Local Listener= (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.15)(PORT=1522)))) VIP Names=odahost-02-vip VIP IPs=172.24.192.15

Status on odahost-02:MCL2DB: PASS => Local listener init parameter is set to local node VIP


Local Listener= (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.15)(PORT=1522)))) VIP Names=odahost-02-vip VIP IPs=172.24.192.15

Status on odahost-02:MCL3DB: PASS => Local listener init parameter is set to local node VIP


Local Listener= (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.15)(PORT=1522)))) VIP Names=odahost-02-vip VIP IPs=172.24.192.15

Status on odahost-02:MCL4DB: PASS => Local listener init parameter is set to local node VIP


Local Listener= (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.15)(PORT=1522)))) VIP Names=odahost-02-vip VIP IPs=172.24.192.15

Status on odahost-02:MCL5DB: PASS => Local listener init parameter is set to local node VIP


Local Listener= (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.15)(PORT=1522)))) VIP Names=odahost-02-vip VIP IPs=172.24.192.15

Status on odahost-02:MCL7DB: PASS => Local listener init parameter is set to local node VIP


Local Listener= (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.15)(PORT=1522)))) VIP Names=odahost-02-vip VIP IPs=172.24.192.15

Status on odahost-02:MCLDB: PASS => Local listener init parameter is set to local node VIP


Local Listener= (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=172.24.192.15)(PORT=1522)))) VIP Names=odahost-02-vip VIP IPs=172.24.192.15
Top

Voting disk status

Success Factor: USE EXTERNAL OR ORACLE PROVIDED REDUNDANCY FOR OCR
Recommendation
 Benefit / Impact:

Stability, Availability

Risk:

Cluster instability

Action / Repair:

Voting disks that are not online would indicate a problem with the clusterware
and should be investigated as soon as possible.  All voting disks are expected to be ONLINE.

Use the following command to list the status of the voting disks:

$CRS_HOME/bin/crsctl query css votedisk | sed 's/^ //g' | grep '^[0-9]'

The output should look similar to the following, one row per voting disk; all disks should indicate ONLINE:

1. ONLINE   192c8f030e5a4fb3bf77e43ad3b8479a (o/192.168.10.102/DBFS_DG_CD_02_sclcgcel01) [DBFS_DG]
2. ONLINE   2612d8a72d194fa4bf3ddff928351c41 (o/192.168.10.104/DBFS_DG_CD_02_sclcgcel03) [DBFS_DG]
3. ONLINE   1d3cceb9daeb4f0bbf23ee0218209f4c (o/192.168.10.103/DBFS_DG_CD_02_sclcgcel02) [DBFS_DG]
 
Needs attention on: -
Passed on: odahost-01

Status on odahost-01: PASS => All voting disks are online


DATA FROM odahost-01 - MCLDB DATABASE - VOTING DISK STATUS



##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   259195f90ac74fe1bf0e03704f1ada65 (/dev/mapper/HDD_E0_S00_966615931p1) [DATA]
2. ONLINE   8b2c4ef77a004f04bf133ebb2d66008b (/dev/mapper/HDD_E0_S01_966589563p1) [DATA]
3. ONLINE   1f90d1d134444f94bf8e21ffda45aefd (/dev/mapper/HDD_E1_S02_966586507p1) [DATA]
4. ONLINE   a67f53ac3e5a4f5cbf9bb013088289d2 (/dev/mapper/HDD_E1_S03_967034659p1) [DATA]
5. ONLINE   74cdfd0c9f1d4f03bf665940b0665b94 (/dev/mapper/HDD_E0_S04_966044031p1) [DATA]
Located 5 voting disk(s).
Top

ip_local_port_range

Success Factor: LINUX DATA COLLECTIONS AND AUDIT CHECKS
Recommendation
 The minimum value of ip_local_port_range for Oracle 11gR1 and 11gR2 should be 9000 and the maximum should be 65500.

If both the minimum and maximum fall within that range, this audit check evaluates as TRUE; otherwise it evaluates as FALSE.
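A quick way to verify the setting on Linux, and a sketch of how to persist it (assumes root access):

# check the current ephemeral port range
sysctl net.ipv4.ip_local_port_range
# expected: net.ipv4.ip_local_port_range = 9000 65500
# to persist, add "net.ipv4.ip_local_port_range = 9000 65500" to /etc/sysctl.conf, then:
sysctl -p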
 
Links
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => ip_local_port_range is configured according to recommendation


DATA FROM odahost-01 - MCLDB DATABASE - IP_LOCAL_PORT_RANGE



minimum port range = 9000
maximum port range = 65500

Status on odahost-02: PASS => ip_local_port_range is configured according to recommendation


minimum port range = 9000
maximum port range = 65500
Top

Linux Swap Size

Success Factor: CORRECTLY SIZE THE SWAP SPACE
Recommendation
 The following table describes the relationship between installed RAM and the configured swap space requirement:

Note:
On Linux, the Hugepages feature allocates non-swappable memory for large page tables using memory-mapped files. If you enable Hugepages, then you should deduct the memory allocated to Hugepages from the available RAM before calculating swap space.

RAM between 1 GB and 2 GB: swap 1.5 times the size of RAM (minus memory allocated to Hugepages)

RAM between 2 GB and 16 GB: swap equal to the size of RAM (minus memory allocated to Hugepages)

RAM more than 16 GB (minus memory allocated to Hugepages): swap of 16 GB

In other words, the maximum swap size Oracle recommends for Linux is 16 GB.
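A rough sketch of applying the table above against /proc/meminfo; it only reports the recommendation and changes nothing:

# compute RAM minus Hugepages (in GB) and print the recommended swap size
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
huge_kb=$(awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {s=$2} END {print t*s+0}' /proc/meminfo)
avail_gb=$(( (mem_kb - huge_kb) / 1048576 ))
if   [ "$avail_gb" -le 2 ];  then echo "recommended swap: 1.5x RAM"
elif [ "$avail_gb" -le 16 ]; then echo "recommended swap: ${avail_gb} GB (equal to RAM)"
else                              echo "recommended swap: 16 GB"
fi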
 
Links
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Linux Swap Configuration meets or exceeds Recommendation


DATA FROM odahost-01 - MCLDB DATABASE - LINUX SWAP SIZE



MemTotal:       98929496 kB
MemFree:        23132104 kB
Buffers:          485384 kB
Cached:         15381060 kB
SwapCached:            0 kB
Active:         10214284 kB
Inactive:        8716056 kB
Active(anon):    3409032 kB
Inactive(anon):   408148 kB
Active(file):    6805252 kB
Inactive(file):  8307908 kB
Unevictable:      387400 kB
Mlocked:          387416 kB
SwapTotal:      25165816 kB
SwapFree:       25165816 kB
Dirty:              3780 kB
...More

Status on odahost-02: PASS => Linux Swap Configuration meets or exceeds Recommendation


MemTotal:       98929496 kB
MemFree:        37068472 kB
Buffers:          346608 kB
Cached:          2748692 kB
SwapCached:            0 kB
Active:          3521824 kB
Inactive:        1977668 kB
Active(anon):    2702692 kB
Inactive(anon):   413692 kB
Active(file):     819132 kB
Inactive(file):  1563976 kB
Unevictable:      385516 kB
Mlocked:          385516 kB
SwapTotal:      25165816 kB
SwapFree:       25165816 kB
Dirty:              2828 kB
Writeback:             0 kB
AnonPages:       2812760 kB
Mapped:           297652 kB
Shmem:            623948 kB
...More
Top

oradism executable ownership

Success Factor: VERIFY OWNERSHIP OF ORADISM EXECUTABLE IF LMS PROCESS NOT RUNNING IN REAL TIME
Recommendation
 Benefit / Impact:

The oradism executable is invoked after database startup to change the scheduling priority of LMS and other database background processes to the realtime scheduling class in order to maximize the ability of these key processes to be scheduled on the CPU in a timely way at times of high CPU utilization.

Risk:

The oradism executable should be owned by root with the owner s-bit set, e.g. -rwsr-x---, where the s is the setuid bit for root. If the LMS process is not running at the proper scheduling priority, it can lead to instance evictions due to IPC send timeouts or ORA-29740 errors. oradism must be owned by root with its s-bit set in order to be able to change the scheduling priority. If oradism is not owned by root or the owner s-bit is not set, something likely went wrong in the installation process, or the ownership or permission was changed afterward.

Action / Repair:

Please check with Oracle Support to determine the best course to take for your platform to correct the problem.
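A sketch of the verification; the commented commands show what would restore the expected state, but per the note above, confirm the correct action for your platform with Oracle Support first:

# expected: -rwsr-x--- owned by root (setuid bit set)
ls -l $ORACLE_HOME/bin/oradism
# restoring the expected ownership and mode would look like this (run as root):
#   chown root:oinstall $ORACLE_HOME/bin/oradism
#   chmod 4750 $ORACLE_HOME/bin/oradism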
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => $ORACLE_HOME/bin/oradism ownership is root


DATA FROM odahost-01 - MCLDB DATABASE - ORADISM EXECUTABLE OWNERSHIP



-rwsr-x--- 1 root oinstall 71758 Sep 17  2011 /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oradism

Status on odahost-02: PASS => $ORACLE_HOME/bin/oradism ownership is root


-rwsr-x--- 1 root oinstall 71758 Sep 17  2011 /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oradism
Top

oradism executable permission

Success Factor: VERIFY OWNERSHIP OF ORADISM EXECUTABLE IF LMS PROCESS NOT RUNNING IN REAL TIME
Recommendation
 Benefit / Impact:

The oradism executable is invoked after database startup to change the scheduling priority of LMS and other database background processes to the realtime scheduling class in order to maximize the ability of these key processes to be scheduled on the CPU in a timely way at times of high CPU utilization.

Risk:

The oradism executable should be owned by root with the owner s-bit set, e.g. -rwsr-x---, where the s is the setuid bit for root. If the LMS process is not running at the proper scheduling priority, it can lead to instance evictions due to IPC send timeouts or ORA-29740 errors. oradism must be owned by root with its s-bit set in order to be able to change the scheduling priority. If oradism is not owned by root or the owner s-bit is not set, something likely went wrong in the installation process, or the ownership or permission was changed afterward.

Action / Repair:

Please check with Oracle Support to determine the best course to take for your platform to correct the problem.
 
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => $ORACLE_HOME/bin/oradism setuid bit is set


DATA FROM odahost-01 - MCLDB DATABASE - ORADISM EXECUTABLE PERMISSION



-rwsr-x--- 1 root oinstall 71758 Sep 17  2011 /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oradism

Status on odahost-02: PASS => $ORACLE_HOME/bin/oradism setuid bit is set


-rwsr-x--- 1 root oinstall 71758 Sep 17  2011 /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oradism
Top

Session Failover configuration

Success Factor: CONFIGURE ORACLE NET SERVICES LOAD BALANCING PROPERLY TO DISTRIBUTE CONNECTIONS
Recommendation
 Benefit / Impact:

Higher application availability

Risk:

Application availability problems in case of failed nodes or database instances

Action / Repair:

Application connection failover and load balancing are highly recommended for OLTP environments but may not apply to DSS workloads. DSS application customers may want to ignore this warning.


The following query will identify the application user sessions that do not have basic connection failover configured:

SELECT username, sid, serial#, process, failover_type, failover_method
  FROM gv$session
 WHERE upper(failover_method) != 'BASIC'
   AND upper(failover_type)  != 'SELECT'
   AND upper(username) NOT IN ('SYS','SYSTEM','SYSMAN','DBSNMP');
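One common way to get BASIC/SELECT failover without editing every client's tnsnames.ora is to configure it on the database service itself; a sketch using srvctl, where the database and service names are illustrative:

# 11.2 srvctl: -e failover type, -m failover method, -P TAF policy
srvctl modify service -d MCLDB -s myapp_svc -P BASIC -e SELECT -m BASIC
srvctl config service -d MCLDB -s myapp_svc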

 
Links
Needs attention on: -
Passed on: tstdb1, MCL2DB, MCL3DB, MCL4DB, MCL5DB, MCL6DB, MCL7DB, MCLDB

Status on tstdb1: PASS => Failover method (SELECT) and failover mode (BASIC) are configured properly


DATA FOR tstdb1 FOR SESSION FAILOVER CONFIGURATION




If no rows are returned, the query found no exceptions and the SQL check passed.


Status on MCL2DB: PASS => Failover method (SELECT) and failover mode (BASIC) are configured properly


DATA FOR MCL2DB FOR SESSION FAILOVER CONFIGURATION




If no rows are returned, the query found no exceptions and the SQL check passed.


Status on MCL3DB: PASS => Failover method (SELECT) and failover mode (BASIC) are configured properly


DATA FOR MCL3DB FOR SESSION FAILOVER CONFIGURATION




If no rows are returned, the query found no exceptions and the SQL check passed.


Status on MCL4DB: PASS => Failover method (SELECT) and failover mode (BASIC) are configured properly


DATA FOR MCL4DB FOR SESSION FAILOVER CONFIGURATION




If no rows are returned, the query found no exceptions and the SQL check passed.


Status on MCL5DB: PASS => Failover method (SELECT) and failover mode (BASIC) are configured properly


DATA FOR MCL5DB FOR SESSION FAILOVER CONFIGURATION




If no rows are returned, the query found no exceptions and the SQL check passed.


Status on MCL6DB: PASS => Failover method (SELECT) and failover mode (BASIC) are configured properly


DATA FOR MCL6DB FOR SESSION FAILOVER CONFIGURATION




If no rows are returned, the query found no exceptions and the SQL check passed.


Status on MCL7DB: PASS => Failover method (SELECT) and failover mode (BASIC) are configured properly


DATA FOR MCL7DB FOR SESSION FAILOVER CONFIGURATION




If no rows are returned, the query found no exceptions and the SQL check passed.


Status on MCLDB: PASS => Failover method (SELECT) and failover mode (BASIC) are configured properly


DATA FOR MCLDB FOR SESSION FAILOVER CONFIGURATION




If no rows are returned, the query found no exceptions and the SQL check passed.

Top

Check for parameter net.core.rmem_max

Success Factor: LINUX DATA COLLECTIONS AND AUDIT CHECKS
Recommendation
 net.core.rmem_max should be set >= 4194304
 
Links
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Kernel Parameter net.core.rmem_max OK

net.core.rmem_max = 4194304

Status on odahost-02: PASS => Kernel Parameter net.core.rmem_max OK

net.core.rmem_max = 4194304
Top

Automatic segment storage management

Success Factor: SQL DATA COLLECTIONS AND CHECKS
Recommendation
 Starting with Oracle 9i, Automatic Segment Space Management (ASSM) can be used by specifying the SEGMENT SPACE MANAGEMENT clause, set to AUTO, in the CREATE TABLESPACE statement. ASSM allows Oracle to use bitmaps to manage the free space within segments. The bitmap describes the status of each data block within a segment with respect to the amount of space available for inserting rows, so Oracle can manage free space automatically. ASSM tablespaces automate freelist management and remove the requirement (and the ability) to specify PCTUSED, FREELISTS, and FREELIST GROUPS storage parameters for individual tables and indexes created in these tablespaces.
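A sketch of the underlying check: list any permanent, locally managed tablespaces that are not using ASSM (an empty result means the check passes):

sqlplus -s / as sysdba <<'EOF'
SELECT tablespace_name, segment_space_management
  FROM dba_tablespaces
 WHERE contents = 'PERMANENT'
   AND extent_management = 'LOCAL'
   AND segment_space_management != 'AUTO';
EOF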
 
Links
Needs attention on: -
Passed on: tstdb1, MCL2DB, MCL3DB, MCL4DB, MCL5DB, MCL6DB, MCL7DB, MCLDB

Status on tstdb1: PASS => All tablespaces are using Automatic segment storage management


DATA FOR tstdb1 FOR AUTOMATIC SEGMENT STORAGE MANAGEMENT




If no rows are returned, the query found no exceptions and the SQL check passed.


Status on MCL2DB: PASS => All tablespaces are using Automatic segment storage management


DATA FOR MCL2DB FOR AUTOMATIC SEGMENT STORAGE MANAGEMENT




If no rows are returned, the query found no exceptions and the SQL check passed.


Status on MCL3DB: PASS => All tablespaces are using Automatic segment storage management


DATA FOR MCL3DB FOR AUTOMATIC SEGMENT STORAGE MANAGEMENT




If no rows are returned, the query found no exceptions and the SQL check passed.


Status on MCL4DB: PASS => All tablespaces are using Automatic segment storage management


DATA FOR MCL4DB FOR AUTOMATIC SEGMENT STORAGE MANAGEMENT




If no rows are returned, the query found no exceptions and the SQL check passed.


Status on MCL5DB: PASS => All tablespaces are using Automatic segment storage management


DATA FOR MCL5DB FOR AUTOMATIC SEGMENT STORAGE MANAGEMENT




If no rows are returned, the query found no exceptions and the SQL check passed.


Status on MCL6DB: PASS => All tablespaces are using Automatic segment storage management


DATA FOR MCL6DB FOR AUTOMATIC SEGMENT STORAGE MANAGEMENT




If no rows are returned, the query found no exceptions and the SQL check passed.


Status on MCL7DB: PASS => All tablespaces are using Automatic segment storage management


DATA FOR MCL7DB FOR AUTOMATIC SEGMENT STORAGE MANAGEMENT




If no rows are returned, the query found no exceptions and the SQL check passed.


Status on MCLDB: PASS => All tablespaces are using Automatic segment storage management


DATA FOR MCLDB FOR AUTOMATIC SEGMENT STORAGE MANAGEMENT




If no rows are returned, the query found no exceptions and the SQL check passed.

Top

Locally managed tablespaces

Success Factor: SQL DATA COLLECTIONS AND CHECKS
Recommendation
 To reduce contention on the data dictionary and rollback data, and to reduce the amount of generated redo, locally managed tablespaces should be used rather than dictionary-managed tablespaces. Please refer to the referenced notes below for more information about the benefits of locally managed tablespaces and how to migrate a tablespace from dictionary-managed to locally managed.
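A sketch of the corresponding check; any rows returned are dictionary-managed tablespaces and candidates for migration:

sqlplus -s / as sysdba <<'EOF'
SELECT tablespace_name
  FROM dba_tablespaces
 WHERE extent_management = 'DICTIONARY';
EOF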
 
Links
Needs attention on: -
Passed on: tstdb1, MCL2DB, MCL3DB, MCL4DB, MCL5DB, MCL6DB, MCL7DB, MCLDB

Status on tstdb1: PASS => All tablespaces are locally managed tablespaces


DATA FOR tstdb1 FOR LOCALLY MANAGED TABLESPACES




no_of_dictionary_managed_tablespace = 0

Status on MCL2DB: PASS => All tablespaces are locally managed tablespaces


DATA FOR MCL2DB FOR LOCALLY MANAGED TABLESPACES




no_of_dictionary_managed_tablespace = 0

Status on MCL3DB: PASS => All tablespaces are locally managed tablespaces


DATA FOR MCL3DB FOR LOCALLY MANAGED TABLESPACES




no_of_dictionary_managed_tablespace = 0

Status on MCL4DB: PASS => All tablespaces are locally managed tablespaces


DATA FOR MCL4DB FOR LOCALLY MANAGED TABLESPACES




no_of_dictionary_managed_tablespace = 0

Status on MCL5DB: PASS => All tablespaces are locally managed tablespaces


DATA FOR MCL5DB FOR LOCALLY MANAGED TABLESPACES




no_of_dictionary_managed_tablespace = 0

Status on MCL6DB: PASS => All tablespaces are locally managed tablespaces


DATA FOR MCL6DB FOR LOCALLY MANAGED TABLESPACES




no_of_dictionary_managed_tablespace = 0

Status on MCL7DB: PASS => All tablespaces are locally managed tablespaces


DATA FOR MCL7DB FOR LOCALLY MANAGED TABLESPACES




no_of_dictionary_managed_tablespace = 0

Status on MCLDB: PASS => All tablespaces are locally managed tablespaces


DATA FOR MCLDB FOR LOCALLY MANAGED TABLESPACES




no_of_dictionary_managed_tablespace = 0
Top

Check for parameter semmns

Success Factor: LINUX DATA COLLECTIONS AND AUDIT CHECKS
Recommendation
 SEMMNS should be set >= 32000
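All four semaphore limits (this check and the SEMMSL, SEMMNI and SEMOPM checks that follow) live in a single Linux kernel tunable, so one command verifies them all:

# kernel.sem packs SEMMSL SEMMNS SEMOPM SEMMNI, in that order
sysctl kernel.sem
# expected here: kernel.sem = 250 32000 100 142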
 
Links
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Kernel Parameter SEMMNS OK

semmns = 32000

Status on odahost-02: PASS => Kernel Parameter SEMMNS OK

semmns = 32000
Top

Check for parameter semmsl

Success Factor: LINUX DATA COLLECTIONS AND AUDIT CHECKS
Recommendation
 SEMMSL should be set >= 250
 
Links
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Kernel Parameter SEMMSL OK

semmsl = 250

Status on odahost-02: PASS => Kernel Parameter SEMMSL OK

semmsl = 250
Top

Check for parameter semmni

Success Factor: LINUX DATA COLLECTIONS AND AUDIT CHECKS
Recommendation
 SEMMNI should be set >= 128
 
Links
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Kernel Parameter SEMMNI OK

semmni = 142

Status on odahost-02: PASS => Kernel Parameter SEMMNI OK

semmni = 142
Top

Check for parameter semopm

Success Factor: LINUX DATA COLLECTIONS AND AUDIT CHECKS
Recommendation
 SEMOPM should be set >= 100 
 
Links
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => Kernel Parameter SEMOPM OK

semopm = 100

Status on odahost-02: PASS => Kernel Parameter SEMOPM OK

semopm = 100
Top

Hostname Formatting

Success Factor: DO NOT USE UNDERSCORE IN HOST OR DOMAIN NAME
Recommendation
 Underscores should not be used in a host or domain name, according to RFC 952 (the DoD Internet host table specification). The same applies to Net, Host, Gateway, and Domain names.
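A sketch of checking the cluster node names directly; olsnodes ships in the grid home, and the grep prints only names containing an underscore:

$CRS_HOME/bin/olsnodes | grep '_' || echo "no underscores found"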


 
Links
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => None of the hostnames contains an underscore character


DATA FROM odahost-01 - MCLDB DATABASE - HOSTNAME FORMATTING



odahost-01
odahost-02

Status on odahost-02: PASS => None of the hostnames contains an underscore character


odahost-01
odahost-02
Top

Check for parameter net.core.rmem_default

Success Factor: VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings as they will appear in the next release of the documentation:

net.core.rmem_default =262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g)  

net.core.wmem_default =262144
net.core.wmem_max =1048576
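A single sysctl call covers this check, the rmem_max check above, and the wmem checks that follow; a sketch using the 11g values above:

# verify all four UDP buffer settings at once
sysctl net.core.rmem_default net.core.rmem_max net.core.wmem_default net.core.wmem_max
# expected (11g): 262144, 4194304, 262144 and 1048576 respectively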
 
Links
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144

Status on odahost-02: PASS => net.core.rmem_default Is Configured Properly

net.core.rmem_default = 262144
Top

Check for parameter net.core.wmem_max

Success Factor: VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings as they will appear in the next release of the documentation:

net.core.rmem_default =262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g)  

net.core.wmem_default =262144
net.core.wmem_max =1048576
 
Links
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048576

Status on odahost-02: PASS => net.core.wmem_max Is Configured Properly

net.core.wmem_max = 1048576
Top

Check for parameter net.core.wmem_default

Success Factor: VALIDATE UDP BUFFER SIZE FOR RAC CLUSTER (LINUX)
Recommendation
 Summary of settings as they will appear in the next release of the documentation:

net.core.rmem_default =262144
net.core.rmem_max = 2097152 (10g)
net.core.rmem_max = 4194304 (11g)  

net.core.wmem_default =262144
net.core.wmem_max =1048576
 
Links
Needs attention on: -
Passed on: odahost-01, odahost-02

Status on odahost-01: PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144

Status on odahost-02: PASS => net.core.wmem_default Is Configured Properly

net.core.wmem_default = 262144
Top

AUDSES$ sequence cache size

Success Factor: CACHE APPLICATION SEQUENCES AND SOME SYSTEM SEQUENCES FOR BETTER PERFORMANCE
Recommendation
 Use a large cache value, perhaps 10,000 or more. NOORDER is most effective for performance, but it means you might not get strict time ordering of sequence numbers.
There are reported problems with AUDSES$ and ORA_TQ_BASE$, which are both internal sequences. Caching matters particularly when the order of an application sequence is not important, or when the sequence is used during the login process and can therefore be involved in a login storm. Sequences that must be presented in a particular order should not be cached, but where order does not matter, caching them improves performance. Contention on uncached sequences also manifests itself as "row cache" waits on "dc_sequences", the row cache type for sequences.


For Oracle Applications this can cause significant issues, especially with transactional sequences.
Please see the attached note.

Oracle General Ledger - Version: 11.5.0 to 11.5.10
Oracle Payables - Version: 11.5.0 to 11.5.10
Oracle Receivables - Version: 11.5.10.2
Information in this document applies to any platform.
ARXTWAI,ARXRWMAI 

Increase the IDGEN1$ cache to a value of 1000 (see the notes below); this is the default as of 11.2.0.1.
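A sketch covering both this check and the IDGEN$ check that follows; the ALTER statements are only needed if a cache value is below the recommendation:

sqlplus -s / as sysdba <<'EOF'
SELECT sequence_name, cache_size
  FROM dba_sequences
 WHERE sequence_owner = 'SYS'
   AND sequence_name IN ('AUDSES$', 'IDGEN1$');
-- raise only if below the recommended values:
-- ALTER SEQUENCE SYS.AUDSES$ CACHE 10000;
-- ALTER SEQUENCE SYS.IDGEN1$ CACHE 1000;
EOF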
 
Links
Needs attention on: -
Passed on: tstdb1, MCL2DB, MCL3DB, MCL4DB, MCL5DB, MCL6DB, MCL7DB, MCLDB

Status on tstdb1: PASS => SYS.AUDSES$ sequence cache size >= 10,000


DATA FOR tstdb1 FOR AUDSES$ SEQUENCE CACHE SIZE




audses$.cache_size = 10000

Status on MCL2DB: PASS => SYS.AUDSES$ sequence cache size >= 10,000


DATA FOR MCL2DB FOR AUDSES$ SEQUENCE CACHE SIZE




audses$.cache_size = 10000

Status on MCL3DB: PASS => SYS.AUDSES$ sequence cache size >= 10,000


DATA FOR MCL3DB FOR AUDSES$ SEQUENCE CACHE SIZE




audses$.cache_size = 10000

Status on MCL4DB: PASS => SYS.AUDSES$ sequence cache size >= 10,000


DATA FOR MCL4DB FOR AUDSES$ SEQUENCE CACHE SIZE




audses$.cache_size = 10000

Status on MCL5DB: PASS => SYS.AUDSES$ sequence cache size >= 10,000


DATA FOR MCL5DB FOR AUDSES$ SEQUENCE CACHE SIZE




audses$.cache_size = 10000

Status on MCL6DB: PASS => SYS.AUDSES$ sequence cache size >= 10,000


DATA FOR MCL6DB FOR AUDSES$ SEQUENCE CACHE SIZE




audses$.cache_size = 10000

Status on MCL7DB: PASS => SYS.AUDSES$ sequence cache size >= 10,000


DATA FOR MCL7DB FOR AUDSES$ SEQUENCE CACHE SIZE




audses$.cache_size = 10000

Status on MCLDB: PASS => SYS.AUDSES$ sequence cache size >= 10,000


DATA FOR MCLDB FOR AUDSES$ SEQUENCE CACHE SIZE




audses$.cache_size = 10000
Top

IDGEN$ sequence cache size

Success Factor: CACHE APPLICATION SEQUENCES AND SOME SYSTEM SEQUENCES FOR BETTER PERFORMANCE
Recommendation
 Sequence contention (SQ enqueue) can occur if the SYS.IDGEN1$ sequence cache is not set to 1000. This condition can lead to performance issues in RAC. 1000 is the default cache size starting in version 11.2.0.1.
 
Links
Needs attention on: -
Passed on: tstdb1, MCL2DB, MCL3DB, MCL4DB, MCL5DB, MCL6DB, MCL7DB, MCLDB

Status on tstdb1: PASS => SYS.IDGEN1$ sequence cache size >= 1,000


DATA FOR tstdb1 FOR IDGEN$ SEQUENCE CACHE SIZE




idgen1$.cache_size = 1000

Status on MCL2DB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000


DATA FOR MCL2DB FOR IDGEN$ SEQUENCE CACHE SIZE




idgen1$.cache_size = 1000

Status on MCL3DB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000


DATA FOR MCL3DB FOR IDGEN$ SEQUENCE CACHE SIZE




idgen1$.cache_size = 1000

Status on MCL4DB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000


DATA FOR MCL4DB FOR IDGEN$ SEQUENCE CACHE SIZE




idgen1$.cache_size = 1000

Status on MCL5DB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000


DATA FOR MCL5DB FOR IDGEN$ SEQUENCE CACHE SIZE




idgen1$.cache_size = 1000

Status on MCL6DB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000


DATA FOR MCL6DB FOR IDGEN$ SEQUENCE CACHE SIZE




idgen1$.cache_size = 1000

Status on MCL7DB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000


DATA FOR MCL7DB FOR IDGEN$ SEQUENCE CACHE SIZE




idgen1$.cache_size = 1000

Status on MCLDB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000


DATA FOR MCLDB FOR IDGEN$ SEQUENCE CACHE SIZE




idgen1$.cache_size = 1000
Top