Oracle Database Appliance Assessment Report
|
Cluster Name | odacluster01-c |
OS Version | Linux x86-64 OEL/RHEL 5 2.6.32-300.11.1.el5uek |
CRS Home - Version | /u01/app/11.2.0.3/grid - 11.2.0.3.0 |
DB Home - Version - Names | /u01/app/oracle/product/11.2.0.3/dbhome_1 - 11.2.0.3.0 - 8 |
Number of nodes | 2 |
Database Servers | 2 |
odachk Version | 2.1.5_20120524 |
Collection | odachk_MCLDB_081512_092757.zip |
Collection Date | 15-Aug-2012 09:30:35 |
Removing findings in this page does not change the original HTML file. Use the browser's Save Page function (or press Ctrl+S) to save the report.
FAIL, WARNING, ERROR, and INFO findings should all be evaluated. INFO status is considered a significant finding; review its details in light of your environment.
Status | Type | Message | Status On | Details |
---|---|---|---|---|
WARNING | OS Check | One or more warnings for network and bonding interface checks | All Database Servers | View |
WARNING | Database Check | Local listener init parameter is not set to local node VIP | odahost-01:MCL6DB | View |
Status | Type | Message | Status On | Details |
---|---|---|---|---|
PASS | Database Check | Remote listener is set to SCAN name | All Databases | View |
PASS | Database Check | Value of remote_listener parameter is able to tnsping | All Databases | View |
PASS | SQL Parameter Check | Database Parameter parallel_execution_message_size is set to the recommended value | All Instances | View |
PASS | OS Check | The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr) | All Database Servers | View |
PASS | OS Check | pam_limits configured properly for shell limits | All Database Servers | View |
PASS | SQL Parameter Check | Database parameter DB_BLOCK_CHECKSUM is set to recommended value | All Instances | View |
PASS | SQL Parameter Check | Database parameter DB_BLOCK_CHECKING is set to the recommended value | All Instances | View |
PASS | SQL Parameter Check | ASM parameter MEMORY_TARGET is set according to recommended value. | All Instances | View |
PASS | SQL Check | All bigfile tablespaces have non-default maxbytes values set | All Databases | View |
PASS | OS Check | OS Disk Storage checks successful | All Database Servers | View |
PASS | OS Check | System component checks successful | All Database Servers | View |
PASS | OS Check | Shared storage checks successful | All Database Servers | View |
PASS | OS Check | All software and firmware versions are up to date with OAK repository. | All Database Servers | View |
PASS | Database Check | Database parameter Db_create_online_log_dest_n is set to recommended value | All Databases | View |
PASS | Database Check | Database parameter db_recovery_file_dest_size is set to recommended value | All Databases | View |
PASS | SQL Parameter Check | Database parameter GLOBAL_NAMES is set to recommended value | All Instances | View |
PASS | SQL Parameter Check | Database parameter DB_LOST_WRITE_PROTECT is set to recommended value | All Instances | View |
PASS | OS Check | Shell limit soft nproc for DB is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit hard stack for DB is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit hard nofile for DB is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit hard nproc for DB is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit hard nproc for GI is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit hard nofile for GI is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit soft nproc for GI is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Shell limit soft nofile for GI is configured according to recommendation | All Database Servers | View |
PASS | ASM Check | All disk groups have compatible.asm parameter set to recommended values | All ASM Instances | View |
PASS | ASM Check | All disk groups have allocation unit size set to 4MB | All ASM Instances | View |
PASS | OS Check | OSWatcher is running | All Database Servers | View |
PASS | OS Check | ohasd Log Ownership is Correct (root root) | All Database Servers | View |
PASS | OS Check | ohasd/orarootagent_root Log Ownership is Correct (root root) | All Database Servers | View |
PASS | OS Check | crsd/orarootagent_root Log Ownership is Correct (root root) | All Database Servers | View |
PASS | OS Check | crsd Log Ownership is Correct (root root) | All Database Servers | View |
PASS | OS Check | NIC bonding mode is not set to Broadcast(3) for public network | All Database Servers | View |
PASS | OS Check | NIC bonding is configured for public network (VIP) | All Database Servers | View |
PASS | OS Check | CRS version is higher than or equal to ASM version. | All Database Servers | View |
PASS | Database Check | Local listener init parameter is set to local node VIP | odahost-01:tstdb1, odahost-01:MCL2DB, odahost-01:MCL3DB, odahost-01:MCL4DB, odahost-01:MCL5DB ... more | View |
PASS | OS Check | All voting disks are online | All Database Servers | View |
PASS | OS Check | ip_local_port_range is configured according to recommendation | All Database Servers | View |
PASS | OS Check | Linux Swap Configuration meets or exceeds Recommendation | All Database Servers | View |
PASS | OS Check | $ORACLE_HOME/bin/oradism ownership is root | All Database Servers | View |
PASS | OS Check | $ORACLE_HOME/bin/oradism setuid bit is set | All Database Servers | View |
PASS | SQL Check | Failover method (SELECT) and failover mode (BASIC) are configured properly | All Databases | View |
PASS | OS Check | Kernel Parameter net.core.rmem_max OK | All Database Servers | View |
PASS | SQL Check | All tablespaces are using Automatic segment storage management | All Databases | View |
PASS | SQL Check | All tablespaces are locally managed tablespaces | All Databases | View |
PASS | OS Check | Kernel Parameter SEMMNS OK | All Database Servers | View |
PASS | OS Check | Kernel Parameter SEMMSL OK | All Database Servers | View |
PASS | OS Check | Kernel Parameter SEMMNI OK | All Database Servers | View |
PASS | OS Check | Kernel Parameter SEMOPM OK | All Database Servers | View |
PASS | OS Check | None of the hostnames contains an underscore character | All Database Servers | View |
PASS | OS Check | net.core.rmem_default Is Configured Properly | All Database Servers | View |
PASS | OS Check | net.core.wmem_max Is Configured Properly | All Database Servers | View |
PASS | OS Check | net.core.wmem_default Is Configured Properly | All Database Servers | View |
PASS | SQL Check | SYS.AUDSES$ sequence cache size >= 10,000 | All Databases | View |
PASS | SQL Check | SYS.IDGEN1$ sequence cache size >= 1,000 | All Databases | View |
Status | Type | Message | Status On | Details |
---|---|---|---|---|
PASS | Cluster Wide Check | RDBMS home /u01/app/oracle/product/11.2.0.3/dbhome_1 has same number of patches installed across the cluster | Cluster Wide | - |
PASS | Cluster Wide Check | RDBMS software version matches across cluster. | Cluster Wide | View |
PASS | Cluster Wide Check | All nodes are using same NTP server across cluster | Cluster Wide | View |
PASS | Cluster Wide Check | Time zone matches for root user across cluster | Cluster Wide | View |
PASS | Cluster Wide Check | Time zone matches for GI/CRS software owner across cluster | Cluster Wide | View |
PASS | Cluster Wide Check | OS Kernel version(uname -r) matches across cluster. | Cluster Wide | View |
PASS | Cluster Wide Check | Clusterware active version matches across cluster. | Cluster Wide | View |
PASS | Cluster Wide Check | Timezone matches for current user across cluster. | Cluster Wide | View |
PASS | Cluster Wide Check | Public network interface names are the same across cluster | Cluster Wide | View |
PASS | Cluster Wide Check | RDBMS software owner UID matches across cluster | Cluster Wide | View |
PASS | Cluster Wide Check | Private interconnect interface names are the same across cluster | Cluster Wide | View |
Best Practices and Other Recommendations are generally items documented in various sources that could be overlooked. odachk assesses them and calls attention to any findings.
Status on Cluster Wide: PASS => RDBMS software version matches across cluster. |
odahost-01 = 11.2.0.3.0 odahost-02 = 11.2.0.3.0 |
Status on Cluster Wide: PASS => All nodes are using same NTP server across cluster |
odahost-01 = 10.15.2.1 odahost-02 = 10.15.2.1 |
Status on Cluster Wide: PASS => Time zone matches for root user across cluster |
odahost-01 = == odahost-02 = == |
Status on Cluster Wide: PASS => Time zone matches for GI/CRS software owner across cluster |
odahost-01 = == odahost-02 = == |
Status on Cluster Wide: PASS => OS Kernel version(uname -r) matches across cluster. |
odahost-01 = 2.6.32-300.11.1.el5uek odahost-02 = 2.6.32-300.11.1.el5uek |
Status on Cluster Wide: PASS => Clusterware active version matches across cluster. |
odahost-01 = 11.2.0.3.0 odahost-02 = 11.2.0.3.0 |
Status on Cluster Wide: PASS => Timezone matches for current user across cluster. |
odahost-01 = CEST odahost-02 = CEST |
Status on Cluster Wide: PASS => Public network interface names are the same across cluster |
odahost-01 = bond0 odahost-02 = bond0 |
Status on Cluster Wide: PASS => RDBMS software owner UID matches across cluster |
odahost-01 = 1001 odahost-02 = 1001 |
Status on Cluster Wide: PASS => Private interconnect interface names are the same across cluster |
odahost-01 = eth1 odahost-02 = eth1 |
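Each cluster-wide check above reduces to the same rule: collect one value per node and pass only when every node reports the identical value. A minimal sketch of that comparison (function name and return shape are illustrative, not odachk's internals):

```python
def cluster_wide_check(values_by_node):
    """Return (status, detail) for a 'value matches across cluster' check.

    values_by_node: dict mapping hostname -> observed value,
    e.g. {"odahost-01": "bond0", "odahost-02": "bond0"}.
    """
    # PASS only when every node reports exactly the same value
    status = "PASS" if len(set(values_by_node.values())) == 1 else "FAIL"
    # Detail string in the same "host = value" style as the report rows
    detail = " ".join(f"{h} = {v}" for h, v in sorted(values_by_node.items()))
    return status, detail

print(cluster_wide_check({"odahost-01": "bond0", "odahost-02": "bond0"}))
# ('PASS', 'odahost-01 = bond0 odahost-02 = bond0')
```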
Status on odahost-02:tstdb1: PASS => Remote listener is set to SCAN name |
remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01 |
Status on odahost-02:MCL2DB: PASS => Remote listener is set to SCAN name |
remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01 |
Status on odahost-02:MCL3DB: PASS => Remote listener is set to SCAN name |
remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01 |
Status on odahost-02:MCL4DB: PASS => Remote listener is set to SCAN name |
remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01 |
Status on odahost-02:MCL5DB: PASS => Remote listener is set to SCAN name |
remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01 |
Status on odahost-02:MCL7DB: PASS => Remote listener is set to SCAN name |
remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01 |
Status on odahost-02:MCLDB: PASS => Remote listener is set to SCAN name |
remote listener name=gv-oradbc-t01 scan name= gv-oradbc-t01 |
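The remote-listener checks above are a straightforward string comparison between the `remote_listener` init parameter and the cluster SCAN name. A sketch of that comparison; the whitespace trimming and case-insensitive match are assumptions, not necessarily odachk's exact rule:

```python
def check_remote_listener(remote_listener, scan_name):
    """PASS when remote_listener matches the SCAN name.

    Trims whitespace (note the stray space after 'scan name=' in the
    report rows) and compares case-insensitively — both assumptions.
    """
    if remote_listener.strip().lower() == scan_name.strip().lower():
        return "PASS"
    return "FAIL"

print(check_remote_listener("gv-oradbc-t01", " gv-oradbc-t01"))  # PASS
```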
Status on tstdb11: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
tstdb11.parallel_execution_message_size = 16384 |
Status on MCL2DB1: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
MCL2DB1.parallel_execution_message_size = 16384 |
Status on MCL3DB1: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
MCL3DB1.parallel_execution_message_size = 16384 |
Status on MCL4DB1: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
MCL4DB1.parallel_execution_message_size = 16384 |
Status on MCL5DB1: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
MCL5DB1.parallel_execution_message_size = 16384 |
Status on MCL6DB: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
MCL6DB.parallel_execution_message_size = 16384 |
Status on MCL7DB1: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
MCL7DB1.parallel_execution_message_size = 16384 |
Status on MCLDB1: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
MCLDB1.parallel_execution_message_size = 16384 |
Status on tstdb12: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
tstdb12.parallel_execution_message_size = 16384 |
Status on MCL2DB2: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
MCL2DB2.parallel_execution_message_size = 16384 |
Status on MCL3DB2: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
MCL3DB2.parallel_execution_message_size = 16384 |
Status on MCL4DB2: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
MCL4DB2.parallel_execution_message_size = 16384 |
Status on MCL5DB2: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
MCL5DB2.parallel_execution_message_size = 16384 |
Status on MCL7DB2: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
MCL7DB2.parallel_execution_message_size = 16384 |
Status on MCLDB2: PASS => Database Parameter parallel_execution_message_size is set to the recommended value |
MCLDB2.parallel_execution_message_size = 16384 |
Success Factor | ORACLE DATABASE APPLIANCE (ODA) |
Recommendation | All network and bonding interface checks are expected to be successful |
Needs attention on | odahost-01, odahost-02 |
Passed on | - |
Status on odahost-01: WARNING => One or more warnings for network and bonding interface checks |
DATA FROM odahost-01 FOR NETWORK AND BONDING INTERFACES STATUS INFO: Doing oak network checks RESULT: Detected active link for interface eth0 with link speed 1000Mb/s RESULT: Detected active link for interface eth1 with link speed 1000Mb/s RESULT: Detected active link for interface eth2 with link speed 1000Mb/s RESULT: Detected active link for interface eth3 with link speed 1000Mb/s RESULT: Detected active link for interface eth4 with link speed 1000Mb/s RESULT: Detected active link for interface eth5 with link speed 1000Mb/s WARNING: No Link detected for interface eth6 WARNING: No Link detected for interface eth7 WARNING: No Link detected for interface eth8 WARNING: No Link detected for interface eth9 INFO: Checking bonding interface status RESULT: Bond interface bond0 is up configured in mode:fault-tolerance (active-backup) with current active interface as eth2 Slave1 interface is eth2 with status:up Link fail count=0 Maccaddr:00:21:28:d6:14:4c Slave2 interface is eth3 with status:up Link fail count=0 Maccaddr:00:21:28:d6:14:4d RESULT: Bond interface bond1 is up configured in mode:fault-tolerance (active-backup) with current active interface as eth4 |
Status on odahost-02: WARNING => One or more warnings for network and bonding interface checks |
DATA FROM odahost-02 FOR NETWORK AND BONDING INTERFACES STATUS INFO: Doing oak network checks RESULT: Detected active link for interface eth0 with link speed 1000Mb/s RESULT: Detected active link for interface eth1 with link speed 1000Mb/s RESULT: Detected active link for interface eth2 with link speed 1000Mb/s RESULT: Detected active link for interface eth3 with link speed 1000Mb/s RESULT: Detected active link for interface eth4 with link speed 1000Mb/s RESULT: Detected active link for interface eth5 with link speed 1000Mb/s WARNING: No Link detected for interface eth6 WARNING: No Link detected for interface eth7 WARNING: No Link detected for interface eth8 WARNING: No Link detected for interface eth9 INFO: Checking bonding interface status RESULT: Bond interface bond0 is up configured in mode:fault-tolerance (active-backup) with current active interface as eth2 Slave1 interface is eth2 with status:up Link fail count=0 Maccaddr:00:21:28:d6:14:6a Slave2 interface is eth3 with status:up Link fail count=0 Maccaddr:00:21:28:d6:14:6b RESULT: Bond interface bond1 is up configured in mode:fault-tolerance (active-backup) with current active interface as eth4 |
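The warning on both nodes comes entirely from the `WARNING: No Link detected for interface ethN` lines in the oak network-check output; the bonded interfaces themselves are up. Those lines can be triaged mechanically, e.g. to confirm only unused ports are affected. A sketch, assuming the exact line format shown above:

```python
import re

# Abridged sample of the oak network-check output quoted above
SAMPLE = """\
RESULT: Detected active link for interface eth0 with link speed 1000Mb/s
WARNING: No Link detected for interface eth6
WARNING: No Link detected for interface eth7
WARNING: No Link detected for interface eth8
WARNING: No Link detected for interface eth9
"""

def unlinked_interfaces(report_text):
    """Collect interfaces flagged 'No Link detected' in the check output."""
    return re.findall(r"WARNING: No Link detected for interface (\S+)", report_text)

print(unlinked_interfaces(SAMPLE))  # ['eth6', 'eth7', 'eth8', 'eth9']
```

If the unlinked interfaces are intentionally uncabled spares, the warning can be documented and accepted rather than fixed.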
Status on odahost-01: PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr) |
DATA FROM odahost-01 - MCLDB DATABASE - MAXIMUM PARALLEL ASYNCH IO aio-max-nr = 3145728 |
Status on odahost-02: PASS => The number of async IO descriptors is sufficient (/proc/sys/fs/aio-max-nr) |
aio-max-nr = 3145728 |
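The async I/O check is a simple threshold comparison against `/proc/sys/fs/aio-max-nr`. A sketch of that comparison; the minimum used here (1048576, a commonly cited Oracle recommendation) is an assumption — the reported value of 3145728 clears it, but verify the threshold your odachk version applies:

```python
RECOMMENDED_MIN = 1048576  # assumed minimum; confirm against your odachk version

def check_aio_max_nr(observed, minimum=RECOMMENDED_MIN):
    """Return 'PASS' when the async I/O descriptor limit meets the minimum.

    On a live host, `observed` would be read from /proc/sys/fs/aio-max-nr.
    """
    return "PASS" if observed >= minimum else "FAIL"

print(check_aio_max_nr(3145728))  # PASS, matching the value reported above
```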
Status on odahost-01: PASS => pam_limits configured properly for shell limits |
DATA FROM odahost-01 - MCLDB DATABASE - PAM_LIMITS CHECK #%PAM-1.0 # This file is auto-generated. # User changes will be destroyed the next time authconfig is run. auth required pam_env.so auth sufficient pam_unix.so nullok try_first_pass auth requisite pam_succeed_if.so uid >= 500 quiet auth required pam_deny.so account required pam_unix.so account sufficient pam_succeed_if.so uid < 500 quiet account required pam_permit.so password requisite pam_cracklib.so try_first_pass retry=3 password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok password required pam_deny.so |
Status on tstdb11: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
tstdb11.db_block_checksum = FULL |
Status on MCL2DB1: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
MCL2DB1.db_block_checksum = FULL |
Status on MCL3DB1: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
MCL3DB1.db_block_checksum = FULL |
Status on MCL4DB1: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
MCL4DB1.db_block_checksum = FULL |
Status on MCL5DB1: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
MCL5DB1.db_block_checksum = FULL |
Status on MCL6DB: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
MCL6DB.db_block_checksum = FULL |
Status on MCL7DB1: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
MCL7DB1.db_block_checksum = FULL |
Status on MCLDB1: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
MCLDB1.db_block_checksum = FULL |
Status on tstdb12: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
tstdb12.db_block_checksum = FULL |
Status on MCL2DB2: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
MCL2DB2.db_block_checksum = FULL |
Status on MCL3DB2: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
MCL3DB2.db_block_checksum = FULL |
Status on MCL4DB2: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
MCL4DB2.db_block_checksum = FULL |
Status on MCL5DB2: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
MCL5DB2.db_block_checksum = FULL |
Status on MCL7DB2: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
MCL7DB2.db_block_checksum = FULL |
Status on MCLDB2: PASS => Database parameter DB_BLOCK_CHECKSUM is set to recommended value |
MCLDB2.db_block_checksum = FULL |
Status on tstdb11: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
tstdb11.db_block_checking = FULL |
Status on MCL2DB1: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
MCL2DB1.db_block_checking = FULL |
Status on MCL3DB1: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
MCL3DB1.db_block_checking = FULL |
Status on MCL4DB1: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
MCL4DB1.db_block_checking = FULL |
Status on MCL5DB1: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
MCL5DB1.db_block_checking = FULL |
Status on MCL6DB: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
MCL6DB.db_block_checking = FULL |
Status on MCL7DB1: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
MCL7DB1.db_block_checking = FULL |
Status on MCLDB1: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
MCLDB1.db_block_checking = FULL |
Status on tstdb12: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
tstdb12.db_block_checking = FULL |
Status on MCL2DB2: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
MCL2DB2.db_block_checking = FULL |
Status on MCL3DB2: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
MCL3DB2.db_block_checking = FULL |
Status on MCL4DB2: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
MCL4DB2.db_block_checking = FULL |
Status on MCL5DB2: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
MCL5DB2.db_block_checking = FULL |
Status on MCL7DB2: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
MCL7DB2.db_block_checking = FULL |
Status on MCLDB2: PASS => Database parameter DB_BLOCK_CHECKING is set to the recommended value |
MCLDB2.db_block_checking = FULL |
Status on +ASM1: PASS => ASM parameter MEMORY_TARGET is set according to recommended value. |
+ASM1.memory_target = 1073741824 |
Status on +ASM2: PASS => ASM parameter MEMORY_TARGET is set according to recommended value. |
+ASM2.memory_target = 1073741824 |
Success Factor | ORACLE DATABASE APPLIANCE (ODA) |
Recommendation | All OS Disk Storage checks are expected to be successful |
Needs attention on | - |
Passed on | odahost-01, odahost-02 |
Status on odahost-01: PASS => OS Disk Storage checks successful |
DATA FROM odahost-01 FOR OS DISK STORAGE STATUS INFO: Checking Operating System Storage SUCCESS: The OS disks have the boot stamp RESULT: Raid device /dev/md0 found clean RESULT: Raid device /dev/md1 found clean RESULT: Physical Volume /dev/md1 in VolGroupSys has 270213.84M out of total 499994.59M RESULT: Volumegroup VolGroupSys consist of 1 physical volumes,contains 4 logical volumes, has 0 volume snaps with total size of 499994.59M and free space of 270213.84M RESULT: Logical Volume LogVolOpt in VolGroupSys Volume group is of size 60.00G RESULT: Logical Volume LogVolRoot in VolGroupSys Volume group is of size 30.00G RESULT: Logical Volume LogVolSwap in VolGroupSys Volume group is of size 24.00G RESULT: Logical Volume LogVolU01 in VolGroupSys Volume group is of size 100.00G RESULT: Device /dev/mapper/VolGroupSys-LogVolRoot is mounted on / of type ext3 in (rw) RESULT: Device /dev/md0 is mounted on /boot of type ext3 in (rw) RESULT: Device /dev/mapper/VolGroupSys-LogVolOpt is mounted on /opt of type ext3 in (rw) RESULT: Device /dev/mapper/VolGroupSys-LogVolU01 is mounted on /u01 of type ext3 in (rw) RESULT: / has 25042 MB free out of total 29758 MB RESULT: /boot has 42 MB free out of total 99 MB |
Status on odahost-02: PASS => OS Disk Storage checks successful |
DATA FROM odahost-02 FOR OS DISK STORAGE STATUS INFO: Checking Operating System Storage SUCCESS: The OS disks have the boot stamp RESULT: Raid device /dev/md0 found clean RESULT: Raid device /dev/md1 found clean RESULT: Physical Volume /dev/md1 in VolGroupSys has 270213.84M out of total 499994.59M RESULT: Volumegroup VolGroupSys consist of 1 physical volumes,contains 4 logical volumes, has 0 volume snaps with total size of 499994.59M and free space of 270213.84M RESULT: Logical Volume LogVolOpt in VolGroupSys Volume group is of size 60.00G RESULT: Logical Volume LogVolRoot in VolGroupSys Volume group is of size 30.00G RESULT: Logical Volume LogVolSwap in VolGroupSys Volume group is of size 24.00G RESULT: Logical Volume LogVolU01 in VolGroupSys Volume group is of size 100.00G RESULT: Device /dev/mapper/VolGroupSys-LogVolRoot is mounted on / of type ext3 in (rw) RESULT: Device /dev/md0 is mounted on /boot of type ext3 in (rw) RESULT: Device /dev/mapper/VolGroupSys-LogVolOpt is mounted on /opt of type ext3 in (rw) RESULT: Device /dev/mapper/VolGroupSys-LogVolU01 is mounted on /u01 of type ext3 in (rw) RESULT: / has 25036 MB free out of total 29758 MB RESULT: /boot has 42 MB free out of total 99 MB |
Success Factor | ORACLE DATABASE APPLIANCE (ODA) |
Recommendation | All system component checks are expected to be successful |
Needs attention on | - |
Passed on | odahost-01, odahost-02 |
Status on odahost-01: PASS => System component checks successful |
DATA FROM odahost-01 FOR SYSTEM COMPONENT STATUS INFO: oak system information and Validations RESULT: System Software inventory details Reading the metadata. It takes a while... System Version Component Name Installed Version Supported Version -------------- --------------- ------------------ ----------------- 2.3.0.0.0 Controller 05.00.29.00 Up-to-date Expander 0342 Up-to-date SSD_SHARED E125 Up-to-date HDD_LOCAL SA03 Up-to-date HDD_SHARED 0B25 Up-to-date ILOM 3.0.16.22 r73911 Up-to-date BIOS 12010309 Up-to-date IPMI 1.8.10.4 Up-to-date HMP 2.2.4 Up-to-date OAK 2.3.0.0.0 Up-to-date |
Status on odahost-02: PASS => System component checks successful |
DATA FROM odahost-02 FOR SYSTEM COMPONENT STATUS INFO: oak system information and Validations RESULT: System Software inventory details Reading the metadata. It takes a while... System Version Component Name Installed Version Supported Version -------------- --------------- ------------------ ----------------- 2.3.0.0.0 Controller 05.00.29.00 Up-to-date Expander 0342 Up-to-date SSD_SHARED E125 Up-to-date HDD_LOCAL SA03 Up-to-date HDD_SHARED 0B25 Up-to-date ILOM 3.0.16.22 r73911 Up-to-date BIOS 12010309 Up-to-date IPMI 1.8.10.4 Up-to-date HMP 2.2.4 Up-to-date OAK 2.3.0.0.0 Up-to-date |
Success Factor | ORACLE DATABASE APPLIANCE (ODA) |
Recommendation | All shared storage checks are expected to be successful |
Needs attention on | - |
Passed on | odahost-01, odahost-02 |
Status on odahost-01: PASS => Shared storage checks successful |
DATA FROM odahost-01 FOR VALIDATE SHARED STORAGE INFO: Checking Shared Storage RESULT: Disk HDD_E0_S00_966615931 path1 status active device sdc with status active path2 status enabled device sdam with status active SUCCESS: HDD_E0_S00_966615931 has both the paths up and current active path is sdc RESULT: Disk HDD_E0_S01_966589563 path1 status active device sdm with status active path2 status enabled device sdaw with status active SUCCESS: HDD_E0_S01_966589563 has both the paths up and current active path is sdm RESULT: Disk HDD_E0_S04_966044031 path1 status active device sdd with status active path2 status enabled device sdan with status active SUCCESS: HDD_E0_S04_966044031 has both the paths up and current active path is sdd RESULT: Disk HDD_E0_S05_966615123 path1 status active device sdn with status active path2 status enabled device sdax with status active SUCCESS: HDD_E0_S05_966615123 has both the paths up and current active path is sdn RESULT: Disk HDD_E0_S08_967037407 path1 status active device sde with status active path2 status enabled device sdao with status active SUCCESS: HDD_E0_S08_967037407 has both the paths up and current active path is sde RESULT: Disk HDD_E0_S09_966788687 path1 status active device sdk with status active path2 status enabled device sdau with status active SUCCESS: HDD_E0_S09_966788687 has both the paths up and current active path is sdk RESULT: Disk HDD_E0_S12_966579103 path1 status active device sdf with status active path2 status enabled device sdap with status active SUCCESS: HDD_E0_S12_966579103 has both the paths up and current active path is sdf RESULT: Disk HDD_E0_S13_967038227 path1 status active device sdl with status active path2 status enabled device sdav with status active ...More |
Status on odahost-02: PASS => Shared storage checks successful |
DATA FROM odahost-02 FOR VALIDATE SHARED STORAGE INFO: Checking Shared Storage RESULT: Disk HDD_E0_S00_966615931 path1 status active device sdc with status active path2 status enabled device sdam with status active SUCCESS: HDD_E0_S00_966615931 has both the paths up and current active path is sdc RESULT: Disk HDD_E0_S01_966589563 path1 status active device sdm with status active path2 status enabled device sdaw with status active SUCCESS: HDD_E0_S01_966589563 has both the paths up and current active path is sdm RESULT: Disk HDD_E0_S04_966044031 path1 status active device sdd with status active path2 status enabled device sdan with status active SUCCESS: HDD_E0_S04_966044031 has both the paths up and current active path is sdd RESULT: Disk HDD_E0_S05_966615123 path1 status active device sdn with status active path2 status enabled device sdax with status active SUCCESS: HDD_E0_S05_966615123 has both the paths up and current active path is sdn RESULT: Disk HDD_E0_S08_967037407 path1 status active device sde with status active path2 status enabled device sdao with status active SUCCESS: HDD_E0_S08_967037407 has both the paths up and current active path is sde RESULT: Disk HDD_E0_S09_966788687 path1 status active device sdk with status active path2 status enabled device sdau with status active SUCCESS: HDD_E0_S09_966788687 has both the paths up and current active path is sdk RESULT: Disk HDD_E0_S12_966579103 path1 status active device sdf with status active path2 status enabled device sdap with status active SUCCESS: HDD_E0_S12_966579103 has both the paths up and current active path is sdf RESULT: Disk HDD_E0_S13_967038227 path1 status active device sdl with status active path2 status enabled device sdav with status active ...More |
Status on odahost-02:tstdb1: PASS => Database parameter db_recovery_file_dest_size is set to recommended value |
90% of RECO Total Space = 5740GB db_recovery_file_dest_size= 1800GB |
Status on odahost-02:MCL2DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value |
90% of RECO Total Space = 5740GB db_recovery_file_dest_size= 10GB |
Status on odahost-02:MCL3DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value |
90% of RECO Total Space = 5740GB db_recovery_file_dest_size= 10GB |
Status on odahost-02:MCL4DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value |
90% of RECO Total Space = 5740GB db_recovery_file_dest_size= 10GB |
Status on odahost-02:MCL5DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value |
90% of RECO Total Space = 5740GB db_recovery_file_dest_size= 10GB |
Status on odahost-02:MCL7DB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value |
90% of RECO Total Space = 5740GB db_recovery_file_dest_size= 10GB |
Status on odahost-02:MCLDB: PASS => Database parameter db_recovery_file_dest_size is set to recommended value |
90% of RECO Total Space = 5740GB db_recovery_file_dest_size= 10GB |
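The rows above compare each database's `db_recovery_file_dest_size` against 90% of the RECO disk group's total space. A sketch of that comparison, using the limit as printed in the report; note the real odachk rule may aggregate the sizes of all databases sharing RECO, which this single-database check does not:

```python
def check_recovery_dest_size(dest_size_gb, reco_limit_gb):
    """PASS when db_recovery_file_dest_size is within the 90%-of-RECO limit.

    reco_limit_gb is the precomputed '90% of RECO Total Space' figure
    (5740GB in the rows above). Single-database check only — the actual
    rule may sum the sizes across all databases sharing the disk group.
    """
    return "PASS" if dest_size_gb <= reco_limit_gb else "FAIL"

print(check_recovery_dest_size(10, 5740))    # MCL2DB row above -> PASS
print(check_recovery_dest_size(1800, 5740))  # tstdb1 row above -> PASS
```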
Status on tstdb11: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
tstdb11.global_names = TRUE |
Status on MCL2DB1: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
MCL2DB1.global_names = TRUE |
Status on MCL3DB1: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
MCL3DB1.global_names = TRUE |
Status on MCL4DB1: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
MCL4DB1.global_names = TRUE |
Status on MCL5DB1: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
MCL5DB1.global_names = TRUE |
Status on MCL6DB: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
MCL6DB.global_names = TRUE |
Status on MCL7DB1: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
MCL7DB1.global_names = TRUE |
Status on MCLDB1: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
MCLDB1.global_names = TRUE |
Status on tstdb12: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
tstdb12.global_names = TRUE |
Status on MCL2DB2: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
MCL2DB2.global_names = TRUE |
Status on MCL3DB2: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
MCL3DB2.global_names = TRUE |
Status on MCL4DB2: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
MCL4DB2.global_names = TRUE |
Status on MCL5DB2: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
MCL5DB2.global_names = TRUE |
Status on MCL7DB2: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
MCL7DB2.global_names = TRUE |
Status on MCLDB2: PASS => Database parameter GLOBAL_NAMES is set to recommended value |
MCLDB2.global_names = TRUE |
Status on tstdb11: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
tstdb11.db_lost_write_protect = TYPICAL |
Status on MCL2DB1: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCL2DB1.db_lost_write_protect = TYPICAL |
Status on MCL3DB1: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCL3DB1.db_lost_write_protect = TYPICAL |
Status on MCL4DB1: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCL4DB1.db_lost_write_protect = TYPICAL |
Status on MCL5DB1: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCL5DB1.db_lost_write_protect = TYPICAL |
Status on MCL6DB: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCL6DB.db_lost_write_protect = TYPICAL |
Status on MCL7DB1: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCL7DB1.db_lost_write_protect = TYPICAL |
Status on MCLDB1: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCLDB1.db_lost_write_protect = TYPICAL |
Status on tstdb12: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
tstdb12.db_lost_write_protect = TYPICAL |
Status on MCL2DB2: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCL2DB2.db_lost_write_protect = TYPICAL |
Status on MCL3DB2: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCL3DB2.db_lost_write_protect = TYPICAL |
Status on MCL4DB2: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCL4DB2.db_lost_write_protect = TYPICAL |
Status on MCL5DB2: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCL5DB2.db_lost_write_protect = TYPICAL |
Status on MCL7DB2: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCL7DB2.db_lost_write_protect = TYPICAL |
Status on MCLDB2: PASS => Database parameter DB_LOST_WRITE_PROTECT is set to recommended value |
MCLDB2.db_lost_write_protect = TYPICAL |
Status on odahost-01: PASS => Shell limit soft nproc for DB is configured according to recommendation |
DATA FROM odahost-01 - MCLDB DATABASE - DB SHELL LIMITS SOFT NPROC oracle soft nproc 131072 |
Status on odahost-02: PASS => Shell limit soft nproc for DB is configured according to recommendation |
oracle soft nproc 131072 |
Status on odahost-01: PASS => Shell limit hard stack for DB is configured according to recommendation |
DATA FROM odahost-01 - MCLDB DATABASE - DB SHELL LIMITS HARD STACK oracle hard stack unlimited |
Status on odahost-02: PASS => Shell limit hard stack for DB is configured according to recommendation |
oracle hard stack unlimited |
Status on odahost-01: PASS => Shell limit hard nofile for DB is configured according to recommendation |
DATA FROM odahost-01 - MCLDB DATABASE - DB SHELL LIMITS HARD NOFILE oracle hard nofile 131072 |
Status on odahost-02: PASS => Shell limit hard nofile for DB is configured according to recommendation |
oracle hard nofile 131072 |
Status on odahost-01: PASS => Shell limit hard nproc for DB is configured according to recommendation |
DATA FROM odahost-01 - MCLDB DATABASE - DB SHELL LIMITS HARD NPROC oracle hard nproc 131072 |
Status on odahost-02: PASS => Shell limit hard nproc for DB is configured according to recommendation |
oracle hard nproc 131072 |
Status on odahost-01: PASS => Shell limit hard nproc for GI is configured according to recommendation |
DATA FROM odahost-01 - MCLDB DATABASE - GI SHELL LIMITS HARD NPROC grid soft nofile 131072 grid hard nofile 131072 grid soft nproc 131072 grid hard nproc 131072 grid soft core unlimited grid hard core unlimited grid soft memlock 72000000 grid hard memlock 72000000 oracle soft nofile 131072 oracle hard nofile 131072 oracle soft nproc 131072 |
Status on odahost-02: PASS => Shell limit hard nproc for GI is configured according to recommendation |
grid soft nofile 131072 grid hard nofile 131072 grid soft nproc 131072 grid hard nproc 131072 grid soft core unlimited grid hard core unlimited grid soft memlock 72000000 grid hard memlock 72000000 oracle soft nofile 131072 oracle hard nofile 131072 oracle soft nproc 131072 oracle hard nproc 131072 oracle soft core unlimited |
Status on odahost-01: PASS => Shell limit hard nofile for GI is configured according to recommendation |
DATA FROM odahost-01 - MCLDB DATABASE - GI SHELL LIMITS HARD NOFILE grid soft nofile 131072 grid hard nofile 131072 grid soft nproc 131072 grid hard nproc 131072 grid soft core unlimited grid hard core unlimited grid soft memlock 72000000 grid hard memlock 72000000 oracle soft nofile 131072 oracle hard nofile 131072 oracle soft nproc 131072 |
Status on odahost-02: PASS => Shell limit hard nofile for GI is configured according to recommendation |
grid soft nofile 131072 grid hard nofile 131072 grid soft nproc 131072 grid hard nproc 131072 grid soft core unlimited grid hard core unlimited grid soft memlock 72000000 grid hard memlock 72000000 oracle soft nofile 131072 oracle hard nofile 131072 oracle soft nproc 131072 oracle hard nproc 131072 oracle soft core unlimited |
Status on odahost-01: PASS => Shell limit soft nproc for GI is configured according to recommendation |
DATA FROM odahost-01 - MCLDB DATABASE - GI SHELL LIMITS SOFT NPROC grid soft nofile 131072 grid hard nofile 131072 grid soft nproc 131072 grid hard nproc 131072 grid soft core unlimited grid hard core unlimited grid soft memlock 72000000 grid hard memlock 72000000 oracle soft nofile 131072 oracle hard nofile 131072 oracle soft nproc 131072 |
Status on odahost-02: PASS => Shell limit soft nproc for GI is configured according to recommendation |
grid soft nofile 131072 grid hard nofile 131072 grid soft nproc 131072 grid hard nproc 131072 grid soft core unlimited grid hard core unlimited grid soft memlock 72000000 grid hard memlock 72000000 oracle soft nofile 131072 oracle hard nofile 131072 oracle soft nproc 131072 oracle hard nproc 131072 oracle soft core unlimited |
Status on odahost-01: PASS => Shell limit soft nofile for GI is configured according to recommendation |
DATA FROM odahost-01 - MCLDB DATABASE - GI SHELL LIMITS SOFT NOFILE grid soft nofile 131072 grid hard nofile 131072 grid soft nproc 131072 grid hard nproc 131072 grid soft core unlimited grid hard core unlimited grid soft memlock 72000000 grid hard memlock 72000000 oracle soft nofile 131072 oracle hard nofile 131072 oracle soft nproc 131072 |
Status on odahost-02: PASS => Shell limit soft nofile for GI is configured according to recommendation |
grid soft nofile 131072 grid hard nofile 131072 grid soft nproc 131072 grid hard nproc 131072 grid soft core unlimited grid hard core unlimited grid soft memlock 72000000 grid hard memlock 72000000 oracle soft nofile 131072 oracle hard nofile 131072 oracle soft nproc 131072 oracle hard nproc 131072 oracle soft core unlimited |
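The shell-limit checks above can be reproduced manually. A minimal sketch, assuming the limits live in `/etc/security/limits.conf` in the standard four-column format; `check_limit` is a hypothetical helper, and the sample lines are copied from the report details:

```shell
# Verify oracle shell limits against the ODA recommendations.
# Sample lines copied from the report; in practice read /etc/security/limits.conf.
limits='oracle soft nproc 131072
oracle hard nproc 131072
oracle hard nofile 131072
oracle hard stack unlimited'

check_limit() {  # usage: check_limit user type item minimum (hypothetical helper)
    val=$(printf '%s\n' "$limits" | awk -v u="$1" -v t="$2" -v i="$3" \
        '$1==u && $2==t && $3==i {print $4}')
    # "unlimited" always satisfies the minimum
    if [ "$val" = unlimited ] || [ "${val:-0}" -ge "$4" ]; then
        echo "PASS $1 $2 $3 = $val"
    else
        echo "FAIL $1 $2 $3 = ${val:-missing} (want >= $4)"
    fi
}

check_limit oracle soft nproc 131072
check_limit oracle hard nofile 131072
check_limit oracle hard stack 8192
```

A missing entry is reported as FAIL rather than silently passing, which matches how the odachk report treats absent configuration.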
Status on odahost-02: PASS => OSWatcher is running |
root 12597 1 0 Aug14 ? 00:00:19 /usr/bin/ksh ./OSWatcher.sh 10 504 gzip root 12829 12597 0 Aug14 ? 00:00:04 /usr/bin/ksh ./OSWatcherFM.sh 504 |
Success Factor | VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY |
Recommendation | Due to Bug 9837321, or for any other reason, the ownership of certain clusterware log files may be changed incorrectly, which could leave important diagnostics unavailable when Support needs them. These logs are rotated periodically to keep them from growing unmanageably large; if the ownership of a file is incorrect when it is time to rotate the logs, that operation can fail. This does not affect the operation of the clusterware itself, but it does affect logging and therefore problem diagnostics. Verify that the ownership of the following files is root:root: $ls -l $GRID_HOME/log/`hostname`/crsd/* $ls -l $GRID_HOME/log/`hostname`/ohasd/* $ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/* $ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/* If any of those files is NOT owned by root:root, change the ownership individually or as follows (as root): # chown root:root $GRID_HOME/log/`hostname`/crsd/* # chown root:root $GRID_HOME/log/`hostname`/ohasd/* # chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/* # chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/* |
Links | |
Needs attention on | - |
Passed on | odahost-01, odahost-02 |
Status on odahost-02: PASS => ohasd Log Ownership is Correct (root root) |
total 6532 -rw-r--r-- 1 root root 6670908 Aug 15 09:38 ohasd.log -rw-r--r-- 1 root root 1158 Aug 14 14:21 ohasdOUT.log |
Success Factor | VERIFY OWNERSHIP OF IMPORTANT CLUSTERWARE LOG FILES NOT CHANGED INCORRECTLY |
Recommendation | Due to Bug 9837321, or for any other reason, the ownership of certain clusterware log files may be changed incorrectly, which could leave important diagnostics unavailable when Support needs them. These logs are rotated periodically to keep them from growing unmanageably large; if the ownership of a file is incorrect when it is time to rotate the logs, that operation can fail. This does not affect the operation of the clusterware itself, but it does affect logging and therefore problem diagnostics. Verify that the ownership of the following files is root:root: $ls -l $GRID_HOME/log/`hostname`/crsd/* $ls -l $GRID_HOME/log/`hostname`/ohasd/* $ls -l $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/* $ls -l $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/* If any of those files is NOT owned by root:root, change the ownership individually or as follows (as root): # chown root:root $GRID_HOME/log/`hostname`/crsd/* # chown root:root $GRID_HOME/log/`hostname`/ohasd/* # chown root:root $GRID_HOME/log/`hostname`/agent/crsd/orarootagent_root/* # chown root:root $GRID_HOME/log/`hostname`/agent/ohasd/orarootagent_root/* |
Links | |
Needs attention on | - |
Passed on | odahost-01, odahost-02 |
Status on odahost-02: PASS => crsd Log Ownership is Correct (root root) |
total 6744 -rw-r--r-- 1 root root 6886449 Aug 15 09:38 crsd.log -rw-r--r-- 1 root root 756 Aug 14 15:00 crsdOUT.log |
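The ownership audit from the recommendation above can be wrapped in a small loop. A sketch under stated assumptions: `check_owner` is a hypothetical helper, fed `user:group` strings directly here so the logic is visible; in practice feed it `stat -c '%U:%G' "$GRID_HOME/log/$(hostname)/crsd/"*` and the other three directories:

```shell
# Flag clusterware log files whose ownership is not root:root.
# check_owner is a hypothetical helper; in practice pass it the output of
#   stat -c '%U:%G' <file>
check_owner() {  # usage: check_owner file user:group
    if [ "$2" = "root:root" ]; then
        echo "OK   $1 ($2)"
    else
        echo "BAD  $1 ($2) -- run: chown root:root $1"
    fi
}

# Sample inputs mirroring the report's PASS details, plus one bad example
check_owner ohasd.log   root:root
check_owner crsd.log    root:root
check_owner crsdOUT.log oracle:oinstall
```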
Status on odahost-01: PASS => NIC bonding mode is not set to Broadcast(3) for public network |
DATA FROM odahost-01 - MCLDB DATABASE - NIC BONDING MODE PUBLIC Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009) Bonding Mode: fault-tolerance (active-backup) Primary Slave: None Currently Active Slave: eth2 MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 0 Down Delay (ms): 0 Slave Interface: eth2 MII Status: up Link Failure Count: 0 Permanent HW addr: 00:21:28:d6:14:4c Slave queue ID: 0 |
Status on odahost-02: PASS => NIC bonding mode is not set to Broadcast(3) for public network |
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009) Bonding Mode: fault-tolerance (active-backup) Primary Slave: None Currently Active Slave: eth2 MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 0 Down Delay (ms): 0 Slave Interface: eth2 MII Status: up Link Failure Count: 0 Permanent HW addr: 00:21:28:d6:14:6a Slave queue ID: 0 Slave Interface: eth3 MII Status: up Link Failure Count: 0 Permanent HW addr: 00:21:28:d6:14:6b |
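The bonding-mode check reads the kernel's bonding status file. A sketch over a sample copied from the report details; the bond interface name, and therefore the `/proc/net/bonding/<bond>` path, varies by configuration:

```shell
# Fail if the public bond runs in broadcast mode (mode 3).
# Sample text copied from the report; in practice read /proc/net/bonding/bond0
# (interface name is an assumption -- check your configuration).
bond_info='Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth2
MII Status: up'

mode=$(printf '%s\n' "$bond_info" | sed -n 's/^Bonding Mode: //p')
case $mode in
    broadcast*) echo "FAIL: bonding mode is broadcast" ;;
    *)          echo "PASS: bonding mode is '$mode'" ;;
esac
```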
Status on odahost-01: PASS => CRS version is higher than or equal to ASM version |
DATA FROM odahost-01 - MCLDB DATABASE - CRS AND ASM VERSION COMPARISON CRS_ACTIVE_VERSION = 112030 ASM Version = 112030 |
Status on odahost-02: PASS => CRS version is higher than or equal to ASM version |
CRS_ACTIVE_VERSION = 112030 ASM Version = 112030 |
Status on odahost-02: PASS => ip_local_port_range is configured according to recommendation |
minimum port range = 9000 maximum port range = 65500 |
Status on odahost-01: PASS => Linux Swap Configuration meets or exceeds Recommendation |
DATA FROM odahost-01 - MCLDB DATABASE - LINUX SWAP SIZE MemTotal: 98929496 kB MemFree: 23132104 kB Buffers: 485384 kB Cached: 15381060 kB SwapCached: 0 kB Active: 10214284 kB Inactive: 8716056 kB Active(anon): 3409032 kB Inactive(anon): 408148 kB Active(file): 6805252 kB Inactive(file): 8307908 kB Unevictable: 387400 kB Mlocked: 387416 kB SwapTotal: 25165816 kB SwapFree: 25165816 kB Dirty: 3780 kB |
Status on odahost-02: PASS => Linux Swap Configuration meets or exceeds Recommendation |
MemTotal: 98929496 kB MemFree: 37068472 kB Buffers: 346608 kB Cached: 2748692 kB SwapCached: 0 kB Active: 3521824 kB Inactive: 1977668 kB Active(anon): 2702692 kB Inactive(anon): 413692 kB Active(file): 819132 kB Inactive(file): 1563976 kB Unevictable: 385516 kB Mlocked: 385516 kB SwapTotal: 25165816 kB SwapFree: 25165816 kB Dirty: 2828 kB Writeback: 0 kB AnonPages: 2812760 kB Mapped: 297652 kB Shmem: 623948 kB |
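The swap check compares SwapTotal against a recommendation derived from physical RAM. A sketch assuming the 11gR2 install-guide sizing rule (RAM up to 2 GB: 1.5x RAM; 2-16 GB: equal to RAM; above 16 GB: 16 GB of swap), with the MemTotal/SwapTotal values copied from the report:

```shell
# Swap-size check following the 11gR2 install-guide rule (assumption:
# RAM <= 2 GB -> 1.5x RAM, 2-16 GB -> equal to RAM, > 16 GB -> 16 GB).
# Values copied from the report's /proc/meminfo details for odahost-01.
mem_kb=98929496      # MemTotal in kB
swap_kb=25165816     # SwapTotal in kB
kb_per_gb=1048576

if   [ "$mem_kb" -le $((2 * kb_per_gb)) ];  then want_kb=$((mem_kb * 3 / 2))
elif [ "$mem_kb" -le $((16 * kb_per_gb)) ]; then want_kb=$mem_kb
else                                             want_kb=$((16 * kb_per_gb))
fi

if [ "$swap_kb" -ge "$want_kb" ]; then
    echo "PASS: swap ${swap_kb} kB >= recommended ${want_kb} kB"
else
    echo "FAIL: swap ${swap_kb} kB < recommended ${want_kb} kB"
fi
```

With roughly 94 GB of RAM the recommendation caps at 16 GB, and the configured ~24 GB of swap clears it, matching the report's PASS.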
Status on odahost-02: PASS => $ORACLE_HOME/bin/oradism ownership is root |
-rwsr-x--- 1 root oinstall 71758 Sep 17 2011 /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oradism |
Status on odahost-02: PASS => $ORACLE_HOME/bin/oradism setuid bit is set |
-rwsr-x--- 1 root oinstall 71758 Sep 17 2011 /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oradism |
Success Factor | LINUX DATA COLLECTIONS AND AUDIT CHECKS |
Recommendation | net.core.rmem_max should be set >= 4194304 |
Links | |
Needs attention on | - |
Passed on | odahost-01, odahost-02 |
Status on odahost-01: PASS => Kernel Parameter net.core.rmem_max OK |
net.core.rmem_max = 4194304 |
Status on odahost-02: PASS => Kernel Parameter net.core.rmem_max OK |
net.core.rmem_max = 4194304 |
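The socket-buffer checks reduce to comparing a live sysctl value against a minimum. A sketch with a hypothetical `check_sysctl` helper; the rmem_max minimum is from the report's recommendation, while the rmem_default and wmem_max minimums are assumptions taken from the values the report accepted as passing:

```shell
# Compare kernel socket-buffer settings to minimums.
# check_sysctl is a hypothetical helper. Current values below are copied from
# the report; in practice use e.g.:
#   check_sysctl net.core.rmem_max "$(sysctl -n net.core.rmem_max)" 4194304
check_sysctl() {  # usage: check_sysctl name current minimum
    if [ "$2" -ge "$3" ]; then
        echo "PASS $1 = $2 (>= $3)"
    else
        echo "FAIL $1 = $2 (< $3)"
    fi
}

check_sysctl net.core.rmem_max     4194304 4194304
check_sysctl net.core.rmem_default  262144  262144
check_sysctl net.core.wmem_max     1048576 1048576
```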
Status on tstdb1: PASS => All tablespaces are locally managed tablespaces |
DATA FOR tstdb1 FOR LOCALLY MANAGED TABLESPACES no_of_dictionary_managed_tablespace = 0 |
Status on MCL2DB: PASS => All tablespaces are locally managed tablespaces |
DATA FOR MCL2DB FOR LOCALLY MANAGED TABLESPACES no_of_dictionary_managed_tablespace = 0 |
Status on MCL3DB: PASS => All tablespaces are locally managed tablespaces |
DATA FOR MCL3DB FOR LOCALLY MANAGED TABLESPACES no_of_dictionary_managed_tablespace = 0 |
Status on MCL4DB: PASS => All tablespaces are locally managed tablespaces |
DATA FOR MCL4DB FOR LOCALLY MANAGED TABLESPACES no_of_dictionary_managed_tablespace = 0 |
Status on MCL5DB: PASS => All tablespaces are locally managed tablespaces |
DATA FOR MCL5DB FOR LOCALLY MANAGED TABLESPACES no_of_dictionary_managed_tablespace = 0 |
Status on MCL6DB: PASS => All tablespaces are locally managed tablespaces |
DATA FOR MCL6DB FOR LOCALLY MANAGED TABLESPACES no_of_dictionary_managed_tablespace = 0 |
Status on MCL7DB: PASS => All tablespaces are locally managed tablespaces |
DATA FOR MCL7DB FOR LOCALLY MANAGED TABLESPACES no_of_dictionary_managed_tablespace = 0 |
Status on MCLDB: PASS => All tablespaces are locally managed tablespaces |
DATA FOR MCLDB FOR LOCALLY MANAGED TABLESPACES no_of_dictionary_managed_tablespace = 0 |
Success Factor | LINUX DATA COLLECTIONS AND AUDIT CHECKS |
Recommendation | SEMMNS should be set >= 32000 |
Links | |
Needs attention on | - |
Passed on | odahost-01, odahost-02 |
Status on odahost-01: PASS => Kernel Parameter SEMMNS OK |
semmns = 32000 |
Status on odahost-02: PASS => Kernel Parameter SEMMNS OK |
semmns = 32000 |
Success Factor | LINUX DATA COLLECTIONS AND AUDIT CHECKS |
Recommendation | SEMMSL should be set >= 250 |
Links | |
Needs attention on | - |
Passed on | odahost-01, odahost-02 |
Status on odahost-01: PASS => Kernel Parameter SEMMSL OK |
semmsl = 250 |
Status on odahost-02: PASS => Kernel Parameter SEMMSL OK |
semmsl = 250 |
Success Factor | LINUX DATA COLLECTIONS AND AUDIT CHECKS |
Recommendation | SEMMNI should be set >= 128 |
Links | |
Needs attention on | - |
Passed on | odahost-01, odahost-02 |
Status on odahost-01: PASS => Kernel Parameter SEMMNI OK |
semmni = 142 |
Status on odahost-02: PASS => Kernel Parameter SEMMNI OK |
semmni = 142 |
Success Factor | LINUX DATA COLLECTIONS AND AUDIT CHECKS |
Recommendation | SEMOPM should be set >= 100 |
Links | |
Needs attention on | - |
Passed on | odahost-01, odahost-02 |
Status on odahost-01: PASS => Kernel Parameter SEMOPM OK |
semopm = 100 |
Status on odahost-02: PASS => Kernel Parameter SEMOPM OK |
semopm = 100 |
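The four semaphore checks above all read a single sysctl, `kernel.sem`, whose value is the four fields `SEMMSL SEMMNS SEMOPM SEMMNI` in that order. A sketch with the sample string built from the report's values; in practice use `sem=$(sysctl -n kernel.sem)`:

```shell
# Check the four SysV semaphore limits against the report's minimums.
# Sample string built from the report values; live value: sysctl -n kernel.sem
sem='250 32000 100 142'
set -- $sem
semmsl=$1 semmns=$2 semopm=$3 semmni=$4

[ "$semmsl" -ge 250 ]   && echo "PASS semmsl=$semmsl" || echo "FAIL semmsl=$semmsl"
[ "$semmns" -ge 32000 ] && echo "PASS semmns=$semmns" || echo "FAIL semmns=$semmns"
[ "$semopm" -ge 100 ]   && echo "PASS semopm=$semopm" || echo "FAIL semopm=$semopm"
[ "$semmni" -ge 128 ]   && echo "PASS semmni=$semmni" || echo "FAIL semmni=$semmni"
```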
Status on odahost-01: PASS => None of the hostnames contains an underscore character |
DATA FROM odahost-01 - MCLDB DATABASE - HOSTNAME FORMATTING odahost-01 odahost-02 |
Status on odahost-02: PASS => None of the hostnames contains an underscore character |
odahost-01 odahost-02 |
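The underscore check is a simple pattern match (underscores are not valid in DNS hostnames per RFC 952/1123). A sketch over the hostnames from the report, plus a deliberately bad made-up example:

```shell
# Flag hostnames containing '_' (not valid in DNS hostnames).
# First two names are from the report; 'bad_host' is a made-up failing example.
for h in odahost-01 odahost-02 bad_host; do
    case $h in
        *_*) echo "FAIL: $h contains an underscore" ;;
        *)   echo "PASS: $h" ;;
    esac
done
```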
Status on odahost-01: PASS => net.core.rmem_default Is Configured Properly |
net.core.rmem_default = 262144 |
Status on odahost-02: PASS => net.core.rmem_default Is Configured Properly |
net.core.rmem_default = 262144 |
Status on odahost-01: PASS => net.core.wmem_max Is Configured Properly |
net.core.wmem_max = 1048576 |
Status on odahost-02: PASS => net.core.wmem_max Is Configured Properly |
net.core.wmem_max = 1048576 |
Status on odahost-01: PASS => net.core.wmem_default Is Configured Properly |
net.core.wmem_default = 262144 |
Status on odahost-02: PASS => net.core.wmem_default Is Configured Properly |
net.core.wmem_default = 262144 |
Status on tstdb1: PASS => SYS.AUDSES$ sequence cache size >= 10,000 |
DATA FOR tstdb1 FOR AUDSES$ SEQUENCE CACHE SIZE audses$.cache_size = 10000 |
Status on MCL2DB: PASS => SYS.AUDSES$ sequence cache size >= 10,000 |
DATA FOR MCL2DB FOR AUDSES$ SEQUENCE CACHE SIZE audses$.cache_size = 10000 |
Status on MCL3DB: PASS => SYS.AUDSES$ sequence cache size >= 10,000 |
DATA FOR MCL3DB FOR AUDSES$ SEQUENCE CACHE SIZE audses$.cache_size = 10000 |
Status on MCL4DB: PASS => SYS.AUDSES$ sequence cache size >= 10,000 |
DATA FOR MCL4DB FOR AUDSES$ SEQUENCE CACHE SIZE audses$.cache_size = 10000 |
Status on MCL5DB: PASS => SYS.AUDSES$ sequence cache size >= 10,000 |
DATA FOR MCL5DB FOR AUDSES$ SEQUENCE CACHE SIZE audses$.cache_size = 10000 |
Status on MCL6DB: PASS => SYS.AUDSES$ sequence cache size >= 10,000 |
DATA FOR MCL6DB FOR AUDSES$ SEQUENCE CACHE SIZE audses$.cache_size = 10000 |
Status on MCL7DB: PASS => SYS.AUDSES$ sequence cache size >= 10,000 |
DATA FOR MCL7DB FOR AUDSES$ SEQUENCE CACHE SIZE audses$.cache_size = 10000 |
Status on MCLDB: PASS => SYS.AUDSES$ sequence cache size >= 10,000 |
DATA FOR MCLDB FOR AUDSES$ SEQUENCE CACHE SIZE audses$.cache_size = 10000 |
Success Factor | CACHE APPLICATION SEQUENCES AND SOME SYSTEM SEQUENCES FOR BETTER PERFORMANCE |
Recommendation | Sequence contention (SQ enqueue) can occur if the SYS.IDGEN1$ sequence cache size is less than 1000, which can lead to performance issues in RAC. 1000 is the default cache size starting in version 11.2.0.1. |
Links | |
Needs attention on | - |
Passed on | tstdb1, MCL2DB, MCL3DB, MCL4DB, MCL5DB, MCL6DB, MCL7DB, MCLDB |
Status on tstdb1: PASS => SYS.IDGEN1$ sequence cache size >= 1,000 |
DATA FOR tstdb1 FOR IDGEN$ SEQUENCE CACHE SIZE idgen1$.cache_size = 1000 |
Status on MCL2DB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000 |
DATA FOR MCL2DB FOR IDGEN$ SEQUENCE CACHE SIZE idgen1$.cache_size = 1000 |
Status on MCL3DB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000 |
DATA FOR MCL3DB FOR IDGEN$ SEQUENCE CACHE SIZE idgen1$.cache_size = 1000 |
Status on MCL4DB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000 |
DATA FOR MCL4DB FOR IDGEN$ SEQUENCE CACHE SIZE idgen1$.cache_size = 1000 |
Status on MCL5DB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000 |
DATA FOR MCL5DB FOR IDGEN$ SEQUENCE CACHE SIZE idgen1$.cache_size = 1000 |
Status on MCL6DB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000 |
DATA FOR MCL6DB FOR IDGEN$ SEQUENCE CACHE SIZE idgen1$.cache_size = 1000 |
Status on MCL7DB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000 |
DATA FOR MCL7DB FOR IDGEN$ SEQUENCE CACHE SIZE idgen1$.cache_size = 1000 |
Status on MCLDB: PASS => SYS.IDGEN1$ sequence cache size >= 1,000 |
DATA FOR MCLDB FOR IDGEN$ SEQUENCE CACHE SIZE idgen1$.cache_size = 1000 |
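The two sequence-cache checks (SYS.AUDSES$ >= 10,000 and SYS.IDGEN1$ >= 1,000) boil down to one dictionary query. A sketch that emits verification SQL for sqlplus; the thresholds are from the report, but the query itself is an assumed equivalent, not the tool's exact SQL:

```shell
# Emit SQL listing any SYS sequence cached below the recommended size.
# This query is an assumption about how the check is implemented.
sql=$(cat <<'SQL'
SELECT sequence_name, cache_size
  FROM dba_sequences
 WHERE sequence_owner = 'SYS'
   AND (   (sequence_name = 'AUDSES$' AND cache_size < 10000)
        OR (sequence_name = 'IDGEN1$' AND cache_size < 1000));
SQL
)
printf '%s\n' "$sql"    # e.g. pipe into: sqlplus -s / as sysdba
```

An undersized cache can be raised with `ALTER SEQUENCE sys.audses$ CACHE 10000;` (as SYSDBA), which is the usual remediation for this finding.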