Friday, 24 July 2020

Unable to Start ASM after Node reboot with error CRS-0223: Resource 'ora.asm' has placement error

Cause:

The ASM instance did not come up on node 1 after a node reboot. When I tried to start the ASM instance manually, I got the error below:

 

[grid@racnode01 ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 18.0.0.0.0 - Production on Thu Jul 23 06:33:14 2020

Version 18.3.2.0.0

Copyright (c) 1982, 2018, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount;

ORA-39511: Start of CRS resource for instance '223' failed with error:[CRS-2549: Resource 'ora.asm' cannot be placed on 'racnode01' as it is not a valid candidate as per the placement policy

CRS-0223: Resource 'ora.asm' has placement error.

clsr_start_resource:260 status:223

clsrapi_start_asm:start_asmdbs status:223

Check the server attributes:

[grid@racnode01 ~]$ crsctl stat server -f

NAME=racnode01

MEMORY_SIZE=128462

CPU_COUNT=8

CPU_CLOCK_RATE=2053

CPU_HYPERTHREADING=1

CPU_EQUIVALENCY=1000

DEPLOYMENT=other

CONFIGURED_CSS_ROLE=hub

RESOURCE_USE_ENABLED=0    <-- Issue

SERVER_LABEL=

PHYSICAL_HOSTNAME=

CSS_CRITICAL=no

CSS_CRITICAL_TOTAL=0

RESOURCE_TOTAL=0

SITE_NAME=poc19c-cluster

STATE=ONLINE

ACTIVE_POOLS=Free

STATE_DETAILS=

ACTIVE_CSS_ROLE=hub

 

NAME=racnode02

MEMORY_SIZE=128462

CPU_COUNT=8

CPU_CLOCK_RATE=2684

CPU_HYPERTHREADING=1

CPU_EQUIVALENCY=1000

DEPLOYMENT=other

CONFIGURED_CSS_ROLE=hub

RESOURCE_USE_ENABLED=1

SERVER_LABEL=

PHYSICAL_HOSTNAME=

CSS_CRITICAL=no

CSS_CRITICAL_TOTAL=0

RESOURCE_TOTAL=0

SITE_NAME=poc19c-cluster

STATE=ONLINE

ACTIVE_POOLS=Generic ora.poc19c

STATE_DETAILS=

ACTIVE_CSS_ROLE=hub

 

RESOURCE_USE_ENABLED:

The possible values are 1 or 0. If you set the value for this attribute to 1, which is the default, then the server can be used for resource placement.

If you set the value to 0, then Oracle Clusterware disallows starting server pool resources on the server. The server remains in the Free server pool.
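
To check this attribute for a single server without the full listing, a quick query (a sketch using standard crsctl syntax) is:

# Show only the placement attribute for the affected node
crsctl status server racnode01 -f | grep RESOURCE_USE_ENABLED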

 

Solution: set the RESOURCE_USE_ENABLED value to 1

root@racnode01:# ./crsctl set resource use 1

CRS-4416: Server attribute 'RESOURCE_USE_ENABLED' successfully changed. Restart Oracle High Availability Services for new value to take effect.

root@racnode01:#

 

Restart the cluster:

root@racnode01:# ./crsctl stop crs

root@racnode01:# ./crsctl start crs
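
Once the stack is back up, a minimal sanity check before retrying ASM (a sketch using standard crsctl commands):

./crsctl check crs             # local stack health
./crsctl stat res ora.asm -t   # ASM resource state across the nodes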

 

Check the ASM status and server details:

[grid@racnode01 ~]$ sqlplus "/as sysasm"

SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jul 24 09:10:17 2020

Version 18.3.2.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

 

Connected to:

Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production

Version 18.3.2.0.0

 

SQL> select instance_name, host_name, status, logins, to_char(startup_time,'DD/MM/YYYY HH24:MI:SS') "STARTUP_TIME" from gv$instance;

 

INSTANCE_NAME    HOST_NAME       STATUS       LOGINS     STARTUP_TIME

---------------- ------------   ------------ ---------- -------------------

+ASM1            racnode01.com    STARTED      ALLOWED    24/07/2020 09:05:32

+ASM2            racnode02.com    STARTED      ALLOWED    23/07/2020 06:14:17

 

 

[grid@racnode02 ~]$ crsctl stat server -f

NAME=racnode01

MEMORY_SIZE=128462

CPU_COUNT=8

CPU_CLOCK_RATE=2334

CPU_HYPERTHREADING=1

CPU_EQUIVALENCY=1000

DEPLOYMENT=other

CONFIGURED_CSS_ROLE=hub

RESOURCE_USE_ENABLED=1

SERVER_LABEL=

PHYSICAL_HOSTNAME=

CSS_CRITICAL=no

CSS_CRITICAL_TOTAL=0

RESOURCE_TOTAL=0

SITE_NAME=poc19c-cluster

STATE=ONLINE

ACTIVE_POOLS=Generic ora.poc19c

STATE_DETAILS=

ACTIVE_CSS_ROLE=hub

 

NAME=racnode02

MEMORY_SIZE=128462

CPU_COUNT=8

CPU_CLOCK_RATE=2684

CPU_HYPERTHREADING=1

CPU_EQUIVALENCY=1000

DEPLOYMENT=other

CONFIGURED_CSS_ROLE=hub

RESOURCE_USE_ENABLED=1

SERVER_LABEL=

PHYSICAL_HOSTNAME=

CSS_CRITICAL=no

CSS_CRITICAL_TOTAL=0

RESOURCE_TOTAL=0

SITE_NAME=poc19c-cluster

STATE=ONLINE

ACTIVE_POOLS=Generic ora.poc19c

STATE_DETAILS=

ACTIVE_CSS_ROLE=hub

 

 

The ASM instance started without any issues after setting RESOURCE_USE_ENABLED=1.

 

Thursday, 23 July 2020

CRS-6706: Oracle Clusterware Release patch level ('2292884171') does not match Software patch level ('1844241963'). Oracle Clusterware cannot be started.

Cause:    

The post-patch step failed, causing a problem while starting CRS:

$GRID_HOME/crs/install/rootcrs.pl -postpatch
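
A quick way to confirm the mismatch on the affected node, assuming the grid user environment is set, is to compare the patch-level views:

# Cluster-wide active patch level vs. the levels recorded on this node
crsctl query crs activeversion -f
crsctl query crs softwarepatch
crsctl query crs releasepatch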

 

root@racnode01:# ./crsctl start crs

CRS-6706: Oracle Clusterware Release patch level ('2292884171') does not match Software patch level ('1844241963'). Oracle Clusterware cannot be started.

CRS-4000: Command Start failed, or completed with errors.

 

Solution:

1. Run the following command as the root user to complete the patching setup behind the scenes:

root@racnode01:# $GRID_HOME/bin/clscfg -localpatch

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

 

2.  Run the following command as the root user to lock the GI home:

root@racnode01:# cd ../crs/install/

root@racnode01:# ./rootcrs.sh -lock

Using configuration parameter file: /u01/app/18.3.0.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

 /app/grid/crsdata/racnode01/crsconfig/crslock_racnode01_2020-07-23_02-44-20AM.log

2020/07/23 02:44:22 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'

root@racnode01:#

 

3. Start CRS:

root@racnode01:# cd ../../bin/

root@racnode01:# ./crsctl start crs

CRS-4123: Oracle High Availability Services has been started.
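
After the stack starts, a minimal follow-up check (standard crsctl queries) confirms the levels now agree:

# Stack health across the nodes, then the local patch levels
crsctl check cluster -all
crsctl query crs softwarepatch
crsctl query crs releasepatch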

Thursday, 16 July 2020

19c upgrade Prechecks failed with error Clusterware patch level mismatch

The 19c upgrade prechecks failed:

./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/18.3.0.0/grid -dest_crshome /u01/app/19.3.0.0/grid -dest_version 19.7.0.0

 

Errors:

Verifying Clusterware Version Consistency ...FAILED

Verifying cluster upgrade state ...FAILED

racnode02: PRVG-13411 : Oracle Clusterware active patch level

"1844241963" does not match software patch level "2292884171"

on node "racnode02".

 

racnode01: PRVG-13411 : Oracle Clusterware active patch level

"1844241963" does not match software patch level "2292884171"

on node "racnode01".

 

The activeversion and releasepatch levels differ across the cluster nodes:


[grid@racnode01 ~]$ crsctl query crs activeversion -f

Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [1844241963].

 

[grid@racnode01 ~]$ crsctl query crs releasepatch

Oracle Clusterware release patch level is [2292884171] and the complete list of patches [27494830 27908644 27923415 28090523 28090557 28090564 28256701 28553832 28790643 ] have been applied on the local node.

 

The patch levels are different; it looks like the patches were not applied properly. Fix this by re-running the pre- and post-patch steps.

Execute the steps below on all cluster nodes (a consolidated sketch follows the list):

1) crsctl stop crs

2) $GI_HOME/crs/install/rootcrs.sh -prepatch

3) $GI_HOME/crs/install/rootcrs.sh -postpatch
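
Putting the three steps together, a minimal per-node sketch (run as root, one node at a time; GI_HOME here is an assumed variable pointing at the 18c grid home):

# Assumption: GI_HOME points at the grid home on this node
GI_HOME=/u01/app/18.3.0.0/grid

$GI_HOME/bin/crsctl stop crs                  # stop the stack on this node
$GI_HOME/crs/install/rootcrs.sh -prepatch     # re-run the pre-patch configuration
$GI_HOME/crs/install/rootcrs.sh -postpatch    # re-run the post-patch configuration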

 

Verify Patch levels:

Node1 :

[grid@racnode01 grid]$ crsctl query crs softwarepatch

Oracle Clusterware patch level on node racnode01 is [2292884171].

[grid@racnode01 grid]$ crsctl query crs activeversion -f

Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2292884171].

[grid@racnode01 grid]$ crsctl query crs releasepatch

Oracle Clusterware release patch level is [2292884171] and the complete list of patches [27494830 27908644 27923415 28090523 28090557 28090564 28256701 28553832 28790643 ] have been applied on the local node. The release patch string is [18.3.2.0.0].

 

Node2 :

[grid@racnode02 ~]$ crsctl query crs softwarepatch

Oracle Clusterware patch level on node racnode02 is [2292884171].

[grid@racnode02 ~]$ crsctl query crs activeversion -f

Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2292884171].

[grid@racnode02 ~]$ crsctl query crs releasepatch

Oracle Clusterware release patch level is [2292884171] and the complete list of patches [27494830 27908644 27923415 28090523 28090557 28090564 28256701 28553832 28790643 ] have been applied on the local node. The release patch string is [18.3.2.0.0].

 

19c RAC Installation failed with Error CLSRSC-180: An error occurred while executing the command 'cluutil -isipv6 _`'

Issue:

The 19c grid installation failed while running root.sh, with the errors below:


Error 1 from GUI:

- PRCZ-2010 : Failed to execute command "/u01/app/19.7.0.0/grid/root.sh" using 'sudo' from location "/usr/bin/sudo" as user "grid" within 7,200 seconds on nodes "racnode01"

 

Error 2, from the grid install log:

[grid@racnode01]$ tail -80 /tmp/GridSetupActions2020-07-15_10-11-08PM/gridSetupActions2020-07-15_10-11-08PM.log

Execution status of failed node: racnode01
Errors: [sudo] password for grid:
Standard output:
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/19.7.0.0/grid
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/19.7.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /app/grid/crsdata/racnode01/crsconfig/rootcrs_racnode01_2020-07-15_11-33-50PM.log
2020/07/15 23:33:57 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2020/07/15 23:33:57 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2020/07/15 23:33:57 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2020/07/15 23:33:57 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2020/07/15 23:34:00 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2020/07/15 23:34:01 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2020/07/15 23:34:01 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2020/07/15 23:34:02 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2020/07/15 23:34:03 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2020/07/15 23:34:08 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
bash: -c: line 0: unexpected EOF while looking for matching ``'
bash: -c: line 1: syntax error: unexpected end of file
2020/07/15 23:34:12 CLSRSC-180: An error occurred while executing the command 'cluutil -isipv6 _`'
Died at /u01/app/19.7.0.0/grid/crs/install/crsutils.pm line 14759.

 

Error 3, from the rootcrs log:

[grid@racnode01]$ tail -80f /app/grid/crsdata/racnode01/crsconfig/rootcrs_racnode01_2020-07-15_11-33-50PM.log

 

2020-07-16 01:19:49: FALSE

2020-07-16 01:19:49: '_' is not IPv6

2020-07-16 01:19:49: Checking if '__' is IPv6

2020-07-16 01:19:49: Invoking "/u01/app/19.7.0.0/grid/bin/cluutil -isipv6 __"

2020-07-16 01:19:49: trace file=/app/grid/crsdata/racnode01/crsconfig/cluutil5.log

2020-07-16 01:19:49: Running as user grid: /u01/app/19.7.0.0/grid/bin/cluutil -isipv6 __

2020-07-16 01:19:49: Removing file /tmp/M_7IvW6_cU

2020-07-16 01:19:49: Successfully removed file: /tmp/M_7IvW6_cU

2020-07-16 01:19:49: pipe exit code: 0

2020-07-16 01:19:49: /bin/su successfully executed

2020-07-16 01:19:49: FALSE

2020-07-16 01:19:49: '__' is not IPv6

2020-07-16 01:19:49: Checking if '_`' is IPv6

2020-07-16 01:19:49: Invoking "/u01/app/19.7.0.0/grid/bin/cluutil -isipv6 _`"

2020-07-16 01:19:49: trace file=/app/grid/crsdata/racnode01/crsconfig/cluutil6.log

2020-07-16 01:19:49: Running as user grid: /u01/app/19.7.0.0/grid/bin/cluutil -isipv6 _`

2020-07-16 01:19:49: Removing file /tmp/xvz5bmk3yU

2020-07-16 01:19:49: Successfully removed file: /tmp/xvz5bmk3yU

2020-07-16 01:19:49: pipe exit code: 256

2020-07-16 01:19:49: /bin/su exited with rc=1

2020-07-16 01:19:49: bash: -c: line 0: unexpected EOF while looking for matching ``'

 bash: -c: line 1: syntax error: unexpected end of file

2020-07-16 01:19:49: cluutil -isipv6 _` failed with status 1

2020-07-16 01:19:49: Executing cmd: /u01/app/19.7.0.0/grid/bin/clsecho -p has -f clsrsc -m 180 'cluutil -isipv6 _`'

2020-07-16 01:19:49: Executing cmd: /u01/app/19.7.0.0/grid/bin/clsecho -p has -f clsrsc -m 180 'cluutil -isipv6 _`'

2020-07-16 01:19:49: Command output:

>  CLSRSC-180: An error occurred while executing the command 'cluutil -isipv6 _`'

>End Command output

2020-07-16 01:19:49: CLSRSC-180: An error occurred while executing the command 'cluutil -isipv6 _`'


Cause:

When root.sh runs, it validates whether each cluster address is IPv6 by invoking cluutil -isipv6 from the internal script $GRID_HOME/crs/install/crsutils.pm. Stray text (apparently from the SSH login banner) is captured along with the real values, so garbage tokens such as '_`' are passed to the check, which then fails with a bash syntax error.

 

This is due to Bug 30863405.
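
A hypothetical quick check for banner pollution: run a trivial remote command and see whether extra text surrounds the expected output, since that is the kind of noise the installer ends up parsing as addresses:

# Expect only "OK"; any banner text mixed into the output here is the culprit
ssh racnode02 'echo OK' 2>&1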

 

Solution :

Bug 30863405 is fixed in release 21.1. Please contact Oracle Support for a backport request.

 

The workaround is to disable the SSH banner and re-run root.sh.

 

Disable the SSH banner on Linux:

Comment out the banner-related lines shown below in /etc/ssh/sshd_config on each node and restart sshd (a scripted alternative follows the per-node examples).

Node1:

root@racnode01:# vi /etc/ssh/sshd_config

#Banner /etc/banner

#LogLevel VERBOSE

#PrintMotd yes

#PrintLastLog yes

root@racnode01:# service sshd restart

Redirecting to /bin/systemctl restart sshd.service

 

Node2:

root@racnode02:# vi /etc/ssh/sshd_config

#Banner /etc/banner

#LogLevel VERBOSE

#PrintMotd yes

#PrintLastLog yes

root@racnode02:# service sshd restart

Redirecting to /bin/systemctl restart sshd.service
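
Instead of editing sshd_config by hand on each node, a minimal scripted alternative (assuming the banner is enabled through a single Banner directive) is:

# Comment out the Banner line in place (backup kept as sshd_config.bak), then restart sshd
sed -i.bak 's/^Banner[[:space:]]/#&/' /etc/ssh/sshd_config
systemctl restart sshd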

 

Re-running root.sh then completed without any errors.

Tuesday, 14 July 2020

Apply a conflicting patch to the 18c Grid home

Issue: applying patch 28553832 to the 18c grid home failed because it conflicts with the existing patch 27912127.

  

1. Current patches in the grid home:

[grid@racnode01 18c_Software]$ opatch lspatches

28790643;Database Release Update Revision : 18.3.2.0.190115 (28790643)

27912127;OCW Interim patch for 27912127

27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171

27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

28256701;TOMCAT RELEASE UPDATE 18.3.0.0.0 (28256701)

28090564;DBWLM RELEASE UPDATE 18.3.0.0.0 (28090564)

28090557;ACFS RELEASE UPDATE 18.3.0.0.0 (28090557)

OPatch succeeded.
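
Before running a full opatchauto analysis, a lighter-weight conflict check can be done with opatch itself (a sketch; assumes the patch is unzipped under /tmp/28553832 and ORACLE_HOME is the grid home):

# Check the unzipped patch for conflicts against this Oracle home
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /tmp/28553832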

 

2. Check for conflicts with -analyze:

root@racnode01:# opatchauto apply /tmp/28553832 -analyze

OPatchauto session is initiated at Mon Jul 13 06:45:06 2020

System initialization log file is /u01/app/18.3.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-07-13_06-45-07AM.log.

 

Session log file is /u01/app/18.3.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-07-13_06-49-14AM.log

The id for this session is A7J2

 

Executing OPatch prereq operations to verify patch applicability on home /u01/app/18.3.0.0/grid

Patch applicability verified successfully on home /u01/app/18.3.0.0/grid

OPatchAuto successful.

 

------------------------Summary-----------------------------

Analysis for applying patches has failed:

 

Host:racnode01

CRS Home:/u01/app/18.3.0.0/grid

Version:18.0.0.0.0

Analysis for patches has failed.

 

==Following patches FAILED in analysis for apply:

Patch: /tmp/28553832/28553832

Log: /u01/app/18.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-07-13_06-49-19AM_1.log

Reason: Failed during Analysis: CheckConflictAgainstOracleHome Failed, [ Prerequisite Status: FAILED, Prerequisite output: 

Summary of Conflict Analysis:

 

There are no patches that can be applied now.

 

Following patches have conflicts. Please contact Oracle Support and get the merged patch of the patches : 

27912127, 28553832


Conflicts/Supersets for each patch are:

Patch : 28553832

        Bug Conflict with 27912127

        Conflicting bugs are:

        27265816, 28045209, 27314512, 26587652, 27581484, 27346984, 27433163 ...

OPATCHAUTO-72053: Analysis for the patches failed.

OPATCHAUTO-72053: Command execution failed.

OPATCHAUTO-72053: Please check the summary for more details.

 

Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/product/11.2.0_64

OPatchauto session completed at Mon Jul 13 06:49:21 2020

Time taken to complete the session 4 minutes, 15 seconds

root@racnode01:#

 

Note: the patch analysis failed because of the conflicting patch.


3. Let's look at the available options for opatchauto apply:

root@racnode01:# opatchauto apply -help

OPatchauto session is initiated at Mon Jul 13 07:18:06 2020

Oracle OPatchAuto Version 13.9.3.0.0

Copyright (c) 2016, Oracle Corporation.  All rights reserved.

 

DESCRIPTION

    This operation applies patch.

    Purpose:

        Apply a System Patch to Oracle Home. If patch location is not

        specified,  current directory will be taken as the patch location.

 

SYNTAX

    opatchauto apply [ <patch-location> ]

                     [ -phBaseDir <patch.base.directory> ]

                     [ -oh <home> ] [ -log <log> ]

                     [ -logLevel <log_priority> ] [ -binary ]

                     [ -analyze ]

                     [ -invPtrLoc <inventory.pointer.location> ]

                     [ -host <host> ] [ -wallet <wallet> ]

                     [ -force_conflict ] [ -skip_conflict ]

                     [ -no_relink ] [ -jre <jre> ]

                     [ -remote-image-location <remote.image.location> ]

                     [ -port <port> ] [ -inplace ] [ -sidb ] [ -sdb ]

                     [ -outofplace ] [ -rolling ]

                     [ -database <database> ] [ -silent <silent.key> ]

                     [ -generatesteps ] [ -prepare-clone ]

                     [ -norestart ] [ -ocmrf <ocmrf> ] [ -remote ]

                     [ -switch-clone ] [ -nonrolling ] [ -sid <sid> ]

-force_conflict

        If a conflict exist which prevents the patch from being applied, this flag can be used to force application of the patch.

        All the conflicting patches will be removed before applying the current patch.
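
Because -force_conflict removes the conflicting patch before applying the new one, it is worth confirming exactly what will be removed; a minimal pre-check:

# The conflicting interim patch reported by the analysis
$ORACLE_HOME/OPatch/opatch lspatches | grep 27912127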

 

4. Apply the conflicting patch to the grid home with the -force_conflict option.

Patch apply on node 1:

root@racnode01:# opatchauto apply /tmp/28553832 -force_conflict -oh /u01/app/18.3.0.0/grid

 

OPatchauto session is initiated at Mon Jul 13 10:37:47 2020

System initialization log file is /u01/app/18.3.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-07-13_10-37-48AM.log.

 

Session log file is /u01/app/18.3.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-07-13_10-39-53AM.log

The id for this session is NNUI

 

Executing OPatch prereq operations to verify patch applicability on home /u01/app/18.3.0.0/grid

Patch applicability verified successfully on home /u01/app/18.3.0.0/grid

 

Bringing down CRS service on home /u01/app/18.3.0.0/grid

CRS service brought down successfully on home /u01/app/18.3.0.0/grid

 

Start applying binary patch on home /u01/app/18.3.0.0/grid

Binary patch applied successfully on home /u01/app/18.3.0.0/grid

 

Starting CRS service on home /u01/app/18.3.0.0/grid

 CRS service started successfully on home /u01/app/18.3.0.0/grid

 

OPatchAuto successful.

---------------Summary---------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode01

CRS Home:/u01/app/18.3.0.0/grid

Version:18.0.0.0.0

Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /tmp/28553832/28553832

Log: /u01/app/18.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-07-13_10-41-19AM_1.log

 

OPatchauto session completed at Mon Jul 13 10:49:21 2020

Time taken to complete the session 11 minutes, 34 seconds

 

Verify the patch details:

[grid@racnode01 19c_Software]$ opatch lspatches

28553832;OCW Interim patch for 28553832

28790643;Database Release Update Revision : 18.3.2.0.190115 (28790643)

27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171

27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

28256701;TOMCAT RELEASE UPDATE 18.3.0.0.0 (28256701)

28090564;DBWLM RELEASE UPDATE 18.3.0.0.0 (28090564)

28090557;ACFS RELEASE UPDATE 18.3.0.0.0 (28090557)

OPatch succeeded.

 

Patch apply on node 2:

root@racnode02:# opatchauto apply /u01/app/18c_Software/28553832 -force_conflict -oh /u01/app/18.3.0.0/grid

 

OPatchauto session is initiated at Mon Jul 13 10:57:27 2020

System initialization log file is /u01/app/18.3.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-07-13_10-57-28AM.log.

 

Session log file is /u01/app/18.3.0.0/grid/cfgtoollogs/opatchauto/opatchauto2020-07-13_10-59-35AM.log

The id for this session is KEK1

 

Executing OPatch prereq operations to verify patch applicability on home /u01/app/18.3.0.0/grid

Patch applicability verified successfully on home /u01/app/18.3.0.0/grid

 

Bringing down CRS service on home /u01/app/18.3.0.0/grid

CRS service brought down successfully on home /u01/app/18.3.0.0/grid

 

Start applying binary patch on home /u01/app/18.3.0.0/grid

Binary patch applied successfully on home /u01/app/18.3.0.0/grid

 

Starting CRS service on home /u01/app/18.3.0.0/grid


CRS service started successfully on home /u01/app/18.3.0.0/grid

OPatchAuto successful.

-------------------------Summary----------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode02

CRS Home:/u01/app/18.3.0.0/grid

Version:18.0.0.0.0

Summary:

==Following patches were SUCCESSFULLY applied

Patch: /u01/app/18c_Software/28553832/28553832

Log: /u01/app/18.3.0.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-07-13_11-01-39AM_1.log

OPatchauto session completed at Mon Jul 13 11:12:06 2020

Time taken to complete the session 14 minutes, 40 seconds

root@racnode02:# 

 

Verify the patch details:

[grid@racnode02 18c_Software]$ opatch lspatches

28553832;OCW Interim patch for 28553832

28790643;Database Release Update Revision : 18.3.2.0.190115 (28790643)

27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171

27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

28256701;TOMCAT RELEASE UPDATE 18.3.0.0.0 (28256701)

28090564;DBWLM RELEASE UPDATE 18.3.0.0.0 (28090564)

28090557;ACFS RELEASE UPDATE 18.3.0.0.0 (28090557)

OPatch succeeded.