Sunday, 13 April 2025

Smarter Patching in Oracle Database 23ai: Two-Stage Rolling Updates and Local Rolling Maintenance

Downtime is the eternal nemesis of enterprise systems, especially those powering critical workloads. With the release of Oracle Database 23ai, Oracle has introduced a set of intelligent patching and maintenance features that drastically reduce downtime and improve database availability during upgrades and patching.

Oracle RAC Two-Stage Rolling Updates
Starting with Oracle Database 23ai, the Oracle RAC two-stage rolling patches feature enables you to apply previously non-rolling patches in a rolling fashion.

Oracle RAC two-stage rolling patches are a new type of patch that you apply in a rolling fashion, in stages: once the patch is applied on the first node, the second node is patched, and so on. Fixes applied through this feature are disabled by default; when all the nodes are patched, you enable them.

You can view the patches applied via this method using:
SELECT * FROM V$RAC_TWO_STAGE_ROLLING_UPDATES;

Local Rolling Database Maintenance
Another standout feature in 23ai is Local Rolling Database Maintenance—an enhancement designed to keep node-level downtime invisible to users during rolling patching. 

What It Does
During a rolling patch, Oracle can now start a second instance on the same node and relocate sessions to it, reducing the patching impact on connected applications.
This technique:
    Enables session failover on the same node, minimizing CPU and network overhead
    Reduces or eliminates application interruptions during patching
    Works great when paired with (Transparent) Application Continuity
Requirements
    The node must have enough CPU and memory resources to run two instances simultaneously.
    DBAs need to manage new ORACLE_HOME paths and instance configurations.
To prepare and perform local rolling maintenance:
srvctl modify database -d <dbname> -o $NEW_HOME --localrolling
srvctl transfer instance -d <dbname>
This makes it easier to patch a single node while keeping user sessions online and preventing workload relocation to other nodes in the cluster.

Thursday, 10 April 2025

How to Resolve "CheckActiveFilesAndExecutables" Failure in Oracle OPatch

When applying or rolling back a patch using Oracle OPatch, you might encounter the following error:
Prerequisite check "CheckActiveFilesAndExecutables" failed.
OPatch failed with error code 73
This typically happens when some files or libraries in the Oracle Home directory are currently being used by running processes.

Here is the opatch rollback attempt:
[oracle@prdracdb01 ~]$ opatch rollback -id 35739076
Oracle Interim Patch Installer version 12.2.0.1.45
Copyright (c) 2025, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/oracle/product/19.0.0.0/dbhome_1
Central Inventory : /app/oraInventory
   from           : /u01/app/oracle/product/19.0.0.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.45
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19.0.0.0/dbhome_1/cfgtoollogs/opatch/opatch2025-04-10_10-47-49AM_1.log
Patches will be rolled back in the following order:
   35739076
Prerequisite check "CheckActiveFilesAndExecutables" failed.
The details are:
Following active files/executables/libs are used by ORACLE_HOME :/u01/app/oracle/product/19.0.0.0/dbhome_1
/u01/app/oracle/product/19.0.0.0/dbhome_1/lib/libclntsh.so.19.1

UtilSession failed: Prerequisite check "CheckActiveFilesAndExecutables" failed.
Log file location: /u01/app/oracle/product/19.0.0.0/dbhome_1/cfgtoollogs/opatch/opatch2025-04-10_10-47-49AM_1.log

OPatch failed with error code 73

[oracle@prdracdb01 ~]$

The rollback attempt failed with: UtilSession failed: Prerequisite check "CheckActiveFilesAndExecutables" failed.

Analyzing the OPatch Log:
[oracle@prdracdb01 ~]$ cat /u01/app/oracle/product/19.0.0.0/dbhome_1/cfgtoollogs/opatch/opatch2025-04-10_10-47-49AM_1.log
[Apr 10, 2025 10:47:49 AM] [INFO]   CAS Dynamic Loading :
[Apr 10, 2025 10:47:49 AM] [INFO]   CUP_LOG: Trying to load HomeOperations object
[Apr 10, 2025 10:47:49 AM] [INFO]   CUP_LOG: HomeOperations object created. CUP1.0 is enabled
[Apr 10, 2025 10:47:49 AM] [INFO]   OPatch invoked as follows: 'rollback -id 35739076 -invPtrLoc /u01/app/oracle/product/19.0.0.0/dbhome_1/oraInst.loc '
[Apr 10, 2025 10:47:49 AM] [INFO]   Runtime args: [-Xverify:none, -Xmx3072m, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/u01/app/oracle/product/19.0.0.0/dbhome_1/cfgtoollogs/opatch, -DCommonLog.LOG_SESSION_ID=, -DCommonLog.COMMAND_NAME=rollback, -DOPatch.ORACLE_HOME=/u01/app/oracle/product/19.0.0.0/dbhome_1, -DOPatch.DEBUG=false, -DOPatch.MAKE=false, -DOPatch.RUNNING_DIR=/u01/app/oracle/product/19.0.0.0/dbhome_1/OPatch, -DOPatch.MW_HOME=, -DOPatch.WL_HOME=, -DOPatch.COMMON_COMPONENTS_HOME=, -DOPatch.OUI_LOCATION=/u01/app/oracle/product/19.0.0.0/dbhome_1/oui, -DOPatch.FMW_COMPONENT_HOME=, -DOPatch.OPATCH_CLASSPATH=, -DOPatch.WEBLOGIC_CLASSPATH=, -DOPatch.SKIP_OUI_VERSION_CHECK=, -DOPatch.NEXTGEN_HOME_CHECK=false, -DOPatch.PARALLEL_ON_FMW_OH=]
[Apr 10, 2025 10:47:49 AM] [INFO]   Heap in use : 112 MB
                                    Total memory: 1963 MB
                                    Free memory : 1850 MB
                                    Max memory  : 2731 MB
[Apr 10, 2025 10:47:49 AM] [INFO]   Oracle Home       : /u01/app/oracle/product/19.0.0.0/dbhome_1
                                    Central Inventory : /app/oraInventory
                                       from           : /u01/app/oracle/product/19.0.0.0/dbhome_1/oraInst.loc
                                    OPatch version    : 12.2.0.1.45
                                    OUI version       : 12.2.0.7.0
                                    OUI location      : /u01/app/oracle/product/19.0.0.0/dbhome_1/oui
                                    Log file location : /u01/app/oracle/product/19.0.0.0/dbhome_1/cfgtoollogs/opatch/opatch2025-04-10_10-47-49AM_1.log
[Apr 10, 2025 10:47:49 AM] [INFO]   Patch history file: /u01/app/oracle/product/19.0.0.0/dbhome_1/cfgtoollogs/opatch/opatch_history.txt
[Apr 10, 2025 10:47:51 AM] [INFO]   [OPSR-TIME] Loading raw inventory
[Apr 10, 2025 10:47:51 AM] [INFO]   [OPSR-MEMORY] Loaded all components from inventory. Heap memory in use: 153 (MB)
[Apr 10, 2025 10:47:51 AM] [INFO]   [OPSR-MEMORY] Loaded all one offs from inventory. Heap memory in use: 174 (MB)
[Apr 10, 2025 10:47:51 AM] [INFO]   [OPSR-TIME] Raw inventory loaded successfully
[Apr 10, 2025 10:47:51 AM] [INFO]   NRollback::no CAS enabled, OPatch runs with legacy process.
[Apr 10, 2025 10:47:51 AM] [INFO]   opatch-external.jar is in /u01/app/oracle/product/19.0.0.0/dbhome_1/OPatch/jlib/opatch-external.jar
[Apr 10, 2025 10:47:53 AM] [INFO]   [OPSR-TIME] Loading cooked inventory
[Apr 10, 2025 10:47:53 AM] [INFO]   [OPSR-MEMORY] : Loading cooked one offs. Heap memory used 215 (MB)
[Apr 10, 2025 10:47:55 AM] [INFO]   [OPSR-MEMORY] : Loaded cooked oneoffs. Heap memory used : 253 (MB)
[Apr 10, 2025 10:47:55 AM] [INFO]   [OPSR-TIME] Cooked inventory loaded successfully
[Apr 10, 2025 10:48:00 AM] [INFO]   [OPSR-TIME] buildFilesConflict begins
[Apr 10, 2025 10:48:00 AM] [INFO]   [OPSR-TIME] checkFileVersionConflict begins
[Apr 10, 2025 10:48:00 AM] [INFO]   Alias feature is enable?false
[Apr 10, 2025 10:48:00 AM] [INFO]   [OPSR-TIME] checkFileVersionConflict begins
[Apr 10, 2025 10:48:00 AM] [INFO]   [OPSR-TIME] buildFilesConflict ends
[Apr 10, 2025 10:48:00 AM] [INFO]   Subset Patch 29517242 remain inactive due to active superset patch 35643107
[Apr 10, 2025 10:48:00 AM] [INFO]   Subset Patch 29585399 remain inactive due to active superset patch 35655527
[Apr 10, 2025 10:48:00 AM] [INFO]   OPatchSessionHelper::sortOnOverlay() Sorting is not needed
[Apr 10, 2025 10:48:02 AM] [INFO]   Patches will be rolled back in the following order:
                                       35739076
[Apr 10, 2025 10:48:02 AM] [INFO]   Running prerequisite checks...
[Apr 10, 2025 10:48:02 AM] [INFO]   Start fuser command /sbin/fuser /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/oracle at Thu Apr 10 10:48:02 PDT 2025
[Apr 10, 2025 10:48:02 AM] [INFO]   Finish fuser command /sbin/fuser /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/oracle at Thu Apr 10 10:48:02 PDT 2025
[Apr 10, 2025 10:48:02 AM] [INFO]   SKIP_FUSER_WARNINGS is set to true (flag was set in opatch.properties)
[Apr 10, 2025 10:48:02 AM] [INFO]   Start fuser command /sbin/fuser /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/extjob at Thu Apr 10 10:48:02 PDT 2025
[Apr 10, 2025 10:48:02 AM] [INFO]   Finish fuser command /sbin/fuser /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/extjob at Thu Apr 10 10:48:02 PDT 2025
[Apr 10, 2025 10:48:02 AM] [INFO]   SKIP_FUSER_WARNINGS is set to true (flag was set in opatch.properties)
[Apr 10, 2025 10:48:02 AM] [INFO]   Start fuser command /sbin/fuser /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/extjobo at Thu Apr 10 10:48:02 PDT 2025
[Apr 10, 2025 10:48:02 AM] [INFO]   Finish fuser command /sbin/fuser /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/extjobo at Thu Apr 10 10:48:02 PDT 2025
[Apr 10, 2025 10:48:02 AM] [INFO]   SKIP_FUSER_WARNINGS is set to true (flag was set in opatch.properties)
[Apr 10, 2025 10:48:02 AM] [INFO]   Start fuser command /sbin/fuser /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/setasmgid at Thu Apr 10 10:48:02 PDT 2025
[Apr 10, 2025 10:48:03 AM] [INFO]   Finish fuser command /sbin/fuser /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/setasmgid at Thu Apr 10 10:48:03 PDT 2025
[Apr 10, 2025 10:48:03 AM] [INFO]   SKIP_FUSER_WARNINGS is set to true (flag was set in opatch.properties)
[Apr 10, 2025 10:48:03 AM] [INFO]   Start fuser command /sbin/fuser /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/kfod at Thu Apr 10 10:48:03 PDT 2025
[Apr 10, 2025 10:48:03 AM] [INFO]   Finish fuser command /sbin/fuser /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/kfod at Thu Apr 10 10:48:03 PDT 2025
[Apr 10, 2025 10:48:03 AM] [INFO]   SKIP_FUSER_WARNINGS is set to true (flag was set in opatch.properties)
[Apr 10, 2025 10:48:03 AM] [INFO]   Start fuser command /sbin/fuser /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/renamedg at Thu Apr 10 10:48:03 PDT 2025
[Apr 10, 2025 10:48:03 AM] [INFO]   Finish fuser command /sbin/fuser /u01/app/oracle/product/19.0.0.0/dbhome_1/bin/renamedg at Thu Apr 10 10:48:03 PDT 2025
[Apr 10, 2025 10:48:03 AM] [INFO]   SKIP_FUSER_WARNINGS is set to true (flag was set in opatch.properties)
[Apr 10, 2025 10:48:03 AM] [INFO]   Start fuser command /sbin/fuser /u01/app/oracle/product/19.0.0.0/dbhome_1/lib/libclntsh.so.19.1 at Thu Apr 10 10:48:03 PDT 2025
[Apr 10, 2025 10:48:03 AM] [INFO]   Finish fuser command /sbin/fuser /u01/app/oracle/product/19.0.0.0/dbhome_1/lib/libclntsh.so.19.1 at Thu Apr 10 10:48:03 PDT 2025
[Apr 10, 2025 10:48:03 AM] [INFO]   SKIP_FUSER_WARNINGS is set to true (flag was set in opatch.properties)
[Apr 10, 2025 10:48:03 AM] [INFO]   Files in use by a process: /u01/app/oracle/product/19.0.0.0/dbhome_1/lib/libclntsh.so.19.1 PID( 42574 92627 )
[Apr 10, 2025 10:48:03 AM] [INFO]   Printing more details of active processes:
[Apr 10, 2025 10:48:03 AM] [INFO]   START PARENT PROCESS DETAILS
                                    PID COMMAND
                                    83924 bash
                                    END PARENT PROCESS DETAILS
[Apr 10, 2025 10:48:03 AM] [INFO]   START CHILD PROCESS DETAILS FOR PARENT PROCESS: 83924
                                    PID COMMAND
                                    92627 rman
                                    END CHILD PROCESS DETAILS FOR PARENT PROCESS: 83924
[Apr 10, 2025 10:48:03 AM] [INFO]   START PARENT PROCESS DETAILS
                                    PID COMMAND
                                    42548 Standby_sync.sh
                                    END PARENT PROCESS DETAILS
[Apr 10, 2025 10:48:03 AM] [INFO]   START CHILD PROCESS DETAILS FOR PARENT PROCESS: 42548
                                    PID COMMAND
                                    42574 python
                                    END CHILD PROCESS DETAILS FOR PARENT PROCESS: 42548
[Apr 10, 2025 10:48:03 AM] [INFO]   Following active files/executables/libs are used by ORACLE_HOME :/u01/app/oracle/product/19.0.0.0/dbhome_1
                                    /u01/app/oracle/product/19.0.0.0/dbhome_1/lib/libclntsh.so.19.1
[Apr 10, 2025 10:48:03 AM] [INFO]   Prerequisite check "CheckActiveFilesAndExecutables" failed.
                                    The details are:                      
                                    
                                    Following active files/executables/libs are used by ORACLE_HOME :/u01/app/oracle/product/19.0.0.0/dbhome_1
                                    /u01/app/oracle/product/19.0.0.0/dbhome_1/lib/libclntsh.so.19.1
[Apr 10, 2025 10:48:03 AM] [SEVERE] OUI-67073:UtilSession failed: Prerequisite check "CheckActiveFilesAndExecutables" failed.
[Apr 10, 2025 10:48:03 AM] [INFO]   Finishing UtilSession at Thu Apr 10 10:48:03 PDT 2025
[Apr 10, 2025 10:48:03 AM] [INFO]   Log file location: /u01/app/oracle/product/19.0.0.0/dbhome_1/cfgtoollogs/opatch/opatch2025-04-10_10-47-49AM_1.log
[Apr 10, 2025 10:48:03 AM] [INFO]   Stack Description: java.lang.RuntimeException: Prerequisite check "CheckActiveFilesAndExecutables" failed.
                                        at oracle.opatch.OPatchSessionHelper.runRollbackPrereqs(OPatchSessionHelper.java:5253)
                                        at oracle.opatch.opatchutil.NRollback.legacy_process(NRollback.java:762)
                                        at oracle.opatch.opatchutil.NRollback.process(NRollback.java:217)
                                        at oracle.opatch.opatchutil.OUSession.nrollback(OUSession.java:1154)
                                        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                                        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                                        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                                        at java.lang.reflect.Method.invoke(Method.java:498)
                                        at oracle.opatch.UtilSession.process(UtilSession.java:355)
                                        at oracle.opatch.OPatchSession.process(OPatchSession.java:2640)
                                        at oracle.opatch.OPatch.process(OPatch.java:888)
                                        at oracle.opatch.OPatch.main(OPatch.java:945)
                                    Caused by: oracle.opatch.PrereqFailedException: Prerequisite check "CheckActiveFilesAndExecutables" failed.
                                        ... 12 more
[oracle@prdracdb01 ~]$

The log revealed that the rollback failed because the shared library libclntsh.so.19.1 was in use by running processes.
The log details also confirmed active processes holding the file:
Files in use by a process:
/u01/app/oracle/product/19.0.0.0/dbhome_1/lib/libclntsh.so.19.1 PID( 42574 92627 )
...
PID COMMAND
92627 rman
42574 python

Identify and Kill the Processes:
Check which processes were using libclntsh.so.19.1:
[oracle@prdracdb01 bin]$ lsof | grep libclntsh.so.19.1
lsof: WARNING: can't stat() tracefs file system /sys/kernel/debug/tracing
      Output information may be incomplete.
lsof: WARNING: can't stat() bpf file system /opt/sentinelone/ebpfs/bpf_mount
      Output information may be incomplete.
python    42574               oracle  mem       REG              252,0  82204624          25219363 /u01/app/oracle/product/19.0.0.0/dbhome_1/lib/libclntsh.so.19.1
rman      92627               oracle  mem       REG              252,0  82204624          25219363 /u01/app/oracle/product/19.0.0.0/dbhome_1/lib/libclntsh.so.19.1

Kill both sessions:
[oracle@prdracdb01 bin]$ kill -9 42574
[oracle@prdracdb01 bin]$ lsof | grep libclntsh.so.19.1
lsof: WARNING: can't stat() tracefs file system /sys/kernel/debug/tracing
      Output information may be incomplete.
lsof: WARNING: can't stat() bpf file system /opt/sentinelone/ebpfs/bpf_mount
      Output information may be incomplete.
rman      92627               oracle  mem       REG              252,0  82204624          25219363 /u01/app/oracle/product/19.0.0.0/dbhome_1/lib/libclntsh.so.19.1
[oracle@prdracdb01 bin]$ ps -ef | grep rman
oracle   11618 88116  0 11:02 pts/4    00:00:00 grep --color=auto rman
oracle   92627 83924  0 Jan06 pts/1    00:00:05 rman
[oracle@prdracdb01 bin]$ kill -9 92627

Retried the opatch rollback, and it completed successfully:
[oracle@prdracdb01 OPatch]$ opatch rollback -id 35739076
Oracle Interim Patch Installer version 12.2.0.1.45
Copyright (c) 2025, Oracle Corporation.  All rights reserved.
Oracle Home       : /u01/app/oracle/product/19.0.0.0/dbhome_1
Central Inventory : /app/oraInventory
   from           : /u01/app/oracle/product/19.0.0.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.45
OUI version       : 12.2.0.7.0
Log file location : /u01/app/oracle/product/19.0.0.0/dbhome_1/cfgtoollogs/opatch/opatch2025-04-10_11-03-11AM_1.log
Patches will be rolled back in the following order:
   35739076
The following patch(es) will be rolled back: 35739076

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/product/19.0.0.0/dbhome_1')

Is the local system ready for patching? [y|n]
y
User Responded with: Y
Rolling back patch 35739076...
RollbackSession rolling back interim patch '35739076' from OH '/u01/app/oracle/product/19.0.0.0/dbhome_1'
Patching component oracle.rdbms, 19.0.0.0.0...
Patching component oracle.rdbms.rsf, 19.0.0.0.0...
RollbackSession removing interim patch '35739076' from inventory
Log file location: /u01/app/oracle/product/19.0.0.0/dbhome_1/cfgtoollogs/opatch/opatch2025-04-10_11-03-11AM_1.log
OPatch succeeded.
[oracle@prdracdb01 OPatch]$


Sunday, 6 April 2025

Log Generation Rate in Azure SQL Database Hyperscale Pool

What is Log Generation Rate?
Log generation rate refers to the speed at which transaction logs are produced in a database. In Hyperscale Pool, log generation is closely monitored and regulated to prevent overloading the system. Azure implements log rate governance to ensure that log generation stays within defined limits, keeping the system stable and performing efficiently.
Log Rate Governance in Hyperscale
By default, Hyperscale databases have a log generation limit of 105 MB/s, irrespective of the compute size. When no component is lagging, log generation can run at this full rate. This cap is designed to ensure that logs are consistently processed and replicated without overwhelming system resources.
However, there may be situations where Azure needs to temporarily reduce the log generation rate. This happens when a secondary replica or page server falls behind in applying the transaction logs. The system will then throttle the log generation rate to allow the lagging components to catch up, ensuring the overall stability of the database.
When Does Log Generation Rate Get Reduced?
Log generation rate may be reduced for several reasons:
    Delayed log consumption by a page server or replica.
    A geo-secondary replica might be lagging in applying logs.
    Slow database checkpointing could delay log processing on the page server.
    Migration or reverse migration from Hyperscale to a non-Hyperscale database can also cause temporary delays in log consumption.
Monitoring Log Generation Rate with sys.dm_hs_database_log_rate
Azure provides the sys.dm_hs_database_log_rate dynamic management function (DMF) to monitor and troubleshoot log generation rates in Hyperscale. This function returns detailed information on which components are limiting the log generation rate, including:
    Current log rate limit
    Catch-up rate of components (bytes per second)
    Component-specific delays and logs that are behind
Key Columns in the DMF:
    current_max_log_rate_bps: Maximum log rate limit in bytes per second.
    catchup_rate_bps: The rate, in bytes per second, at which lagging components are catching up.
    catchup_distance_bytes: The amount of log data that must be processed to catch up.
    role_desc: Describes the role of the component affecting log rate, such as a page server, replica, or geo-replica.
This tool helps you quickly identify any components causing delays and allows you to take corrective actions if needed.
How to Check Log Generation Rate in Your Database
To check the log generation rate for a specific database, use the following query:
SELECT current_max_log_rate_bps, role_desc, catchup_rate_bps, catchup_distance_bytes
FROM sys.dm_hs_database_log_rate(DB_ID(N'YourDatabaseName'));

For databases within an elastic pool, you can use NULL to get results for all databases in the pool:
SELECT current_max_log_rate_bps, role_desc, catchup_rate_bps, catchup_distance_bytes
FROM sys.dm_hs_database_log_rate(NULL);

Wait types appear in sys.dm_os_wait_stats when the log rate is reduced:
Wait type                     Reason
RBIO_RG_STORAGE               Delayed log consumption by a page server
RBIO_RG_DESTAGE               Delayed log consumption by the long-term log storage
RBIO_RG_REPLICA               Delayed log consumption by an HA secondary replica or a named replica
RBIO_RG_GEOREPLICA            Delayed log consumption by a geo-secondary replica
RBIO_RG_LOCALDESTAGE          Delayed log consumption by the log service
RBIO_RG_STORAGE_CHECKPOINT    Delayed log consumption by a page server due to a slow database checkpoint
RBIO_RG_MIGRATION_TARGET      Delayed log consumption by the non-Hyperscale database during reverse migration
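To see whether any of these waits are actually accumulating on your database, you can query sys.dm_os_wait_stats directly. A minimal sketch (the RBIO_RG% filter simply matches the wait types listed above):
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'RBIO_RG%'
ORDER BY wait_time_ms DESC;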

Sunday, 30 March 2025

VECTOR_DISTANCE

What is VECTOR_DISTANCE?
The VECTOR_DISTANCE function calculates the distance between two vectors (represented as expr1 and expr2). Depending on the context, the vectors can represent various types of data, such as images, text, or numbers.
Key Points:
    Purpose: Calculates the distance between two vectors.
    Optional Metric: You can specify a distance metric. If you do not specify one:
        The default metric is Cosine Distance for general vectors.
        For binary vectors, the default is Hamming Distance.
Shorthand Functions for Common Distance Metrics
To make it easier to calculate distances, the VECTOR_DISTANCE function comes with shorthand functions for common distance metrics. These are equivalent to the more detailed functions, providing a more compact way to express vector distance calculations.
Here are the shorthand functions:
    L1_DISTANCE: Manhattan (L1) distance.
    L2_DISTANCE: Euclidean (L2) distance.
    COSINE_DISTANCE: Cosine distance (1 minus cosine similarity).
    INNER_PRODUCT: Dot product of two vectors (a similarity measure; its negation behaves as a distance).
    HAMMING_DISTANCE: Hamming distance for binary vectors.
    JACCARD_DISTANCE: Jaccard distance for binary vectors.

Distance Metrics Available:
    COSINE: Measures the cosine of the angle between two vectors, useful for high-dimensional data like text.
    DOT: Calculates the negated dot product of two vectors, useful for measuring similarity.
    EUCLIDEAN: Measures the straight-line (L2) distance between two vectors, commonly used in spatial data.
    EUCLIDEAN_SQUARED: Euclidean distance without taking the square root, often used in optimization tasks.
    HAMMING: Counts the number of differing dimensions between two binary vectors, typically used in error correction.
    MANHATTAN: Also known as L1 distance, calculates the sum of absolute differences between vector components, useful for grid-based problems.
    JACCARD: Measures dissimilarity between binary vectors based on the ratio of the intersection to the union of the vectors.
    
Shorthand Operators for Distance Metrics:
Instead of specifying the distance metric explicitly, you can use shorthand operators for quicker calculations. These are especially handy when writing queries or performing similarity searches:
    <->: Equivalent to L2_DISTANCE (Euclidean distance).
    <=>: Equivalent to COSINE_DISTANCE (Cosine similarity).
    <#>: Equivalent to -1 * INNER_PRODUCT (Negative dot product).
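A quick sketch showing the equivalent spellings (assumes an Oracle 23ai database; the literals are arbitrary sample vectors built with TO_VECTOR):
SELECT VECTOR_DISTANCE(TO_VECTOR('[1, 2, 3]'), TO_VECTOR('[4, 5, 6]'), EUCLIDEAN) AS dist_metric,
       L2_DISTANCE(TO_VECTOR('[1, 2, 3]'), TO_VECTOR('[4, 5, 6]')) AS dist_shorthand,
       TO_VECTOR('[1, 2, 3]') <-> TO_VECTOR('[4, 5, 6]') AS dist_operator
FROM dual;
All three expressions should return the same Euclidean distance.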

Tuesday, 25 March 2025

Connecting to a Schema in an Oracle 23ai PDB

In non-CDBs, we can connect directly to a schema using username/password. However, in PDBs, we must use a service name alias to connect to the database.
1. Connect to PDB
[oracle@poclab ~]$ sql
SQL*Plus: Release 23.0.0.0.0 - Production on Wed Mar 26 03:53:56 2025
Version 23.7.0.25.01
Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
SQL> alter session set container=freepdb1;
SQL> show con_name;
CON_NAME
------------------------------
FREEPDB1

2. Connecting to a User or Schema with password
SQL> conn aivector/aivector
ERROR:
ORA-01017: invalid credential or not authorized; logon denied

This error occurs because, unlike non-CDBs, PDBs require you to use a service name alias to specify the pluggable database in the connection string.
3. Correct Connection to the PDB Using Service Name Alias
SQL> conn aivector/aivector@//localhost:1521/freepdb1
SQL> show user
USER is "AIVECTOR"
 

Identifying Your Container: CDB or PDB in Oracle 23ai

In Oracle databases, particularly when working with Multitenant Architecture, it's essential to understand the distinction between the Container Database (CDB) and Pluggable Databases (PDBs). These are the core components that make up the Multitenant model, which is one of the highlights of modern Oracle database systems. But sometimes, it can be tricky to track whether you're working in a CDB or a PDB. Let's break it down based on a real-world session in Oracle Database 23ai.
Understanding CDB and PDB
    CDB (Container Database): CDB is the primary container that holds the system metadata and the necessary infrastructure for managing multiple PDBs. It has one root container (CDB$ROOT) and potentially many PDBs.
    PDB (Pluggable Database): A PDB is a self-contained, portable database that runs inside a CDB. Each PDB can have its own data, schemas, and users, but shares the same infrastructure and system resources as the CDB.

Let's take a look at an example session in Oracle 23ai. This will help us understand how we can identify where we are, whether in the CDB$ROOT or a PDB.
Step 1: Connecting to the CDB
Upon first logging into Oracle, you typically connect to the CDB as shown below:
[oracle@poclab ~]$ sql
SQL*Plus: Release 23.0.0.0.0 - Production on Wed Mar 26 03:04:12 2025
Version 23.7.0.25.01
Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.7.0.25.01
Once logged in, you can check the current instance by querying v$instance:
SQL> select instance_name, version, status, con_id from v$instance;

INSTANCE_NAME    VERSION           STATUS           CON_ID
---------------- ----------------- ------------ ----------
FREE             23.0.0.0.0        OPEN            0
CON_ID = 0 in V$INSTANCE means the data pertains to the whole CDB rather than to a specific container, so it does not tell us which container we are in.

Now, let’s confirm the current container:
SQL> show con_id
CON_ID
------------------------------
1
Here, CON_ID = 1 corresponds to the root container, CDB$ROOT.
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT

Step 2: Switching to a PDB
To move from the CDB to a specific PDB, you can connect to the PDB directly. In this example, let's connect to FREEPDB1:
SQL> conn sys/pwd@//localhost:1521/freepdb1 as sysdba
Connected.
Now, let's check the instance information for FREEPDB1:
SQL> select instance_name, version, status, con_id from v$instance;
INSTANCE_NAME    VERSION           STATUS           CON_ID
---------------- ----------------- ------------ ----------
FREE             23.0.0.0.0        OPEN            0
Again, CON_ID = 0 only indicates that V$INSTANCE reports CDB-wide data; it does not confirm which container we are in.
Confirm the current container name:
SQL> show con_id
CON_ID
------------------------------
3
Here, CON_ID = 3 refers to the FREEPDB1 pluggable database:
SQL> show con_name
CON_NAME
------------------------------
FREEPDB1

Step 3: Switching Back to the CDB
Once inside the PDB, you might want to switch back to the CDB$ROOT container. You can do this by using the alter session command:
SQL> alter session set container=CDB$ROOT;
Session altered.
Now, let's check the container ID and name:
SQL> show con_id
CON_ID
------------------------------
1
And the container name confirms you're back in the root container:
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
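From CDB$ROOT you can also list every container in one shot. A quick check (requires a privileged connection; V$PDBS shows all pluggable databases in the CDB):
SQL> select con_id, name, open_mode from v$pdbs;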

Monday, 24 March 2025

Common Blocking Scenarios in Azure SQL Database: Causes & Resolutions

Blocking happens when one session (SPID) holds a lock on a resource, preventing another session from accessing it. Unlike deadlocks, where two or more processes are stuck indefinitely, blocking can eventually resolve—but it can still lead to performance bottlenecks.
Common Blocking Scenarios & Their Resolutions
Scenario   Wait Type   Open Transactions   Status     Resolves?
1          NOT NULL    ≥ 0                 Runnable   ✅ Yes, when the query finishes.
2          NULL        > 0                 Sleeping   ❌ No, but the SPID can be killed.
3          NULL        ≥ 0                 Runnable   ❌ No; resolves only when the client fetches all rows or closes the connection. Killing the SPID may take up to 30 seconds.
4          Varies      ≥ 0                 Runnable   ❌ No; resolves only when the client cancels queries or closes connections. Killing the SPIDs may take up to 30 seconds.
5          NULL        > 0                 Rollback   ✅ Yes.
6          NULL        > 0                 Sleeping   ⏳ Eventually; when Windows NT detects inactivity, the connection breaks.

How to Identify Blocking in Azure SQL Database
1. Identify Blocked and Blocking Sessions
SELECT blocking_session_id, session_id, wait_type, wait_time, wait_resource  
FROM sys.dm_exec_requests  
WHERE blocking_session_id <> 0;
2. Check Open Transactions
SELECT session_id, open_transaction_count, status  
FROM sys.dm_exec_sessions  
WHERE open_transaction_count > 0;
3. Analyze Query Execution Details
SELECT r.session_id, s.host_name, s.program_name, r.command, r.wait_type, r.wait_time  
FROM sys.dm_exec_requests r  
JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id  
WHERE r.blocking_session_id <> 0;
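To find the head blocker, that is, a session that blocks others but is not itself blocked, here is a small sketch built on the same DMVs:
SELECT s.session_id, s.status, s.host_name, s.program_name, s.open_transaction_count
FROM sys.dm_exec_sessions s
WHERE s.session_id IN (SELECT blocking_session_id FROM sys.dm_exec_requests WHERE blocking_session_id <> 0)
  AND s.session_id NOT IN (SELECT session_id FROM sys.dm_exec_requests WHERE blocking_session_id <> 0);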

How to Resolve Blocking in Azure SQL Database
✅ Scenarios that resolve automatically
    Scenario 1: Query completes, releasing locks.
    Scenario 5: Rollback operation finishes.
    Scenario 6: Windows NT eventually disconnects the session.

❌ Scenarios requiring manual intervention
If blocking does not resolve, consider the following approaches:
1. Kill the Blocking SPID
If a transaction is stuck, you can terminate it:
KILL <session_id>;
Use this cautiously, as it may cause rollbacks.
2. Optimize Long-Running Queries
    Index Optimization: Ensure proper indexing to reduce query execution time.
    Query Tuning: Review execution plans (for example via sys.dm_exec_query_plan) to find and tune slow queries.
    Batch Processing: Process data in smaller batches to prevent long locks.
3. Handle Open Transactions Properly
    Regularly check sys.dm_tran_active_transactions for long-running transactions (see the sketch after this list).
    Ensure all transactions explicitly COMMIT or ROLLBACK when completed.
4. Improve Connection Management
    Ensure clients properly fetch all rows or close connections.
    Avoid unnecessary long-running transactions that hold locks.
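For point 3 above, a quick way to spot long-running transactions is to join sys.dm_tran_active_transactions to sys.dm_tran_session_transactions (a sketch; oldest transactions first):
SELECT s.session_id, t.transaction_id, t.name, t.transaction_begin_time
FROM sys.dm_tran_active_transactions t
JOIN sys.dm_tran_session_transactions s ON t.transaction_id = s.transaction_id
ORDER BY t.transaction_begin_time;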

Saturday, 22 March 2025

Deadlocks with Bitmap Indexes in Oracle

Oracle's Bitmap Index is an efficient indexing method, particularly useful for columns with low cardinality (few distinct values). While it can significantly enhance query performance in read-heavy environments, it presents unique challenges in systems with heavy DML operations. One of the most significant challenges is the risk of deadlocks, due to the nature of how bitmap indexes work.

In this blog, we'll explore the mechanics of bitmap indexes, how they work in Oracle, and why they can cause deadlocks and locking issues when there's heavy DML activity.
What is a Bitmap Index?
In a bitmap index, data is organized as a series of bitmaps (binary representations of 0s and 1s) that represent the presence or absence of a particular value for rows in the indexed column. Each entry in the bitmap index corresponds to a unique value in the indexed column and contains information about which rows in the table have that value.
The structure of a bitmap index involves:

  •     Key Value: The actual value in the indexed column.
  •     Low-Rowid: The starting rowid in the range of rows that this bitmap entry applies to.
  •     High-Rowid: The ending rowid in the range of rows that this bitmap entry applies to.
  •     Bitmap: A string of 0s and 1s, where each bit corresponds to a row in the table (within the specified range). A '1' means the value is present in that row, and a '0' means the value is not.

Deadlocks Due to Bitmap Index Updates
Let’s consider a scenario where DML operations occur and multiple transactions interact with the same bitmap index, causing locking issues.
Scenario 1: Updating a Record in PRODUCT
Let’s assume you have the following data in your PRODUCT table and bitmap index
CREATE TABLE product (
    product_id NUMBER,
    product_name VARCHAR2(100),
    category_id NUMBER
);
INSERT INTO product (product_id, product_name, category_id) VALUES (1001, 'Widget A', 5);
INSERT INTO product (product_id, product_name, category_id) VALUES (2002, 'Widget B', 8);
INSERT INTO product (product_id, product_name, category_id) VALUES (3003, 'Widget C', 5);

Your bitmap index might look like this:
CATEGORY_ID    LOW-ROWID    HIGH-ROWID    BITMAP
5              aaadf1000    aaadf1050    01010101010101
5              aaadf1060    aaadf1100    11010101010101
8              aaadf1200    aaadf1250    10101010101010

In this case, each bitmap entry represents a category (e.g., CATEGORY_ID = 5 or CATEGORY_ID = 8). The LOW-ROWID and HIGH-ROWID represent the range of rows that the bitmap entry applies to. The bitmap string (e.g., 01010101010101) corresponds to the product rows in that range, indicating which rows belong to that category (where "1" means the product belongs to the category, and "0" means it does not).
Let’s now assume you execute the following update:
UPDATE product SET category_id = 8 WHERE product_id = 1001;
This update changes the category of Widget A (product ID 1001) from category 5 to category 8. The bitmap index needs to be updated:
    The bitmap entry for CATEGORY_ID = 5 will remove the "1" at the position where Widget A (row 1001) was found.
    The bitmap entry for CATEGORY_ID = 8 will add a "1" at the position where Widget A (row 1001) is now moved.
At this point, the bitmap index entries for both CATEGORY_ID = 5 and CATEGORY_ID = 8 are locked by your transaction, since both bitmap entries need to be updated.
Scenario 2: A Conflicting Update
Now, assume another transaction tries to execute the following update:
UPDATE product SET category_id = 5 WHERE product_id = 2002;
This transaction is attempting to change Widget B (product ID 2002) from category 8 to category 5. Since Widget B is currently in category 8, the bitmap entry for CATEGORY_ID = 8 needs to be updated to remove the "1" for Widget B (row 2002), and the bitmap entry for CATEGORY_ID = 5 needs to be updated to add a "1" for Widget B (row 2002).
At this point, a deadlock can occur. Here’s why:
    The first transaction has already locked the bitmap entries for both CATEGORY_ID = 5 (to remove the "1" for Widget A) and CATEGORY_ID = 8 (to add the "1" for Widget A).
    The second transaction is attempting to update the same bitmap entries: it wants to remove the "1" from CATEGORY_ID = 8 (for Widget B) and add a "1" to CATEGORY_ID = 5 (for Widget B).
    Since both transactions are trying to update the same bitmap entries simultaneously (in this case, for both category 5 and category 8), they block each other, leading to a deadlock.
This occurs because both transactions are competing to modify the same bitmap index entries that represent overlapping rows in the PRODUCT table.
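A condensed two-session sketch of the pattern. The product_ids 4004 and 5005 are hypothetical extra rows, assumed to fall in a different bitmap chunk than rows 1001 and 3003; exactly which rows collide depends on how the rowid ranges are split:
-- Session 1
UPDATE product SET category_id = 8 WHERE product_id = 1001;  -- locks the bitmap pieces covering row 1001
-- Session 2
UPDATE product SET category_id = 8 WHERE product_id = 4004;  -- different chunk: locks a different pair of bitmap pieces
-- Session 1
UPDATE product SET category_id = 8 WHERE product_id = 5005;  -- 5005 shares a chunk with 4004: waits on Session 2
-- Session 2
UPDATE product SET category_id = 8 WHERE product_id = 3003;  -- 3003 shares a chunk with 1001: waits on Session 1 -> ORA-00060
Because each bitmap piece covers a range of rows, the sessions deadlock even though no two statements touch the same row; with a B-tree index the same four updates would proceed independently.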

Thursday, 13 March 2025

How to Identify MAXDOP Value for Running/Completed Queries

 To find the MAXDOP (Maximum Degree of Parallelism) used by running queries in SQL Server, you can use Dynamic Management Views (DMVs) such as sys.dm_exec_requests and sys.dm_exec_query_profiles. These views provide details about query execution, including parallelism levels.
1. Checking MAXDOP for Running Queries
SELECT  
    r.session_id,  
    r.request_id,  
    r.start_time,  
    r.status,  
    r.cpu_time,  
    r.total_elapsed_time,  
    r.logical_reads,  
    r.writes,  
    r.dop AS MAXDOP,  -- Degree of Parallelism
    st.text AS sql_text  
FROM sys.dm_exec_requests r  
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st  
WHERE r.dop > 1  -- Filtering only parallel queries
ORDER BY r.start_time DESC;

Explanation:
    r.dop: Shows the degree of parallelism (i.e., the number of CPU cores used for execution).
    r.session_id: Identifies the session running the query.
    r.status: Shows the execution status (e.g., running, suspended).
    st.text: Displays the actual SQL query text.
    Note: If dop = 1, the query is running serially without parallelism.
2. Checking MAXDOP for Completed Queries
SELECT  
    qs.execution_count,  
    qs.total_worker_time / qs.execution_count AS avg_worker_time,  
    qs.max_dop,  -- MAXDOP used
    st.text AS sql_text  
FROM sys.dm_exec_query_stats qs  
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st  
ORDER BY qs.total_worker_time DESC;

3. Checking MAXDOP for Running Query Execution Plans
SELECT  
    er.session_id,  
    qp.query_plan  
FROM sys.dm_exec_requests er  
CROSS APPLY sys.dm_exec_query_plan(er.plan_handle) qp  
WHERE er.dop > 1;

Look for Parallelism (Gather Streams) in the query plan XML to confirm parallel execution.

4. Checking MAXDOP Setting at Server and Database Level
At the instance level in SQL Server (sp_configure is a server-wide setting):
EXEC sp_configure 'show advanced options', 1;  
RECONFIGURE;  
EXEC sp_configure 'max degree of parallelism';

To check the database-level MAXDOP setting in Azure SQL Database:
SELECT *  
FROM sys.database_scoped_configurations  
WHERE name = 'MAXDOP';

5. Checking MAXDOP for Index Operations

SELECT  
    r.session_id,  
    r.command,  
    r.dop,  
    st.text  
FROM sys.dm_exec_requests r  
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st  
WHERE r.command LIKE '%INDEX%';

Wednesday, 12 March 2025

Intelligent Query Processing (IQP) in SQL Databases

Efficient query performance is crucial for modern applications, as databases handle massive amounts of data. Traditionally, query optimization relied on static cost-based estimations, which sometimes led to suboptimal execution plans due to incorrect assumptions.
To address this, modern databases—particularly Microsoft SQL Server—have introduced Intelligent Query Processing (IQP). IQP enhances query execution by automatically adapting, optimizing, and learning from past executions. This minimizes performance issues without requiring code changes.
What is Intelligent Query Processing (IQP)?
Intelligent Query Processing (IQP) is a set of advanced query optimization features in SQL Server (starting from SQL Server 2017 and significantly expanded in SQL Server 2019 and later).
IQP enhances query performance dynamically by making real-time adjustments based on execution statistics, feedback loops, and AI-driven techniques.

How is IQP different from Traditional Query Processing?

Aspect                       Traditional Query Processing    Intelligent Query Processing (IQP)
Optimization Stage           Static, before execution        Dynamic, adjusts during execution
Query Plan Adjustments       Based on fixed statistics       Adapts based on real-time data
Handling Plan Regression     Requires manual intervention    Automatically detects & corrects
Performance Tuning           DBA-driven tuning required      Minimal or no code changes needed
Machine Learning Influence   None                            Uses feedback loops & AI

Why Do We Need Intelligent Query Processing?
Traditional query optimization relies on cardinality estimation—predicting the number of rows a query will process. However, real-world queries often face:
✅ Bad Cardinality Estimates – Outdated statistics or complex predicates lead to poor execution plans.
✅ Query Plan Regressions – A once-efficient query suddenly slows down due to a bad plan.
✅ Memory Allocation Issues – Queries either over-allocate (wasting resources) or under-allocate (causing spills to disk).
✅ Suboptimal Join Strategies – Poor join selection (Nested Loop instead of Hash Join) causes performance degradation.
IQP fixes these problems automatically, reducing the need for manual performance tuning.


🚀 Key Features of Intelligent Query Processing
IQP introduces a range of powerful enhancements that improve query performance dynamically. Let’s explore some of its most impactful features.

1️⃣ Batch Mode on Rowstore
📌 What it does:
Originally available only for Columnstore indexes, Batch Mode Execution improves the performance of queries running on rowstore tables (traditional tables with B-tree indexes).
📈 Benefits:
    Uses vectorized execution, reducing CPU usage.
    Drastically improves performance for aggregations, joins, and large scans.
    No changes needed—SQL Server automatically enables it when beneficial.
💡 Example:
SELECT CustomerID, COUNT(*)  FROM Sales.Orders  GROUP BY CustomerID;
Without batch mode, this query processes one row at a time. With batch mode, SQL Server processes thousands of rows at once, leading to faster execution.
2️⃣ Adaptive Joins
📌 What it does:
Instead of selecting a Nested Loop Join, Hash Join, or Merge Join at compile time, Adaptive Joins allow SQL Server to switch the join strategy dynamically at runtime.
📈 Benefits:
    Prevents bad join choices due to incorrect row estimates.
    Ensures optimal join selection for varying input sizes.
💡 Example:
If SQL Server expects 100 rows but actually gets 10 million rows, it will switch from a Nested Loop Join to a Hash Join automatically.
3️⃣ Adaptive Memory Grants
📌 What it does:
Allocates just the right amount of memory for query execution instead of over- or under-allocating.
📈 Benefits:
    Prevents out-of-memory issues for large queries.
    Reduces spilling to tempdb, which slows down execution.
💡 Example:
A complex report query initially requests 500MB but actually needs 5GB. SQL Server dynamically adjusts memory allocation for future executions.
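To watch grants in flight, sys.dm_exec_query_memory_grants shows requested versus granted (and actually used) memory per session; a minimal check:
SELECT session_id, requested_memory_kb, granted_memory_kb, used_memory_kb
FROM sys.dm_exec_query_memory_grants;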
4️⃣ Interleaved Execution for Multi-Statement Table-Valued Functions (MSTVFs)
📌 What it does:
Traditional table-valued functions (TVFs) always assumed fixed row estimates. This often led to poor query plans.
With Interleaved Execution, SQL Server delays optimization until runtime to get an accurate row estimate.
📈 Benefits:
    Prevents underestimating or overestimating TVF outputs.
    Optimizes execution plans based on real row counts.
💡 Example:
SELECT * FROM dbo.GetCustomerOrders(@CustomerID);
Before IQP, SQL Server guessed a default row count. Now, it waits until the function runs and then optimizes the query plan dynamically.
5️⃣ Table Variable Deferred Compilation
📌 What it does:
Table variables previously used fixed row estimates, often leading to poor execution plans. IQP defers their compilation until runtime, allowing SQL Server to optimize based on actual data size.
📈 Benefits:
    Improves performance of queries using table variables.
    Prevents incorrect join and index choices.
💡 Example:
DECLARE @TempTable TABLE (ID INT, Value VARCHAR(50));  
INSERT INTO @TempTable SELECT ID, Value FROM LargeTable;  
SELECT * FROM @TempTable t JOIN AnotherTable a ON t.ID = a.ID;
(Note the aliases: a table variable must be aliased to be referenced in a join predicate.)

SQL Server waits until the actual row count is known before optimizing the execution plan.


SQL Server Extended Events: Monitoring Queries Running Longer Than X Minutes

What Are Extended Events in SQL Server?
Extended Events provide a flexible and lightweight framework to capture detailed performance data in SQL Server. They help in diagnosing slow-running queries, deadlocks, waits, and other issues affecting database performance.
Why Use Extended Events Instead of SQL Profiler?
Low Overhead: Uses fewer system resources.
More Powerful: Captures granular event data.
Better Filtering: Allows precise filtering on execution time, database, users, etc.
Replaces SQL Trace/Profiler: Profiler is deprecated in newer SQL Server versions.

Step-by-Step: Configuring Extended Events for Queries Running More Than 5 Minutes
1. Create an Extended Events Session
We will create an Extended Events session to capture queries that take longer than 300 seconds (5 minutes) to execute.
Using SSMS GUI:
    Open SQL Server Management Studio (SSMS).
    Expand Management > Extended Events > Sessions.
    Right-click Sessions and choose New Session....
    Provide a name, e.g., Long_Running_Queries.
    Under Events, click "Add Event", search for sql_statement_completed, and add it.
    Under the Global Fields (Actions) tab, select:
        sql_text (to capture the query text)
        session_id (to track the session)
        database_id (to identify the database)
    Apply a Filter (Predicate):
        Click Configure, then Filter (Predicate).
        Select duration, set it to >= 300000000 (300 seconds in microseconds).
    Configure Data Storage:
        Choose Event File as the target.
        Specify a file path for saving captured events.
    Click OK, then right-click the session and select Start Session.

Using T-SQL:
Alternatively, use the following T-SQL script to create the session:

CREATE EVENT SESSION [Long_Running_Queries]  
ON SERVER  
ADD EVENT sqlserver.sql_statement_completed (  
    WHERE duration >= 300000000  -- 300 seconds (5 minutes) in microseconds  
)  
ADD TARGET package0.event_file (  
    SET filename = 'C:\Temp\LongRunningQueries.xel', max_file_size = 50  -- size in MB  
)  
WITH (STARTUP_STATE = ON);  
GO  

2. Viewing and Analyzing the Captured Events
Using SSMS:
    Expand Management > Extended Events > Sessions.
    Right-click your session (Long_Running_Queries) and choose Watch Live Data.
    Execute long-running queries and monitor captured events in real-time.


Using T-SQL to Read the Event File:
To analyze captured events from the event file:
SELECT  
    event_data.value('(event/@name)[1]', 'VARCHAR(100)') AS event_name,  
    event_data.value('(event/data[@name="sql_text"]/value)[1]', 'NVARCHAR(MAX)') AS sql_text,  
    event_data.value('(event/data[@name="duration"]/value)[1]', 'BIGINT') / 1000000 AS duration_seconds  
FROM  
(  
    SELECT CAST(event_data AS XML) AS event_data  
    FROM sys.fn_xe_file_target_read_file('C:\Temp\LongRunningQueries*.xel', NULL, NULL, NULL)  
) AS xevents  
ORDER BY duration_seconds DESC;

To stop the session:
ALTER EVENT SESSION [Long_Running_Queries] ON SERVER STATE = STOP;

To drop (delete) the session:
DROP EVENT SESSION [Long_Running_Queries] ON SERVER;
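To confirm whether the session is currently running, you can check sys.dm_xe_sessions (it lists only started sessions):
SELECT name, create_time
FROM sys.dm_xe_sessions
WHERE name = 'Long_Running_Queries';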

Monday, 24 February 2025

MAXDOP in SQL Server and Azure SQL Database

MAXDOP (Maximum Degree of Parallelism) is a crucial setting in SQL Server and Azure SQL Database that controls the level of intra-query parallelism. By adjusting MAXDOP, database administrators can optimize query execution speed while managing CPU resource utilization.
In Azure SQL Database, the default MAXDOP is set to 8 for new single databases and elastic pool databases. This setting was introduced in September 2020, based on years of telemetry, to prevent excessive parallelism issues while ensuring good performance. Before this, the default was MAXDOP = 0, allowing SQL Server to use all available logical processors.
How MAXDOP Works?
When a query is executed, SQL Server determines whether to use parallelism. If parallelism is enabled, multiple CPU cores work together to process different parts of the query, often improving execution time. However, excessive parallelism can overload the CPU, leading to contention and degraded performance for other queries.
The following table summarizes the behavior of different MAXDOP settings:
MAXDOP Value   Behavior
 1             Forces single-threaded execution (no parallelism).
>1             Allows multiple parallel threads, limited to the smaller of MAXDOP or the total number of logical processors.
 0             Allows SQL Server to use up to 64 logical processors for parallel execution (or fewer, depending on system configuration).

Note: Each query executes with at least one scheduler and one worker thread. A parallel query can use multiple schedulers and threads, sometimes exceeding the specified MAXDOP value.
Considerations for Configuring MAXDOP
1. Changing MAXDOP in Azure SQL Database
In Azure SQL Database, you can modify MAXDOP using:
    Query-level configuration: By adding the OPTION (MAXDOP N) hint to specific queries.
    Database-level configuration: Using the ALTER DATABASE SCOPED CONFIGURATION statement.
2. Impact on Performance
    Too Low (MAXDOP = 1) → Queries run sequentially, which may slow down execution, especially for large, complex queries.
    Too High (MAXDOP > Optimal Value) → Excessive CPU consumption, leading to performance issues for concurrent workloads.
    Balanced Setting (Recommended MAXDOP) → Optimizes query execution without overwhelming system resources.
3. Index Operations and MAXDOP
Operations such as index creation, rebuilds, and drops can be CPU-intensive. You can override the database-level MAXDOP setting for index operations by specifying the MAXDOP option in CREATE INDEX or ALTER INDEX statements.
Example:
CREATE INDEX IX_Customer ON Customers (LastName) WITH (MAXDOP = 4);
4. Additional Parallel Operations
MAXDOP also affects parallel execution of:
    DBCC CHECKTABLE
    DBCC CHECKDB
    DBCC CHECKFILEGROUP

These operations may consume excessive CPU if MAXDOP is too high.
Best Practices and Recommendations
1. Avoid MAXDOP = 0
Although MAXDOP = 0 allows full CPU utilization, it can lead to excessive parallelism, starving other queries of resources. This is especially critical in Azure SQL Database, where resource governance is stricter.
2. Consider Workload-Specific MAXDOP Settings
Different workloads may benefit from different MAXDOP settings:
    OLTP Workloads (high concurrency, short queries) → Lower MAXDOP (e.g., 1-4).
    OLAP/Data Warehousing (complex queries, large datasets) → Higher MAXDOP (e.g., 8+).
3. Test Before Modifying MAXDOP
    Load test the workload with realistic concurrent queries before changing MAXDOP.
    Monitor CPU usage, query execution time, and worker thread contention.
4. Configure MAXDOP Independently for Replicas
For read scale-out, geo-replication, and Hyperscale replicas, MAXDOP can be set independently for primary and secondary replicas, allowing better optimization for read-write vs. read-only workloads.
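For example, to give readable secondaries a different MAXDOP than the primary, the documented database-scoped syntax is:
ALTER DATABASE SCOPED CONFIGURATION FOR SECONDARY SET MAXDOP = 2;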
Modifying MAXDOP in SQL Server and Azure SQL Database
1. Changing MAXDOP at the Database Level
To change MAXDOP for an entire Azure SQL Database, use:
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;
2. Setting MAXDOP for Specific Queries
To override the database setting at the query level, use:
SELECT * FROM Sales OPTION (MAXDOP 2);
3. Setting MAXDOP for Index Operations
ALTER INDEX IX_Customer ON Customers REBUILD WITH (MAXDOP = 4);

Tuesday, 18 February 2025

CVE-2025-1094: PostgreSQL psql SQL injection

CVE-2025-1094 is a critical SQL injection vulnerability discovered in PostgreSQL's interactive terminal, psql. This issue stems from improper handling of quoting syntax in the PostgreSQL libpq functions—namely PQescapeLiteral(), PQescapeIdentifier(), PQescapeString(), and PQescapeStringConn(). When these functions process untrusted input, they may fail to correctly neutralize quoting syntax, allowing attackers to execute arbitrary SQL commands.

What makes this vulnerability especially dangerous is its potential to lead to arbitrary code execution. By exploiting this flaw, an attacker can abuse psql's ability to execute meta-commands, such as \! (backslash-bang), which runs operating system shell commands. A successful attack could allow attackers to run arbitrary commands on the host system.

This vulnerability affects PostgreSQL versions prior to 17.3, 16.7, 15.11, 14.16, and 13.19. To mitigate the risk, organizations should promptly upgrade to the latest patched versions of PostgreSQL. The PostgreSQL Global Development Group has released patches to address this security issue.
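To verify the server version you are running before and after patching, a quick check from psql:
SELECT version();
-- or, more compactly:
SHOW server_version;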

The emergence of CVE-2025-1094 highlights the need for regular software updates and strong security practices. Organizations are strongly advised to apply the necessary patches without delay and to conduct regular security assessments. Additionally, implementing rigorous input validation can further safeguard systems from similar vulnerabilities.


Sunday, 16 February 2025

Managing "installed but disabled" module bug fixes - DBMS_OPTIM_BUNDLE

Oracle provides powerful tools for managing bug fixes that may impact execution plans. Two key features—_FIX_CONTROL and DBMS_OPTIM_BUNDLE—enable administrators to selectively enable or disable certain bug fixes in a controlled manner.
This post will help you understand these features, their usage, and how they can be applied to resolve issues like those that affect query optimization.
What is _FIX_CONTROL?
Introduced in Oracle 10.2, the _FIX_CONTROL parameter is used to manage specific bug fixes in Oracle Database.
These bug fixes are tied to issues that could affect query execution plans, optimizer behavior, or other performance-related aspects of the database. By using _FIX_CONTROL, administrators can enable or disable specific fixes without requiring a full patch rollback.
The key point here is that some bug fixes may need to be selectively enabled or disabled depending on their impact on system performance or query execution.
You can do this by using the V$SYSTEM_FIX_CONTROL view, which shows the status of each bug fix and its associated behavior.
Example: Bug 34044661 (cyclic join selectivity of 31487332 for serial plans)
Let's walk through an example of how _FIX_CONTROL manages a specific bug fix, Bug 34044661, described as "cyclic join selectivity of 31487332 for serial plans".
Query the V$SYSTEM_FIX_CONTROL View:
To check the current status of the bug fix for Bug 34044661, use the following SQL query:
SELECT bugno, value, description FROM v$system_fix_control WHERE bugno = 34044661;
BUGNO      VALUE DESCRIPTION
---------- ---------- ----------------------------------------------------------------------
34044661   1        cyclic join selectivity of 31487332 for serial plans

Disabling the Fix:
To disable the fix and revert to the default behavior, use the following command:
ALTER SYSTEM SET "_fix_control" = '34044661:OFF';
After running this, you can check the status again:
SELECT bugno, value, description FROM v$system_fix_control WHERE bugno = 34044661;
BUGNO      VALUE DESCRIPTION
---------- ---------- ----------------------------------------------------------------------
34044661   0        cyclic join selectivity of 31487332 for serial plans

Log Entry:
This action will also be recorded in the alert log:
2025-01-16T09:04:02.371313-04:00
ALTER SYSTEM SET _fix_control='34044661:OFF' SCOPE=BOTH;

What is DBMS_OPTIM_BUNDLE?
Introduced in Oracle 12.1.0.2, the DBMS_OPTIM_BUNDLE package offers a more robust approach to managing "installed but disabled" execution plan bug fixes that are installed during a patching event. These bug fixes are generally installed but remain disabled by default to prevent unintended changes in execution plans.
The DBMS_OPTIM_BUNDLE package provides more flexibility in managing these fixes by ensuring that bug fixes affecting execution plans are either enabled or preserved based on the patching status.
Key Features:
Automatic Fix Control Persistence: This package ensures that fixes are managed even after patching, and they can be enabled or disabled automatically based on the configuration.
Scope Flexibility: Administrators can apply fixes at the system level (BOTH), or for a specific instance or session (MEMORY or SPFILE).
Managing Execution Plan Bug Fixes: It allows administrators to explicitly enable or disable execution plan bug fixes that could change query performance.
Managing Bug Fixes Using DBMS_OPTIM_BUNDLE:
The DBMS_OPTIM_BUNDLE package simplifies the management of bug fixes that might impact query execution plans. After a patching event, Oracle does not activate these fixes automatically; they must be manually enabled if necessary. Here’s how you can manage them using the package.
we can list the available potentially behavior changing optimizer fixes in the current patch bundle:
SQL> set serveroutput on;
SQL> execute dbms_optim_bundle.getBugsforBundle;
19.21.0.0.231017DBRU:
    Bug: 34044661,  fix_controls: 34044661
    Bug: 34544657,  fix_controls: 33549743
    Bug: 34816383,  fix_controls: 34816383
    Bug: 35330506,  fix_controls: 35330506
PL/SQL procedure successfully completed.

These are all the fixes installed but disabled in the current bundle. You can also list the fixes for every bundle up to a given one by passing the bundle ID:
SQL> execute dbms_optim_bundle.getBugsforBundle(231017); 

19.19.0.0.230418DBRU:
    Bug: 34027770,  fix_controls: 34244753
    Bug: 34467295,  fix_controls: 34467295
    Bug: 23220873,  fix_controls: 23220873
    Bug: 32550281,  fix_controls: 32061341
    Bug: 33548186,  fix_controls: 33548186
    Bug: 33421972,  fix_controls: 33421972
    Bug: 34605306,  fix_controls: 32616683
19.20.0.0.230718DBRU:
    Bug: 33627879,  fix_controls: 33627879
    Bug: 32005394,  fix_controls: 32005394
    Bug: 33069936,  fix_controls: 33069936
    Bug: 35012562,  fix_controls: 35012562
    Bug: 34685578,  fix_controls: 34685578
    Bug: 34862366,  fix_controls: 31184370
    Bug: 35313797,  fix_controls: 35313797
    Bug: 35412607,  fix_controls: 35412607
19.21.0.0.231017DBRU:
    Bug: 34044661,  fix_controls: 34044661
    Bug: 34544657,  fix_controls: 33549743
    Bug: 34816383,  fix_controls: 34816383
    Bug: 35330506,  fix_controls: 35330506
PL/SQL procedure successfully completed.

To enable all "installed but disabled" execution plan bug fixes after applying a patch, use the following command:
EXEC dbms_optim_bundle.enable_optim_fixes('ON', 'BOTH', 'NO');
This will enable the fixes across all instances. After executing, Oracle will ensure that the bug fixes affecting the execution plans are applied as needed.
Enabling/Disabling Specific Fixes Using SET_FIX_CONTROLS
The SET_FIX_CONTROLS procedure is part of DBMS_OPTIM_BUNDLE and allows you to control the status of specific bug fixes. Here's how to use it to manage individual bug fixes like the one for Bug 34044661:
Enable the Bug Fix:
EXEC dbms_optim_bundle.set_fix_controls('34044661:1', '*', 'BOTH', 'NO');
This command enables the fix for Bug 34044661 across all instances.
Disable the Bug Fix:
EXEC dbms_optim_bundle.set_fix_controls('34044661:0', '*', 'BOTH', 'NO');
This command disables the fix for Bug 34044661 across all instances.
Example Output from SET_FIX_CONTROLS:
Here is the process for enabling and disabling the fix:
SQL> EXEC dbms_optim_bundle.set_fix_controls('34044661:1', '*', 'BOTH', 'NO');
PL/SQL procedure successfully completed.
SQL> SELECT bugno, value, description FROM v$system_fix_control WHERE bugno = 34044661;
BUGNO      VALUE DESCRIPTION
---------- ---------- ----------------------------------------------------------------------
34044661   1        cyclic join selectivity of 31487332 for serial plans
SQL> EXEC dbms_optim_bundle.set_fix_controls('34044661:0', '*', 'BOTH', 'NO');
PL/SQL procedure successfully completed.
SQL> SELECT bugno, value, description FROM v$system_fix_control WHERE bugno = 34044661;
BUGNO      VALUE DESCRIPTION
---------- ---------- ----------------------------------------------------------------------
34044661   0        cyclic join selectivity of 31487332 for serial plans