Sunday, 30 March 2025
VECTOR_DISTANCE
What is VECTOR_DISTANCE?
The VECTOR_DISTANCE function calculates the distance between two vectors (represented as expr1 and expr2). Depending on the context, the vectors can represent various types of data, such as images, text, or numbers.
Key Points:
Purpose: Calculates the distance between two vectors.
Optional Metric: You can specify a distance metric as an optional argument. If you do not specify one, Cosine distance is used by default; for binary vectors, the default is Hamming distance.
Shorthand Functions for Common Distance Metrics
To make it easier to calculate distances, the VECTOR_DISTANCE function comes with shorthand functions for common distance metrics. These are equivalent to the more detailed functions, providing a more compact way to express vector distance calculations.
Here are the shorthand functions:
L1_DISTANCE: Manhattan (L1) distance.
L2_DISTANCE: Euclidean (L2) distance.
COSINE_DISTANCE: Cosine distance (1 minus the cosine similarity).
INNER_PRODUCT: Dot product of two vectors; equivalent to -1 * VECTOR_DISTANCE with the DOT metric.
HAMMING_DISTANCE: Hamming distance for binary vectors.
JACCARD_DISTANCE: Jaccard distance for binary vectors.
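For example, a minimal sketch (assuming a hypothetical docs table with a VECTOR column named embedding and a bind variable :query_vec) showing that a shorthand function is equivalent to the longer form:

-- Hypothetical table and column names, for illustration only
SELECT doc_id,
       L2_DISTANCE(embedding, :query_vec)                AS dist_shorthand,
       VECTOR_DISTANCE(embedding, :query_vec, EUCLIDEAN) AS dist_longform
FROM   docs
ORDER BY dist_shorthand
FETCH FIRST 5 ROWS ONLY;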
Distance Metrics Available:
COSINE: Measures the cosine of the angle between two vectors, useful for high-dimensional data like text.
DOT: Calculates the negated dot product of two vectors, useful for measuring similarity.
EUCLIDEAN: Measures the straight-line (L2) distance between two vectors, commonly used in spatial data.
EUCLIDEAN_SQUARED: Euclidean distance without taking the square root, often used in optimization tasks.
HAMMING: Counts the number of differing dimensions between two binary vectors, typically used in error correction.
MANHATTAN: Also known as L1 distance, calculates the sum of absolute differences between vector components, useful for grid-based problems.
JACCARD: Measures dissimilarity between binary vectors as 1 minus the ratio of the intersection to the union of the set bits.
Shorthand Operators for Distance Metrics:
Instead of specifying the distance metric explicitly, you can use shorthand operators for quicker calculations. These are especially handy when writing queries or performing similarity searches:
<->: Equivalent to L2_DISTANCE (Euclidean distance).
<=>: Equivalent to COSINE_DISTANCE (cosine distance).
<#>: Equivalent to -1 * INNER_PRODUCT (Negative dot product).
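As a small sketch (same hypothetical docs table as above), the operators can stand in for the function calls in a similarity search:

SELECT doc_id
FROM   docs
ORDER BY embedding <=> :query_vec   -- cosine distance
FETCH FIRST 10 ROWS ONLY;

-- Equivalent to: ORDER BY COSINE_DISTANCE(embedding, :query_vec)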
Tuesday, 25 March 2025
Connecting to a Schema in Oracle 23AI PDB DB
In non-CDBs, we can connect directly to a schema using username/password. However, in PDBs, we must use a service name alias to connect to the database.
1. Connect to PDB
[oracle@poclab ~]$ sql
SQL*Plus: Release 23.0.0.0.0 - Production on Wed Mar 26 03:53:56 2025
Version 23.7.0.25.01
Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
SQL> alter session set container=freepdb1;
SQL> show con_name;
CON_NAME
------------------------------
FREEPDB1
2. Connecting to a User or Schema with password
SQL> conn aivector/aivector
ERROR:
ORA-01017: invalid credential or not authorized; logon denied
This error occurs because, unlike non-CDBs, PDBs require you to use a service name alias to specify the pluggable database in the connection string.
3. Correct Connection to the PDB Using Service Name Alias
SQL> conn aivector/aivector@//localhost:1521/freepdb1
SQL> show user
USER is "AIVECTOR"
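If you prefer not to type the full EZConnect string each time, a minimal tnsnames.ora entry is one option (a sketch; the alias name is arbitrary), after which conn aivector/aivector@freepdb1 works:

FREEPDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = freepdb1))
  )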
Identifying Your Container: CDB or PDB in Oracle 23ai
In Oracle databases, particularly when working with Multitenant Architecture, it's essential to understand the distinction between the Container Database (CDB) and Pluggable Databases (PDBs). These are the core components that make up the Multitenant model, which is one of the highlights of modern Oracle database systems. But sometimes, it can be tricky to track whether you're working in a CDB or a PDB. Let's break it down based on a real-world session in Oracle Database 23ai.
Understanding CDB and PDB
CDB (Container Database): CDB is the primary container that holds the system metadata and the necessary infrastructure for managing multiple PDBs. It has one root container (CDB$ROOT) and potentially many PDBs.
PDB (Pluggable Database): A PDB is a self-contained, portable database that runs inside a CDB. Each PDB can have its own data, schemas, and users, but shares the same infrastructure and system resources as the CDB.
Let's take a look at an example session in Oracle 23ai. This will help us understand how we can identify where we are, whether in the CDB$ROOT or a PDB.
Step 1: Connecting to the CDB
Upon first logging into Oracle, you typically connect to the CDB as shown below:
[oracle@poclab ~]$ sql
SQL*Plus: Release 23.0.0.0.0 - Production on Wed Mar 26 03:04:12 2025
Version 23.7.0.25.01
Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.7.0.25.01
Once logged in, you can check the current instance by querying v$instance:
SQL> select instance_name, version, status, con_id from v$instance;
INSTANCE_NAME VERSION STATUS CON_ID
---------------- ----------------- ------------ ----------
FREE 23.0.0.0.0 OPEN 0
In v$instance, CON_ID = 0 means the row applies to the entire CDB (instance-level information), so it does not tell us which container we are in.
Now, let’s confirm the current container:
SQL> show con_id
CON_ID
------------------------------
1
Here, CON_ID = 1 corresponds to the root container, CDB$ROOT.
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
Step 2: Switching to a PDB
To move from the CDB to a specific PDB, you can connect to the PDB directly. In this example, let's connect to FREEPDB1:
SQL> conn sys/pwd@//localhost:1521/freepdb1 as sysdba
Connected.
Now, let's check the instance information for FREEPDB1:
SQL> select instance_name, version, status, con_id from v$instance;
INSTANCE_NAME VERSION STATUS CON_ID
---------------- ----------------- ------------ ----------
FREE 23.0.0.0.0 OPEN 0
As before, CON_ID = 0 in v$instance only tells us that the row describes the instance as a whole; it does not indicate which container the session is in. To confirm the current container, check the container ID and name:
SQL> show con_id
CON_ID
------------------------------
3
Here, CON_ID = 3 refers to the FREEPDB1 pluggable database:
SQL> show con_name
CON_NAME
------------------------------
FREEPDB1
Step 3: Switching Back to the CDB
Once inside the PDB, you might want to switch back to the CDB$ROOT container. You can do this by using the alter session command:
SQL> alter session set container=CDB$ROOT;
Session altered.
Now, let's check the container ID and name:
SQL> show con_id
CON_ID
------------------------------
1
And the container name confirms you're back in the root container:
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
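A quick way to list every container visible to your session, along with its CON_ID, is show pdbs or a query against v$containers (a small sketch):

SQL> show pdbs
SQL> select con_id, name, open_mode from v$containers order by con_id;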
Monday, 24 March 2025
Common Blocking Scenarios in Azure SQL Database: Causes & Resolutions
Common Blocking Scenarios & Their Resolutions
Scenario | Wait Type | Open Transactions | Status | Resolves? |
---|---|---|---|---|
1 | NOT NULL | ≥ 0 | Runnable | ✅ Yes, when the query finishes. |
2 | NULL | > 0 | Sleeping | ❌ No, but the SPID can be killed. |
3 | NULL | ≥ 0 | Runnable | ❌ No; won't resolve until the client fetches all rows or closes the connection. Killing the SPID may take up to 30 seconds. |
4 | Varies | ≥ 0 | Runnable | ❌ No; won't resolve until the client cancels its queries or closes its connections. Killing the SPIDs may take up to 30 seconds. |
5 | NULL | > 0 | Rollback | ✅ Yes. |
6 | NULL | > 0 | Sleeping | ⏳ Eventually; the connection is broken once the idle session is detected. |
How to Identify Blocking in Azure SQL Database
1. Identify Blocked and Blocking Sessions
SELECT blocking_session_id, session_id, wait_type, wait_time, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
2. Check Open Transactions
SELECT session_id, open_transaction_count, status
FROM sys.dm_exec_sessions
WHERE open_transaction_count > 0;
3. Analyze Query Execution Details
SELECT r.session_id, s.host_name, s.program_name, r.command, r.wait_type, r.wait_time
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
WHERE r.blocking_session_id <> 0;
How to Resolve Blocking in Azure SQL Database
✅ Scenarios that resolve automatically
Scenario 1: Query completes, releasing locks.
Scenario 5: Rollback operation finishes.
Scenario 6: The idle connection is eventually detected and broken.
❌ Scenarios requiring manual intervention
If blocking does not resolve, consider the following approaches:
1. Kill the Blocking SPID
If a transaction is stuck, you can terminate it:
KILL <session_id>;
Use this cautiously, as it may cause rollbacks.
2. Optimize Long-Running Queries
Index Optimization: Ensure proper indexing to reduce query execution time.
Query Tuning: Review actual execution plans (for example, via sys.dm_exec_query_plan) to identify and tune slow queries.
Batch Processing: Process data in smaller batches to prevent long locks.
3. Handle Open Transactions Properly
Regularly check sys.dm_tran_active_transactions for long-running transactions (a query sketch follows this list).
Ensure all transactions explicitly COMMIT or ROLLBACK when completed.
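As a sketch of such a check, the transaction DMVs can be joined to map open transactions back to sessions:

SELECT tst.session_id,
       tat.transaction_id,
       tat.name,
       tat.transaction_begin_time
FROM sys.dm_tran_active_transactions tat
JOIN sys.dm_tran_session_transactions tst
    ON tat.transaction_id = tst.transaction_id
ORDER BY tat.transaction_begin_time;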
4. Improve Connection Management
Ensure clients properly fetch all rows or close connections.
Avoid unnecessary long-running transactions that hold locks.
Saturday, 22 March 2025
Deadlocks with Bitmap Indexes in Oracle
Oracle's Bitmap Index is an efficient indexing method, particularly useful for columns with low cardinality (few distinct values). While it can significantly enhance query performance in read-heavy environments, it presents unique challenges in systems with heavy DML activity. One of the most significant challenges is the risk of deadlocks, which stems from how bitmap indexes are maintained.
In this blog, we'll explore the mechanics of bitmap indexes, how they work in Oracle, and why they can cause deadlocks and locking issues when there's heavy DML activity.
What is a Bitmap Index?
In a bitmap index, data is organized as a series of bitmaps (binary representations of 0s and 1s) that represent the presence or absence of a particular value for rows in the indexed column. Each entry in the bitmap index corresponds to a unique value in the indexed column and contains information about which rows in the table have that value.
The structure of a bitmap index involves:
- Key Value: The actual value in the indexed column.
- Low-Rowid: The starting rowid in the range of rows that this bitmap entry applies to.
- High-Rowid: The ending rowid in the range of rows that this bitmap entry applies to.
- Bitmap: A string of 0s and 1s, where each bit corresponds to a row in the table (within the specified range). A '1' means the value is present in that row, and a '0' means the value is not.
Deadlocks Due to Bitmap Index Updates
Let’s consider a scenario where DML operations occur and multiple transactions interact with the same bitmap index, causing locking issues.
Scenario 1: Updating a Record in PRODUCT
Let’s assume you have the following data in your PRODUCT table and bitmap index
CREATE TABLE product (
product_id NUMBER,
product_name VARCHAR2(100),
category_id NUMBER
);
INSERT INTO product (product_id, product_name, category_id) VALUES (1001, 'Widget A', 5);
INSERT INTO product (product_id, product_name, category_id) VALUES (2002, 'Widget B', 8);
INSERT INTO product (product_id, product_name, category_id) VALUES (3003, 'Widget C', 5);
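The bitmap index itself is created on the low-cardinality column; a minimal sketch (the index name is illustrative):

CREATE BITMAP INDEX product_category_bix ON product (category_id);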
Your bitmap index might look like this:
CATEGORY_ID LOW-ROWID HIGH-ROWID BITMAP
5 aaadf1000 aaadf1050 01010101010101
5 aaadf1060 aaadf1100 11010101010101
8 aaadf1200 aaadf1250 10101010101010
In this case, each bitmap entry represents a category (e.g., CATEGORY_ID = 5 or CATEGORY_ID = 8). The LOW-ROWID and HIGH-ROWID represent the range of rows that the bitmap entry applies to. The bitmap string (e.g., 01010101010101) corresponds to the product rows in that range, indicating which rows belong to that category (where "1" means the product belongs to the category, and "0" means it does not).
Let’s now assume you execute the following update:
UPDATE product SET category_id = 8 WHERE product_id = 1001;
This update changes the category of Widget A (product ID 1001) from category 5 to category 8. The bitmap index needs to be updated:
The bitmap entry for CATEGORY_ID = 5 will remove the "1" at the position where Widget A (row 1001) was found.
The bitmap entry for CATEGORY_ID = 8 will add a "1" at the position where Widget A (row 1001) is now moved.
At this point, the bitmap index entries for both CATEGORY_ID = 5 and CATEGORY_ID = 8 are locked by your transaction, since both bitmap entries need to be updated.
Scenario 2: A Conflicting Update
Now, assume another transaction tries to execute the following update:
UPDATE product SET category_id = 5 WHERE product_id = 2002;
This transaction is attempting to change Widget B (product ID 2002) from category 8 to category 5. Since Widget B is currently in category 8, the bitmap entry for CATEGORY_ID = 8 needs to be updated to remove the "1" for Widget B (row 2002), and the bitmap entry for CATEGORY_ID = 5 needs to be updated to add a "1" for Widget B (row 2002).
At this point, a deadlock can occur. Here’s why:
The first transaction has already locked the bitmap entries for both CATEGORY_ID = 5 (to remove the "1" for Widget A) and CATEGORY_ID = 8 (to add the "1" for Widget A).
The second transaction is attempting to update the same bitmap entries: it wants to remove the "1" from CATEGORY_ID = 8 (for Widget B) and add a "1" to CATEGORY_ID = 5 (for Widget B).
Since both transactions are trying to update the same bitmap entries simultaneously (in this case, for both category 5 and category 8), they block each other, leading to a deadlock.
This occurs because both transactions are competing to modify the same bitmap index entries that represent overlapping rows in the PRODUCT table.
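A minimal two-session sketch of the interleaving described above (the exact timing, and which session Oracle chooses as the ORA-00060 victim, will vary):

-- Session 1: locks the bitmap pieces for CATEGORY_ID = 5 and 8 as it moves Widget A
UPDATE product SET category_id = 8 WHERE product_id = 1001;

-- Session 2: needs the same two bitmap pieces (in the opposite order) to move Widget B
UPDATE product SET category_id = 5 WHERE product_id = 2002;

-- If the two statements interleave so that each session holds one bitmap piece and
-- waits for the other, Oracle detects the cycle, raises ORA-00060 in one session,
-- and rolls back that statement.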
Thursday, 13 March 2025
How to Identify MAXDOP Value for Running/Completed Queries
To find the MAXDOP (Maximum Degree of Parallelism) used by running queries in SQL Server, you can use Dynamic Management Views (DMVs) such as sys.dm_exec_requests and sys.dm_exec_query_profiles. These views provide details about query execution, including parallelism levels.
1. Checking MAXDOP for Running Queries
SELECT
r.session_id,
r.request_id,
r.start_time,
r.status,
r.cpu_time,
r.total_elapsed_time,
r.logical_reads,
r.writes,
r.dop AS MAXDOP, -- Degree of Parallelism
st.text AS sql_text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st
WHERE r.dop > 1 -- Filtering only parallel queries
ORDER BY r.start_time DESC;
Explanation:
r.dop: Shows the degree of parallelism (i.e., the number of CPU cores used for execution).
r.session_id: Identifies the session running the query.
r.status: Shows the execution status (e.g., running, suspended).
st.text: Displays the actual SQL query text.
Note: If dop = 1, the query is running serially without parallelism.
2. Checking MAXDOP for Completed Queries
SELECT
qs.execution_count,
qs.total_worker_time / qs.execution_count AS avg_worker_time,
qs.max_dop, -- MAXDOP used
st.text AS sql_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC;
3. Checking MAXDOP for Running Query Execution Plans
SELECT
er.session_id,
qp.query_plan
FROM sys.dm_exec_requests er
CROSS APPLY sys.dm_exec_query_plan(er.plan_handle) qp
WHERE er.dop > 1;
Look for Parallelism (Gather Streams) in the query plan XML to confirm parallel execution.
4. Checking MAXDOP Setting at Server and Database Level
To check the server (instance) level setting:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';
To check the database-level MAXDOP setting in Azure SQL Database:
SELECT *
FROM sys.database_scoped_configurations
WHERE name = 'MAXDOP';
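If you need to change the database-scoped setting, or override MAXDOP for a single statement, hedged examples (the values and table name are illustrative):

-- Database-scoped configuration (Azure SQL Database / SQL Server 2016+)
ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 4;

-- Per-query override via a hint
SELECT COUNT(*) FROM dbo.SomeLargeTable OPTION (MAXDOP 2);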
5. Checking MAXDOP for Index Operations
SELECT
r.session_id,
r.command,
r.dop,
st.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st
WHERE r.command LIKE '%INDEX%';
Wednesday, 12 March 2025
Intelligent Query Processing (IQP) in SQL Databases
Query performance often degrades when execution plans are built from inaccurate estimates or stale statistics. To address this, modern databases, particularly Microsoft SQL Server, have introduced Intelligent Query Processing (IQP). IQP enhances query execution by automatically adapting, optimizing, and learning from past executions. This minimizes performance issues without requiring code changes.
What is Intelligent Query Processing (IQP)?
Intelligent Query Processing (IQP) is a set of advanced query optimization features in SQL Server (starting from SQL Server 2017 and significantly expanded in SQL Server 2019 and later).
IQP enhances query performance dynamically by making real-time adjustments based on execution statistics, feedback loops, and AI-driven techniques.
How is IQP different from Traditional Query Processing?
Aspect | Traditional Query Processing | Intelligent Query Processing (IQP) |
---|---|---|
Optimization Stage | Static, before execution | Dynamic, adjusts during execution |
Query Plan Adjustments | Based on fixed statistics | Adapts based on real-time data |
Handling Plan Regression | Requires manual intervention | Automatically detects & corrects |
Performance Tuning | DBA-driven tuning required | Minimal or no code changes needed |
Machine Learning Influence | None | Uses feedback loops & AI |
Why Do We Need Intelligent Query Processing?
Traditional query optimization relies on cardinality estimation—predicting the number of rows a query will process. However, real-world queries often face:
✅ Bad Cardinality Estimates – Outdated statistics or complex predicates lead to poor execution plans.
✅ Query Plan Regressions – A once-efficient query suddenly slows down due to a bad plan.
✅ Memory Allocation Issues – Queries either over-allocate (wasting resources) or under-allocate (causing spills to disk).
✅ Suboptimal Join Strategies – Poor join selection (Nested Loop instead of Hash Join) causes performance degradation.
IQP fixes these problems automatically, reducing the need for manual performance tuning.
🚀 Key Features of Intelligent Query Processing
IQP introduces a range of powerful enhancements that improve query performance dynamically. Let’s explore some of its most impactful features.
1️⃣ Batch Mode on Rowstore
📌 What it does:
Originally available only for Columnstore indexes, Batch Mode Execution improves the performance of queries running on rowstore tables (traditional tables with B-tree indexes).
📈 Benefits:
Uses vectorized execution, reducing CPU usage.
Drastically improves performance for aggregations, joins, and large scans.
No changes needed—SQL Server automatically enables it when beneficial.
💡 Example:
SELECT CustomerID, COUNT(*) FROM Sales.Orders GROUP BY CustomerID;
Without batch mode, this query processes one row at a time. With batch mode, SQL Server processes thousands of rows at once, leading to faster execution.
2️⃣ Adaptive Joins
📌 What it does:
Instead of selecting a Nested Loop Join, Hash Join, or Merge Join at compile time, Adaptive Joins allow SQL Server to switch the join strategy dynamically at runtime.
📈 Benefits:
Prevents bad join choices due to incorrect row estimates.
Ensures optimal join selection for varying input sizes.
💡 Example:
If SQL Server expects 100 rows but actually gets 10 million rows, it will switch from a Nested Loop Join to a Hash Join automatically.
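A sketch of what to look for (the table and column names here are assumed; adaptive joins require batch mode, for example via a columnstore index, or batch mode on rowstore under compatibility level 150+):

SELECT o.CustomerID, COUNT(*) AS order_count
FROM Sales.Orders o
JOIN Sales.Customers c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20240101'
GROUP BY o.CustomerID;
-- In the actual execution plan, look for the Adaptive Join operator and compare
-- its Estimated Join Type and Actual Join Type properties.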
3️⃣ Adaptive Memory Grants
📌 What it does:
Allocates just the right amount of memory for query execution instead of over- or under-allocating.
📈 Benefits:
Prevents out-of-memory issues for large queries.
Reduces spilling to tempdb, which slows down execution.
💡 Example:
A complex report query initially requests 500MB but actually needs 5GB. SQL Server dynamically adjusts memory allocation for future executions.
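To watch grants while queries run, sys.dm_exec_query_memory_grants shows requested versus granted versus used memory (a small sketch):

SELECT session_id,
       requested_memory_kb,
       granted_memory_kb,
       used_memory_kb,
       max_used_memory_kb
FROM sys.dm_exec_query_memory_grants;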
4️⃣ Interleaved Execution for Multi-Statement Table-Valued Functions (MSTVFs)
📌 What it does:
Traditional table-valued functions (TVFs) always assumed fixed row estimates. This often led to poor query plans.
With Interleaved Execution, SQL Server delays optimization until runtime to get an accurate row estimate.
📈 Benefits:
Prevents underestimating or overestimating TVF outputs.
Optimizes execution plans based on real row counts.
💡 Example:
SELECT * FROM dbo.GetCustomerOrders(@CustomerID);
Before IQP, SQL Server guessed a default row count. Now, it waits until the function runs and then optimizes the query plan dynamically.
5️⃣ Table Variable Deferred Compilation
📌 What it does:
Table variables previously used fixed row estimates, often leading to poor execution plans. IQP defers their compilation until runtime, allowing SQL Server to optimize based on actual data size.
📈 Benefits:
Improves performance of queries using table variables.
Prevents incorrect join and index choices.
💡 Example:
DECLARE @TempTable TABLE (ID INT, Value VARCHAR(50));
INSERT INTO @TempTable SELECT ID, Value FROM LargeTable;
SELECT * FROM @TempTable JOIN AnotherTable ON @TempTable.ID = AnotherTable.ID;
SQL Server waits until the actual row count is known before optimizing the execution plan.
SQL Server Extended Events: Monitoring Queries Running Longer Than X Minutes
Extended Events provide a flexible and lightweight framework to capture detailed performance data in SQL Server. They help in diagnosing slow-running queries, deadlocks, waits, and other issues affecting database performance.
Why Use Extended Events Instead of SQL Profiler?
Low Overhead: Uses fewer system resources.
More Powerful: Captures granular event data.
Better Filtering: Allows precise filtering on execution time, database, users, etc.
Replaces SQL Trace/Profiler: Profiler is deprecated in newer SQL Server versions.
Step-by-Step: Configuring Extended Events for Queries Running More Than 5 Minutes
1. Create an Extended Events Session
We will create an Extended Events session to capture queries that take longer than 300 seconds (5 minutes) to execute.
Using SSMS GUI:
Open SQL Server Management Studio (SSMS).
Expand Management > Extended Events > Sessions.
Right-click Sessions and choose New Session....
Provide a name, e.g., Long_Running_Queries.
Under Events, click "Add Event", search for sql_statement_completed, and add it.
Under the Global Fields (Actions) tab, select:
sql_text (to capture the query text)
session_id (to track the session)
database_id (to identify the database)
Apply a Filter (Predicate):
Click Configure, then Filter (Predicate).
Select duration, set it to >= 300000000 (300 seconds expressed in microseconds).
Configure Data Storage:
Choose Event File as the target.
Specify a file path for saving captured events.
Click OK, then right-click the session and select Start Session.
Using T-SQL:
Alternatively, use the following T-SQL script to create the session:
CREATE EVENT SESSION [Long_Running_Queries]
ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.session_id, sqlserver.database_id)
    WHERE duration >= 300000000 -- 300 seconds (5 minutes) in microseconds
)
ADD TARGET package0.event_file (
    SET filename = N'C:\Temp\LongRunningQueries.xel', max_file_size = (50) -- size in MB
)
WITH (STARTUP_STATE = ON);
GO
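To confirm the session is defined and currently running, you can check the event session catalog view and DMV (a quick sketch):

SELECT name, startup_state FROM sys.server_event_sessions
WHERE name = 'Long_Running_Queries';

SELECT name, create_time FROM sys.dm_xe_sessions
WHERE name = 'Long_Running_Queries';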
2. Viewing and Analyzing the Captured Events
Using SSMS:
Expand Management > Extended Events > Sessions.
Right-click your session (Long_Running_Queries) and choose Watch Live Data.
Execute long-running queries and monitor captured events in real-time.
Using T-SQL to Read the Event File:
To analyze captured events from the event file:
SELECT
    event_data.value('(event/@name)[1]', 'VARCHAR(100)') AS event_name,
    event_data.value('(event/action[@name="sql_text"]/value)[1]', 'NVARCHAR(MAX)') AS sql_text,
    event_data.value('(event/data[@name="duration"]/value)[1]', 'BIGINT') / 1000000 AS duration_seconds
FROM
(
    SELECT CAST(event_data AS XML) AS event_data
    FROM sys.fn_xe_file_target_read_file('C:\Temp\LongRunningQueries*.xel', NULL, NULL, NULL)
) AS xevents
ORDER BY duration_seconds DESC;
To stop the session:
ALTER EVENT SESSION [Long_Running_Queries] ON SERVER STATE = STOP;
To drop (delete) the session:
DROP EVENT SESSION [Long_Running_Queries] ON SERVER;